Self-Driving Cars Are Way More Dangerous Than You Think.
Why we are miles away from true automotive automation.
Self-driving cars have been a sci-fi trope for decades. But this pie-in-the-sky technology has become tangible over the past few years. Multiple companies are now running small-scale robotaxi operations, and Tesla has repeatedly claimed that its Full Self-Driving (FSD) system is safer than human drivers. At face value, we are a hop, skip, and a jump away from never having to touch the steering wheel again. But all is not what it seems. You see, when you actually look at the data, these “self-driving” cars are so unsafe it is surprising they are allowed on the street. Let me explain.
Let’s start with the big boy in the room: Tesla. Musk has been known to make exaggerated, BS claims about Tesla’s self-driving capabilities. Back in 2016, he claimed that Teslas would be able to self-drive across the country “by next year” and that owners would be able to summon their cars from “anywhere connected by land & not blocked by borders, e.g. you’re in LA and the car is in NY.” Well, it’s 7 years later, and Teslas still can’t do that. But recently, Tesla claimed that FSD has a crash rate one-fifth that of human drivers, and my Elon BS radar started to ping.
You see, this simply can’t be true.
Last year, Musk also announced that FSD had driven 150 million miles, which means each of the 400,000 cars with the technology fitted drove an average of 375 miles while using FSD. But thanks to the National Highway Traffic Safety Administration (NHTSA), we also know that from 2021 to the end of 2023, Teslas had 736 crashes, 17 of them fatal, while using automation, the vast majority of which happened in late 2022–2023. It’s safe to assume that most of these happened with FSD rather than the less capable Autopilot system, as the FSD option was heavily pushed during this period of increased crashes. But that implies that FSD has a fatal accident rate of 11.3 deaths per 100 million miles travelled. For comparison, the fatal accident rate for human drivers in 2022 was 1.35 deaths per 100 million miles travelled.
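If you want to check the maths yourself, here is a minimal sketch of that back-of-the-envelope calculation, making the same rough assumptions the article does (one death per fatal crash, and all of those crashes counted against FSD’s claimed mileage):

```python
# Back-of-the-envelope arithmetic for the figures above. A rough sketch that,
# like the article, treats each of the 17 fatal crashes as one death and
# counts all of them against FSD's claimed mileage.

fsd_miles = 150_000_000      # Musk's claimed total FSD mileage
cars_with_fsd = 400_000      # cars with FSD fitted
fatal_crashes = 17           # NHTSA-reported fatal crashes under automation
human_rate = 1.35            # human deaths per 100 million miles (2022)

miles_per_car = fsd_miles / cars_with_fsd
fsd_rate = fatal_crashes / fsd_miles * 100_000_000

print(f"Average FSD miles per car: {miles_per_car:.0f}")                # ~375
print(f"Implied FSD fatal rate: {fsd_rate:.1f} deaths per 100M miles")  # ~11.3
print(f"Multiple of the human rate: {fsd_rate / human_rate:.1f}x")      # ~8.4
```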
In other words, it seems FSD is more than eight times less safe than human drivers! What’s more, FSD isn’t a fully self-driving feature, despite its name, and needs to be supervised by a human driver at all times, ready to step in when FSD gets things wrong. This means the fatal crash rate could be far higher still if it were used as an actual self-driving system! On top of that, human drivers still have to concentrate on the road and decide when to intervene with FSD. In other words, they are still driving. So, there is an argument that using FSD makes us roughly eight times worse at driving.
Now, you could argue that not all of those accidents involved FSD, but to meet Musk’s claim that FSD is 5 times safer than a human, only around 2% of them can have involved FSD. If that were the case, it would suggest Autopilot is massively unsafe and Tesla should stop fitting cars with it immediately, especially when FSD is supposedly so much safer! But roughly 19% of Tesla customers in the US opt to buy FSD. If we assume the fatal crashes split in proportion to that ratio (19% with FSD, 81% with Autopilot), then FSD still comes out with a fatal accident rate of around 2.2 deaths per 100 million miles, roughly 1.6 times that of a human driver!
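Here is a minimal sketch of that arithmetic too, under the same simplifying assumptions as before (one death per fatal crash, all 17 crashes measured against the 150 million FSD miles):

```python
# The same back-of-the-envelope arithmetic, now applied to the "5x safer"
# claim and the 19% take-rate scenario described above.

human_rate = 1.35            # human deaths per 100 million miles (2022)
fsd_miles = 150_000_000      # claimed FSD mileage
fatal_crashes = 17           # NHTSA-reported fatal crashes under automation

# For FSD to really be 5x safer, how many of the 17 fatal crashes could it own?
claimed_rate = human_rate / 5                         # 0.27 per 100M miles
allowed = claimed_rate * fsd_miles / 100_000_000      # ~0.4 crashes
print(f"Fatal crashes FSD could 'afford': {allowed:.2f} "
      f"({allowed / fatal_crashes:.0%} of the 17)")   # ~2%

# If the fatal crashes instead split in line with the ~19% FSD take rate:
fsd_share = 0.19
fsd_rate = fsd_share * fatal_crashes / fsd_miles * 100_000_000
print(f"Implied FSD fatal rate: {fsd_rate:.2f} per 100M miles "
      f"= {fsd_rate / human_rate:.1f}x the human rate")   # ~2.15, ~1.6x
```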
So, while the exact multiple depends on some generous assumptions, we can safely conclude that FSD is significantly less safe than a human driver.
FSD customers’ behaviour actually backs this up. A recent analysis of Tesla’s public data showed that FSD customers only use the system for about 15% of their driving, which is why the miles covered per car came out at only 375. The question has to be asked: why would someone spend over $10,000 on a self-driving feature that is supposedly 5 times safer than their own driving and then only use it 15% of the time? Simple: they have had to step in so many times to correct FSD’s dangerous driving that they don’t trust the system, and it is simply easier and safer to drive themselves.
So, how did Musk come up with his figures? Many analysts have pointed out that Tesla’s numbers are fudged and presented in a highly misleading way. By quietly changing parameters and framing the data selectively, Tesla is able to make claims about FSD that simply don’t hold up.
But it isn’t just Tesla doing this.
Waymo, Alphabet’s experimental self-driving robotaxi service, recently analysed 7.13 million of their fully driverless miles and compared them to human driving benchmarks to see how safe their system is. Surprisingly, they found that their driverless cars were 6.7 times less likely than human drivers to be involved in a crash resulting in an injury. What’s interesting here is that Waymo created its own human driving benchmarks against which to compare its system. It’s also interesting that the paper they published about this analysis didn’t mention remote workers intervening to keep the cars driving safely. We know Waymo does this, as they publicly reported 21 “disengagements” in 2020. However, Waymo hasn’t publicly said how often these remote workers intervene. Waymo’s rival, Cruise, which is no less advanced, has admitted that remote workers intervene every 4–5 miles to keep its cars driving safely.
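As a purely illustrative exercise, and assuming (which we do not know) that Waymo’s remote-assistance rate looked anything like the figure Cruise has admitted to, the scale of hidden help over 7.13 million miles could be enormous:

```python
# Purely illustrative: how many remote interventions would 7.13 million
# driverless miles imply IF the assistance rate matched Cruise's reported
# every-4-to-5-miles figure? We have no data saying Waymo's rate is similar.

driverless_miles = 7_130_000      # miles covered in Waymo's analysis
miles_per_intervention = (4, 5)   # Cruise's reported range

low = driverless_miles / miles_per_intervention[1]
high = driverless_miles / miles_per_intervention[0]
print(f"Implied interventions: {low:,.0f} to {high:,.0f}")  # ~1.4M to ~1.8M
```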
So, it seems Waymo may be playing the same games as Tesla, and its claim of being “6.7 times less likely than human drivers to be involved in a crash resulting in an injury” may not stand up to scrutiny. Those human benchmarks could easily be chosen to make Waymo look better, particularly as Waymo’s cars could only travel at a maximum speed of 35 mph at the time of this analysis. This means their average speed could be significantly lower than that of the human benchmark, making the chance of injury from any given crash significantly lower. They could also be obscuring how often remote workers have to intervene, making the system appear artificially safer.
As such, I doubt Waymo’s safety claims are that reliable.
So, what do we do about this problem? Well, funnily enough, Waymo has proposed a brilliant solution: a standardised, thorough test to assess the safety of self-driving cars against human drivers. Firstly, it’s interesting that they have proposed this, as it feels a little like a get-out-of-jail-free card for the safety analyses they have already put out there. Secondly, such a test could ensure that the public understands how potentially dangerous self-driving cars are. The question is, what would such a test look like? How do we ensure it doesn’t have loopholes, isn’t biased, and genuinely reflects reality? I’m not the one to answer these questions. But I think it’s time we called for such a test to clear these murky waters.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and follow me on BlueSky or X and help get the word out by hitting the share button below.
Sources: TAP, The Verge, CNBC, Inside EVs, Forbes, Dan O’Dowd, Electrek, Y Combinator, arXiv