Is Tesla's FSD Suddenly Safe?
Seven times safer than a human driver? Really?

Tesla has been notoriously cagey about safety data for FSD (Full Self-Driving). It is almost as if Elon is trying to hide something. After all, it isn’t like the third-party data, or even the scant data Tesla has previously released, blatantly proves that FSD is wildly dangerous… However, that has now changed, and critics like myself have been proven wrong. How? Well, Tesla recently published a live website that compares FSD to the national driver average, using the total distance driven and the number of accidents for both, and it shows that FSD is seven times safer than a human driver! Incredible! Miraculous! Surely Tesla will dominate the self-driving era, right? Well, if you dig a little deeper, a very different narrative emerges.
This data takes the total miles and total vehicle incidents from trusted government sources and compares them to Tesla’s own data on FSD. Now, Tesla has been accused of trying to make FSD look better by turning the system off just before an accident and classifying said accident as being caused by the human, but they haven’t done that here. Instead, Tesla’s collision attribution method is: “If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged.” Seems promising. Tesla found that the US average is 699,000 miles between major collisions, while the FSD (Supervised) average is 5,100,000 miles, roughly seven times that distance.
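To make the arithmetic behind that headline explicit, here is a quick sanity check in Python; the two mileage figures are the ones quoted above, and the variable names are mine, not Tesla’s:

```python
# Sanity-checking Tesla's headline ratio using the two figures
# quoted above (miles between major collisions).
us_average_miles = 699_000        # US fleet average, per Tesla's site
fsd_supervised_miles = 5_100_000  # FSD (Supervised), per Tesla's site

ratio = fsd_supervised_miles / us_average_miles
print(f"FSD (Supervised) vs US average: {ratio:.1f}x")
# -> FSD (Supervised) vs US average: 7.3x
```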
Sounds simple and conclusive. FSD is seven times safer than a human driver. Right?
Sadly not. This suffers from what I call the Human Driver Paradox. Let me ask you a question: when a Tesla driver is using FSD (Supervised), who is the driver, the human or FSD? Because that makes a huge difference in how we analyse and interpret the data. Unfortunately, Tesla has a trick up its sleeve: a “Schrödinger’s driver”. You see, if the trip is successful, then FSD is the driver. But when a collision occurs, in the eyes of the law, the human carries both the liability and the responsibility. FSD is both the driver and not the driver. As such, Tesla’s roundabout claim that FSD is seven times safer than a human driver is at best disingenuous. Let me explain.
If FSD is the driver, then disengagements (when the system shuts itself off, or the human overrides it for safety reasons) should be classified similarly to collisions. After all, if a human weren’t there to catch FSD, a collision would be more likely. But in this dataset, disengagements aren’t even considered. The benefits of human oversight are being falsely attributed to the FSD system, because the conclusion compares FSD’s ability to a human average, not whether a human who uses FSD is safer than a human who doesn’t.
Now, it is true that the website does say, in the small print, that FSD (Supervised) is safer than a human without the system. However, this is not what Tesla fanboys have gleaned from the data, which isn’t surprising, as this point is poorly signposted. The whole thing is set up to lead you to believe the pretence. But even the statement that FSD (Supervised) is safer than a human without the system is still a little misleading.
Why? Because this data doesn’t account for the fact that customers only use FSD when they feel safe doing so. We know from third-party data (which we will get to shortly) and a tsunami of anecdotal evidence that FSD users don’t trust the system at all, particularly in non-highway situations. In fact, Tesla’s own data once showed that FSD customers were using the system just 15% of the time. Naturally, Tesla’s own data is biased, as the users have indirectly cherry-picked it by only engaging the system when they feel it is safe.
This makes the broad statement that FSD (Supervised) is seven times safer than a human driver misleading at best. For one, FSD (Supervised) data is not comparable to the national average, as the national average reflects the full mix of driving conditions, and FSD’s data does not. For example, the national average inherently includes accidents that occur during heavy rain, but because the vast majority of FSD customers don’t feel safe using the system in those conditions, FSD’s performance in this critical scenario is significantly under-represented in the data.
So, if you want to say “a driver using FSD is X times safer than a driver without it”, you should really take into account the accidents FSD users experience when they choose to turn the system off, which Tesla hasn’t done. Or, at least, clearly signpost this as a significant caveat to the conclusion, given the vast discrepancy.
Don’t get me wrong, I am happy that customers using FSD are getting into fewer accidents than the national average. That is a giant leap forward for Tesla. Or is it?
The average age of a car on US roads is 12.8 years. That means the majority of the cars Tesla is comparing FSD against don’t have any form of Automatic Emergency Braking (AEB). AEB systems use radar, ultrasonic sensors, and cameras to automatically brake when the car is about to be in a collision, and FSD (Supervised) has a built-in AEB system running off the car’s cameras. However, since 2022, most new cars in the US have shipped with AEB, and studies have shown that AEB reduces the crash rate by over 50%, with many of these avoided crashes happening in complex urban environments and in compromising conditions.
Again, there is a false equivalence at work here. Really, Tesla’s FSD (Supervised) crash data should be compared to modern cars with AEB, not a national average dominated by older, less safe cars. After all, that is the standard and the new benchmark moving forward. But I’m not sure FSD (Supervised) is any better than a car with AEB. Cars with AEB go roughly twice the distance between crashes as those without, and cars without AEB make up the bulk of the US average data. Doubling that 699,000-mile average gives an AEB baseline of roughly 1.4 million miles between major collisions, which means FSD (Supervised) comes out only roughly 3.5 times safer than cars with AEB. But then we need to take into account the bias of the FSD (Supervised) data and how it has been filtered by only including times the customer feels safe using the system. That bias could very easily account for that 3.5-times difference, especially as most collisions avoided by AEB are in urban settings, where most crashes happen and where customers use FSD least.
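To show where that figure comes from, here is the back-of-the-envelope version; the ~2x AEB factor and both mileage figures are the ones cited above, while the adjustment itself is my own rough estimate, not Tesla’s methodology:

```python
# A rough AEB adjustment, under the assumption stated above: cars
# with AEB go roughly twice the distance between crashes, and
# non-AEB cars dominate the US average figure.
us_average_miles = 699_000
fsd_supervised_miles = 5_100_000
aeb_factor = 2  # AEB roughly halves the crash rate (per the studies cited)

aeb_baseline_miles = us_average_miles * aeb_factor  # ~1.4 million miles
adjusted_ratio = fsd_supervised_miles / aeb_baseline_miles
print(f"FSD (Supervised) vs AEB baseline: {adjusted_ratio:.1f}x")
# -> FSD (Supervised) vs AEB baseline: 3.6x (the "roughly 3.5 times" above)
```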
Again, this is important context missing from the Tesla analysis, because with this context, this data could just be showing the benefits of AEB systems and data bias, not that FSD (Supervised) is actually safer than its competitors.
It is very obvious why Tesla is doing this. Even with their half-arsed signposting of the conclusion, they know this data makes it look like FSD is capable of being a fully-fledged self-driving system, and that would hugely increase Tesla’s value.
But here’s the thing: we do have third-party collected and analysed data on FSD’s performance, and it paints a very different story.
This data comes from teslafsdtracker.com, which uses customer-reported FSD journeys to calculate disengagement and critical disengagement rates per mile. It defines a disengagement as the driver needing to turn the wheel or use the brakes to override the system, and a critical disengagement as the driver needing to do so to avoid a collision. For example, a disengagement might involve taking over steering because FSD (Supervised) wasn’t keeping in lane properly, whereas a critical disengagement might involve taking over because FSD is driving you onto the wrong side of the road.
The average American will drive 1.5 million miles in their lifetime and experience four collisions, averaging one accident every 18 years or once every 375,000 miles. So, if a human using FSD (Supervised) is seven times safer than a human driver without it, as Tesla claims, we should expect to see a distance between critical disengagements of 2.6 million miles. But we don’t.
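If you want to check my maths, here is that expectation worked through; the lifetime figures are the ones above, and treating a critical disengagement as a collision the human prevented follows the argument made earlier:

```python
# Deriving the expected distance between critical disengagements
# implied by Tesla's "seven times safer" claim.
lifetime_miles = 1_500_000
lifetime_collisions = 4
tesla_safety_claim = 7  # "seven times safer than a human driver"

miles_per_collision = lifetime_miles / lifetime_collisions  # 375,000
expected_miles = miles_per_collision * tesla_safety_claim   # 2,625,000
print(f"Expected miles between critical disengagements: {expected_miles:,.0f}")
# -> Expected miles between critical disengagements: 2,625,000
```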
The vast majority of Teslas are running FSD V12 because their hardware can’t run the latest version, and V12’s average distance between critical disengagements is just 189 miles across the more than 40,000 miles of usage reported. Needless to say, that is laughably short of the 2.6 million miles we would expect to see if Tesla’s claims were even close to the truth.
But what about newer Teslas that can run the latest FSD V14? Well, after 18,000 miles of reported usage, V14 averaged a greatly improved 5,886 miles between critical disengagements. However, that is still over 60 times worse than the accident rate of human drivers!
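Here is how the tracker’s numbers stack up against that human baseline; the figures are as reported above, and the comparison itself is mine:

```python
# Comparing the tracker's reported figures against the human
# baseline of 375,000 miles per collision derived above.
miles_per_collision = 375_000
v12_miles_per_critical_disengagement = 189
v14_miles_per_critical_disengagement = 5_886

print(f"V12: {miles_per_collision / v12_miles_per_critical_disengagement:,.0f}x worse")
print(f"V14: {miles_per_collision / v14_miles_per_critical_disengagement:,.0f}x worse")
# -> V12: 1,984x worse
# -> V14: 64x worse (the "over 60 times worse" above)
```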
And I have reason to believe this data is far too optimistic. For one, the users providing it have bought a new Tesla and paid for FSD, meaning they have a huge vested interest in this technology paying off, so we should expect the data to be biased towards Tesla anyway. But the percentage of journeys with no disengagement and the average distance between disengagements are roughly the same for both the V12 and V14 systems. That doesn’t make sense, considering how much better V14 is meant to be. It suggests that FSD users are either driving significantly more cautiously with V14 than they were with V12, disengaging FSD before its errors compound into a critical disengagement, or misreporting critical disengagements as mere disengagements.
But the fact that this data, which is inherently biased towards Tesla because its sources are self-reporting Tesla customers, is in such strong disagreement with Tesla’s own analysis shows just how disingenuous their “seven times safer” argument is.
Let me be very clear: this article shows how hard it is to analyse the safety of self-driving or semi-automated cars. The narrative Tesla shoves down our throats is reductive to the point of being disingenuous. The third-party data, while indicative, is not comprehensive or objective enough to give an accurate picture, as it is highly skewed in Tesla’s direction. My little analysis here should not be taken as the final word; it is just me trying to cobble together the best overall picture I can from the data available to us. And yes, my opinion is that FSD is likely not much safer than modern cars with AEB when used under supervision, is nowhere near good enough for self-driving operations, and that Tesla is deliberately using a “seven times better than human” narrative to boost its speculative value. But that is my opinion, and without more conclusive data, I can’t prove it, nor can Tesla prove the opposite.
The message you should take from this is that Tesla needs to be far more open with its data. Not just a graph with a flattened narrative and an obfuscated methodology, but the raw data and full methodology, published so that a comprehensive independent study can verify its claims. Until then, the purposeful lack of transparency, the missing context, the blatant narrative twisting, and the third-party data demonstrating the total opposite of Tesla’s claims all make it look like FSD isn’t as safe as they say, and that they are desperately trying to hide that fact.
Thanks for reading! Don’t forget to check out my YouTube channel for more from me, or Subscribe. Oh, and don’t forget to hit the share button below to get the word out!
Sources: NATA, Tesla, Will Lockett, Electrek, Tesla FSD Tracker, The Robot Report, Motortrend, Anthem, S&P Global, ScienceDirect

