Is Tesla's FSD Suddenly Safe?
Seven times safer than a human driver? Really?

Tesla has been notoriously cagey about FSD's (Full Self-Driving) safety data. It is almost like Elon is trying to hide something. After all, it isn't like the third-party data, or even the scant data Tesla has previously released, blatantly prove that FSD is wildly dangerous… However, that has now changed, and critics like myself have been proven wrong. How? Well, Tesla recently published a live website that compares FSD to the national driver average, taking the total distance driven and the number of accidents from both, and it shows that FSD is seven times safer than a human driver! Incredible! Miraculous! Surely Tesla will dominate the self-driving era, right? Well, if you dig a little deeper, a very different narrative emerges.
This data takes the total miles and total vehicle incidents from trusted government sources and compares them to Tesla's own data on FSD. Now, Tesla has been accused of trying to make FSD look better by turning it off just before an accident and classifying said accident as being caused by the human, but they haven't done that here. Instead, Tesla's collision attribution method is, "If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged." Seems promising. Tesla found that the US average is 699,000 miles between major collisions, and the FSD (Supervised) average is 5,100,000 miles, or roughly seven times that distance.
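The headline ratio is easy to sanity-check from the two figures quoted above, as a quick back-of-the-envelope calculation:

```python
# Rough check of Tesla's headline ratio, using the figures quoted above.
us_miles_per_major_collision = 699_000      # national average
fsd_miles_per_major_collision = 5_100_000   # Tesla's FSD (Supervised) figure

ratio = fsd_miles_per_major_collision / us_miles_per_major_collision
print(f"FSD miles-per-collision vs national average: {ratio:.1f}x")
# 5,100,000 / 699,000 ≈ 7.3, which gets rounded to "seven times safer"
```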
Sounds simple and conclusive. FSD is seven times safer than a human driver. Right?
Sadly not. This suffers from what I call the Human Driver Paradox. Let me ask you a question: when a Tesla driver is using FSD (Supervised), who is the driver, the human or FSD? Because that makes a huge difference in how we analyse and interpret the data. Unfortunately, Tesla has a trick up its sleeve: a "Schrödinger's driver". You see, if the trip is successful, then FSD is the driver. But when a collision occurs, in the eyes of the law, the human carries both the liability and the responsibility. FSD is both the driver and not the driver. As such, Tesla's roundabout claim that FSD is seven times safer than a human driver is at best disingenuous. Let me explain.
If FSD is the driver, then disengagements, when the system shuts itself off or the human overrides the system for safety reasons, should be classified similarly to collisions. After all, if a human wasn't there to catch FSD, a collision would be more likely. But in this dataset, disengagements aren't even considered. The benefits of human oversight are being falsely attributed to the FSD system, because the conclusion compares FSD's ability to a human average, not whether a human who uses FSD is safer than a human who doesn't.
Now, it is true that the website does say, in the small print, that FSD (Supervised) is safer than a human without the system. However, this is not what Tesla fanboys have gleaned from the data, which isn't surprising, as this point is not well signposted. The whole thing is set up to lead you towards the grander, false conclusion. But even the statement that FSD (Supervised) is safer than a human without the system is still a little misleading.
Why? Because this data doesn't account for the fact that customers only use FSD when they feel safe. We know from third-party data (which we will mention soon) and a tsunami of anecdotal evidence that FSD users don't trust the system at all, particularly in non-highway situations. In fact, Tesla's own data once showed that FSD customers were using the system just 15% of the time. Naturally, Tesla's own data is biased, as the users have indirectly cherry-picked it by only engaging the system when they feel safe to do so.
This makes the broad statement that FSD (Supervised) is seven times safer than a human driver misleading at best. For one, FSD (Supervised) data is not comparable to the national average, as the national average reflects a proper mix of driving conditions, and FSD does not. For example, the national average will inherently take into account accidents that occur during heavy rain, but because the vast majority of FSD customers don’t feel safe using the system in these conditions, FSD’s performance in this critical condition is significantly under-represented in the data.
So, if you want to say “a driver using FSD is X times safer than a driver without it”, you should really take into account the accidents FSD users experience when they choose to turn the system off, which Tesla hasn’t done. Or, at least, clearly signpost this as a significant caveat to the conclusion, given the vast discrepancy.
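To see why this caveat matters so much, here is a toy calculation. The 15% engagement share is the Tesla figure quoted above; every other number is entirely hypothetical, invented purely to illustrate the selection-bias mechanism:

```python
# Toy illustration of selection bias. The 15% engagement share is from
# Tesla's own data quoted above; all collision rates here are invented
# hypotheticals for illustration only.
total_miles = 100_000_000
fsd_share = 0.15                   # share of miles driven with FSD engaged

fsd_miles = total_miles * fsd_share
manual_miles = total_miles - fsd_miles

# Hypothetical per-mile collision rates:
fsd_rate = 1 / 5_100_000           # FSD engaged, on self-selected "easy" miles
manual_rate = 1 / 500_000          # assumed rate on the harder miles where
                                   # drivers choose to switch FSD off

collisions = fsd_miles * fsd_rate + manual_miles * manual_rate
blended_miles_per_collision = total_miles / collisions
print(f"{blended_miles_per_collision:,.0f} miles per collision for an FSD owner overall")
```

Under these made-up numbers, the blended figure for "a driver who uses FSD" lands in the high five figures of hundreds of thousands of miles per collision, nowhere near the 5,100,000-mile FSD-only headline, because the hard miles dominate the collision count. The point is not the specific output but the mechanism: when the system only sees the easiest miles, its standalone statistic tells you very little about the driver's overall safety.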
Don’t get me wrong, I am happy that customers using FSD are getting into fewer accidents than the national average. That is a giant leap forward for Tesla. Or is it?