OpenAI Just Proved Tesla's "Full Self-Driving" Can Never Work
No wonder Musk hates Altman…

Tesla has been developing self-driving cars for a long time. The first Model Ss with “Autopilot” rolled out of the factory a decade ago. Their more advanced FSD first reached customers over five years ago. Yet, even after all this time and billions of dollars spent, these systems still suck. Third-party data shows that even the latest versions of FSD can only travel 493 miles between critical disengagements. However, the actual figure is likely far worse, as the data also shows that FSD customers distrust the system so much that they only use it 15% of the time! Tesla’s soft launch of its Robotaxi service demonstrated this woeful lack of safety, as within a few days, the vehicles had been spotted egregiously violating traffic laws and driving dangerously multiple times. It feels like Tesla is going nowhere and is just smacking its head against a brick wall. Surely, FSD will work as promised eventually, right? Well, not according to a research paper from OpenAI…
OpenAI does more than fail to replace your job, destroy the internet with its brainrot slop and force our financial institutions into an economy-crushing bubble. Behind all the bullshit hype, they have a dedicated team of top-notch AI scientists doing brilliant research. Interestingly, their latest paper is the pin that could pop the AI bubble.
These scientists were trying to find a way to stop AI “hallucinating”. I hate that term. It anthropomorphises a dead machine by rebranding its errors, which reinforces the mass pareidolia psychosis that makes us all believe this box of probability is even remotely intelligent. For example, METR has found that AI programming tools actually slow down developers, as they make recurrent and strange errors (hallucinations), which means developers have to spend so much time debugging that it would have been faster to just write the code themselves. If AI companies can’t get rid of these kinds of errors, their tools are useless, and their entire business is worthless.
This makes the findings of this paper utterly damning, as they demonstrate that hallucinations are a core element of generative AI technology and can’t be fixed, or even reduced from their current levels, simply by adding more data and computing power to these models (which is OpenAI’s and the entire AI industry’s current strategy). This really isn’t that surprising. Generative AI is just a probability engine; it isn’t a thinking thing. As such, it will always have some probability of making mistakes. This is why these scientists also found that “reasoning models”, which use a prompt modifier to break your prompt into multiple sections in an attempt to get more accurate results from the AI, actually make hallucinations worse! By breaking one prompt into many, you just create more opportunities for these errors to cock things up.
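To put rough numbers on that, here’s a toy back-of-the-envelope calculation (the error rates are made up purely to show the shape of the problem; they are not figures from the paper): if each generative step has even a small, independent chance of going wrong, the odds of a completely clean result shrink with every step you chain together.

```python
# Toy illustration of compounding errors (made-up numbers, not from the paper):
# if each generative step fails independently with some small probability,
# the chance the whole chain comes out error-free shrinks as steps are added.

def chain_success_probability(per_step_error: float, steps: int) -> float:
    """Probability that every step in the chain is error-free, assuming
    each step fails independently with probability `per_step_error`."""
    return (1.0 - per_step_error) ** steps

for steps in (1, 3, 5, 10):
    p_ok = chain_success_probability(per_step_error=0.02, steps=steps)
    print(f"{steps:2d} step(s) at 2% error each -> {p_ok:.1%} chance of a clean result")

# 1 step: ~98.0%, 3 steps: ~94.1%, 5 steps: ~90.4%, 10 steps: ~81.7%
```

More steps just means more rolls of the dice.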
Those who have been paying attention to the AI world have known this for a while now. We have known about the efficient compute frontier, which explains how AI experiences seriously diminishing returns, for years (read more here).
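For a feel of what those diminishing returns look like, here is a deliberately crude sketch (an assumed error curve with an irreducible floor, invented for illustration; it is not data from any real model or from the paper): each extra order of magnitude of compute buys less improvement, and nothing ever gets you below the floor.

```python
# Crude illustration of diminishing returns (invented curve, not real data):
# assume error = floor + scale * compute^(-exponent), i.e. an irreducible
# error floor plus a term that shrinks slowly as compute grows.

def toy_error(compute: float, floor: float = 0.05,
              scale: float = 0.5, exponent: float = 0.3) -> float:
    """Hypothetical error under an assumed power-law-plus-floor curve."""
    return floor + scale * compute ** (-exponent)

previous = toy_error(1.0)
for power in range(1, 5):
    compute = 10.0 ** power
    error = toy_error(compute)
    print(f"compute x{compute:>8.0f}: error {error:.3f} "
          f"(gained only {previous - error:.3f} from the last 10x)")
    previous = error
```

Swap in whatever constants you like; the shape of the curve, flattening out above a floor that no amount of compute removes, is the whole point.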
Okay, so what has this got to do with Tesla’s FSD?
Well, you might not realise it, but FSD is in fact made up of two generative AIs. It takes camera feeds (and only camera feeds) as its input and uses AI computer vision to generate a model of the area around the car, which then becomes the input for a second, self-driving AI that generates the control inputs for the car.
As a side note, FSD proves my point about AI “hallucinations” being a terrible PR phrase. When FSD gets things wrong, the car crashes or violates traffic laws, and nobody wants to anthropomorphise that, so we just call them errors.
But did you catch that? FSD is just two totally unsupported generative AI models working together. This entire system is designed around the completely false notion that generative AI can become 100% accurate and error-free. There is nothing to identify and mitigate errors (hallucinations).
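Nobody outside Tesla knows the exact internals, but going purely off the description above, the structure looks something like this (the names, types and stubs are my own hypothetical sketch, not Tesla’s code): two generative stages bolted end to end, with nothing in between to catch a mistake.

```python
# Hypothetical skeleton of the two-stage pipeline described above. The names,
# types and stubs are my own illustration; Tesla's actual internals are not public.

from dataclasses import dataclass

@dataclass
class WorldModel:
    obstacles: list           # whatever the vision model believes it saw

@dataclass
class ControlCommand:
    steering: float
    throttle: float
    brake: float

def vision_model(camera_frames: list) -> WorldModel:
    """Stage 1: generative computer vision builds a scene model from camera
    feeds alone. If it hallucinates (misses a truck, invents a lane), that
    mistake becomes the 'ground truth' handed to stage 2."""
    return WorldModel(obstacles=[])                               # stub

def driving_model(world: WorldModel) -> ControlCommand:
    """Stage 2: a generative planner turns the (possibly wrong) scene model
    into control inputs. Nothing here can tell that stage 1 erred."""
    return ControlCommand(steering=0.0, throttle=0.1, brake=0.0)  # stub

def fsd_step(camera_frames: list) -> ControlCommand:
    # Two generative models chained end to end, with no independent sensor,
    # map, or rule-based system checking either one's output.
    return driving_model(vision_model(camera_frames))
```

Every error either stage makes flows straight through to the steering wheel.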
Almost every self-driving company knows this. This is why they use multiple sensor types, run several AIs, and give constraints to their AIs to mitigate these kinds of errors. Lidar, radar and ultrasonic sensors are used to verify and correct the computer vision understanding of the world around the car. Separate systems run radar and ultrasonic sensors to detect potential impacts and override the AI to brake and prevent an accident. GPS data and highly detailed 3D maps of the operational area are used not just to help the AI understand what it should do, but also to constrain the possible actions it can take. While these redundant systems are not enough to make a self-driving car as safe as a human driver, they do catch and mitigate almost all AI errors (hallucinations).
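In code terms, the redundancy pattern those companies rely on looks roughly like this (again, a generic sketch of the pattern just described, not any particular company’s actual stack): independent sensors cross-check the camera’s view of the world, the HD map constrains what the planner may do, and a separate system can hit the brakes regardless of what the AI thinks it sees.

```python
# Generic sketch of the redundancy pattern described above: sensor fusion,
# map-based constraints, and an independent emergency-brake path. This is
# my own illustration of the pattern, not any specific company's software.

def fuse_scene(camera_scene: set, lidar_scene: set, radar_scene: set) -> set:
    """Cross-check the camera's scene model against lidar and radar. Objects
    confirmed by a second sensor type are kept; objects only the camera
    'saw' are dropped, and objects the camera missed are added back."""
    confirmed = camera_scene & (lidar_scene | radar_scene)
    missed_by_camera = (lidar_scene & radar_scene) - camera_scene
    return confirmed | missed_by_camera

def constrain_plan(plan: str, allowed_manoeuvres: set) -> str:
    """Map/GPS constraint: reject any planned action the HD map says is
    illegal or impossible here, and fall back to a safe stop instead."""
    return plan if plan in allowed_manoeuvres else "fallback_stop"

def emergency_override(radar_range_m: float, ultrasonic_range_m: float,
                       speed_mps: float) -> bool:
    """Independent collision check that can overrule the AI and brake. It
    runs off its own sensors, so a vision hallucination cannot disable it."""
    stopping_distance = (speed_mps ** 2) / (2 * 6.0)   # ~6 m/s^2 hard braking
    return min(radar_range_m, ultrasonic_range_m) < stopping_distance

# Toy usage: the camera invents a 'phantom bridge' and misses a pedestrian.
camera = {"car_ahead", "phantom_bridge"}
lidar = {"car_ahead", "pedestrian"}
radar = {"car_ahead", "pedestrian"}
print(fuse_scene(camera, lidar, radar))                          # keeps car + pedestrian
print(constrain_plan("left_turn", {"straight", "right_turn"}))   # fallback_stop
print(emergency_override(radar_range_m=8.0, ultrasonic_range_m=30.0, speed_mps=15.0))  # True
```

None of this makes the AI smarter; it just stops any single hallucination from steering the car on its own.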
Tesla used to do something similar, given that cars before 2022 had radar and ultrasonic sensors. These were absolutely not enough to catch the majority of these errors, but at least they were something. However, Musk forced Tesla to ditch them in favour of a camera-only approach, despite his engineers warning him against the move (read more here).
This is why FSD is a dead end. Its entire concept, construction, architecture, ethos, and development have been predicated on the idea that generative AI will soon be nearly 100% reliable. Indeed, Musk has suggested numerous times that all they need is more data to make FSD unbreakably reliable and that the vast amount of data they have collected from Tesla drivers will allow them to reach this goal. This research paper blasts an exploding Starship-shaped hole through that narrative.
What does this mean for the future of Tesla? FSD was supposed to be their future. What does this mean for the credibility of Musk’s leadership? The entire value of Tesla is based on the notion that he knows what he is doing with AI. I trust I do not need to fill in the blanks here.
Thanks for reading! Don’t forget to check out my YouTube channel for more from me, or Subscribe. Oh, and don’t forget to hit the share button below to get the word out!
Sources: OpenAI, METR, Will Lockett, Will Lockett, Will Lockett, Will Lockett, Will Lockett, Will Lockett, Will Lockett, Will Lockett


I appreciate substituting "error" for "hallucinations." And the use of "probability machine." Let's further eliminate inappropriate anthropomorphism of AI by saying "modeling" rather than "understanding." We can only put AI in its rightful place (and deflate the AI bubble) by insisting it is nothing like human intelligence.
Manned spaceflight is another place where Musk has decided the rules need to change to accommodate the occasional fatality, which, as long as it's not Musk, is OK with Musk. He seems fine redefining safety for YOU.
The boy seems unacquainted with transients, like fog, visual blocks, camera saturation, sensor conflicts, ambiguous visual stimuli, reaction times, and most importantly, failsafe reactions.
People are already DEAD because his cars fail to have exit features when things break. Real people, really dead. Traceable to Tesla design decisions that involve NO motion.
I'm human, with 50+ years of driving experience, and each time I do a long interstate trip, SOMETHING comes up that defies reason. Objects, animals, debris, humans, tires in the road are a thing. Newly placed Jersey barriers and highway workers are a thing. Sheets of semi-load tarps are a thing. Temporary lane reassignments are not only a thing, they are a thing with some serious latency on any conceivable map. And of course, the ever-popular orange cone being deployed as one drives by is a thing. Hell, the mere appearance of a Florida, NY, Massachusetts or Connecticut tag alone qualifies as a potential lethal hazard on ANY highway!
Trusting software hobbled by unreasonable architectural constraints is asking for trouble. The highway is not a lab. It's not the simple task of transiting from A to B in perfect light on smooth pavement in excellent weather that is the trick. It's the billion permutations of transients that matter, and at the very least, determining the situation a moving car encounters is primary. That's what WE have to do as humans, and WE make mistakes. Sometimes fatal mistakes. Musk seems OK with merely automating fatal mistakes. With a trillion-dollar pay package pending, the occasional dead citizen struggles to compete for his attentions.
Establishing the situation with negligible error requires sensor variety and fusion. FSD cannot even navigate a single town in Texas. It clearly breaks laws a teenaged student driver avoids.
And remember... the humans on the road with you are sometimes drunk, and even on the interstate, occasionally are going the wrong way with closure speeds of 140 MPH.
Anyone who has ever driven through Boston knows that even the signs can't be trusted and that it is a complete crap shoot to navigate the uncertainties you will CERTAINLY encounter without dying in the process. At least in the city, the collision speeds are slower. Sadly, the million or so unpredictable drivers, pedestrians, and road conditions demand perfection, and reliance on a single sensor and sensor type will not provide adequate scene coverage in the time frames needed to arrest motion or redirect a guided projectile.