
Tesla has been developing self-driving cars for a long time. The first Model Ss with “Autopilot” rolled out of the factory a decade ago. Its more advanced FSD (Full Self-Driving) first reached customers over five years ago. Yet, even after all this time and billions of dollars spent, these systems still suck. Third-party data shows that even the latest versions of FSD can only travel 493 miles between critical disengagements. The real-world picture is likely far worse, as the same data shows that FSD customers distrust the system so much that they only use it 15% of the time! Tesla’s soft launch of its Robotaxi service demonstrated this woeful lack of safety: within a few days, the vehicles had been spotted multiple times egregiously violating traffic laws and driving dangerously. It feels like Tesla is going nowhere, just smacking its head against a brick wall. Surely, FSD will work as promised eventually, right? Well, not according to a research paper from OpenAI…
OpenAI does more than fail to replace your job, destroy the internet with its brainrot slop and force our financial institutions into an economy-crushing bubble. Behind all the bullshit hype, they have a dedicated team of top-notch AI scientists doing brilliant research. Interestingly, their latest paper is the pin that could pop the AI bubble.
These scientists were trying to find a way to stop AI “hallucinating”. I hate that term. It anthropomorphises a dead machine by rebranding its errors, which reinforces the mass pareidolia psychosis that makes us all believe this box of probability is even remotely intelligent. And these errors matter: METR has found that AI programming tools actually slow developers down, because the tools make recurring, strange errors (hallucinations) that force developers to spend more time debugging than it would have taken to write the code themselves. If AI companies can’t get rid of these kinds of errors, their tools are useless, and their entire business is worthless.
This makes the findings of this paper utterly damning, as they demonstrate that hallucinations are a core element of generative AI technology and can’t be fixed or reduced from their current levels simply by adding more data and computing power to these models (which is OpenAI’s, and the entire AI industry’s, current strategy). This really isn’t that surprising. Generative AI is just a probability engine; it isn’t a thinking thing. As such, it will always have some probability of making mistakes. This is why these scientists also found that “reasoning models”, which break a query into a chain of intermediate steps in an attempt to produce more accurate answers, actually make hallucinations worse! By turning a single response into many steps, there are simply more opportunities for these errors to cock things up.
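To see why, here’s a minimal back-of-the-envelope sketch (my illustration, not something from the paper): assume each step in a reasoning chain has a small, fixed chance of hallucinating, say 2% (a made-up number). The chance that at least one error creeps in grows quickly with the number of steps.

```python
# Illustration only: how a small per-step hallucination rate compounds over a
# multi-step reasoning chain. The 2% figure is an assumption, not a measured rate.
per_step_error = 0.02  # hypothetical probability of a hallucination per step

for steps in (1, 5, 10, 20):
    # Probability that at least one step in the chain contains an error
    p_any_error = 1 - (1 - per_step_error) ** steps
    print(f"{steps:>2} steps -> {p_any_error:.1%} chance of at least one error")
```

Even with that tiny per-step error rate, a twenty-step chain ends up containing at least one mistake roughly a third of the time under this toy model.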
Those who have been paying attention to the AI world have known this for a while now. We have known about the efficient compute frontier, which explains how AI experiences seriously diminishing returns, for years (read more here).
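For a rough sense of what “diminishing returns” looks like, here’s a toy version of the standard scaling-law picture (the exponent and numbers are illustrative assumptions, not figures from OpenAI or the paper): error falls as a power law in compute, so every extra tenfold increase in compute buys a smaller improvement than the last.

```python
# Illustration only: the scaling-law picture behind the "efficient compute
# frontier", where error falls as a power law in compute.
# The exponent below is an assumption chosen for demonstration.
def error_at_compute(compute, alpha=0.05):
    """Toy power-law fit of the form error ~ compute**(-alpha)."""
    return compute ** -alpha

previous = None
for exponent in range(1, 7):          # compute growing tenfold each step
    err = error_at_compute(10 ** exponent)
    gain = "" if previous is None else f"  (improvement: {previous - err:.3f})"
    print(f"compute 10^{exponent}: error {err:.3f}{gain}")
    previous = err
```

Each tenfold jump in compute shaves off a little less error than the one before it, which is the diminishing-returns problem in a nutshell.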
Okay, so what has this got to do with Tesla’s FSD?