Will Lockett's Newsletter

Is The AI Bubble About To Burst?

It isn't an easy question to answer.

Will Lockett
Sep 06, 2025
Photo by Jr Korpa on Unsplash

If you need any proof that the AI hype is getting a little out of control, just look at Musk. He has claimed that FSD will soon make car ownership obsolete, despite the fact that after a decade of development, it still drives like a myopic 12-year-old. He even recently claimed that Tesla’s AI robot Optimus, which so far has only shuffled around on completely flat surfaces like it has shat itself and been puppeteered like a ’90s Disney animatronic, will soon make up 80% of Tesla’s revenue. And, somehow, despite these brain-rotten, wildly unrealistic and idiotic claims, analysts and investors aren’t calling Musk a clown and are instead pumping money into his quack schemes. So it is no wonder the idea that the AI bubble is about to burst has been floating around the media over the past week. However, some key elements of this conversation have been overlooked.

For one, we know that AI is a severely limited technology, yet the industry is pretending it isn’t.

For example, we have known about the efficient compute frontier for years now. This boundary describes how the maths behind AI is hitting diminishing returns: generative models are roughly as good as they are going to get, even if the models are made exponentially larger (read more here). We can already see this with GPT-4 and GPT-5, which, despite OpenAI significantly increasing the model size and training time, have only delivered very minor improvements.
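To make the diminishing-returns point concrete, here is a minimal sketch of the kind of curve the efficient compute frontier describes. It is illustrative only: the power-law shape follows published neural scaling-law work, but the constants and the irreducible loss floor are assumptions chosen for demonstration, not figures from any real model.

```python
# Toy scaling-law sketch (illustrative constants, not real measurements).
# Loss falls as a power law in parameter count, plus an irreducible floor,
# so each extra order of magnitude of scale buys a smaller improvement.

def toy_loss(params: float, a: float = 1.7, alpha: float = 0.08, floor: float = 1.0) -> float:
    """Assumed form: a * params^(-alpha) + floor."""
    return a * params ** (-alpha) + floor

previous = None
for n in [1e9, 1e10, 1e11, 1e12, 1e13]:
    current = toy_loss(n)
    gain = "" if previous is None else f" (improvement: {previous - current:.3f})"
    print(f"{n:.0e} params -> loss {current:.3f}{gain}")
    previous = current
# Each tenfold increase in size shaves off less loss than the one before,
# and nothing closes the gap to the irreducible floor.
```

Under those assumptions, every tenfold jump in scale yields a smaller improvement than the last, which is the pattern the GPT-4-to-GPT-5 step appears to be showing.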

Then there is the Floridi Conjecture, which again stems from the maths that powers AI: a system can either have great scope but no certainty, or a constrained scope and great certainty. Crucially, the conjecture states that an AI absolutely can’t have both great scope and great certainty (read more here). This means that AI models treated as general-purpose intelligent systems, like LLMs or Tesla’s FSD, can never be reliable, as their scope is far, far too large. But in more constrained applications, where the system isn’t treated as intelligent, it can be made dependable and reliable.
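A toy way to see the scope-versus-certainty tradeoff is compounding error. This is not Floridi’s formal argument, just an illustration, and the 99% per-step accuracy is an assumed figure: even a system that is highly reliable on any single narrow step becomes unreliable once a broad task forces it to chain many such steps together.

```python
# Illustrative only: assumed 99% accuracy on each narrow sub-task.
# Broad scope means chaining many sub-tasks, so reliability compounds away.

per_step_accuracy = 0.99

for steps in [1, 10, 50, 100, 500]:
    end_to_end = per_step_accuracy ** steps
    print(f"{steps:>3} chained steps -> {end_to_end:.1%} chance of a fully correct result")
# A narrow, constrained task (a handful of steps) stays dependable;
# a broad, open-ended one (hundreds of steps) almost certainly contains an error.
```

That is the intuition behind why a constrained, “unintelligent” deployment can be made dependable while a general-purpose one cannot.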

This inability to be even remotely accurate in broad applications is borne out in the real world.

An MIT report found that 95% of AI pilots didn’t increase a company’s profit or productivity. For the 5% in which it did, the AI was relegated to back-room, highly constrained admin jobs, and even then, there were only marginal improvements.

A METR report found that AI coding tools actually slow developers down. These models’ inaccuracy means they repeatedly introduce bizarre bugs that are arduous to find and correct. As such, it is quicker and cheaper to have a developer write the code themselves.

Research has even found that for 77% of workers, AI has increased their workload and not their productivity.

Then there is the issue of treating AI as intelligent, even in slightly constrained tasks. Take Amazon, which used AI to power its checkout-less stores before switching to remote workers, because the AI was getting things wrong so frequently, and costing so much money, that it was unsustainable (read more here). Even in very constrained tasks like this, AI’s constant errors cost more than the savings it delivers.

The real-world data and our understanding of this technology paint a very clear picture: AI is a severely limited technology that cannot improve much beyond its current form and has a positive impact in only a few niche and mostly boring applications.
