If you need any proof that the AI hype is getting a little out of control, just look at Elon Musk. He has claimed that Tesla's Full Self-Driving (FSD) will soon make car ownership obsolete, despite the fact that after a decade of development, it still drives like a myopic 12-year-old. He even recently claimed that Tesla's AI robot Optimus, which so far has only shuffled around on completely flat surfaces like it has shat itself, puppeteered like a '90s Disney animatronic, will soon make up 80% of Tesla's revenue. And somehow, despite these brain-rotten, wildly unrealistic claims, analysts and investors aren't calling Musk a clown; they are pumping money into his quack schemes. So it is no wonder the idea that the AI bubble is about to burst has been floating around the media over the past week. However, some key elements of this conversation have been overlooked.
For one, we know that AI is a severely limited technology, yet the industry is pretending like it isn’t.
For example, we have known about the efficient compute frontier for years now. This boundary suggests that the maths behind AI is hitting diminishing returns, and that generative models are close to as good as they will get, even if they are made exponentially larger (read more here). We can already see this with GPT-4 and GPT-5: despite OpenAI significantly increasing model size and training time, GPT-5 delivered only minor improvements.
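To make that flattening concrete, here is a toy sketch of a power-law scaling curve of the general shape reported in scaling-law studies. The constants, and the loss function itself, are invented purely for illustration; they are not fitted to any real model:

```python
# Toy illustration of diminishing returns on a power-law scaling curve.
# The shape, loss(C) = floor + k * C**(-alpha), follows the general form
# reported in scaling-law studies; the constants below are invented for
# illustration only and are not fitted to any real model.

def loss(compute: float, floor: float = 1.7, k: float = 10.0, alpha: float = 0.05) -> float:
    """Hypothetical test loss as a function of training compute."""
    return floor + k * compute ** (-alpha)

previous = None
for c in [1e21, 1e22, 1e23, 1e24, 1e25]:
    current = loss(c)
    gain = f", gain over last 10x: {previous - current:.3f}" if previous else ""
    print(f"compute {c:.0e} FLOPs: loss {current:.3f}{gain}")
    previous = current

# Each 10x increase in compute buys a smaller loss reduction than the
# last, as the curve flattens toward its irreducible floor. That is the
# efficient compute frontier argument in miniature: exponentially bigger
# models deliver only marginal improvements.
```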
Then there is the Floridi Conjecture, which argues that the maths powering AI forces a trade-off: a system can have great scope with little certainty, or a constrained scope with great certainty, but never both (read more here). This means that AI models treated as general-purpose intelligent systems, like LLMs or Tesla's FSD, can never be reliable, as their scope is far too large. In more constrained applications, where the system isn't treated as intelligent, it can be made dependable.
This inability to be even remotely accurate at broad scope is borne out in real-world deployments.
An MIT report found that 95% of AI pilots didn't increase a company's profit or productivity. In the 5% where they did, the AI was relegated to back-room, highly constrained admin jobs, and even then the improvements were marginal.
A METR report found that AI coding tools actually slow developers down. The models' inaccuracy means they repeatedly introduce bizarre bugs that are arduous to find and correct, so it is quicker and cheaper to have a developer write the code themselves.
Research has even found that for 77% of workers, AI has increased their workload and not their productivity.
Then there is the issue of treating AI as intelligent even in slightly constrained tasks. Take Amazon, which used AI to power its checkout-less stores before switching to remote human workers, because the AI got things wrong so frequently, and at such cost, that the system was unsustainable (read more here). Even in fairly constrained tasks like this, AI's constant errors cost more than the savings it delivers.
The real-world data and our understanding of this technology paint a very clear picture: AI is a severely limited technology that cannot improve much past its current form, and that has a positive impact only in a few niche and mostly boring applications.
Contrast that with the claims of AI giants like OpenAI, which promise that AI superintelligence is just around the corner and that AI will disrupt the entire economy by displacing hundreds of millions of jobs. This is the promise that has lured hundreds of billions of investor dollars towards generative AI start-ups. But it could not be further from the truth.
This is a substantial issue, as generative AI companies are so far from profitability that it is painful.
Take OpenAI. Despite having the largest revenue of any AI company by a country mile, they still lose a dramatic amount of money on every one of their $200-a-month plans. Their models are simply too expensive to build and run. In fact, even if they built much more efficient models, like those from DeepSeek, they would likely still lose money at that price point. Unsurprisingly, analysts have found that OpenAI is set to post a loss of over $14 billion in 2026, larger than the losses posted by many of the banks that folded in 2008. OpenAI's own numbers suggest that by 2029, they will be posting losses in the hundreds of billions of dollars (read more here).
Even data centres, the infrastructure behind AI, are wildly unprofitable. Praetorian Capital CIO Harris Kupperman recently calculated that the AI data centres being built today will suffer $40 billion of annual depreciation while generating somewhere between $15 billion and $20 billion of revenue. In other words, the infrastructure itself is a money pit. And that is before the additional costs, from energy and water to data acquisition, data preparation and AI testing. Kupperman found that once those costs are included, generative AI companies need to increase their revenue roughly tenfold just to break even.
That $200-a-month OpenAI plan? OpenAI would need to sell it for $2,000 a month to break even. That is backed up by a leaked OpenAI memo from about a year ago suggesting the company was considering raising prices to exactly that level (read more here).
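To make the scale of the gap concrete, here is a minimal back-of-envelope sketch of that arithmetic, using only the figures cited above. The numbers are Kupperman's reported estimates, not audited financials:

```python
# Back-of-envelope version of Kupperman's break-even arithmetic, using
# only the figures cited in the text. These are his reported estimates,
# not official financials; the plan price is OpenAI's public $200 tier.

annual_depreciation_bn = 40.0      # estimated yearly depreciation on new AI data centres ($bn)
annual_revenue_bn = (15.0, 20.0)   # estimated yearly AI revenue range ($bn)

# Depreciation alone exceeds revenue by $20-25bn a year:
shortfall_bn = tuple(annual_depreciation_bn - r for r in annual_revenue_bn)
print(f"Depreciation shortfall: ${shortfall_bn[1]:.0f}-{shortfall_bn[0]:.0f}bn/year")

# Adding energy, water, data and testing costs, Kupperman's estimate is
# that revenue must rise roughly tenfold to break even:
breakeven_multiplier = 10
plan_price = 200                   # $/month
print(f"Break-even plan price: ${plan_price * breakeven_multiplier:,}/month")  # -> $2,000/month
```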
So, generative AI isn’t going to get much better than it currently is, is only useful in a select few applications, and has no viable route to profitability. Does anyone fancy investing?
With Meta's dramatic restructuring of its AI division and the monumental disappointment of GPT-5, and as insights like those above come to light, the market is beginning to realise this and is slowly getting cold feet.
This is why talk of the AI bubble bursting has been everywhere.
Particularly because it bears a striking resemblance to the dot-com bubble of the '90s. Web 1.0 kickstarted the internet revolution, and investors jumped on the opportunity. Any business that operated on the internet had huge amounts of venture capital poured into it, even if it was functionally useless and had no route to profitability. Over the course of just a few years, the value of internet companies soared far beyond reality, creating a bubble. The AI bubble is exactly the same.
Where it differs is where the money comes from.
In 2000, the US economy was at risk of overheating and suffering from rampant inflation, so interest rates were preventatively hiked, making debt expensive. It was mostly US investors who grew the dot-com bubble, and many of them had used debt to buy into these unprofitable companies. Once rates rose, they suddenly needed to see profits to service that debt, and when it became apparent those profits weren't coming, they sold. The widespread sell-off popped the bubble, crushing most of the internet companies and dragging down the entire economy.
But you’ll notice we are also experiencing inflation and spiking interest rates. This AI bubble should have popped already. Why hasn’t it?
I think it is partly because AI's value is less tangible than that of internet businesses. The attraction of AI is the myth of what it could one day become, whereas internet businesses were valued on the concrete convenience they offered customers. Pure speculation is harder to pop, so the bubble can grow larger.
But there is also the fact that the dot-com bubble grew with American money, whereas the AI bubble is funded substantially by oil money. Big AI companies like OpenAI and big venture funds like SoftBank's Vision Fund have taken enormous sums from Gulf sovereign wealth funds, most prominently Saudi Arabia's Public Investment Fund, which recycles the country's oil profits to the tune of hundreds of billions of dollars. These companies then invest in each other's AI pushes in a boardroom circle jerk that inflates their valuations; Microsoft and SoftBank have both invested in OpenAI, for example. This means that interest rates don't affect the flow of money into AI the way they did during the dot-com bubble.
As a side note, this is why AI’s horrific energy usage isn’t a bug but a feature for the industry. The investors are using it to scupper net zero and keep the world hooked on oil by dramatically increasing our energy needs.
This means the AI bubble will likely grow larger before it pops.
Indeed, some analysts have found that if AI demand holds steady, the pop can be staved off. Likewise, many Western investors know that a huge sell-off would leave them short-changed, particularly as, unlike in the dot-com bubble, they are not the majority of investors this time.
However, there is also a significant chance that the glitz of the AI hype train will wear off and demand will begin to decline as more and more AI pilots fail, new research shows AI tools to be counterproductive, and prices for AI inevitably rise. Even if Gulf oil money continues to flood the space, that decline would be enough to trigger a sell-off, popping the bubble.
Really, the question isn’t if the bubble will pop, but when.
And the issue is, the longer it takes to pop, the larger the bubble will become, and the more our financial institutions, pensions and economy will become intertwined with it, making it even more damaging when it does eventually pop.
So, is the AI bubble about to pop? I fucking hope so!
Thanks for reading! Don’t forget to check out my YouTube channel for more from me, or Subscribe. Oh, and don’t forget to hit the share button below to get the word out!
Sources: The Independent, BI, Futurism, Will Lockett, Will Lockett, WSJ, Sky, Futurism, Motley Fool, METR, The Economist, CSIRO, Investopedia, Reuters, Will Lockett, Unleash, Will Lockett, Will Lockett