I used to think Mr. “We will have self-driving cars next year” Elon Musk was the king of bullshit. His laughably false, hyperbolic, and manipulative rhetoric has garnered him a place in history next to the inventor of snake oil. But OpenAI CEO Sam Altman seems to be vying for Musk’s crown. Amidst a torrent of damning reports of AI hitting a development wall, studies uncovering horrific flaws in AI, financial woes, terrified investors, disappointing new products, and mounting lawsuits, Altman is under immense pressure. But you wouldn’t know it to hear him. He recently stated that AI “superintelligence” could be only “a few thousand days away” and that all it would take is “$7 trillion.” He even tweeted that “there is no wall” to building more advanced AI models. These claims are, at best, copium, and at worst, manipulative lies designed to keep the gravy train going, enabling his engorged startup to soak up even more investor funds on false promises. Let me explain why.
There is so much evidence piled up against Altman’s batshit claims that someone could, and probably will, write a Bible-sized book about it. Obviously, I can’t cover every point in detail in this article, so instead let’s summarise the prevalent issues facing AI, specifically generative AI of the kind OpenAI makes.
Let’s start with the AI development slowdown.
Many scientists and experts have been predicting AI’s diminishing returns for years now. Why? Well, when an AI is trained on data, it finds statistical trends in that data and uses them to create its output. When the dataset is relatively small, each batch of new data lets the AI find plenty of new trends, because a pattern only has to show up across a handful of data points to register. But once the dataset is vast, new data struggles to move the needle, as any new trend has to hold across the new data and the enormous existing dataset. Naturally, as an AI gets larger, it takes exponentially more training data to produce the same increase in accuracy and performance it saw when it was smaller.
For a while, this was just speculation. Some computer scientists even argued that AI would have emergent properties that would counteract this. But we now know that simply isn’t true. New studies, as well as new models from OpenAI and its competitors, have shown that AI is beginning to hit a point of seriously diminishing returns. This means that for AI development to continue at the rate we have seen over the past few years, it will take exponentially more data, computing power, energy, and expense.
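To get a feel for how brutal this curve is, here is a tiny sketch. It assumes a simple power-law relationship between training data and model loss, loosely in the spirit of published neural scaling laws; the constants are invented for illustration and describe no real model.

```python
# Toy illustration of diminishing returns under a power-law scaling
# curve, loosely in the spirit of published neural scaling laws.
# The constants c and alpha are invented and describe no real model.

def loss(tokens: float, c: float = 10.0, alpha: float = 0.05) -> float:
    """Hypothetical test loss as a function of training tokens."""
    return c * tokens ** -alpha

dataset = 1e9  # start from a made-up baseline of a billion tokens
for _ in range(5):
    bigger = dataset * 10  # grow the training set tenfold
    gain = loss(dataset) - loss(bigger)
    print(f"{dataset:.0e} -> {bigger:.0e} tokens: loss falls by {gain:.3f}")
    dataset = bigger
```

Every tenfold increase in training data buys a smaller absolute improvement than the one before it, which is exactly the wall these studies describe.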
There are now numerous studies showing these diminishing returns in the wild, and a great practical example is OpenAI’s upcoming Orion model. Its training dataset was likely five times larger than that of its predecessor, GPT-4o, yet it showed only marginal improvements in comparison, and even then only in specific situations.
Now, there are theoretically ways to “solve” this problem, like more efficient computers that make larger models cheaper to build, or optimised AI architectures that squeeze slightly more out of their training data. However, these only patch the issue in the short term and likely won’t come anywhere close to sustaining AI’s previous pace of development.
So straight away, Altman saying there is no wall and that AI superintelligence is just a few years down the road is a load of equine excrement.
But the issues with AI go so much deeper than this.
New studies have shown that as AI models get larger, they don’t get better in a broad, general-application sense; they get better at specific tasks. This was seen with OpenAI’s “Strawberry” model, which is better at solving maths problems than its smaller predecessor, GPT-4o, but worse at general language skills. In other words, scale alone can’t deliver the all-knowing, general-purpose AI chatbots that OpenAI, Google, and the like are raising billions upon billions of dollars to build. Even if the diminishing returns issue were completely solved, these companies still couldn’t create the kinds of AI their tens of billions of dollars of investment were raised to fund. Their current valuations are a flagrant house of cards, but more on that in a second.
Even if they somehow resolve this issue too, AI companies still couldn’t build their larger next-gen AIs, because they are running out of data. Multiple studies have shown that AI companies scrape the vast majority of their training data from online content platforms. However, these same platforms are now being flooded with subpar, low-effort content generated by those very AIs. And you can’t train an AI on AI-generated content: the model picks up statistical quirks that humans don’t produce and gives them more and more weight, until, after a few generations of training, it is so destabilised that it produces utter nonsense. This is known as model collapse. New studies have shown that AI companies will run out of fresh, high-quality, human-generated content very soon; one even found this could happen as early as 2026! This alone puts a hard limit on the rate of AI development.
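A crude way to see why collapse happens: one well-documented failure mode is that models under-represent rare outputs, so each generation trained on the previous generation’s output loses a little more of the original data’s variety. The sketch below mimics this with a Gaussian “model” refitted to its own clipped samples; the clipping rule and all numbers are invented purely for illustration.

```python
# Toy simulation of model collapse: a Gaussian "model" is repeatedly
# refitted to its own generated samples. The clipping step mimics a
# model under-representing rare outputs; all numbers are invented.

import random
import statistics

random.seed(42)

mu, sigma = 0.0, 1.0  # generation 0: the original "human" data
for generation in range(1, 11):
    # "Generate content" from the current model, losing the tails...
    samples = [random.gauss(mu, sigma) for _ in range(500)]
    samples = [x for x in samples if abs(x - mu) <= 2 * sigma]
    # ...then "train" the next generation on that generated content.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    print(f"generation {generation:2d}: spread = {sigma:.3f}")
```

Within about ten generations, the fitted spread falls to roughly a quarter of the original: the “model” has forgotten most of the variety in the data it started from.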
And that is all assuming these AI companies even get to keep their training data. The lion’s share of it is copyrighted, and the copyright holders are pissed that AIs are out there being permitted to regurgitate pale imitations of their work. Understandably, there is now a tsunami of lawsuits seeking to stop AI companies from using copyrighted work. Copyright lawsuits might be infamously hard to win, but the evidence is damning, concern about copyright misuse is mounting, and the suits attack from several angles. So it is at least feasible that the law will eventually side with the copyright holders, forcing AI companies to remove huge swathes of their training data and leaving their models crippled.
Not to mention, on top of all of this, generative AI isn’t even profitable!
OpenAI, which has by far the most paying users of any generative AI company, was on course to post a $5 billion loss by the end of the year. The loss is so great that even if OpenAI hadn’t been spending heaps of cash building new AIs, it still would have been in the red! That’s right: even ChatGPT-4o on its own isn’t profitable. The only reason OpenAI isn’t bankrupt is that Microsoft and other investors pumped $6.6 billion into it, and it has secured billions more in credit. Yet even that huge sum might only stave off bankruptcy for another year at most. Why? Well, OpenAI’s investors didn’t invest on the strength of how good its AIs are now, but on how amazing they were promised to be in a few years, so the pressure is bearing down on OpenAI to release better and better AIs. And, as we have just seen, AI is only getting more expensive to develop and deploy.
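For a rough sense of the timeline, here is the back-of-the-envelope arithmetic, using only the figures quoted above and the generous simplifying assumption that the burn rate stays flat:

```python
# Back-of-the-envelope runway, using only the figures quoted above
# and the (generous) assumption that losses stay flat.

annual_loss = 5.0    # billions of dollars lost per year
fresh_capital = 6.6  # billions of dollars raised

runway_years = fresh_capital / annual_loss
print(f"Runway at the current burn rate: ~{runway_years:.1f} years")
```

That’s about sixteen months of runway on paper, and since development costs are climbing, “another year at most” is, if anything, optimistic.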
It’s no wonder that many Microsoft investors are getting worried that their chosen tech giant is backing such a painfully flawed and unprofitable technology and company.
Sam Altman has been backed into a corner where he has to spout weapons-grade copium every five minutes. If you take even a second to look at the reality of AI and the position OpenAI is in, it becomes glaringly obvious that it is a very expensive road to nowhere. Yet he has somehow managed to wrangle tens of billions of dollars of backing from some of the largest companies and investors in the world. He has to keep the AI hype bullshit going to keep the gravy train filling his pockets, because the second it stops, he and OpenAI sink.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.