About six months ago, I posted an article called “AI Is Hitting A Hard Ceiling It Can’t Pass,” in which I predicted the coming stagnation of AI, specifically generative AI. Multiple studies and AI experts had pointed out that for these models to keep improving at the same rate, the amount of training data, computing power, and energy they consume would need to grow exponentially. But that simply isn’t possible, and even back in April, OpenAI and other generative AI companies seemed to be butting up against hard limits in all three areas. Recent reporting, interviews, and studies have now confirmed what I and many others predicted. What this means for the AI industry, and for the economy at large, could be horrific.
These reports come from The Information and Reuters. The Information published an article detailing how OpenAI’s next text model, codenamed Orion, is only marginally better than the current GPT-4o despite using a far larger training dataset. The publication stated that “some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks” and that “Orion performs better at language tasks but may not outperform previous models at tasks such as coding.”
According to The Information, Orion hit GPT-4 levels of capability after being trained on just 20% of its training data, but barely improved after that. Because training techniques haven’t changed dramatically in recent years, the amount of data needed to reach GPT-4-level performance is presumably comparable to GPT-4’s own training set, which would make Orion’s full dataset roughly five times larger than GPT-4’s. Yet the model is not noticeably better. It is a textbook case of diminishing returns.
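To see why capability flattens out like this, here is a minimal sketch assuming a Chinchilla-style power law relating loss to dataset size. Every number in it, including the 50-trillion-token dataset and the power-law constants, is an illustrative guess, not OpenAI’s actual data.

```python
# Toy illustration of diminishing returns from data scaling.
# Assumes a Chinchilla-style power law: loss(D) = E + A / D**beta.
# All constants below are illustrative guesses, not OpenAI's real numbers.

def loss(tokens_trillions, E=1.7, A=0.5, beta=0.3):
    """Irreducible loss E plus a data-dependent term that shrinks slowly."""
    return E + A / tokens_trillions**beta

full_dataset = 50.0  # hypothetical Orion-scale dataset, in trillions of tokens

for fraction in (0.2, 0.4, 0.6, 0.8, 1.0):
    d = fraction * full_dataset
    print(f"{fraction:>4.0%} of data -> loss {loss(d):.4f}")
```

In this toy model, quintupling the data from the 20% mark shaves only about 5% off the loss, which is exactly the kind of flattening the Orion reports describe.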
To hammer this problem home even more, Reuters interviewed the recently departed OpenAI co-founder Ilya Sutskever. In the interview, Sutskever claimed that recent tests aimed at scaling up models suggest those efforts have plateaued. As far as he is concerned, AI cannot get better just by being fed more data.
Recent studies also back Sutskever up and explain why Orion is ultimately a bit pants. One of them found that as AI models are fed more data and grow larger, they don’t get broadly better; they get better at specific tasks at the cost of their broader competence. You can see this in OpenAI’s o1 model, which is larger than GPT-4o and better at solving maths problems, but worse at writing. You can also see it in Tesla’s FSD: as the software got better at handling complex traffic situations, it reportedly started to lose basic driving skills and began clipping curbs when cornering.
Yet another study found that, at the current rate, generative AI companies like OpenAI will run out of high-quality fresh data to train their models on by 2026! As such, making AIs better simply by making the models larger won’t be a viable option for much longer. Indeed, some have suggested that the reason Orion is struggling is that OpenAI can’t collect enough data to make it meaningfully better than GPT-4o.
Either way, the prediction that generative AI would suddenly stagnate has come true.
There are possible solutions, such as optimising how AIs are built to reduce the training data they need, running multiple AIs together, or adopting new computing architectures to make AI infrastructure far more efficient. However, all of these are in their infancy and years away from being deployable. What’s more, they only kick the problem down the road: all they do is make AI marginally more efficient with its energy and data, and they still don’t answer the question of where these companies will find fresh, high-quality data in the future.
So, why does this matter?
Well, big tech has poured billions of dollars into AI on the promise that it will get exponentially better and be wildly profitable in the future. Sadly though, we now know that simply won’t happen.
Take OpenAI. A few months ago, it was projected to post a $5 billion annual loss and potentially face bankruptcy. Even worse, the AIs it has released aren’t profitable despite hundreds of millions of users, so even if OpenAI didn’t spend a penny developing new models, it would still sink. Despite this, it was able to raise several billion dollars in credit and a further $6.6 billion in new funding, giving the company a whopping $157 billion valuation and saving it from implosion. However, at its current rate of losses and development costs, that is only enough to stave off insolvency for about another year.
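As a rough sanity check on that one-year figure, here is a back-of-the-envelope sketch. The $6.6 billion round and the $5 billion annual loss are the reported numbers; the $4 billion credit figure and the flat burn rate are my simplifying assumptions, since development costs are actually expected to keep climbing.

```python
# Back-of-the-envelope runway estimate for OpenAI.
# Reported figures: the $6.6B funding round and the ~$5B projected annual loss.
# Assumptions: ~$4B for the "several billion" in credit, and a flat burn rate,
# even though development costs are widely expected to rise.

new_funding_bn = 6.6   # new equity funding, in billions of dollars
credit_line_bn = 4.0   # assumed size of the reported credit facility
annual_loss_bn = 5.0   # projected annual loss

runway_years = (new_funding_bn + credit_line_bn) / annual_loss_bn
print(f"Runway at a flat burn: ~{runway_years:.1f} years")
# ~2.1 years at a flat burn; with losses growing, closer to a single year.
```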
This, combined with the now-confirmed, drastically diminishing returns, means that some of the biggest and most influential industries and investment firms in the world are backing a fundamentally broken product. The last time our economy did this, it created one of the worst financial disasters in living memory: the credit crunch of 2008.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.
Sources: Will Lockett, Futurism, Reuters, Futurism, TelecomTV, AI News, The Information, Will Lockett, Will Lockett, Tech Radar