
Whispers from the internet’s darkest corners have now spread to the broader media. Internet sleuths digging through teasers from OpenAI believe that the AI giant has finally unlocked Artificial General Intelligence (AGI); that is, an AI with cognitive abilities equal to or greater than a human’s. If true, the consequences would be staggering. OpenAI could render any job obsolete; mass lay-offs would follow at a rapid pace, our economy would tank, and the fundamental pillars of our society would be ripped out from under us. It wouldn’t be the end of the world, but it would be the end of the world as we know it. So, are these rumours true?
Well, no. Absolutely not. And to think otherwise would be an act of willful ignorance.
There are several significant reasons why AGI currently remains firmly in the realm of sci-fi.
Firstly, the AI technology we currently have simply can’t compute or operate at that level. In fact, Iris van Rooij, a Professor of Computational Cognitive Science at Radboud University, recently published a paper detailing this issue. She and her co-authors found that even with effectively unlimited computing power and data, AI engineers could never produce an AGI. This is because the fundamental structures underlying modern AI, such as neural networks (a terrible and misleading name) and transformers (the “T” in “ChatGPT”), are just statistical models that replicate patterns in their training data. No matter how much we scale them up, they won’t magically gain the ability to think, understand the world, or create novel thought.
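To make that point concrete, here is a toy sketch of what a neural network actually does under the hood: matrix multiplications chained with simple nonlinearities. All the weights here are made up and purely illustrative; no real model is remotely this small.

```python
# A toy "neural network": two layers of pure arithmetic.
import numpy as np

def layer(x, weights, bias):
    # One "neural" layer: a linear transform followed by a ReLU cut-off.
    # Pure number-crunching; no understanding anywhere in sight.
    return np.maximum(0, x @ weights + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                    # a tiny made-up "input"
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # made-up weights
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

hidden = layer(x, w1, b1)   # stack enough of these and you get a "deep" network
output = hidden @ w2 + b2
print(output)               # numbers in, numbers out: statistics, not cognition
```

Scale this up to billions of weights and train it on half the internet, and you get a GPT-style model: astonishingly good at replicating patterns, but still the same arithmetic.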
This is why many serious AI scientists are saying we need a totally new approach, with innovative technologies light-years ahead of what we currently have, to create anything close to a functional AGI.
And even if this weren’t the case, and we could achieve AGI with current technology, it would make zero sense to actually build it.
You see, AI has this thing called the “Efficient Compute Frontier.” This is a very fancy way of saying that AI training has diminishing returns. I have covered this topic several times before, so if you want a more detailed description of this issue, go here. But in short, AI has to be “trained” on data (a word that anthropomorphises a program processing said data) to “learn” (again, a word that anthropomorphises replicating patterns in said data). So, if you want to improve your AI, you need to train it on more data, which requires more computing power. This is why OpenAI and the other generative AI giants are spending hundreds of billions of dollars building massive data centres: it gives them the colossal amount of computing power needed to train their AI.
However, the relationship between the amount of training data and performance isn’t linear. There are diminishing returns. So, if you double the training data of a small AI, you might see a performance increase of 5%, but if you then double the training data again, you might only see a performance increase of 1%, and so on.
This wouldn’t be a problem if training AI weren’t incredibly expensive in terms of money, energy, and infrastructure. As it is, the cost of each marginal improvement grows exponentially. We are already seeing this limit in action: OpenAI’s latest models are barely an improvement on their previous ones, despite significantly larger training datasets and far more money spent on them.
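To put rough numbers on that, here is a back-of-the-envelope sketch. The exponent and cost figures below are made-up illustrative values, not measurements of any real model, but published scaling-law research reports the same qualitative shape: error falls as a power of compute, so every doubling buys a smaller improvement while costing twice as much.

```python
# Hypothetical scaling-law sketch: error falls as compute**(-ALPHA), so each
# doubling of compute yields a smaller absolute improvement at double the cost.
# ALPHA and COST_PER_UNIT are invented values for illustration only.
ALPHA = 0.3
COST_PER_UNIT = 1.0

def error(compute):
    return compute ** -ALPHA  # power-law error curve

previous = error(1)
for doubling in range(1, 6):
    compute = 2 ** doubling
    improvement = previous - error(compute)
    previous = error(compute)
    print(f"doubling #{doubling}: cost {compute * COST_PER_UNIT:6.1f}, "
          f"error reduced by {improvement:.3f}")
```

Run it and the trend is obvious: the improvement column shrinks with every row while the cost column doubles, which is exactly why each new frontier model costs wildly more for ever smaller gains.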
As a result, even if our current technology could feasibly produce an AGI (which, thanks to the likes of Professor van Rooij, we know it can’t), creating one would be prohibitively expensive, if not flat-out impossible, with the resources humanity has.
The science is crystal clear: we are still a very long way from developing an AGI.
So, it was no surprise when Sam Altman, the CEO of OpenAI, had to come out and quash these AGI rumours. He posted on Musk’s horrifically crap platform X, stating, “We are not gonna deploy AGI next month, nor have we built it.” He then added that fans should cut their expectations by “100x.”
It’s telling that the AI world is obsessed with hitting these far-off, hypothetical, fantastical milestones rather than fixing the issues with current AI. Current generative AI is actually capable of doing a lot! But it isn’t creating any real gains in productivity. Why? Well, it is simply not consistent enough, so a trained professional has to supervise it, which costs just as much as hiring that professional to do the job in the first place (read more here). This problem has become so prevalent that the banks and tech giants that have pumped billions of dollars into AI are now deeply worried about their investments.
If the AI world wants to be taken seriously, this is the discussion it needs to be having, not one about AGI sci-fi nonsense. But real-world problem-solving isn’t what pays the bills in the AI world. It all runs on hype and market manipulation designed to inflate the apparent value of an AI company, even though the actual results are meagre in comparison (read more).
So, be careful out there, as we should expect far more of this exaggerated AI bullshit in the future, especially as the President of the USA, his “Roman Saluting” best bud, and their billionaire fanboys all made their vast wealth through similar tactics, and plan to do it all over again with this fresh AI nonsense.
Yeah, the next four years will be… interesting.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.
Sources: Fortune, The Times, Yogesh Pusarla, Radboud University, and five previous Will Lockett articles