According to the likes of Elon Musk, AI is an existential threat to humanity. They claim these machines will keep getting exponentially better, become practically conscious, and eventually eclipse human intelligence to the point that we become just an easily removable annoyance to them. But could AI really go Skynet on us? Or is this just yet more bullshit from the AI hype train? Well, a new study has taken the time to test this hypothesis and found that it is completely false. So, how did they come to that conclusion? And how does this affect the AI industry?
Let’s start with why some people thought AI could do a Skynet.
All an AI program does is recognise patterns in data and then reproduce those patterns when prompted. It is driven purely by statistics and, as such, doesn’t actually “think” or have any conscious understanding of what it is doing. However, some predicted that this would change as AI was fed more data and started to develop “emergent properties”. Emergence is a complex topic that we don’t have time to go into today, but the idea behind this prediction was that once an AI model became big enough, its behaviour would change in sudden, unpredictable ways, enabling it to solve problems that weren’t in its training data. This would be akin to the AI switching from replicating patterns to actually understanding what is going on, and being able to operate accurately outside its initial parameters.
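To make that “pure statistics” point concrete, here is a deliberately tiny sketch in Python of the core idea: count which words tend to follow which in the training data, then sample from those counts when prompted. This is my own toy illustration, not code from any real model; actual LLMs use neural networks with billions of parameters rather than a lookup table, but the principle of reproducing statistical patterns rather than understanding them is the same.

```python
import random
from collections import defaultdict

# "Training": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# "Generation": repeatedly sample a statistically likely next word.
word = "the"
output = [word]
for _ in range(6):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug"
```

The output can look fluent, but nothing in the program knows what a cat or a mat is. Scale that trick up by many orders of magnitude and you have the essence of an LLM.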
If such an AI were sufficiently advanced and well-connected, it could enact many of the AI doomsday scenarios predicted by Musk and by sci-fi.
However, emergence is famously hard to predict; after all, it is a concept borrowed from chaos theory and the study of complex systems. No one knew whether AI would demonstrate emergent properties, or whether those properties would amount to this step towards more conscious understanding. In other words, these predictions were based more on a highly dubious “what if” scenario than on science.
However, many AI models, particularly LLMs like GPT-4, are now large enough to be tested for emergent properties. That is precisely what a recent study led by computer scientists Iryna Gurevych of the Technical University of Darmstadt in Germany and Harish Tayyar Madabushi of the University of Bath set out to do.
They took four LLMs with previously identified emergent properties and tested whether those properties were genuinely emergent. This involved running the models through a series of tasks and comparing their results against what their intended programming and training data could explain. They found that the models’ ability to follow instructions, their memorisation, and their linguistic proficiency, all of which were part of their initial programming and training, could account for these previously identified emergent properties. The AI programs were simply acting in complex ways within the boundaries of their code. In short, none of them demonstrated emergence.
This led the authors to conclude that AI is, in fact, far too limited by its programming to acquire new skills without instruction. It doesn’t matter how much data you feed into a model; it won’t gain any new ability to understand that data. As such, AI remains within human control, and those AI doomsday predictions simply aren’t possible.
When you think about it, this makes complete sense.
AI is vastly different to the intelligence displayed by organisms like ourselves, which is itself an emergent property. AIs require vast amounts of training data and multiple revisions to solve a problem a chimp can solve in seconds with no prior knowledge. Moreover, the chimp can learn from its mistakes on the fly and modify its problem-solving based on a framework of understanding that constantly changes with new results. Even the most advanced AIs couldn’t hope to achieve such self-aware flexibility.
I genuinely think the only reason people ever gave this theory of AI emergent properties any credence was its highly misleading name. Artificial Intelligence is a brilliant bit of branding, but not a good description of the actual technology. It isn’t intelligent. Instead, it is a statistical modelling system that bears no resemblance to any form of actual intelligence we know of. As such, it is light-years away from anything even akin to Skynet. If it were called something else, like node-based analysis and prediction, which more accurately describes what the technology does, I doubt this notion of emergence would ever have been taken seriously.
Now, this in no way means AI is 100% safe. It can still be wielded by humans with devastating results; just look at the AI misinformation Musk has been sharing on X/Twitter. But it heavily implies that the concept of rogue AI can be consigned to fiction, and that the hyperbolic ramblings of AI CEOs can be seen for what they are: PR stunts.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.
Sources: Science Alert, ACL Anthology, Will Lockett