AI is revolutionising the world, for better or worse. But what is worrying is that many people don’t actually know what AI is. To them, it is an immensely intelligent black box that can seemingly do magic. There are even AI researchers who have claimed that AI is cognitive and sentient. You might think this is harmless, but it isn’t. In the same way that you need a basic understanding of how a car works to drive it safely and not push it past its limits, we need a rough grasp of what AI actually is to be able to use it without it backfiring. So, what even is AI?
Several different types of algorithm are classified as AI, such as fuzzy logic or machine learning, but they all work in conceptually the same way. These algorithms are fed a huge amount of data, and statistical models are used to find trends within that data. One of the most common models is a neural network, in which interlinked virtual nodes react to the data by changing the strength of their connections in a similar, but by no means identical, way to our brains. This is known as “training.” Once trained, these statistical models can then make predictions about similar data.
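If you want a feel for what that “training” actually looks like, here is a minimal sketch, using nothing but NumPy and a toy made-up pattern, of a tiny network nudging its connection strengths until its guesses fit the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: four inputs and the pattern (XOR) we want the network to pick up.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of virtual "nodes" joined by weights: the connection strengths training will adjust.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    # Forward pass: push the data through the network and get its current guesses.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every connection strength to shrink the prediction error.
    grad_out = output - y
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_hidden
    b1 -= lr * grad_hidden.sum(axis=0, keepdims=True)

print(output.round(2))  # typically ends up close to [0, 1, 1, 0]: the pattern has been "learnt"
```

Real systems involve billions of these adjustable connections rather than a few dozen, but the basic loop of guess, measure the error, adjust the connections is the same.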
AI has many different applications, such as image recognition, image generation, text generation, audio generation, data analysis, search queries, self-driving cars and more. Traditional programming could never have handled these tasks, but AI can do them remarkably well if used correctly.
You see, all AI is, is a statistical model that can identify patterns and extrapolate. Its name is a complete misnomer. It isn’t artificial, and it isn’t intelligent. The program doesn’t “understand” what it is doing. This is fine if you treat it as a statistical model. But as soon as you treat these programs as if they have a modicum of intelligence, they fall apart.
Take self-driving cars. The AI behind them doesn’t actually understand the rules of the road. The visual recognition AI doesn’t know what a road, car, cyclist or pedestrian is. All it has is trends in its training data. So if it encounters something outside that data (a road layout it wasn’t trained on, a pedestrian in unfamiliar clothing, or an unexpected obstacle like a flock of pigeons on the road), it struggles. Its training data doesn’t match the situation, so it can’t find trends that fit, leading to wildly incorrect identification and unpredictable, dangerous driving. This is why we still see Teslas driving themselves into rivers, turning into cyclists, or failing to stop at clearly marked junctions.
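To make that concrete, here is a deliberately crude sketch using scikit-learn, with invented numbers standing in for real sensor features: a classifier trained on two familiar categories is shown something nothing like its training data, and it still confidently files it under one of them:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Training data: two clusters the model will see in normal operation, standing in
# (purely for illustration) for "cyclist" (label 0) and "pedestrian" (label 1) features.
X_train = np.vstack([rng.normal(0.0, 0.5, size=(100, 2)),
                     rng.normal(3.0, 0.5, size=(100, 2))])
y_train = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X_train, y_train)

# Now show it something nothing like its training data (our "flock of pigeons").
novel_input = np.array([[40.0, -25.0]])
print(model.predict(novel_input))        # it is forced to pick one of the classes it knows...
print(model.predict_proba(novel_input))  # ...and does so with near-total confidence
```

The model has no way of saying “I don’t know what this is”; it can only map whatever it sees onto the trends it already has.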
Tesla’s self-driving AI has these issues because its application is too broad. No checks are in place to ensure that the training data is appropriate to the situation in which the AI is used. Other self-driving companies like Waymo and Verne have figured this out and, as such, geo-fence their robotaxis to roads on which they are well-trained.
In short, AI can only be used reliably in constrained situations where it is well-trained. However, AI also has some other worrying limitations.
For example, you can’t reverse engineer how an AI came to its conclusions. Even in simple AI, the neural networks are incredibly complex and convoluted. As such, even the engineers who built an AI couldn’t tell you why it spat out X rather than Y. This is why figuring out why AIs go wrong, even in constrained applications, can be incredibly hard. Take the numerous image recognition AIs used to find hard-to-detect cancers in scans at rates better than humans: they still get it wrong sometimes, and we don’t know why. This makes it nearly impossible to hold AI to account.
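To see why, here is a tiny illustration (scikit-learn again, on a made-up toy problem): even for a network this small, the only “reasoning” behind a prediction is a pile of learnt numbers, and nothing in them reads like an explanation:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

# Two made-up clusters of points, standing in for two categories of scan.
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
               rng.normal(5.0, 1.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

print(net.predict([[5.2, 4.8]]))  # a prediction...
for layer in net.coefs_:          # ...and the only "explanation" on offer: raw connection weights
    print(layer.round(2))
```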
This is linked to another issue with AI: hallucinations. Just like humans, AI can find trends that don’t exist, even in huge datasets. However, while humans can self-correct these falsehoods, AI can’t. One of the clearest examples of AI hallucinations is ChatGPT’s false statements. GPT-4, despite being trained on a huge amount of data and being used in a very constrained way, still produces work containing false claims, such as stating that Elon Musk is dead. Again, OpenAI has no way of figuring out why ChatGPT does this, so it is tough to correct, which is why the flaw is still there.
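You can see a crude analogue of this with a few lines of NumPy: fit an over-flexible model to pure random noise and it will still happily report a pattern and extrapolate from it. The numbers below are meaningless by construction:

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.linspace(0, 1, 20)
y = rng.normal(size=20)           # pure noise: there is no real relationship here at all

coeffs = np.polyfit(x, y, deg=6)  # an over-flexible model "finds" a pattern anyway
print(np.polyval(coeffs, 1.5))    # ...and confidently extrapolates a meaningless value
```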
This is why even advanced AI shouldn’t be trusted to do even simple tasks unsupervised or without checks and balances in place to mitigate these limitations.
Compare this to human intelligence. We are cognitive and self-aware. We don’t just find trends in data; we build up frameworks and understanding to solve problems. As such, we can learn much faster than AI and cope far better with novel problems by applying relevant frameworks. For example, I have never been to the US, but I could almost certainly drive safely on its roads, as I have a cognitive understanding of the rules of the road from over a decade of driving in the UK. What’s more, I could learn the differences in US driving rules from a book and apply them accurately straight away. AI simply can’t do that.
This framework-based way of thinking also means we can easily build on our knowledge. We can learn new information while keeping the knowledge we already have. When an AI tries to “learn” new information, the whole neural network changes, degrading what it has already learnt, a problem known as catastrophic forgetting. This means an AI can’t really be trained for multiple applications, even if they are very similar.
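You can caricature this forgetting with a deliberately simple scikit-learn model and two made-up “tasks”: teach it task A, keep updating it only on task B, and its performance on task A collapses:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)

def make_task(centre_0, centre_1, n=200):
    """Two clusters of made-up points: class 0 around centre_0, class 1 around centre_1."""
    X = np.vstack([rng.normal(centre_0, 0.3, size=(n, 2)),
                   rng.normal(centre_1, 0.3, size=(n, 2))])
    y = np.array([0] * n + [1] * n)
    idx = rng.permutation(2 * n)  # shuffle so the classes are interleaved
    return X[idx], y[idx]

X_a, y_a = make_task([0.0, 0.0], [4.0, 0.0])   # task A: class 1 sits to the right of class 0
X_b, y_b = make_task([10.0, 0.0], [6.0, 0.0])  # task B: class 1 sits to the left of class 0

model = SGDClassifier(random_state=0)
model.partial_fit(X_a, y_a, classes=[0, 1])
for _ in range(5):                              # a few passes to learn task A properly
    model.partial_fit(X_a, y_a)
print("Task A accuracy after learning A:", model.score(X_a, y_a))  # typically close to 1.0

for _ in range(50):                             # now keep "learning" only on task B
    model.partial_fit(X_b, y_b)
print("Task A accuracy after learning B:", model.score(X_a, y_a))  # typically falls to about 0.5
```

Real neural networks are vastly larger, but the underlying problem is the same: updating the connections for new data overwrites the settings that encoded the old behaviour.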
So, what actually is AI? Well, it isn’t sentient, self-aware or cognitive. In fact, as we have covered, it doesn’t actually think at all. It isn’t intelligent. What it is is a useful statistical model that can be incredibly powerful if trained on good enough data and applied correctly to account for its limitations. Sadly, a lot of AIs out there aren’t being treated in this way and are unsupervised or have no checks for correcting hallucinations. So, be careful when someone claims their AI is superhumanly intelligent or can replace an entire industry, as the reality of AI does not back them up.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.
Sources: University of Oxford, New Scientist, KI-Campus, Bulletin, Forbes