In March of this year, Elon Musk and more than 1,000 other AI industry figures signed an open letter calling for a pause on the development of powerful AI systems until protocols can be put in place to ensure AI doesn’t pose a risk to society. This has led many people to draw parallels to Nick Bostrom’s 2014 book Superintelligence, in which he argues that an AI more intelligent than humans could pose an existential threat to us. But the actual danger of AI is far from this and far more pressing. You see, AI is not as intelligent as we think it is, and these dumb machines in smart clothes already pose a significant risk to us. But how?
Firstly, let’s set the ground rules for this article. I am discounting any statements made by people with a significant stake in the AI industry. While many of the points they make about AI are valid, their opinions on its potential are heavily biased. I mean, take Elon Musk: he has literally billions of dollars wrapped up in companies whose value rests heavily on AI, namely Tesla and xAI. Of course he is going to hype up just how powerful these systems are, and the same goes for every AI CEO and AI startup scientist. Instead, the sources for this article are AI researchers and well-regarded public scientific figures. This isn’t a foolproof way of getting an unbiased view, but it gets us as close as possible. That being said, this is still an opinion piece, and you are more than welcome to disagree with me.
So, let’s start with what an AI is. AI uses artificial neural networks, which work loosely like a biological brain, to recognise patterns in data and decide how to respond to that data. A great example is mammogram screening for breast cancer. The early signs of breast cancer are incredibly hard for doctors to spot, as they look almost identical to healthy cells. But an AI was trained on apparently healthy mammogram images from patients who went on to develop breast cancer, and it can now identify those early signs better than a doctor can.
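To make “recognising patterns in data” a little more concrete, here is a minimal, purely illustrative sketch of a tiny neural-network classifier trained on synthetic numbers. Everything in it, the fake “images”, the network size, the two class labels, is my own stand-in; it is not the actual mammogram system, just the same basic idea in miniature.

```python
# Illustrative sketch only: a tiny neural-network classifier on synthetic data,
# standing in for the kind of pattern recognition described above. Real medical
# models are far larger and are trained on actual mammogram images.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake "images": 1,000 samples of 64 pixel values each, with a subtle
# statistical difference between the classes (0 = healthy, 1 = early signs).
healthy = rng.normal(0.0, 1.0, size=(500, 64))
early_signs = rng.normal(0.15, 1.0, size=(500, 64))
X = np.vstack([healthy, early_signs])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward network learns whatever pattern separates the classes.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point is that the network is never told what to look for; it latches onto whatever statistical signal separates the two groups, which is exactly why it can pick up patterns a human reader would miss.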
Recently, more complex AI systems have emerged, such as self-driving AIs and GPT-4. GPT-4 is a remarkable writing AI that can pass the infamous Turing test, hold engaging conversations, write people’s homework for them and even write full-blown articles. I have even looked into using GPT-4 to help me write more articles and cover a more comprehensive range of topics for my readers, though I haven’t yet (more on that later). Meanwhile, self-driving AIs, such as Tesla’s or Waymo’s, can safely navigate complex junctions at peak traffic.
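For anyone curious what “using GPT-4 to help write articles” would actually involve, here is a hedged sketch using OpenAI’s Python package as it existed when this was written (the client interface has since changed). The prompt text, system message and temperature are just illustrative assumptions, not a workflow I actually use.

```python
# Illustrative sketch: asking GPT-4 to draft an article outline via the OpenAI API.
# Assumes the openai Python package (0.x interface) and an API key in the
# OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a science writer for a general audience."},
        {"role": "user", "content": "Outline a short article on how mammogram-screening AI works."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```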
This is why many people are saying that AI technology is now so advanced that it is surpassing humans, and that we are reaching a tipping point where AI could pose a nuclear-level threat. Even Geoffrey Hinton, one of the computer scientists who laid the foundations of modern AI, quit his job at Google so that he could speak openly about the threats of AI, as he “suddenly realised that these things are getting smarter than us.”
But does this stance hold up against the evidence? No, it doesn’t. Still, Hinton’s worries are understandable, and we should take heed. Let me explain.