The Godfather Of AI Just Called Out The Entire AI Industry
But he missed something huge.

Nobel laureate Geoffrey Hinton, often called the “Godfather of AI” for his foundational contributions to the artificial neural network technology that powers AI, has been on a bit of a tirade against Big Tech recently. From calling out their corporate greed to highlighting the dangers of AI, he has, like Pandora, been desperately trying to stuff the evils back into the box. But in a recent interview with Bloomberg, he turned this up to eleven by questioning AI’s very economic viability.

When asked whether the eye-watering investments in AI will ever pay off, Hinton replied, “I believe that it can’t,” and elaborated, “I believe that to make money you’re going to have to replace human labour.” Now, of course, Hinton, who also believes he has invented a computer god, is focused on the enormous negative impact of AI replacing human labour at scale. That framing turns this multi-trillion-dollar AI bet into a lose-lose situation: if the investment “pays off”, the economy will be destroyed, making any kind of investment useless.

But what Hinton failed to ask was, “Can AI actually replace labour?” Hinton seems unwilling to deface the propaganda propping up his digital Frankenstein’s monster, but fortunately, I have no such qualms. This is why AI can’t replace you, and why that means it is doomed to fail.
If you listen to the hype, AI is definitely going to replace labour soon. For example, research from AI Resume Builder found that 30% of companies plan to replace HR roles with AI in 2026, and the boss of the UK’s Buy It Direct has claimed AI will replace two-thirds of its employees. That sounds pretty scary, right? But AI Resume Builder has a huge vested interest in AI paying off, and this research is not robust at all. Likewise, Buy It Direct’s scumbag of a boss has been openly using AI as a threat against the UK’s new “living wage” (presumably because he knows that paying people enough money to live on would cut into his superyacht budget) and has decided he will replace his employees with AI if he can’t keep his workers in poverty. What a lovely chap…
In the real world of critical thinking, the data paints a totally different picture.
Take the now-infamous MIT report that my readers are probably sick of me referencing, which found that 95% of AI pilots didn’t increase a company’s profit or productivity at all; in fact, many companies saw a negative impact. Bear in mind, these pilots weren’t designed to automate workers; they were designed to augment them. If AI can’t even help us do our jobs better, how can we expect it to do those jobs itself?
And what about the other report my readers are getting tired of? The METR study found that AI coding tools actually slow developers down significantly. It turns out that AI isn’t all that accurate and gets things wrong constantly. These failures have been brilliantly PR-spun as “hallucinations” to anthropomorphise the cold plagiarism machine. But when accuracy matters, as it does in any task of real importance, and especially when AI is asked to write code, this is a huge problem. The AI constantly writes nonsensical bugs, and because the coder didn’t write the code themselves, it takes them ages to find and correct the errors. As such, any developer with even a little experience will waste more time debugging AI code than they saved by having the AI write it in the first place. Coding is supposed to be one of the main industries where AI will completely replace labour. But again, it can’t even augment workers, let alone automate them.
This problem isn’t isolated to coding, though. A recent Harvard Business Review survey found that 40% of workers have had to deal with “workslop” in the past month, which it defines as “AI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a given task.” The survey found that this workslop problem, driven by “hallucinations” and by AI’s inherent inability to integrate into a work environment (it exaggerates an individual’s ignorance, disrupts communication between experts and decision makers, executes tasks inaccurately, and creates the task bloat of having to manage the AI), is severely impacting overall productivity across many industries. So, again, if AI reduces productivity through its inaccuracies and inherent structure even when used to augment, how on Earth can it be used to automate jobs?