A Godfather Of AI Has Called Out Elon Musk's Bulls**t
And also takes the opportunity to bad-mouth Tesla's self-driving AI.
How do I put this delicately… Musk plays fast and loose with the truth, particularly regarding his AI projects. How many of you remember in 2016, when he claimed: “A Model S and Model X, at this point, can drive autonomously with greater safety than a person.” Or what about in 2019, when he claimed Tesla would have a million robotaxis on the road by the end of 2020? Those claims didn’t age well at all! But recently, Musk stretched his AI credibility even further, posting on X that AI “will probably be smarter than any human next year” and that “by 2029, AI is probably smarter than all humans combined.” This riled up one of the godfathers of AI, Yann LeCun, enough to weigh in. But is superhuman AI really on the horizon?
Yann LeCun is one of the three so-called godfathers of AI, alongside Geoffrey Hinton and Yoshua Bengio. His pioneering work on convolutional neural networks through the late 1980s and 1990s laid the foundations of modern deep learning, and the trio shared the 2018 Turing Award for it. Needless to say, he is one of the most respected experts in the AI field today.
LeCun replied to Musk’s tweet (xeet?): “No. If it were the case, we would have AI systems that could teach themselves to drive a car in 20 hours of practice, like any 17 year-old. But we still don’t have fully autonomous, reliable self-driving, even though we (you) have millions of hours of *labeled* training data.”
Ouch.
His point is a sharp one: AI systems are still objectively terrible at learning new tasks and are nowhere near as intelligent as we make them out to be. This isn’t the first time LeCun has highlighted this issue. In an interview with the Observer, he stated that current AIs have about as much computing power as a common housecat’s brain but are far less clever, as AI still can’t understand the physical world, plan complex actions, or reason in any meaningful way. As such, according to LeCun, reaching superhuman intelligence, or even just human-level intelligence (also known as Artificial General Intelligence, or AGI), will require more than simply scaling up current AI technology. Something new needs to be done to enable this deeper level of thought, reasoning, and planning.
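To put the scale of that gap in raw numbers, here’s a quick back-of-envelope sketch. The fleet-hours figure is my own illustrative assumption; LeCun only says “millions of hours”:

```python
# Back-of-envelope sketch of the data-efficiency gap LeCun describes.
# Both figures are loose, illustrative assumptions, not measured values.

human_hours = 20           # LeCun: a 17-year-old learns to drive in ~20 hours
fleet_hours = 10_000_000   # assumed stand-in for "millions of hours" of labeled data

gap = fleet_hours / human_hours
print(f"The AI has seen roughly {gap:,.0f}x more driving data than a human learner")
# -> The AI has seen roughly 500,000x more driving data than a human learner
```

Even with generous rounding, that’s a system with five to six orders of magnitude more experience than a teenager, and it still can’t match one.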
LeCun is far from an outlier in the AI world. One recent survey asked 1,700 AI researchers when they think AGI will happen; the majority thought superhuman AI will either never happen or won’t arrive until the next century. Another survey found the overall scientific consensus to be that high-level machine intelligence, roughly as smart as a human, has a 50% chance of arriving before 2059. But these things are incredibly difficult to predict; there can be unseen dead ends, roadblocks, and flawed assumptions in the road ahead that delay progress by decades, so even these predictions could turn out to be wildly optimistic. Either way, these thousands of experts don’t exactly agree with Musk’s claims.
But why do these experts think AGI might never happen? Well, it all has to do with energy and data. Let me explain.
Let’s start with energy. A group of researchers estimated the energy demands of simulating a whole human brain using AI neural networks. Such a simulation would, by definition, be an AGI with LeCun’s deeper level of thought, reasoning, and planning. They found that powering just this one simulation would consume orders of magnitude more power than the entire US currently produces. What’s more, the brain model they used, while the most detailed we currently have, is still far from complete, so this is likely a massive underestimate! And remember, this isn’t a superhuman AI; it’s only as intelligent as the average human. As such, the researchers concluded that AGI is highly unlikely to ever come to fruition unless we find far more efficient ways of handling computation.
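The arithmetic behind that conclusion is easy to sketch. All the numbers below are my own illustrative assumptions (published estimates of brain-simulation compute span many orders of magnitude), not figures from the study itself:

```python
# Rough sketch of the energy argument. Every constant here is an assumption
# for illustration only; real estimates vary enormously.

FLOPS_PER_WATT = 1e12    # assumed: ~1 TFLOP/s per watt for modern accelerators
BRAIN_SIM_FLOPS = 1e27   # assumed: FLOP/s for a biophysically detailed brain simulation
US_AVG_POWER_W = 5e11    # ~500 GW: US electricity generation averaged over a year

sim_power_w = BRAIN_SIM_FLOPS / FLOPS_PER_WATT
print(f"Simulated brain draw: {sim_power_w / 1e12:,.0f} TW")
print(f"That's roughly {sim_power_w / US_AVG_POWER_W:,.0f}x US average generation")
# -> Simulated brain draw: 1,000 TW
# -> That's roughly 2,000x US average generation
```

Play with the constants all you like; unless hardware efficiency improves by several orders of magnitude, the conclusion barely moves.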
But even if we do solve the energy problem, AI also has a data problem. Making AI more capable requires training it on ever-larger datasets so it can better identify and replicate patterns in the data, which is really all current AI can do. This pushes AI companies to get desperate and skirt the law to gather more data. For example, OpenAI is facing several lawsuits for using copyrighted texts, such as books, to train ChatGPT. OpenAI has also scraped data from many social media websites, and many creators of those videos and posts feel they are owed compensation, as it is their data that has enabled OpenAI’s wildly lucrative AI models. Governments are also catching up to this issue with data protection laws like the EU’s GDPR, which can stop AI companies from taking people’s data without permission or compensation.
An AGI would require datasets orders of magnitude larger than anything OpenAI, or any other AI company, currently has. Gathering that much data ethically and above board could be so damn expensive and take so long that such an AGI would be rendered unviable.
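For a feel of how quickly the demand balloons, here’s one more sketch, using the rough “about 20 training tokens per model parameter” rule of thumb from scaling-law research. The supply of usable public text is an illustrative assumption:

```python
# Sketch of dataset demand vs. supply under a compute-optimal scaling rule.
# Both constants are loose assumptions for illustration.

TOKENS_PER_PARAM = 20         # assumed rule of thumb from scaling-law work
USABLE_PUBLIC_TOKENS = 3e13   # assumed: ~30 trillion tokens of usable public text

for params in (1e9, 1e11, 1e13):  # 1B-, 100B-, and 10T-parameter models
    need = params * TOKENS_PER_PARAM
    print(f"{params:.0e} params -> {need:.0e} tokens "
          f"({need / USABLE_PUBLIC_TOKENS:.1%} of assumed public supply)")
# The 10T-parameter model already wants several times more text than the
# assumed public supply contains.
```

And that’s before you ask whether any of that text was obtained legally.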
So no, Mr Musk, you won’t unlock superhuman AI by the decade’s end. You can’t even build a self-driving AI that doesn’t run red lights or drive straight into bridges, let alone one with basic human-level intelligence. The fact is that no one will come anywhere close to AGI for decades, at the very least. So why does Musk make these demonstrably false claims? Well, it’s almost as if his entire persona and monstrous wealth are built on this technology’s success…
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and follow me on Bluesky or X, and help get the word out by hitting the share button below.
Sources: India Today, Business Insider, AI Multiple, LessWrong, The Guardian, The Verge, X.com