Musk’s facade is crumbling before us. Just a few years ago, he was the golden boy of capitalism: a visionary engineer, a revolutionary business leader and a saviour of the environment. He had built the perfect mythology around himself and was reaping the rewards. But all of that has now gone. The public’s perception of Musk has shifted to that of an anti-democratic, free-speech-hypocrite, misinformation-spouting, racist, misogynistic, union-busting, bullshit-spewing, midlife-crisis-riddled, egotistical, petty, damagingly profiteering, idiotic businessman who only got to where he is through aggressive takeovers and manipulation. Crucially, he also doesn’t seem to understand the technology his companies are built on at all. This fall from grace is worth studying, but that is far beyond the scope of this article. Instead, I want to focus on Musk’s approach to one specific technology, AI, as it epitomises his complete lack of understanding and awareness. You see, from Tesla’s FSD to X’s new Grok AI image generator, Musk seems not only to fail to understand what AI actually is, but also to be blind to its glaring limitations.
Let’s start with said Grok AI image generator.
Grok, the AI text generator, is already in quite a bit of trouble. It has no limits on its output, making it ripe for spewing misinformation or straight-up damaging information. For example, a journalist recently found he could easily get Grok to state that Elon Musk was a paedophile… This is a brilliant example of how damaging AI “hallucinations” can be, as there is, of course, no evidence that Elon is, in fact, a paedophile. But the image-generator version of Grok has taken this issue to the next level.
It, too, has no software limitations on what it will produce. As such, it readily produces copyrighted material and photorealistic images of celebrities and public figures in defamatory situations. For example, it has already produced images of Mickey Mouse holding guns over a sea of corpses and of Pikachu firing Uzis.
All other major AI image generators have blocks in place to stop such images from being created, but Musk has left them out of Grok, apparently to avoid political interference and to ensure his AI isn’t “woke” (whatever that means). Not only does this open up Grok (and X) to massive lawsuits, but it’s also massively hypocritical.
Musk has warned the world for years about how dangerous unregulated AI can be. One of the most pressing dangers is AI being used to flood the internet with copious amounts of damaging misinformation, as this could do irreparable harm to democracy and our free-market economy. So, in creating his anti-woke AI, he has created the very thing he warned us all about. Hypocrite!
Secondly, there is a legal reason other AI models have these blocks. They protect their developers from copyright-infringement lawsuits from rights holders and from defamation lawsuits from those damaged by the harmful images and text these models will inevitably generate. They also keep the developers’ “fair use” argument intact for the hundreds of gigabytes of copyrighted material used to train these models: with the blocks in place, the models can’t directly reproduce copyrighted works, and directly reproducing them would undermine any fair-use defence. In other words, Grok could open X up to lawsuits from the likes of Disney, Kamala Harris, or anyone hurt by political riots stirred up by Grok’s fake output.
But Musk’s other AI projects are faring just as badly.
Take Tesla’s Full Self-Driving (FSD). Despite the misleading name, this AI driver-assist system is far from fully autonomous, and in fact, there is plenty of evidence that it’s wildly unsafe. Why? Well, Musk gutted the system to increase profits and effectively sabotaged his own AI.
You see, no AI can be 100% accurate, no matter how much training data you shove into it. Moreover, the environmental sensors these systems use aren’t 100% reliable either and will produce anomalous readings. This is especially true for computer-vision sensors, which use camera feeds and AI to understand the world around them, as the camera and the AI have compounding inaccuracies. This is why self-driving cars carry vast suites of different sensor types, such as lidar, 4D radar, ultrasonic sensors and cameras, and why many actually run multiple self-driving AI programs on these different sensors. This enables them to catch and mitigate errors in both the sensors and the AI by comparing data feeds and multiple AI outputs.
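To make that redundancy idea concrete, here’s a minimal Python sketch of the cross-checking logic. To be clear, the sensor names, threshold and fusion rule below are all invented for illustration, not taken from any real car’s software:

```python
from statistics import median

# Minimal sketch of sensor cross-checking. The sensor names, threshold,
# and fusion rule are invented for illustration only.
def fused_distance(readings, max_spread_m=2.0):
    """Fuse distance estimates (metres) from independent sensors.

    Returns the median estimate if the sensors broadly agree,
    or None to signal an anomaly that needs a safe fallback.
    """
    values = list(readings.values())
    if max(values) - min(values) > max_spread_m:
        # The sensors disagree too much: one of them (or the AI
        # interpreting it) is likely wrong. Don't guess; flag it.
        return None
    return median(values)

print(fused_distance({"camera": 18.4, "lidar": 18.1, "radar": 18.6}))  # 18.4
print(fused_distance({"camera": 3.2, "lidar": 18.1, "radar": 18.6}))   # None
```

The point is that a camera-only system has no second opinion to compare against: when its one sensor (or the AI interpreting it) produces a bad reading, there is nothing there to catch it.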
Take the recently announced Zeekr 007. It has two AI computers and 33 sensors, including Lidar, radar, ultrasonic sensors, and cameras, and it is set to achieve door-to-door automation sometime next year!
Okay, so what about Tesla’s FSD? Well, back in 2021, Musk went against his own engineers’ advice and stripped new Teslas of every sensor other than cameras, giving FSD only eight cameras with which to understand the world around it. Nearly a dozen former Tesla employees, test drivers, safety officials, and other experts reported an increase in crashes, near-misses, and other embarrassing mistakes by Tesla vehicles after Musk made this switch. In fact, a former test operator went on record saying that the company is “nowhere close” to having a finished product.
Moreover, since 2021, FSD development has slowed to a crawl. Why? Well, Musk thought he could cut production costs and boost Tesla’s profits by stripping FSD of these sensors, and make up for it by training the AI on even more data. Theoretically, this would make the AI more accurate and negate the need for those checks and balances. However, Musk failed to comprehend that we are reaching a point of diminishing returns with AI training (read more here). As such, he would need a stupidly vast and utterly unfeasible amount of data, computing power and energy to create an AI that accurate. This is why companies like Zeekr, Mercedes, and Waymo have overtaken Tesla in the self-driving game and developed far more capable systems despite having less development time, less expenditure, and less available training data.
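To put rough numbers on those diminishing returns: empirical scaling laws find that a model’s error tends to fall as a power law of dataset size, roughly error ≈ a × N^(-α), with a small exponent α. A quick back-of-the-envelope calculation shows why “just add more data” stops working (the α = 0.1 used here is an assumed, illustrative value in the rough ballpark of published scaling-law fits, not a Tesla figure):

```python
# Illustrative power-law scaling: error ~ a * N**(-alpha).
# alpha = 0.1 is an assumed value for illustration; the absolute
# numbers here are made up, not measured from any real model.
alpha = 0.1

def data_multiplier_to_halve_error(alpha):
    # Solve (k*N)**(-alpha) = 0.5 * N**(-alpha)  =>  k = 2**(1/alpha)
    return 2 ** (1 / alpha)

k = data_multiplier_to_halve_error(alpha)
print(f"Halving the error needs ~{k:,.0f}x more training data")  # ~1,024x
```

Under those assumptions, every further halving of the error costs roughly a thousand times more data, and the next halving costs a thousand times more again. That is the wall Musk drove FSD into.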
But this really shouldn’t be surprising. Back in 2021, Musk admitted in a tweet (Xeet?) that cracking self-driving was harder than he had expected: “Haha, FSD 9 beta is shipping soon, I swear! Generalised self-driving is a hard problem, as it requires solving a large part of real-world AI. I didn’t expect it to be so hard, but the difficulty is obvious in retrospect. Nothing has more degrees of freedom than reality.” I mean, no doy. There are interviews with the likes of James May and other non-AI experts who said the same thing about AI well over a decade ago. Did Musk really think that if he ploughed enough data into an AI, it would magically resolve all the complex real-world issues of driving?
Why is Musk like this? Well, he seems to treat AI as a cognitive machine rather than the statistical model it is. He is part of a movement that believes code and mathematics can solve all of humanity’s woes, or even surpass us, and that all you need to do is shove as much data as possible into these machines for that to happen. However, this simply isn’t the case. All AI can do is replicate patterns it finds in data. That’s it. It doesn’t understand what it is doing, can’t create anything genuinely new, and can’t solve any problem that hasn’t already been solved.
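A toy example makes the point. Below is a “language model” boiled down to a bigram counter: it learns which word follows which in its training text, and nothing more. Modern models are unimaginably bigger, but the underlying mechanic, pattern frequencies rather than understanding, is the same:

```python
import random
from collections import defaultdict

# Toy bigram "language model": it can only ever recombine word pairs
# it has already seen. No understanding, no novelty, just statistics.
training_text = "the car sees the road the car brakes for the pedestrian"

follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:      # a word it never saw followed by anything:
            break            # the model is simply stuck
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))       # a plausible-looking remix of the training text
print(generate("bicycle"))   # just "bicycle" -- it has no pattern for this
```

Ask it to continue a word it has seen, and it produces a plausible-looking remix of its training data; ask it about anything outside that data, and it has literally nothing to say.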
It’s funny that Musk treats his AI projects as more cognitive, self-aware, individual and human than some of his actual kids…
But this misconception was on public display earlier this year, when one of the “Godfathers of AI” called Musk out on his AI bullshit.
It all started when Musk Tweeted (Xeeted?) that AI “will probably be smarter than any human next year” and that “by 2029, AI is probably smarter than all humans combined.” Yann LeCun, one of the men credited with inventing modern AI, replied: “No. If it were the case, we would have AI systems that could teach themselves to drive a car in 20 hours of practice, like any 17-year-old. But we still don’t have fully autonomous, reliable self-driving, even though we (you) have millions of hours of *labelled* training data.” Ouch!
But it’s true: AI isn’t really intelligent; it doesn’t actually think about what it is doing, and it is incredibly inefficient at learning. As such, any prediction that AI will become superhumanly intelligent next year is complete bullshit.
With all of this evidence, can we really say that Musk understands what AI is, let alone that he is a pioneer in the field? He consistently fails to recognise how AI actually works, the limitations of the technology, and the legal and ethical ramifications of its use (I haven’t even discussed the people killed by FSD in this article). But there is a silver lining here. Musk is pivoting almost all of his companies towards AI, and these failures mean he is effectively sabotaging those companies from within. Tesla is an AI self-driving company with a dangerously flawed product that still isn’t autonomous, and X (Twitter) is a social media company with a built-in defamatory, copyright-infringing misinformation machine. Neither of these is a good business model. In effect, Musk’s misguided notions of AI are building his own embarrassing tomb, where his positive public persona and his companies can die in peace.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.
Sources: Forbes, Futurism, Independent, Electrek, and previous Will Lockett articles.