The Dawn Project Is Right: Musk's AI Is Dangerous By Incompetence
Or, at the very least, negligence.
Apparently, you Americans have this hugely popular thing called the Super Bowl. As a Brit, I have to say it just sounds like you guys plagiarised the FA Cup Final or the Ashes but changed it up just enough so no one would notice. Either way, the effect is the same: the country grinds to a halt as thousands of lucky fans flock to watch the game in person, and millions more eagerly tune in. One such lucky fan was Elon Musk, who had scrimped and saved for months to afford some of the best seats in the house and a private jet to take him there. So, he must have been royally pissed off that one of the Super Bowl TV advertisers was The Dawn Project, whose ad showed Teslas under Autopilot running over child-sized dummies and strollers, swerving into oncoming traffic, and blowing through stop signs, and which called on people to boycott Tesla to keep their families safe. But these adverts didn’t tell you why Tesla’s self-driving software is so damn dangerous, or about the extensive rap sheet it is rapidly building. So, let me explain.
If you dig deep into a Tesla’s user manual, you will find that Tesla’s Autopilot self-driving AI is explicitly only “intended for use on controlled-access highways” with “a centre divider, clear lane markings, and no cross traffic.” In other words, it is only to be used on open freeways. Those of you who own a Tesla or have been listening to Musk’s ramblings for years might be surprised by this. Musk has shown countless demos of Teslas driving through sprawling cities and down country roads, far from what the user manual describes as an acceptable use case. He has also claimed time and time again that Teslas can drive themselves and that drivers are only there for legal reasons. Again, this is not backed up by the user manual at all. That should be very telling, as there are stringent laws and precedents around what a user manual must say, and very little governing the rhetoric Musk can push about the system.
Okay, so why doesn’t Tesla use GPS to geo-restrict its self-driving software to only operate on freeways? That way, they can ensure it is only used as described in the user manual.
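To be clear about how simple that restriction would be in principle, here is a toy sketch of GPS geofencing in Python. Everything in it is hypothetical (the coordinates, the zone list, the names); it just shows the idea of refusing to engage outside mapped freeway zones, and is nothing like Tesla’s actual code.

```python
# Toy illustration of GPS-based geo-restriction (not Tesla's actual code).
# A real system would query proper map data tagged by road class; here that
# is faked with a hard-coded list of bounding boxes.

from dataclasses import dataclass


@dataclass
class BoundingBox:
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)


# Hypothetical map data: areas covering controlled-access highways only.
FREEWAY_ZONES = [
    BoundingBox(37.30, 37.45, -122.10, -121.90),  # e.g. a stretch of freeway
]


def autopilot_allowed(lat: float, lon: float) -> bool:
    """Only permit engagement when the car's GPS fix falls inside a freeway zone."""
    return any(zone.contains(lat, lon) for zone in FREEWAY_ZONES)


if __name__ == "__main__":
    print(autopilot_allowed(37.40, -122.00))  # True: inside the freeway zone
    print(autopilot_allowed(37.77, -122.42))  # False: a downtown street
```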
Well, Tesla uses data gathered when Autopilot is engaged to train its AI. So, if Musk ever wants Tesla to be able to drive across junctions, in cities, or on country roads safely and reliably enough to be legally autonomous, he needs Tesla owners to use the AI in these situations that it isn’t ready for yet. That way, it can learn through trial and error, with the driver being the safety net to catch the system when it goes wrong.
But, because Tesla advertises its self-driving system as practically fully fledged already, many Tesla drivers don’t know this and are happy to stop paying attention and let these potentially fatal errors happen. To make things worse, because of idiotic design choices by Musk (which we will come to later), Tesla needs far more AI training data than any other self-driving company, meaning it can’t take this training task in-house with trained drivers and instead relies 100% on public testing.
This is why The Dawn Project is so worried about Tesla’s self-driving program. Its founder, Dan O’Dowd, created the military-grade software for Boeing’s 787s, Lockheed Martin’s F-35 fighter jets, the Boeing B-1B intercontinental nuclear bomber, and NASA’s Orion Crew Exploration Vehicle. So, he knows that software can be massively dangerous if it goes wrong, and he understands how to make it as reliable and as safe as possible. He started The Dawn Project to ensure civilian systems are held to the same level of rigorous safety, and right now, he sees Tesla’s self-driving software as one of the highest-risk pieces of software (AI or not) out there.
But it isn’t just The Dawn Project that is worried about Tesla’s AI.
A Washington Post report found that eight recent Tesla crashes that occurred while Autopilot was engaged, many of them fatal, happened on roads where Autopilot shouldn’t have been enabled. The Post even obtained dashcam footage of Teslas running red lights and stop signs, or otherwise failing to read the road accurately, while using Autopilot.
The Department of Justice has also opened a probe into dozens of crashes, again many of them fatal, in which Autopilot was being used inappropriately. It has now subpoenaed Tesla to hand over crucial data on these crashes as it builds a case for possible negligence and fraud charges, viewing Tesla and Musk as having mis-sold their self-driving system.
The NTSB has also noted the staggering number of Teslas crashing while self-driving in locations where the system should not be enabled, and has called for geo-restrictions to ensure it is only used on the roads it is safe on (i.e., freeways). Sadly, the NTSB has no regulatory power, and Tesla and the authorities have staunchly ignored its warnings, much as happened when the NTSB called for laws requiring seatbelts and airbags in all new passenger vehicles decades ago.
As I said above, Tesla can’t use geo-restriction to ensure the safe use of Autopilot. Its business model and AI development program are 100% reliant on customers testing and using the system in places it shouldn’t be. But why?
Back in 2021, Musk went against his AI engineers’ advice and moved Autopilot to vision-only, in a move I can only describe as utterly incompetent. Beforehand, Tesla’s AI used a plethora of sensors to understand the world around the car, including ultrasonic sensors, radar and camera arrays. This gave the AI an easy way to interpret what was happening around it and gave the system redundancy: if one sensor failed or misread, the system had a way of verifying or mitigating the problem. But these sensors are expensive, and in early 2021, Tesla’s margins were shrinking as competition started to encroach. So Musk scrapped the sensors to save money and moved the system to rely solely on an array of cameras to sense the environment.
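To see what that redundancy buys you, here is a deliberately simplified Python sketch of cross-checking two independent distance estimates. The threshold, names, and numbers are all made up, and real sensor fusion is vastly more sophisticated, but the principle of catching a single bad sensor is the same.

```python
# Toy illustration of why sensor redundancy matters (not Tesla's actual logic).
# Two independent estimates of the distance to the car ahead: one from radar,
# one from a camera-based depth model. If they agree, trust the fused value;
# if they disagree badly or one drops out, the system can degrade gracefully
# instead of acting on a single bad reading.

from typing import Optional

DISAGREEMENT_THRESHOLD_M = 5.0  # hypothetical tolerance in metres


def fused_distance(radar_m: Optional[float], camera_m: Optional[float]) -> Optional[float]:
    """Return a distance only when the available sensors corroborate each other."""
    if radar_m is not None and camera_m is not None:
        if abs(radar_m - camera_m) <= DISAGREEMENT_THRESHOLD_M:
            return (radar_m + camera_m) / 2  # sensors agree: average them
        return None                          # sensors conflict: trust neither
    # one sensor down: limp along on whatever is left
    return radar_m if radar_m is not None else camera_m


print(fused_distance(42.0, 44.5))   # 43.25: both sensors agree
print(fused_distance(42.0, 90.0))   # None: conflicting readings, hand back to the driver
print(fused_distance(None, 44.5))   # 44.5: camera-only, the situation Tesla is now in
```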
Computer vision like this is still in its infancy and is far from reliable. It is prone to misreading its surroundings, as the AI that turns raw video into categorised spatial data is incredibly complex and insanely challenging to make reliable. In comparison, sensors like radar, lidar and ultrasonics use simple algorithms to produce the spatial data a self-driving AI needs. But even if you made the computer vision AI reliable, it still shouldn’t be the sole sensor for a system like this. Simple environmental factors like dirt on a lens or sudden bright light can knock the cameras out, leaving the AI with no data, or inaccurate data, and rendering it useless and dangerous.
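To illustrate the gap, here is a toy comparison: a time-of-flight sensor like lidar turns its raw measurement into a distance with one line of arithmetic, while a camera has to push every frame through a large learned depth model (shown below only as a hypothetical placeholder) before it knows how far away anything is.

```python
# Toy contrast between "simple" ranging sensors and camera-only depth estimation.
# A lidar/radar unit measures distance with basic physics: time a pulse,
# multiply by its speed, and halve it for the round trip.

SPEED_OF_LIGHT_MPS = 299_792_458  # speed of a lidar/radar pulse


def lidar_distance(round_trip_seconds: float) -> float:
    """Distance from a single time-of-flight measurement: one line of arithmetic."""
    return SPEED_OF_LIGHT_MPS * round_trip_seconds / 2


print(lidar_distance(2.8e-7))  # ~42 m, no machine learning involved

# A camera, by contrast, only gives you a grid of pixel brightnesses. Turning
# that into distances means running a large learned model, conceptually:
#
#   depth_map = neural_depth_network(image)   # hypothetical model with millions of
#                                             # parameters, trained on huge datasets,
#                                             # and still capable of being confidently wrong
```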
Nearly a dozen former employees, test drivers, safety officials, and other experts reported an increase in crashes, near-misses, and other embarrassing mistakes by Tesla vehicles after they were deprived of these critical sensors. As Tesla has a habit of suing the pants off whistleblowers and critics, the number of people inside Tesla who saw the detrimental effects of Musk’s decision is likely far larger. Interestingly, these insiders also reported that Musk rushed the release of Full Self-Driving (which I have been referring to as Autopilot) before it was ready, and that, according to former Tesla employees, even today the software isn’t safe for public road use. In fact, a former test operator went on record saying that the company is “nowhere close” to having a finished product.
But why are they so far away from a final product? Well, it has to do with data.
Computer-vision-based self-driving AI requires far more data to train than systems using lidar and other sensors. Musk has recently said that he needs millions upon millions of data cases and many more AI iterations to get his cars fully autonomous (a rare admission that they aren’t there yet). Tesla simply doesn’t have that data or computing power yet. Meanwhile, self-driving companies using sensors like lidar and radar require far less data. That allows them to bring their self-driving development in-house and use specially trained drivers to take the risk of testing it in the wild, whereas Tesla is locked into using the public, as only then can it access enough data to develop its AI properly.
In other words, Musk needs the public to keep taking deadly risks while his unfinished AI runs rampant, crashing into and killing innocent people, so that he can cash in billions of dollars on the self-driving hype train. His hands are bloody and set to get bloodier. It’s no wonder The Dawn Project wants the public to realise what Tesla and Musk are doing.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and follow me on BlueSky or X and help get the word out by hitting the share button below.
Sources: CNN, The Dawn Project, Planet Earth & Beyond, Planet Earth & Beyond, Will Lockett, Will Lockett, Fortune, Will Lockett