
To say that Tesla’s future depends on robotaxis is a bit of an understatement. Tesla has to utterly dominate this industry or be left hung out to dry. It has almost entirely stopped investing in and developing new EVs and EV technology, letting its competition run away with that market so it can focus on robotaxis. The hypothetical opportunity of this new industry is what has driven Tesla’s stock price to such dizzying heights. However, if Tesla cannot meet these lofty expectations, it will be valued exclusively as a car manufacturer, and its value will be dwarfed by its debts, plunging it into the red. This really is sink or swim for Tesla. Yet, after a decade of development, Tesla’s robotaxis utterly suck.
Tesla “soft-launched” its Robotaxi service in Austin, Texas, on Sunday, June 22. However, rather than the promised Cybercabs, we got 12 unmodified Model Ys with “Robotaxi” scrawled on the side. The service was also invite-only, with only people directly involved with Tesla getting a ride. And those riders weren’t alone in the car: a safety driver sat in the passenger seat at all times, ready to intervene at a moment’s notice.
Yet, even with this puny number of vehicles, a light sprinkle of customers, and a dedicated safety driver on board, over the span of just two days, these robotaxis have been recorded driving illegally and erratically.
These videos have shown robotaxis braking randomly and abruptly as they pass a police car (guilty much?); speeding; randomly jerking their steering wheel; and even swerving into oncoming traffic and driving there for an extended period! The fact that so many concerning videos have come to light in such a short period of time has prompted the NHTSA to contact Tesla directly.
But this shouldn’t come as much of a surprise. These Robotaxis are more than likely running FSD v13. Roughly 8,000 miles of crowdsourced data show that FSD v13 averages 493 miles between critical disengagements, and with a sample that small, the real-world figure could easily be worse.
Let’s assume these 12 Model Y Robotaxis each cover their full 230-mile real-world range every day. Crunch the numbers, and this fleet would statistically have experienced roughly a dozen critical incidents where the safety driver should have intervened between the launch of the service and the time these videos were uploaded.
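If you want to sanity-check that figure, here is the back-of-the-envelope maths as a quick sketch (the fleet size, per-car daily mileage, and two-day window are the assumptions stated above; the 493-mile figure is the crowdsourced FSD v13 average):

```python
# Back-of-the-envelope estimate of expected critical disengagements for
# Tesla's Austin Robotaxi fleet over its first two days of operation.
# All inputs are the assumptions stated in the article, not measured data.
fleet_size = 12                 # unmodified Model Ys in the pilot
miles_per_car_per_day = 230     # assumes each car uses its full real-world range daily
days = 2                        # launch (June 22) to when the videos surfaced
miles_between_critical = 493    # crowdsourced FSD v13 average

total_fleet_miles = fleet_size * miles_per_car_per_day * days
expected_incidents = total_fleet_miles / miles_between_critical

print(f"Fleet miles driven: {total_fleet_miles}")                # 5520
print(f"Expected critical incidents: {expected_incidents:.1f}")  # ~11.2
```

Even if each car only managed half that mileage, you would still expect five or six such incidents in the first two days.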
So, why didn’t the safety driver intervene? Why was the car allowed to drive so erratically?
Simple: these safety drivers are a dangerous marketing stunt.
If you remember, Tesla’s Cybercab was supposed to have no steering wheel or pedals. The only way a passenger could control the vehicle was by pressing a button that would trigger an emergency stop, and once it stopped, they were essentially stuck. It seems Musk has decided to emulate this setup in the Model Y Robotaxis, either to test out the system or, more likely, to generate viral videos of Teslas driving with no one in the driver’s seat. After all, that is how Musk makes money — he markets his companies to investors.
But there is one glaring flaw with this passenger-seat safety driver setup. It simply doesn’t work! Take the incident when FSD got confused and drove into the wrong lane, or the numerous times it has been caught speeding. The safety driver can’t take over and steer it back into the correct lane or gently apply the brakes to get it below the speed limit. All they can do is trigger an emergency stop, and in these situations, that could cause a crash!
This is why when every other robotaxi company used safety drivers (which was years ago now; they have almost all moved past that stage), they put them in the driver’s seat so they could actually take full control when needed.
It’s unlikely that a video recording will surface for every instance of these robotaxis driving dangerously or illegally. So the fact that at least four such videos emerged from just two days of operations suggests the 493-miles-between-incidents figure is in the right ballpark.
Tesla and Musk would have known this would happen. They have the data. In my opinion, these passenger-seat safety drivers are a wildly dangerous publicity stunt.
Now, this is all well and good, but why can Waymo reach an average of 17,060 miles between interventions while Tesla is stuck at 493?
Well, it’s all about approach. Waymo uses a highly redundant and highly constrained system, while Tesla’s FSD has no redundancy and is an unconstrained system.
FSD is a vision-only system. It uses just nine cameras and a single computer vision AI to understand the world around it. Waymo, by comparison, uses 13 cameras, five LiDAR units, and six radar sensors. These Waymos also use computer vision to convert those 13 camera feeds into a 3D map of the world around them. That information is then checked against the 3D maps built by the LiDAR and radar systems. This gives Waymo several layers of redundancy. If the cameras fail due to rain or harsh lighting, or the computer vision AI misinterprets what it sees (which happens frequently), then these anomalies can be identified and disregarded by comparing them to the feeds from the other sensors. It also provides Waymo with a strong safety net. If the radar detects that they are on a collision course, it will slam the brakes on; this system is likely separate from the self-driving system.
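To make that redundancy idea concrete, here is a minimal sketch of cross-checking independent sensor estimates so a single faulty feed gets outvoted. This is not Waymo’s actual code; the sensor readings, the two-metre threshold, and the median-vote logic are all illustrative assumptions:

```python
from statistics import median

def fuse_obstacle_distance(camera_m, lidar_m, radar_m, max_disagreement_m=2.0):
    """Cross-check three independent distance estimates (in metres) to the
    nearest obstacle and discard any reading that disagrees with the others.

    Illustrative only: a real stack fuses full 3D tracks with probabilistic
    filters, not a single scalar vote."""
    readings = {"camera": camera_m, "lidar": lidar_m, "radar": radar_m}
    consensus = median(readings.values())

    # Keep only readings that agree with the consensus; a camera blinded by
    # glare or a misclassifying vision model gets outvoted here.
    trusted = {k: v for k, v in readings.items()
               if abs(v - consensus) <= max_disagreement_m}

    fused = sum(trusted.values()) / len(trusted)
    return fused, trusted

# Example: the camera badly underestimates the gap, but LiDAR and radar agree.
distance, used = fuse_obstacle_distance(camera_m=4.0, lidar_m=21.5, radar_m=22.1)
print(distance, used)  # ~21.8, {'lidar': 21.5, 'radar': 22.1}
```

A vision-only car has no second opinion to do this outvoting with, which is exactly the gap the extra sensors are there to close.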
Engineers have recognised the importance of system redundancy and robust safety measures for decades. This is why FSD engineers originally wanted FSD to have multiple sensor types and a far larger sensor suite. However, Musk overruled them (read more here) and, in doing so, actively made the system far less safe. If you want some context on how long engineers have known about the importance of redundancy, I highly recommend Kyle Hill’s video on the Therac-25.
This simple difference alone is enough to put Waymo light years ahead of Tesla. But they also have a fundamental difference in their AI approach.
Tesla is attempting to develop a general-purpose self-driving AI capable of understanding the road and driving anywhere. Meanwhile, Waymo trains its AI for specific locations, incorporating highly detailed 3D maps, road maps, and similar data into its training.
This might sound like Tesla has an advantage. After all, once they get the system to work, their robotaxis could theoretically drive anywhere. Meanwhile, the Waymos operating in Phoenix, for example, are restricted to the area they have been trained for.
However, when you understand how AI works, you quickly realise that this is a massive flaw in Tesla’s design.
The more constrained an AI is, the more accurate it becomes. This is why AIs used to optimise design, decipher protein structures or identify medical issues are breathtakingly accurate, as their focus is highly constrained. Meanwhile, AIs that are broader and built for general use, like ChatGPT, are substantially less accurate. This is because AI is essentially a statistical model that lacks understanding of its own operations, and when there are too many variables, this statistical model can easily break down.
Now, for the past decade, AI companies have tried to solve the issues of these general-purpose AIs by making these models gigantic and pouring a gargantuan amount of data into them. That is what OpenAI did with ChatGPT, and it is what Tesla has done with FSD. However, there is a hard ceiling with that approach, as diminishing returns begin to kick in after a certain point, and we have inconveniently already reached that point (read more here). Furthermore, this approach is so damn expensive that none of these AIs are ever going to become profitable.
So, Waymo’s approach of making their AI location-specific and more constrained not only makes their AI inherently more accurate but also cheaper to build and operate. Meanwhile, Tesla’s broader approach inherently makes their AI less accurate and more expensive to build.
Think of it this way: Waymo can use its cameras and LiDAR to compare its data with the highly detailed 3D map of its operational area, easily and accurately determining its location and, in turn, what it needs to do. Meanwhile, the Tesla AI has less data to work with and needs to figure out its surroundings, read the road, and then determine what it needs to do. Because AIs don’t actually understand what they are doing and just operate as statistical models, it’s obvious that the Waymo approach is the one that will work.
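As a rough illustration of why that prior map matters (a toy sketch, not Waymo’s pipeline; the grid map, the observation format, and the scoring are all invented for the example), localising against a pre-built map turns the problem into “which known position best explains what my sensors currently see?”:

```python
# Toy localisation-against-a-prior-map example. A real system matches dense
# LiDAR point clouds against a survey-grade 3D map; here the "map" is a tiny
# occupancy grid and the "scan" is which neighbouring cells look occupied.
CITY_MAP = {
    (0, 0): "building", (1, 0): "road", (2, 0): "road",
    (0, 1): "road",     (1, 1): "road", (2, 1): "building",
}

def match_score(candidate_xy, observed):
    """Count how many observed cells agree with the prior map at this pose."""
    x, y = candidate_xy
    return sum(
        1
        for (dx, dy), seen in observed.items()
        if CITY_MAP.get((x + dx, y + dy)) == seen
    )

def localise(observed):
    """Pick the map position that best explains the current observation."""
    return max(CITY_MAP, key=lambda pos: match_score(pos, observed))

# The car sees "road" one cell ahead and a "building" one cell behind it.
observation = {(1, 0): "road", (0, -1): "building"}  # offsets relative to car
print(localise(observation))  # prints (0, 1): the only pose matching both observations
```

Take CITY_MAP away and the system has to infer everything from the live feed alone, which is exactly the harder, less constrained problem Tesla has chosen for itself.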
So, why did Tesla opt for the obviously flawed direction? Because that was the direction Musk forced the project in.
And why is Tesla so far behind? Because Musk doesn’t understand AI at all.
I can hear the Tesla fanboys in the background — “It’s still early days.” No, it isn’t. Tesla has spent over a decade and more than $10 billion on FSD. And after all of that, and even with a safety driver in the car, its robotaxis can’t even drive legally, let alone safely. It’s an utter embarrassment.
Thanks for reading! Don’t forget to check out my YouTube channel for more from me, or Subscribe. Oh, and don’t forget to hit the share button below to get the word out!
Sources: BBC, Reuters, Engadget, TechRadar, Will Lockett, EV Database, Forbes, Will Lockett, Will Lockett, Teslarati, TV, NHTSA, Electrek