
Tesla is taking an absolute beating at the moment. Over here in the UK, they have halved their lease prices in a desperate bid to drum up more customers, but despite these insane prices, no one is buying them. I guess we Brits don’t want to be associated with these neo-Nazi-mobiles. But Tesla’s woes go far deeper than its nosediving sales figures. The only reason Tesla is still remotely valuable is because investors hope it will crack automation. These hopes have been significantly dented by Tesla’s terrible Robotaxi rollout, which so far has done nothing but prove their Full Self-Driving (FSD) system is utterly woeful. But this hollow narrative could soon be completely destroyed, thanks to a giant of the digital ethics world, Professor Luciano Floridi, who may have just proven that Tesla’s approach can never work.
To understand this, we need to understand how FSD differs from other self-driving systems.
Almost all autonomous cars are set up in a similar manner to Waymo’s. The vehicles have a huge array of cameras, ultrasonic sensors, radar and lidar to sense the world around them. They also run multiple systems using those sensors. For example, an automated braking system, which can override the driving AI, runs off the lidar, radar or ultrasonic sensors. There may also be several AIs running, where one might use the cameras and computer vision while another uses lidar. But these AIs don’t just use the vehicle’s sensors; they also use detailed 3D maps of their operational area to figure out where they are and what they need to do at that location. That way, the burden placed on the AI to read the road and decide which actions to take is substantially reduced. All of these systems are then combined to operate the vehicle, so that anomalous actions, AI hallucinations or sensor errors from any one of them can be caught and mitigated. These layers of redundancy and ‘hard’ safety nets, such as the separate automated braking system, don’t just make the vehicle safe; they also reduce the demands on AI performance, making it easier and cheaper to develop the vehicle into a usable self-driving car.
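To make that layering concrete, here is a minimal sketch in Python of how such an arbitration could look. Everything in it (the Detection type, the fusion rule, the 30-metre threshold) is a hypothetical illustration of the approach described above, not Waymo’s actual software.

```python
# A minimal sketch of layered redundancy, using hypothetical interfaces and
# thresholds. This illustrates the approach described above, not any real
# vendor's software.
from dataclasses import dataclass

@dataclass
class Detection:
    obstacle_ahead: bool
    distance_m: float

def fuse(camera_ai: Detection, lidar_ai: Detection) -> Detection:
    """Conservative fusion: if either independent channel reports an obstacle,
    treat it as real and keep the nearer distance estimate."""
    if camera_ai.obstacle_ahead or lidar_ai.obstacle_ahead:
        return Detection(True, min(camera_ai.distance_m, lidar_ai.distance_m))
    return Detection(False, float("inf"))

def plan_speed(fused: Detection, localised_on_3d_map: bool,
               cruise_mps: float = 13.0) -> float:
    """The driving AI's job shrinks when redundant perception and a prebuilt
    3D map handle most of the interpretation for it."""
    if not localised_on_3d_map:
        return 0.0   # can't place ourselves on the map: stop or hand back control
    if fused.obstacle_ahead and fused.distance_m < 30.0:
        return 0.0   # either sensor channel alone can force a stop
    return cruise_mps
```

The point of a structure like this is that no single model’s mistake decides the outcome on its own.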
Tesla has done away with any notion of system redundancy or safety nets. Instead, FSD takes a far more stripped-down approach.
Their vehicles use just nine cameras and AI computer vision to sense the world around them. That’s it. No separate automated braking system. No lidar or radar to provide a backup sensor type if the lighting screws with the cameras. Furthermore, Tesla doesn’t use 3D maps to give the AI an understanding of where it is. This AI has to read the road with no help. This has the advantage of being a far cheaper system to install, but there is no system redundancy and no safety nets. For this to be even remotely safe, the AI has to be nearly 100% accurate. And even then, there is the risk that conditions will obscure the cameras’ vision, rendering the system unsafe.
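For contrast, the camera-only design collapses to something like the sketch below. Again, this is my own hypothetical illustration of the architecture as described, not Tesla’s actual code.

```python
# Hypothetical contrast with the redundant stack above: a camera-only pipeline
# has a single path from perception to control, so any vision error propagates
# straight through to the vehicle's actions.
def camera_only_step(camera_frames, vision_model, planner):
    scene = vision_model(camera_frames)  # the sole source of truth about the world
    return planner(scene)                # no independent sensor, map or braking system can veto this
```

Everything, including emergency braking, hangs off that single interpreted view of the scene, which is why the vision model has to be close to perfect.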
So, the question is, can anyone, let alone Tesla, make an AI capable of being totally reliable in such a broad application as driving?
Quick side note: you might think driving is a pretty constrained task. There are set rules on the road after all. But it isn’t — there are loads of “edge cases”. These are novel situations that the AI has never encountered before. After all, when driving, we have to deal with the chaos of weather, other humans, insufficient road signage, and even wild animals. AI really struggles with these external factors, and it means driving is actually an incredibly broad application for this technology. In other words, it has to be damn good at a lot of different things at the same time.
This is where Floridi’s Conjecture comes in. I have to thank Prof. J. Mark Bishop for telling me about this.
Luciano Floridi is a professor at Yale and the Founding Director of its Digital Ethics Center. So, as you can imagine, he knows a thing or two about AI and its limitations. And in a recent paper, Floridi put forward his conjecture that AI systems can have either a great scope with limited certainty or a constrained scope with great certainty. Crucially, Floridi’s Conjecture states that an AI absolutely can’t have both a great scope and great certainty.
Another way of wording this is that as an AI application becomes broader, it will always become less accurate and more prone to anomalous outputs (stupidly known as “hallucinations”).
Floridi didn’t just pluck this idea out of thin air. It is based on the mathematics of how neural networks function.
So, with this conjecture as context, let’s reappraise Tesla’s FSD system.
In the past, I have pointed out how the efficient compute frontier, which essentially states that improving AI is a Sisyphean task, renders Tesla’s self-driving approach useless (read more here). But Floridi’s Conjecture totally dismantles FSD’s validity.
Take the automated emergency braking system. In Waymos and other self-driving cars, this system is separate from the driving AI and uses a select few ‘hard’ sensors that, unlike cameras, don’t need a layer of AI interpretation to establish what is going on. This is a highly constrained application, meaning the system’s AI can be optimised to become verifiably reliable. That is a key step in getting autonomous vehicles safe enough to operate on public roads.
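To illustrate what ‘doesn’t need interpretation’ means in practice, a hard-sensor emergency braking rule can be as simple as a time-to-collision threshold. The sketch below is illustrative only; the sensor interface and the numbers are assumptions, not any manufacturer’s real values.

```python
# A minimal sketch of a 'hard' emergency braking check, assuming a radar that
# reports range and closing speed directly. Thresholds are illustrative only.
def time_to_collision_s(range_m: float, closing_speed_mps: float) -> float:
    if closing_speed_mps <= 0:   # gap is opening or static: no collision course
        return float("inf")
    return range_m / closing_speed_mps

def should_emergency_brake(range_m: float, closing_speed_mps: float,
                           ttc_threshold_s: float = 1.2) -> bool:
    # No neural network in the loop: a fixed rule you can test exhaustively.
    return time_to_collision_s(range_m, closing_speed_mps) < ttc_threshold_s
```

A fixed rule like this can be tested across its entire input range, which is what makes it verifiably reliable; a neural network interpreting camera pixels cannot be exhaustively checked in the same way.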
But, with a Tesla, this system is handled by the driving AI and uses sensors that require computer vision AI to interpret. It is a wildly broad scope, and therefore it cannot become reliable. And I think you’ll find that reliable emergency braking is a pretty important ‘must-have’ in an autonomous vehicle!
And it goes even further than this. Using those 3D maps and multiple sensor types severely constrains the task for the driving AI. It has far less to interpret and far fewer variations to take into account. As such, according to Floridi’s Conjecture, it can become more reliable. But without this additional data, Tesla’s FSD has a lot more to do and a much broader application, meaning, by the same conjecture, its driving AI can never be as reliable.
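As a rough illustration of how a map constrains the task: with a prebuilt 3D map, facts like speed limits, lane counts and stop-line positions become simple lookups rather than things the driving AI must infer from pixels on every frame. The fields and segment IDs below are made up for the example; real HD-map formats are far richer.

```python
# Hypothetical HD-map prior: facts the driving AI no longer has to infer.
from typing import Optional

HD_MAP: dict[str, dict] = {
    "seg_042": {"speed_limit_mps": 13.4, "lane_count": 2, "stop_line_m": 85.0},
}

def segment_context(segment_id: str) -> Optional[dict]:
    """With a map, road context is a lookup. Without one, the vision AI must
    reconstruct all of it from camera frames, in every lighting condition."""
    return HD_MAP.get(segment_id)
```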
Here is the especially crucial element. Floridi’s Conjecture states that no matter how much additional data, training or neural network optimisation you throw at a model, there will always be a trade-off between scope and reliability.
In other words, it doesn’t matter how many billions of dollars Musk dumps into FSD development. Because the system makes no attempt to constrain the AI’s task, it will always suffer from erroneous outputs, be unreliable and, ultimately, be unsafe.
This is a huge problem for Tesla. Their current valuation is entirely based on the fantasy that they will unlock autonomous driving and dominate that market. In fact, if you exclusively value Tesla as a car manufacturer (in terms of their sales numbers and profit margin), the company is actually worth less than its debts. If Tesla’s self-driving bubble bursts, the company faces collapse (read more here). And Floridi’s Conjecture threatens to do exactly that.
Now, some have asked whether Floridi’s Conjecture proves Tesla’s negligence, as it loosely implies that Musk is selling a deadly product. But I don’t think that argument can be used in a court of law, and I also don’t think we need it to prove Tesla’s negligence. There are dozens of cases of FSD killing people because Musk and Tesla marketed FSD as an entirely safe autonomous system when it wasn’t. In fact, before Trump took office, the DoJ was investigating and moving to charge Tesla for wire fraud and possible manslaughter over these issues. Who knows, now that Trump hates Musk, maybe they’ll pick this case back up.
My point being, we don’t need this conjecture to condemn Tesla. But it does prove that Musk is an idiot who knows nothing about the science behind AI and is therefore, in true Dunning-Kruger fashion, leading Tesla down a dead end.
Sources: SSRN, Will Lockett, Will Lockett, Will Lockett, Will Lockett
Thank you for the Floridi reference. I had long believed that with a constrained problem set (e.g. reading medical scans or finding fraud in medical billing data) AI should be great, and that, obviously, the larger the data set and the set of questions, the worse it performs. Now I have a name for that.
This: "But, with a Tesla, this system is handled by the driving AI and uses sensors that require computer vision AI to interpret. It is a wildly broad scope, and therefore it cannot become reliable. And I think you’ll find that reliable emergency braking is a pretty important ‘must-have’ in an autonomous vehicle!"
Indeed, and AI interpretation takes time, whereas hard sensors operate at wire speed.