
Tesla is taking an absolute beating at the moment. Over here in the UK, they have halved their lease prices in a desperate bid to drum up more customers, but even at these insanely cheap rates, no one is buying. I guess we Brits don’t want to be associated with these neo-Nazi-mobiles. But Tesla’s woes go far deeper than its nosediving sales figures. The only reason Tesla is still remotely valuable is that investors hope it will crack automation. These hopes have been significantly dented by Tesla’s terrible Robotaxi rollout, which so far has done nothing but prove that their Full Self-Driving (FSD) system is utterly woeful. But this hollow narrative could soon be completely destroyed, thanks to a giant of the digital ethics world, Professor Luciano Floridi, who may have just proven that FSD can never work.
To understand this, we need to understand how FSD differs from other self-driving systems.
Almost all autonomous cars are set up in a similar manner to Waymo’s. The vehicles carry a huge array of cameras, ultrasonic sensors, radar and lidar to sense the world around them, and they run multiple systems on top of those sensors. For example, an automated braking system, which can override the driving AI, runs off the lidar, radar or ultrasonic sensors. There may also be several AIs running at once, where one uses the cameras and computer vision while another uses lidar. And these AIs don’t just rely on the vehicle’s sensors; they also use detailed 3D maps of their operational area to work out where they are and what they need to do at that location. That way, the burden placed on the AI to read the road and decide which actions to take is substantially reduced. All of these systems then work together to operate the vehicle, so that anomalous actions, AI hallucinations or sensor errors can be caught and mitigated. These layers of redundancy and ‘hard’ safety nets, such as the separate automated braking system, don’t just make the vehicle safer; they also reduce the demands on AI performance, making it easier and cheaper to develop the vehicle into a usable self-driving car.
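To make that layering a bit more concrete, here is a rough Python sketch of the general idea. To be clear, this is not Waymo’s (or anyone’s) actual software, and every name, threshold and speed figure is invented purely for illustration: several independent perception pipelines are fused so that a single hallucinating model gets outvoted, while a separate hard safety net running off the radar can override the planner outright.

```python
# A deliberately simplified sketch of a layered, redundant driving stack.
# Illustrative only: every name and number here is made up.
from dataclasses import dataclass
from statistics import median


@dataclass
class Estimate:
    source: str      # "camera", "lidar", "radar", ...
    gap_m: float     # estimated distance to the nearest obstacle ahead, in metres
    trusted: bool    # did this pipeline trust its own reading?


def fused_target_speed(estimates: list[Estimate]) -> float:
    """Fuse independent perception pipelines so one hallucinating model is outvoted."""
    usable = [e.gap_m for e in estimates if e.trusted]
    if not usable:
        return 0.0                             # nothing trustworthy: stop
    gap = median(usable)                       # median is robust to a single bad reading
    return max(0.0, min(30.0, gap - 10.0))     # crude gap-to-speed rule, in m/s


def emergency_brake(radar_gap_m: float) -> bool:
    """'Hard' safety net: runs off radar alone, independently of the driving AI."""
    return radar_gap_m < 5.0


def control_step(estimates: list[Estimate], radar_gap_m: float) -> float:
    """The safety net always wins, regardless of what the AI planner wants."""
    if emergency_brake(radar_gap_m):
        return 0.0
    return fused_target_speed(estimates)


# Example: the camera pipeline hallucinates a clear road, but lidar and radar
# disagree, so the fused plan stays cautious, and the radar-only brake
# overrides the planner anyway.
readings = [
    Estimate("camera", gap_m=80.0, trusted=True),
    Estimate("lidar", gap_m=4.0, trusted=True),
    Estimate("radar", gap_m=4.5, trusted=True),
]
print(control_step(readings, radar_gap_m=4.5))  # 0.0, i.e. full stop
```

The point isn’t the specific numbers but the structure: the driving AI can be badly wrong and the vehicle can still behave safely.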
Tesla has done away with any notion of system redundancy or safety nets. Instead, FSD takes a far more stripped-back approach.
Their vehicles use just nine cameras and AI computer vision to sense the world around them. That’s it. No separate automated braking system. No lidar or radar to provide a backup sensor type if the lighting screws with the cameras. Furthermore, Tesla doesn’t use 3D maps to give the AI an understanding of where it is; this AI has to read the road with no help. That makes it a far cheaper system to install, but there is no system redundancy and no safety net. For this to be even remotely safe, the AI has to be nearly 100% accurate. And even then, there is the risk that conditions will obscure the cameras’ vision, rendering the system unsafe.
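For contrast, here is what the same control step looks like in the spirit of that camera-only setup. Again, this is a purely hypothetical sketch, not Tesla’s real code: one sensor type feeds one model, and nothing independent exists to catch its mistakes.

```python
# A hypothetical camera-only control step, for contrast with the sketch above.
# Illustrative only; not Tesla's actual software.
from typing import Optional


def camera_only_step(vision_gap_m: Optional[float]) -> float:
    """One sensor type, one model, no independent safety net to override it."""
    if vision_gap_m is None:        # glare, fog, darkness, a dirty lens...
        return 0.0                  # best case: the car simply gives up
    # Whatever the single vision model believes is what the car does.
    return max(0.0, min(30.0, vision_gap_m - 10.0))


# If the vision model hallucinates a clear road (80 m of space that isn't there),
# nothing is left to contradict it.
print(camera_only_step(80.0))  # 30.0 m/s, full speed ahead
```

Whatever the single vision model believes is what the car does, which is why that model has to be nearly flawless.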
So, the question is, can anyone, let alone Tesla, make an AI capable of being totally reliable in such a broad application as driving?
Quick side note: you might think driving is a pretty constrained task; there are set rules on the road, after all. But it isn’t, because of the sheer number of “edge cases”: novel situations the AI has never encountered before. When driving, we have to deal with the chaos of weather, other humans, insufficient road signage and even wild animals. AI really struggles with these external factors, which makes driving an incredibly broad application for this technology. In other words, it has to be damn good at a lot of different things at the same time.
This is where Floridi’s Conjecture comes in. I have to thank Prof. J. Mark Bishop for telling me about this.