Thank you for the Florian reference. I had long believed that with a constrained problem set (e.g. reading medical scans or finding fraud in medical billing data) AI should be great, and that the larger the data set and the set of questions, the worse it performs. Now I have a name for that.
This: "But, with a Tesla, this system is handled by the driving AI and uses sensors that require computer vision AI to interpret. It is a wildly broad scope, and therefore it cannot become reliable. And I think you’ll find that reliable emergency braking is a pretty important ‘must-have’ in an autonomous vehicle!"
Indeed, and AI interpretation takes time, whereas hard sensors operate at wire speed.
I have always disliked the generative AI thing because it seemed like statistical BS generation. Florian has put into formal logic what my intuition was. If this implodes Tesla, xAI, and OpenAI, I will be happy.
* Floridi
We can probably call it the Floridi uncertainty principle.