AI Has A Massive Human Problem
AI doesn't actually replace human workers; it just makes their lives miserable.
Do you remember Amazon’s “just walk out” grocery stores? The idea was that computer-vision cameras, shelf sensors, and AI would track the items a customer picked up, then charge their Amazon account once they left, removing any need for a cashier or self-checkout. The innovation was hailed as one of the first cases of AI directly replacing human workers, and as a way to lower the cost of operating a store.

In reality, it was neither. A recent report found that over a thousand remote workers had to be hired to monitor the video feeds and verify around 70% of customers’ purchases, because the AI was consistently getting them wrong. That much labour isn’t cheap, even when it’s outsourced overseas, and Amazon’s “just walk out” AI ended up significantly more expensive than simply hiring regular cashiers. As a result, Amazon has struggled to sell the system to third parties and has had to switch its own grocery stores to a fancy, non-AI self-scan system instead.

This tale is far from unique in the AI world, but it perfectly highlights a massive problem with AI that no one is talking about: it simply cannot fully replace humans, even at the simplest of tasks. Let me explain.
When I say this tale isn’t unique, I mean it. Cruise’s self-driving cars need a remote worker to intervene every two-and-a-half to five miles to keep the AI from making potentially dangerous mistakes on the road, and multiple AI-powered “smart assistants” quietly hand their more complex queries over to humans. But why?
Well, to put it bluntly, even the most advanced AIs aren’t reliable enough to be trusted to make decisions on their own. The algorithms behind them can find trends in their training data that simply don’t exist in real life, misinterpret real trends, or combine trends incorrectly, all of which leads to undesirable outputs. You can see this clearly with generative text AIs, all of which still have problems with making up incorrect facts. Training on larger datasets can reduce these issues (to learn more about AI training, click here), but it doesn’t eliminate them, and the problem is widely recognised: Harvard Business Review has argued that, precisely because of this lack of reliability, AI should only be used to inform human decisions, never to make decisions on behalf of humans.
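To make that concrete, here is a minimal, hypothetical sketch of the failure mode, with the model, data, and numbers entirely made up rather than taken from any system mentioned above. A flexible model fitted to pure random noise will happily report a convincing “trend” that evaporates the moment it sees new data:

```python
# A toy illustration of a model "finding" a trend that doesn't exist.
# Everything here is hypothetical; no real product's model is shown.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 20)
y = rng.normal(size=20)  # pure noise: by construction, there is no trend

# A high-degree polynomial is flexible enough to chase the noise.
model = np.polynomial.Polynomial.fit(x, y, deg=15)
y_fit = model(x)

# In-sample, the "trend" looks convincingly real...
r2_in = 1 - np.sum((y - y_fit) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R² on the data it was fitted to: {r2_in:.2f}")  # close to 1

# ...but on fresh data from the same process, it falls apart.
y_new = rng.normal(size=20)
r2_out = 1 - np.sum((y_new - y_fit) ** 2) / np.sum((y_new - y_new.mean()) ** 2)
print(f"R² on new data: {r2_out:.2f}")  # typically negative
```

Modern AI models are vastly more complex than a polynomial, but the same basic failure, mistaking noise for signal, is what surfaces as hallucinated facts and phantom shopping baskets.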
As such, any AI that promises to replace even the most basic human jobs, like Amazon’s “just walk out” stores or Tesla’s self-driving, should be taken with a massive pinch of salt. The evidence is pretty clear that these systems can’t yet be relied upon, and that they still require serious human supervision.
But, even if we developed some sort of checking system to ensure these undesirable outcomes don’t happen, AI would still have a serious human problem. You see, making an AI more accurate and more useful doesn’t just require more training data; it requires more of the right type of data, otherwise it can go haywire. OpenAI found this out with GPT-3, which had a tendency to generate hateful, racist, sexist and violent remarks. So, for GPT-4, they decided to screen the training data, ensuring the model wasn’t trained on the kind of text they didn’t want it to replicate. They didn’t do this task in-house. Instead, they hired a small army of remote workers in Kenya for less than $2 an hour and had them sift through the texts, labelling and filtering out anything offensive. Because OpenAI scraped much of this text from the worst corners of the internet (mainly to avoid copyright issues), some of it described horrific situations in graphic detail, including child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.
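Time’s reporting describes the broad shape of this pipeline, but OpenAI’s actual tooling isn’t public, so here is only a rough, hypothetical sketch of the approach: humans hand-label a sample of text, those labels train an automated filter, and the filter then screens the rest of the corpus.

```python
# A generic sketch of human-in-the-loop data screening. The classifier,
# data, and pipeline are illustrative assumptions, not OpenAI's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: human workers read raw text and label it. This is the slow,
# psychologically costly part described above (1 = offensive).
human_labelled = [
    ("a pleasant passage about gardening", 0),
    ("an innocuous product review", 0),
    ("graphic violent content (placeholder)", 1),
    ("hateful abusive content (placeholder)", 1),
]
texts, labels = zip(*human_labelled)

# Step 2: those human labels train an automated filter.
screen = make_pipeline(TfidfVectorizer(), LogisticRegression())
screen.fit(texts, labels)

# Step 3: the filter screens the full corpus, so the main model is
# never trained on text resembling the flagged examples.
corpus = ["another gardening passage", "more abusive content (placeholder)"]
clean = [doc for doc, flag in zip(corpus, screen.predict(corpus)) if flag == 0]
print(clean)
```

The automation in steps 2 and 3 is cheap; it’s step 1, the human labelling, that scales with the amount and nastiness of the raw data.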
Subjecting a person to such mentally damaging content for $2 an hour can hardly be described as ethical. However, paying these people an ethical wage would make such AI models economically unviable. What’s more, like every other AI company out there, OpenAI’s plan for making its models more capable and more reliable is simply to feed them exponentially more data. This means that solving AI’s human-supervision problem requires exponentially more of this dire exploitation of humans.
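To see why that bites, here is a back-of-envelope calculation in which the reviewer throughput is an entirely made-up round number; only the roughly $2-an-hour wage comes from the reporting. If each model generation trains on ten times more text, the human screening bill grows ten-fold with it:

```python
# Back-of-envelope only: throughput is a made-up assumption, and real
# screening samples the data rather than reading all of it exhaustively.
WORDS_REVIEWED_PER_HOUR = 20_000  # hypothetical per-worker throughput
WAGE_PER_HOUR = 2.00              # roughly the rate reported by Time

for generation, dataset_words in enumerate([1e9, 1e10, 1e11], start=1):
    hours = dataset_words / WORDS_REVIEWED_PER_HOUR
    cost = hours * WAGE_PER_HOUR
    print(f"Gen {generation}: {dataset_words:.0e} words -> "
          f"{hours:,.0f} review hours, about ${cost:,.0f}")
```

Whatever the real numbers are, the shape is the same: the human cost grows in lockstep with the data, and the only lever left is pushing wages down.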
This is hardly the utopian AI future we were promised, is it?
AI itself isn’t bad. It can be incredibly powerful when used to augment humans rather than replace them. I myself use AI search engines, AI spell checkers and AI image editing. Meanwhile, AI diagnostic tools in medicine are helping doctors treat some of the hardest-to-cure diseases known to man, and AI CAD software is helping engineers create some of the most efficient designs humans have ever come up with. These AI models need so little data that they don’t require the heinous exploitation of remote workers to sift through reams of potentially psychologically damaging text, and they are explicitly designed and used to increase human productivity, not to replace humans.
So, why are AI companies spewing out hollow promises to replace humans? Well, it games the stock market. Investors, particularly investment banks, seem to have decided that AIs will be able to do jobs more cheaply than humans, and that any company that can create job-replacing AIs could therefore be worth billions. They then fight over the shares and drive the stock price up. Sadly, as I have covered before (read here), AIs are so expensive to run that, in the vast majority of cases, they aren’t more cost-effective than hiring a human. And, as we have covered today, even when they are, you still have to hire a vast number of people to keep the AI in check, yet again making it more expensive than hiring humans in the first place. But these AI companies and institutional investors don’t care. They still make money, because they buy when the stock price is low, sell when it’s high, and wash their hands clean before the crushing weight of reality hits.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and follow me on Bluesky or X, and help get the word out by hitting the share button below.
Sources: The Guardian, Euronews, HBR, Time