AI Has A Serious Skill Problem
But the solution might surprise you.

A research charity recently found that up to three million low-skilled jobs could be lost to AI in the UK by 2035. There are a plethora of studies like this floating around at the moment, claiming huge swathes of the global job market will be replaced by AI, but they all seem to miss something utterly crucial — the impact this kind of AI rollout will have.

You see, AI has a major skill problem. It’s not that the AI isn’t skilled (though it definitely isn’t, but that is a conversation for another day). No, it is more that it greatly exacerbates the already dire issues with skill in the modern economy.

This AI rollout won’t decouple economic growth from labour, as it is proclaimed to do. Instead, it will deskill labour across the board and deal a catastrophic amount of economic damage. Let me explain.
AI only really increases productivity for “low-skill” jobs, such as taking meeting notes and providing customer service.
After all, AI gets things wrong constantly. The errors it makes are hilariously called “hallucinations”, even though that is just a blatant PR attempt to anthropomorphise the probability machine. But these errors make using AI to augment skilled tasks incredibly difficult. Ultimately, it takes a skilled worker a tremendous amount of time and effort to oversee AIs used in this way, identify their errors, and correct them. In fact, the time and cost wasted overseeing the AI are more often than not greater than those saved by it. This is one of the main reasons MIT found that 95% of AI pilots failed to deliver positive results, and why METR found that AI coding tools actually slowed skilled coders down.
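To see why oversight can wipe out the gains, here is a toy back-of-envelope model. All the numbers and the function itself are invented purely for illustration; they are not from any of the studies mentioned.

```python
# Toy model: net time saved when a skilled worker uses AI on a task.
# All figures are hypothetical, purely for illustration.

def net_minutes_saved(task_minutes: float,
                      ai_speedup: float,
                      review_minutes: float) -> float:
    """Minutes the AI saves on the task, minus the minutes spent
    reviewing and correcting its output."""
    time_saved = task_minutes * ai_speedup
    return time_saved - review_minutes

# A 60-minute task where the AI does half the work,
# but checking and fixing its output takes 40 minutes:
print(net_minutes_saved(60, 0.5, 40))   # -10.0 → a net loss
```

The point of the sketch is simply that whenever review time exceeds the time the AI saves, the “productivity tool” is a net drag — which is what the MIT and METR findings suggest happens in practice for skilled work.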
This problem isn’t going to go away either. Data scientists have known for years that AI is just a probability machine subject to diminishing returns, and so will always have some chance of getting things wrong. In fact, OpenAI recently admitted in its latest research paper that more data and more computational power won’t reduce the level of AI “hallucinations”, and that there is currently no viable way to do so either.
However, these hallucinations aren’t as much of a problem for these “low-skill” jobs and tasks. The people doing these jobs are often inexperienced and make errors, but that is okay, because the work is then fed to skilled workers down the line. In these applications, the AI smooths out the output of these low-skilled workers, making it easier for the high-skilled workers to utilise. For example, an AI taking meeting notes might make some mistakes, but likely fewer than an inexperienced note-taker unfamiliar with corporate language.
Okay, so AI can augment or automate these “low-skill” tasks or jobs, then?
Well, it can. But it causes a lot of damage, thanks to cognitive offloading, skill erosion, and blocking expertise generation.
Let’s say a skilled worker uses AI to automate a “low-skilled” task. The cognitive load of such a task has been taken away, and their productivity increases. Great!
But, like a muscle, expertise needs to be used to stay strong. As such, this worker will lose expertise in this area and become more reliant on the AI. Unfortunately, this means that they will lose the expertise needed to find and correct the AI’s hallucinations. Then, damage is done downstream as these errors are passed on.
This problem is made even worse if all skilled workers within an organisation use AI in this manner, as it isn’t just an individual’s skills that are being eroded, but an organisation’s collective expertise. Realistically, if AI is rolled out organisation-wide, it can completely destroy necessary expert knowledge. In fact, studies have already found evidence of this happening.
Needless to say, the long-term damage of such skill erosion can be devastating. But it is compounded even further by the fact that AI companies regularly update their models, which can stop old prompts from working. If the skill erosion is severe enough, then a worker or organisation might not have the expertise left to figure out new prompts that do work.
A little side note here: the reason I put “low-skill” in quotation marks is because “low-skill” work doesn’t exist. What we consider low-skill, like data entry or customer service, often requires critical expertise on how an organisation runs, some kind of talent with software, or serious people skills. That is why even “low-skilled” workers still require significant onboarding to a new job. But I also find that the term “low-skilled” is poorly defined, and lots of jobs with remarkably high technical skill are classified as low-skilled and are being actively replaced by AI.
So, even if you try to only augment or automate “low-skill” tasks and jobs with AI, this skill erosion can cause a potentially fatal destruction of expertise.
Okay, but what about augmenting people in “low-skill” jobs? The data shows that this can improve productivity.
Well, yes, but the studies also show that the cognitive offloading of these tasks prevents these workers from gaining expertise and becoming skilled. Let’s not forget that the intern or graduate taking meeting notes is learning how a corporation works and is on the path to becoming a manager or executive. The vast majority of highly skilled workers start in these positions, and the expertise they gain from them is key to their progression.
So, by automating or even just augmenting “low-skill” jobs, an organisation is stifling its internal talent and preventing the development of internal expertise. It is not an exaggeration to say that this can, and has, killed businesses. If an entire industry uses AI like this, it could cause a severe economy-wide talent drought that will deliver enormous damage and take years to resolve.
Basically, AI in the workplace is nowhere near as good a productivity tool as promised, and this problem isn’t going to get fixed any time soon. The only places AI can be deployed to increase productivity also destroy critical skills and severely damage an entire organisation or industry.
And AI’s skill issue goes even deeper than this.
You would think that managers and executives would be able to see this problem with AI. Surely, with expertise in the jobs they oversee, they would recognise that AI isn’t an effective productivity tool and that it is making their workers dumber? But they don’t.
For decades, studies have found that managers and executives often lack the technical skills required to fully understand the jobs they manage. Instead, they are experts in “efficiency”. Indeed, many of them have business degrees and no tangible or up-to-date experience in the sector they preside over.
This means that they see these AI tools through the same lens as the “low-skilled” workers. They lack the expertise to identify when these tools are getting things wrong and the damage they can cause. This also means they are ill-equipped to notice when their workforce is losing valuable skills until it is too late.
In other words, many of the organisations that implement AI to augment or automate these “low-skilled” jobs are blind to its issues and will only discover something is wrong when it is too late to stop the skill erosion and the damage has become painfully evident.
Okay, so what is the solution?
Well, we can take the time to recognise AI’s limitations and its impact on skills. But that isn’t the core issue here.
AI is somehow laser-focused on exploiting the failings of our modern economy in how it treats skill. Modern, highly vertical corporate hierarchies separate decision-makers from expert workers, making organisational decision processes not only less equitable but also less informed. This, combined with a ruthless drive for short-term ‘efficiency’ gains, also means that corporations are not nurturing workers and enabling them to become skilled. In this economic and corporate landscape, AI can camouflage its damaging shortcomings, embed itself, and become a cuckoo in the nest. No level of AI awareness can counteract this structural weakness.
So, the solution is obvious. Workers need to unionise so that decision-makers have a unified voice of experience to make them aware and hold them to account. Organisations need to adopt a more horizontal structure, bringing decision-makers and workers closer together and working collaboratively, not hierarchically, making the decision process more informed and more equitable. Corporations need to stop gamifying their stocks and focusing on the next quarter and instead concentrate on deep, long-term growth to justify investing in their workers. One way to do that is not to go public, or even better, to become a workers’ co-operative.
If these reforms were adopted across the globe, our economy would be protected from AI skill erosion and its ruinous effects. It could also focus on delivering actual productivity gains, be far more sustainable, and be far more equitable.
The problems of AI are not about AI at all but about how broken our modern economy is. AI just holds up the mirror so we can notice the cracks. The question is, are we brave enough to look in the mirror and admit we are the ones at fault?
Sources: The Conversation, Will Lockett, The Guardian, A!, Redline, HBR, NA, OpenAI, AIS


Amazing what Milton Friedman and Jack Welch have done for capitalism. Making greed and monopolies good and making the corporation beholden first and foremost to the c-suites and the shareholders is now exposing where unfettered capitalism leads. But more to the AI point. Ed Zitron's recent podcast had a guest who described some of the shortcomings of AI for coding that aren't very different from some of your points here. Basically, AI starts each task as a tabula rasa, whereas skilled and "low skilled" workers build and learn and grow from each successive task. His guest also talked about how AI will simply cut and paste (copy) code blocks over and over, partially as a result of this, and partially because that is simply how they work...currently.
My question is, why can't AI be trained to use and reuse subroutines? Why can't it be trained to retain and promote recently written code? It doesn't even seem like that would be very difficult, actually. It seems like we are too focused on the MIT study results and the current state of what AI can do. This is troubling to say the least. Perhaps the only saving grace is that the AI bubble may well pop before too long since, as you've stated many times, the circular investing can't last forever, or even much longer (I hope).
The planet's political and economic landscapes need a serious reboot, and it is assuredly coming. The only questions are when, and who and how many will it decimate. Thank you for your great pieces. I read them all and share them liberally. Here's to manifesting a better future where the musks, zucks, altmans and trumps share hitler's rank in the annals of infamy.