AI Deskilling: We Warned You.
Smart machines, dumb users?

We have all heard of AI brain rot, AI psychosis, and AI slop. If you spend any time online, it’s quite obvious that the combination of social media and AI isn’t exactly healthy for your neurons. What isn’t talked about as often is the mental impact of using AI at work, despite it being potentially more damaging. Thankfully, this issue is now beginning to receive headline coverage. But most publications fall short of explaining why using AI at work can be so harmful and totally neglect to mention that we were warned about all of these issues from the start. Welcome to the world of AI deskilling.
Business Insider recently published one of these articles. The piece profiles Josh Anderson, a highly experienced software consultant, who shared his experience developing a new app, Road Trip Ninja. He conducted a little experiment and tried to get AI to write the entire codebase. Initially, things went great, but as the code ballooned past 100,000 lines and interactions with the chatbot grew from minutes to hours, Anderson grew increasingly frustrated as progress slowed to a halt.
Of course, this was just an experiment; Anderson could have stepped in at any time and coded the app himself, even if sorting out such a huge block of AI-generated code with very few in-code comments is insanely difficult. But Anderson’s experience highlighted a glaring problem. You see, even Anthropic has found that using generative AI coding tools dramatically reduces a coder’s skills in debugging and code comprehension. So, with the direction the software industry is heading, could a coder actually step in and finish what the AI couldn’t? The article explains that Anderson’s experience “raised questions about the real impact of AI on skill retention and development” and that it “highlights a broader concern among workplace researchers: the risk of deskilling in an environment increasingly reliant on AI.”
Given that this is an industry-wide issue, the article didn’t just look at Josh’s experiment. It also highlighted that developers admitted to finding tasks considerably more challenging during Claude’s recent outage, which rendered their AI assistant useless and indicated a “dangerous dependency.”
This issue goes by a variety of names, with similar explanations behind each, which the article only glosses over.
For example, John Nosta calls it the “AI rebound effect”: an AI-driven increase in productivity that masks a decline in skill. As he put it:
“When automation handles the details, situational awareness dulls. And in that context, we scan less, anticipate less, and make fewer micro-adjustments. Simply put, the mental models we rely on to navigate complex situations shrink because the system is doing what we once did ourselves. Over time, this isn’t just about pausing a skill; it may be more akin to erosion. And when the technology steps away, the skill doesn’t simply return to baseline. It can come back lower.”
In other words, skills and expertise are like muscles: they must be exercised to be maintained, or they will waste away. So, automating these decisions with AI can lead to us losing critical skills.
Dr. Rebecca Hinds calls this “cognitive debt.” As I covered in a previous article, Dr. Hinds is equally worried about the atrophying of critical skills. She found that when AI is used as a shortcut to automate tasks, expand work scope, or shrink the workforce, workers lose critical expertise and skills because those skills are no longer being exercised. At the same time, workers develop a dangerous level of false confidence, making it more likely that mistakes go unnoticed. Dr. Hinds instead suggests that AI should be used in tandem with experts, presenting them with options while the expert remains the one making the decisions. Sadly, that is not how AI is being used, and whether using AI like this genuinely increases productivity remains questionable to many.
Colloquially, this issue is known as AI deskilling. It is commonly understood to occur when AI is used to automate or augment workers’ tasks and ends up shouldering most of their cognitive load. But it is that load that builds and maintains workers’ critical skills, so deploying AI in this manner inherently erodes those skills across a workforce.
This isn’t a hypothetical. We have known about this for quite a while.
For example, there is the 2023 study from JYX, which analysed how automation in an accountancy firm directly led to skill erosion and a marked reduction in critical thinking skills (complacency) that was negatively impacting the business. Or the early 2025 study by Carnegie Mellon, backed by Microsoft, which surveyed 319 “knowledge workers” and found that generative AI automation and augmentation caused a serious loss of critical skills and critical thinking. Or what about this more recent study, which found that generative AI augmentation and automation in medicine are eroding physicians’ critical skills, meaning performance would drop below their previous non-AI baseline if AI were removed?
If you think all of this means AI can still be used to automate or augment low-skill jobs, read my previous article to find out why this simply isn’t true.
Okay, but why does it matter that these workers are losing their skills and expertise? Take the physician example. This study found that generative AI augmentation actually improves the physicians’ efficiency. So, does it matter that these skills are gone if they are no longer needed?
Well, yes, it does matter, because this expertise and these skills are still very much needed and haven’t been truly replaced.
Firstly, you can’t rely on AI. Take the Claude example from before. These AIs sometimes suffer outages, and a marked drop in performance or capability during those periods could be unacceptable. Imagine a physician struggling to diagnose patients because ChatGPT was offline! These AIs are also regularly updated or have their functionality tweaked, which can disrupt how they integrate into a work environment and force workers to fall back on their critical skills to adapt.
Then, there is the issue of training these AIs. To operate in these fields, they need to be constantly trained on an enormous amount of detailed data to ensure the models are as accurate and up-to-date as possible. These skilled experts provide that data, but if they are being deskilled, where is that data going to come from? There is a possibility that deskilling a workforce could make these AIs noticeably worse.
Let’s also not forget that these AI companies are not profitable and could go bankrupt and disappear in the not-too-distant future. We really shouldn’t make ourselves dependent on them.
But we also seem to forget that AI is not a complete solution and that AI augmentation doesn’t mean these skills are no longer required. We saw this clearly with Josh Anderson. Anderson had to put in exponentially more effort to get the AI to complete the task because it was incapable of doing so on its own. The job really required a human to step in and finish it independently, which calls on every coding skill there is: comprehending the code, following its logic, debugging issues, connecting the separate parts together correctly, and making the whole thing efficient. In other words, the deskilling might go totally unnoticed until the day these skills are needed to finish a critical task, which then can’t be completed.
This was obvious during Amazon’s recent outages. I covered this topic in a previous article, but Amazon has laid off a significant number of engineers and has effectively tried to replace them by augmenting the engineers it has left with AI. However, it turns out many of these laid-off engineers were highly skilled in preventing and resolving outages, and this expertise was lacking in the remaining teams. This has created a wave of gigantic, frequent, and extremely costly outages. Who would have guessed?
Oh, and all of these attempts are likely in vain. Remember that Anthropic study from before? It found that the productivity gains from using AI coding tools were “failing to reach statistical significance.” So, at least in some industries, there is no measurable payoff to offset the huge downside of deskilling. It is a lose-lose situation. Generative AI coding tools are more expensive, but they fail to deliver a boost in productivity while also causing the coders to rapidly lose critical skills that are still required… Sounds like a bum deal to me.
But is this really an industry-wide issue?
Well, for developers, it might be. Everything we have discussed could explain the recent findings of METR. Their previous 2025 survey revealed that AI coding tools slowed down expert coders by 20%, as the amount of time spent correcting the AI exceeded the time saved by using it. However, the 2026 survey was deemed “unreliable” because too few coders were willing to work without generative AI assistants. Why would they refuse? As Anthropic found, it isn’t as though these coders are making considerable productivity gains from AI, so that can’t be the reason. But if much of the industry has experienced AI deskilling, that would explain it: many of these coders may now lack the skills to code without AI help and so are unwilling to try.
There is also a secondary issue here. Roughly 84% of coders use generative AI coding assistants. That means surveys like METR’s might struggle to find a control group of coders who aren’t AI-deskilled and capable of coding well independently. In other words, this industry-wide deskilling issue could skew their future studies.
What does all of this mean? Well, without serious restrictions in place, AI is a skill and expertise bomb. This would be a problem even if it were being deployed this way in only a few small, isolated cases. But AI is being slapdash-deployed across whole sectors and industries, and, as such, it threatens to erode our collective skills and abilities. This makes it a major problem that will affect us all. How we solve this problem is a conversation for another day. AI regulations, expanding workers’ rights, and corporate restructuring are just a few of the possible options. But for now, I am just grateful that this issue is hitting the headlines, even though it deserves far, far more coverage.
Thanks for reading! Everything expressed in this article is my opinion and should not be taken as financial advice or accusations. Don’t forget to check out my YouTube channel for more from me, or Subscribe. Oh, and don’t forget to hit the share button below to get the word out!


I can't imagine letting an AI write all of my code and then just going, "Yep, it seems to work", signing off on it, and shipping it. How many gremlins are hidden in there? Is it efficient at all? How am I going to maintain it if I had to have AI write all of it in the first place and I have no idea how it works? People don't seem to understand that "coding" isn't so much the actual act of writing code; it's more about designing the program and the code, maintaining it, and being accountable for it.
I'm still hearing AI finance bros talk about how AI is improving and saying, "It can now code for 24 hours straight by itself!" That's misleading:
- The implication is that when it gets to 40 hours, it will replace a full-time worker. If you believe that, you have no idea how coding works.
- If the AI tools are becoming more efficient - as they will need to be to become profitable - then an increase in "time spent coding" means they're actually getting slower at delivering results. You care about the output here, not the labor.
- With programming, less is more. You generally want to write the fewest lines possible to accomplish something. If someone dumped a mountain of code on me and told me it took a computer 24 hours to generate, I would be scared. If they told me it took 40 hours, I'd be terrified.