AI Is A Hard Drug
And we need to treat it as such.

Like many others, I have helped friends who were addicted to damaging drugs. The media rarely reflects the true nature of addiction accurately. Addicts are, more often than not, high-functioning and fit into society perfectly. You could walk past them on the street and not know anything was wrong. But their affliction is eating them from the inside out, destroying their capability, cognition, and well-being. Still, they feel they need these substances to survive and will do Olympic-level mental gymnastics to justify their consumption. “I need it to work harder,” “It keeps me relaxed,” and “I only use it when I need to”: these aren’t excuses, but a desperate attempt to justify a detrimental band-aid rather than address the underlying problem itself. Often, this is because the underlying problem is not in that person’s control. After going through this process a few times, you see the same pattern of justification and denial of the real problem play out again and again.

What I did not expect was to see this exact pattern in an AI study. Dr Rebecca Hinds and Dr Bob Sutton collected first-hand experience from more than 100 executives, technologists, and researchers to create a “blueprint that can help drive AI success”, but it reads more like an addict at an intervention, desperately trying to acknowledge the damage while defending their use. Reading this report made me realise that AI needs to be treated as a hard drug. Let me explain.
The ‘blueprint’ was published in a new report from the Work AI Institute, a research organisation run by Glean AI, a generative AI platform for businesses. Straight away, there is a potential conflict of interest here. The Work AI Institute, at the very least, looks like yet another industry “think tank”.
But despite this potential bias, research leader Dr Hinds still admitted some devastating findings about AI in the workplace, both in the study itself and in her summary of it. She claimed that when workers use AI, “There’s often this illusion that you have more expertise, more skills than you actually do,” leaving office workers feeling smarter and more productive even as their core skills are actively eroded. The report also found that when AI is used to replace human judgement, it can create hollow, alienating work.
The report concluded that AI can create either a cognitive dividend or a cognitive debt. Essentially, it found that when AI is used as a partner alongside an expert to supplement their expertise, it can free up time and sharpen judgement, creating a cognitive dividend. However, when it is used as a shortcut, such as automating a task to expand each worker’s scope or shrink the workforce, it erodes workers’ abilities and fosters false confidence, creating a seriously damaging cognitive debt.
While I do agree with the cognitive debt analysis, the cognitive dividend analysis gives me flashbacks to “The Wolf of Wall Street”: all that humming and chest-beating.


