The Real Value Of AI
It isn't productivity

With it all kicking off among Anthropic, the Pentagon, and OpenAI, I think it is about time I explained what the real value of AI is, as this is the perfect lens for understanding what the hell is actually going on with AI right now. You see, AI is a Trojan horse. It is an exploitative oligarchy parading as a productivity tool. So stick with me while we fall down this terrible rabbit hole.
Let’s start with the fact that AI doesn’t broadly improve productivity, isn’t good enough for automation (read more here), and has been repeatedly shown to damage workers’ skills (read more here). Indeed, despite large-scale AI adoption, there hasn’t been a surge in productivity.
But that doesn’t make any sense. We have been told that AI is so valuable, and that corporations are pouring billions of dollars into it, because it will be the next industrial revolution: it will unleash new levels of automation, boost productivity and deliver an economic miracle. However, we can see that this simply isn’t true. So, what is the real reason AI is perceived as so valuable? What beneficial utility does it actually deliver to its owners, investors and users?
Well, if you look at what AI is actually good at, it becomes quite obvious. It is enabling and empowering a new technocratic oligarchy. That’s right, your favourite chatbot is a perfect tool of authoritarianism masquerading as a slightly more polished version of Microsoft’s Clippy.
How? Well, let’s start with this spat Anthropic has had with the Pentagon because the story is not as simple as it has been portrayed.
The AI Panopticon
The Panopticon was a conceptually and ethically flawed idea proposed by Jeremy Bentham in the 18th century to make prisons far more efficient through perceived surveillance and self-policing. A panopticon has two parts: a central observation tower and a ring of cells facing it. The tower and cells are arranged so that guards can see every bit of every cell and every inmate, but inmates can’t see the guards. This way, the inmates never know whether they are being watched, creating a feeling of constant surveillance. Bentham hypothesised that this would make inmates police their own behaviour and, in turn, allow a single guard to keep the entire prison in check. In short, the watcher sees everything, the watched see nothing, and so the watched behave themselves.
I will give you a moment to let your subconscious figure out why this is a truly awful idea. But the way modern generative AI currently functions at the corporate and governmental levels effectively serves as a digital version of the panopticon, as it is perfect for unseen surveillance.
LLMs like ChatGPT and Claude deeply erode digital privacy. They can accurately identify locations from a single simple photo, effectively geotagging any video or photo ever taken, making them the perfect tool for stalking. Both ChatGPT and Claude have remarkably accurate facial recognition abilities, and while there are safeguards in place to stop this ability from being used for surveillance, these can be painfully easy to overcome. Considering we live in a world where our phones, cars, doorbells and laptops have cameras strapped to them, it is easy to see how these AIs can be used to surveil everybody.
Indeed, Palantir, which functions as a private government surveillance service masquerading as a data analytics company, has its own AI that does just this, and can recognise and track faces and locations. We also know that they use external LLM AIs to integrate, analyse and visualise massive, disparate datasets for defence and intelligence agencies, enabling them to make rapid decisions. In other words, these AIs are used to connect every part of your digital footprint and hand it to the authorities. We also know that Palantir uses both Claude and ChatGPT to do this. In our digital world, that is effectively 24/7 surveillance, and it is already being used against immigrants, as ICE uses all of these services.
But corporations can do the same thing to their employees with LLM chatbots like Claude and ChatGPT.
These bots can be trained on vast amounts of internal data and communications to do anything from flagging ‘poor performers’ to identifying and squashing unionisation efforts. Indeed, many Bossware tools now use integrated LLMs to ‘interpret’ the tone and context of worker communications and behaviour for risk management, productivity assessment, and even behaviour prediction. Enterprise-level monitoring goes one step further. If an organisation pays for the enterprise version of an LLM (ChatGPT and Claude both offer this), then the administrator can effectively see every interaction an employee has with the AI. So, by having employees work with these AIs, employers can surveil every action workers take. Burger King has somehow taken this one step further and integrated ChatGPT into their workers’ headsets to ‘help them with meal prep’, but also to monitor their behaviour. You could call this micromanagement, but it is just an AI panopticon.
This is why I see the Pentagon/Anthropic/OpenAI debacle as total bullshit. Don’t get me wrong, I’m glad Anthropic didn’t want their AIs to be used for “Mass domestic surveillance” and “Fully autonomous weapons.” I’m also damn glad that OpenAI jumping in to replace Anthropic has caused such a colossal backlash that ChatGPT uninstallations have skyrocketed by nearly 300%! But Anthropic is no better than OpenAI in the grand scheme of things. Let’s not forget that Claude was used in the recent strikes in Iran, and both companies are part of our new-age AI panopticon, which enables corporations and governments to mass-surveil us at unprecedented levels. The sceptic in me thinks this was all a marketing ploy by Anthropic, and that everyone switching to Claude should know it is not the ethical alternative they think it is. And as if to prove my point, while writing this, Anthropic is back in talks with the Pentagon…
So, what is so bad about all of this? If you have nothing to hide, why care? Well, because panopticons don’t fucking work!
Bentham assumed the threat of being constantly watched would create order, but he was wrong: all it actually does is create undue anxiety, isolation, distrust and feelings of vulnerability. This makes sense when you think about it for even a second. Being anxious puts you in fight or flight, which can make you more aggressive and less orderly. So, not only is this ethically abhorrent, but it doesn’t even lead to the behavioural changes it was intended to create (see here and here)!
This anxiety is not unfounded. The panopticon famously has a “who watches the watchers?” problem. All disciplinary power is concentrated in a select few and rests entirely on their interpretation. Sadly, observation and objective reality are two very different things, and the watched never know how their actions will be perceived, no matter how well-intentioned they are. Taylor Lorenz has a great video on this, go watch it here.
As a side note, a panopticon only functions if it poses a credible threat to those under it. There needs to be constant, visible enforcement, otherwise those under it lose their fear of the watcher, and the pressure to conform disappears. So, a panopticon doesn’t necessarily reduce the amount of enforcement needed to control a population, and in fact might motivate additional, chaotic, indiscriminate ‘punishment’ to grow this necessary fear and anxiety. As such, I suspect ICE’s very public egregious acts and the current colossal number of corporate mass layoffs exist, in part, to do just this: instil fear of the AI watcher to make the AI panopticon work.
But really, this is all a panopticon is good for, making those under it anxious and therefore easy to manipulate, while concentrating power in the watcher. All that has changed with this modern AI panopticon is that it places the oligarchs who own and operate these AI systems in the position of power, making them judge, jury, and executioner. Literally in some cases.
In a society where the capitalist elites are turning more to monopolistic extraction of power and wealth from the 99%, you can see why AI and its panopticon are seen as so valuable, and why AI’s inability to boost productivity, or even generate a profit, isn’t a problem. It is about manipulation and domination by an elite.
AI Elitism
But this is just the beginning of the rabbit hole because generative AI, particularly the modern LLM variety, inherently enables an entirely new form of pervasive elitism.
Elitism doesn’t mean what many people think it means. It is the belief that a select few ‘elites’ should have greater power than the rest. As such, elitism is more akin to diet-fascism, as it almost always has to turn to baselessly dehumanising others to create a false meritocracy *points at the Trump administration.*
AI enables this in a far more foundational way than we have seen before.
Remember when Grok suddenly started glazing Musk, and saying he was the best at absolutely everything? Even if it wasn’t Musk who made Grok do this, this shows just how much control those who own or operate AI have over their product. In other words, if they want their AI to make certain decisions, they can make it do just that.
But these LLMs are being used by governments, businesses, organisations and individuals to ‘automate’ tasks, jobs and decisions. While we know this likely doesn’t improve their productivity, it does place a significant amount of key decision-making power in the hands of those who control the AI. In other words, the entire structure of AI LLMs is inherently and deeply elitist: by forcing data and decision-making through AI, it concentrates power in the hands of the elitist class.
This is why the revelations that AI LLMs cause burnout and significant cognitive decline are not a problem for AI’s backers. That decline undermines and overwhelms the collective workforce’s decision-making and its inherent power within the system, and hands both to the tech oligarchs and corporate wielders of AI.
Why is this bad? Well, elitism is a fundamentally broken ideology. Hubris syndrome means that the more power you wield, the worse decisions you make. The empathy gap and associated affluenza mean that the more wealthy and powerful you are, the less empathetic you become. So elitism creates a system led by those who don’t care about the majority and make awful decisions, *points to the Trump administration again.* In other words, elitism will always create rampant and wildly detrimental inequality.
Again, it is easy to see why tech billionaires and many C-suite executives see this as valuable, as it concentrates power in their hands. But in reality, it does much more than that.
Flattening Human Value
Remember how I said elitism is diet-fascism? Well, AI LLMs are inherently wildly dehumanising, which is a critical step in solidifying elitism, oligarchy and fascism.
LLMs are dehumanising in so many ways that it is challenging to count them. They dehumanise those whose work they are trained on by failing to compensate them. They dehumanise workers through ‘micromanagement’ and by crushing their creative value. Like a regular panopticon, our AI panopticon reduces our internal moral agency to fear-based conformity and paranoia. This not only devalues our perception of our own humanity, but legitimises violence against others: when our understanding of morality is entirely fear-based, everyone else is dehumanised too. AI isolates us by making us interact with it rather than with other humans, eroding our sense of society and reducing our empathy. Through AI surveillance and ‘automation’, it flattens human experience and value into mere numbers in the eyes of intelligence agencies, governments and corporations, inherently dehumanising those under their power. This flattening and distance dramatically reduce governmental and corporate social responsibility through deindividuation, enabling wildly exploitative and violating actions against those outside the elitist class.
I could go on and on. But in short, should we really be surprised that a technology that uses maths to pull off a statistical parlour trick to badly emulate humans is dehumanising?!
Again, it is easy to see why certain powerful governments and corporations see immense value in AI’s rampant dehumanisation. It casts off any social accountability and makes those under its power less empathetic toward those around them, or even toward themselves. It basically enables the violation and exploitation of the masses to empower and enrich the elite that oligarchy and fascism need.
If I need to explain why that is bad, you need your head examined.
The AI Oligarchy
There are countless other ways AI empowers an oligarchy, such as being the perfect tool for techno-feudalism. But I feel I have made my point, and this article is already way too long.
We can now see why AI LLMs are the perfect tool for a modern oligarchy that needs power to retain and grow its assets. The oligarchs can force AI into everything under the guise of productivity, when its real function is to massively empower them.
That is why those backing AI don’t care that it isn’t increasing productivity: that was never the point. Likewise, they don’t care that the entire industry is drifting ever further from profitability. To them, keeping their cash bonfire burning is a small price to pay for solidifying and empowering their oligarchic domination, and for turning the rest of us into modern-day serfs. That is why they have been happy to inflate the AI bubble to an economy-ruining size.
Summary
The current generative AI industry has nothing to do with productivity. It is about capital turning to authoritarianism to empower and enrich itself while protecting itself from democratic accountability. It is a perfect example of unregulated capitalism devolving into fascistic behaviours because ‘maximising capital’ almost always equates to crushing the human experience. This is why what is going on in AI is so closely linked to the imperialist crap going on right now. The current state of the AI industry is a symptom of our political landscape. So, fighting AI’s crushing march is also helping to fight against genocide, against needless world wars, for democracy, for empathy, for the human experience.
So, if you are leaving OpenAI because it was so desperate to lick Hegseth’s boots, maybe consider leaving generative AI for good.
Thanks for reading! Everything expressed in this article is my opinion, and should not be taken as financial advice or accusations. Don’t forget to check out my YouTube channel for more from me, or Subscribe. Oh, and don’t forget to hit the share button below to get the word out!