Will Lockett's Newsletter

Sam Altman Is Dangerously Brain-Dead

Was that a hint of dystopian despotic transhumanism?

Will Lockett
Feb 27, 2026
Photo by Cassi Josh on Unsplash

Something strange happens when grifters and schemers gain power. They get too comfortable and either forget to keep up their machinations or no longer believe they are necessary. Cracks begin to show in their ruse, allowing us to see behind the mask and witness their true intentions. Well, it seems tech bros are now firmly rooted in this phase of their Machiavellian arc, as Sam Altman has just perfectly demonstrated. Altman is known for consistently spewing some of the most brain-dead AI propaganda out there, but he recently turned this up to eleven and exposed his utterly rotten core ideology. The question is, can we collectively recognise this giant red flag and do something about it?

What am I talking about? Well, during a recent interview, Altman complained like a petulant child about the “unfair” comparisons made between AI and human efficiencies. He said, “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”

Now, at face value, that might seem like an innocent analogy to reframe the current heinous state of AI. But here is the thing: not only is his comparison totally invalid, it is also wildly dehumanising. It ignores the value of human intelligence and strongly implies that Altman's broader philosophy is deeply anti-human and insanely dictatorial.

Invalid Comparison

Altman’s comparison is deeply misleading. It implies that an AI and a human are equally capable, that AI is a more energy-efficient learner, and, broadly, that AI is more efficient per task. Because none of those statements are true, the basis for the comparison — and the justification for his silly AI — is totally false. But even if this were true, this comparison still devalues the human experience, human rights, and human life, which renders it invalid. We will cross that ethical bridge in a minute, but for now, let’s sort out the technical issues.

Let’s start with the fact that humans are still way more capable than AI. I could pull up a variety of studies to prove my point here, but the most apt is this one by the Center for AI Safety (CAIS) and Scale AI. This study aimed to figure out how useful AI was in the real world, so they gave six leading AI models proper freelance work and measured their rate of success. They discovered that even the very best models could only complete 2.5% of the jobs. That is a 97.5% failure rate!

For Altman’s comparison to even be remotely valid, human workers and AI need to have comparable capabilities. But in the real world, AI is still light-years away from being even marginally comparable to humans.

And this gap isn’t going to close any time soon. A recent large-scale survey of AI researchers found that 76% of them believe that current AI technology cannot be scaled up to Artificial General Intelligence (AGI). In other words, for AI to have cognitive abilities comparable to a human’s would require totally new approaches and technology, not just scaling up the current tech, as the AI industry is doing today. Altman should know this, given that a recent OpenAI research paper found that increasing computing power or training data can’t eliminate “AI hallucinations” and that they have no viable method to do so. Because hallucinations are just errors in the AI and are the primary cause of the substantial failure rates found by the likes of CAIS, this essentially means that shoving more resources into the AI sausage machine, as Altman is trying to do, won’t make it any better, let alone get close to AGI.

But, for argument’s sake, let’s agree that AI is just as capable as humans. Well, we are still more efficient on a per-task basis and at learning. In fact, we are astonishingly more efficient!

The human brain runs on just 12 watts of power, about as much as an LED lightbulb. This means that over the 20-year period it takes a human to grow up and ‘get smart’, it consumes roughly 2,102 kWh of energy. For some sense of scale, that is roughly the same energy as an average EV uses in a year (roughly 8,000 miles per year at 3.8 miles per kWh, or about 2,105 kWh).

The older, much smaller, and less capable GPT-3 consumed an estimated 1,287 MWh during training. This means a human brain uses just 0.16% of the energy to “grow up” that it takes to train an outdated AI. To put that into perspective, the energy used to train GPT-3 is enough to drive our EV nearly five million miles, which would take roughly 611 years of average use to cover. Here is the kicker, though: GPT-5 is estimated to have consumed far more energy during training, given that it is a substantially larger model.
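The back-of-the-envelope arithmetic behind these figures is easy to check. A quick sketch in Python, using only the estimates quoted above (12 W brain, 20 years, 8,000 miles/year at 3.8 miles/kWh, 1,287 MWh for GPT-3 training — all estimates, not measurements):

```python
# All inputs are the article's estimates, not measured values.
BRAIN_POWER_W = 12               # estimated human brain power draw
YEARS = 20                       # time to "get smart"
HOURS_PER_YEAR = 24 * 365

# Human brain energy over 20 years, in kWh
brain_kwh = BRAIN_POWER_W * YEARS * HOURS_PER_YEAR / 1000
print(f"Brain over {YEARS} years: {brain_kwh:,.0f} kWh")        # ~2,102 kWh

# Average EV: ~8,000 miles/year at 3.8 miles per kWh
ev_kwh_per_year = 8000 / 3.8
print(f"Average EV per year: {ev_kwh_per_year:,.0f} kWh")       # ~2,105 kWh

# GPT-3 training estimate: 1,287 MWh
gpt3_kwh = 1287 * 1000
print(f"Brain / GPT-3 training: {brain_kwh / gpt3_kwh:.2%}")    # ~0.16%

# How far the EV could drive on GPT-3's training energy, and how long
# that distance takes at average annual mileage
ev_miles = gpt3_kwh * 3.8
print(f"EV miles on GPT-3 training energy: {ev_miles:,.0f}")    # ~4.9 million
print(f"Years of average driving: {ev_miles / 8000:,.0f}")      # ~611
```

None of these inputs are precise, but even if each estimate is off by a large margin, the gap between ~2,100 kWh and ~1,287,000 kWh is three orders of magnitude, so the conclusion survives the uncertainty.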


© 2026 Will Lockett