The Lie At The Heart Of The "AI Revolution"
Is AI the new nuclear age?
We are told that the speed at which AI is developing is astonishing, outstripping any previous paradigm-shifting technology, even nuclear technology. Even the beloved Hank Green has agreed with this claim. This supposed lightspeed development has been used to push an economic arms race narrative: if the West does not lead the AI race, its economy will be crushed, so we must pour our resources into this one sector to stay ahead. In this way, the AI boom is framed as a direct mirror of the nuclear arms race between the West and the Soviets. There is just one problem with this narrative: AI is not developing anywhere near as fast as nuclear technology did, nor will it ever. The core framework used to justify the AI boom is built on a flat-out lie. Let me explain the reality and the impact of this falsehood.
Nuclear Technology
Nuclear technology developed at a truly astonishing pace. This was partially fuelled by World War II and the Cold War, but also because the technology’s immense utility was felt straight away, which caused more effort to be put towards its improvement. So, let’s have a brief fly-by tour of the history of nuclear technology.
In 1938, chemists Otto Hahn and Fritz Strassmann discovered nuclear fission.
In 1942, Enrico Fermi, of Fermi paradox fame, achieved the first controlled nuclear fission chain reaction.
In 1945, Oppenheimer’s Manhattan Project (you might have heard of it) detonated the first nuclear bomb, and less than a month later, nuclear bombs were used in anger against Japan. The project cost $2 billion in 1945 dollars, or roughly $36 billion in 2025 dollars, with more than 90% of that going to the infrastructure build-out that established a nuclear industry in the US.
In 1951, electricity was first generated from nuclear fission by Experimental Breeder Reactor I in the US. It was an experimental machine built to prove the breeder reactor concept (producing more fissile fuel than it consumed), and it was never connected to the grid.
In 1952, the first hydrogen bomb, Ivy Mike, was successfully detonated.
In 1954, the first deliverable hydrogen bombs entered the US stockpile following Operation Castle, a test series that cost roughly $30 billion in 2025 dollars.
In 1954, the first nuclear-powered submarine, the USS Nautilus, was launched. It used a miniaturised nuclear reactor to power itself, enabling it to stay submerged for months at a time and making it one of the first nuclear-powered ‘things’ ever built.
In 1956, the first full-scale nuclear power plant to deliver power to the grid was deployed in the UK. Calder Hall cost £35 million to build, equivalent to £765 million in today’s money.
It took just three years for us to go from controlling fission to a deployable bomb. It took just nine years to go from controlling fission to generating electricity with it, and 14 years to a full-scale commercial nuclear power plant. All in all, it took 18 years to go from discovering nuclear fission to a mature nuclear industry that fundamentally changed the face of global politics and economics. No technology in the history of humankind has developed so rapidly or had as sudden and profound an impact.
Yet, arguably, the major projects that defined this monumental leap forward and created this nuclear industry, namely the Manhattan Project, Operation Castle, and Calder Hall, collectively cost less than $100 billion in today’s money.
A Brookings Institution study found that the US had spent $5 trillion, or $10.6 trillion in today’s money, on its nuclear development, which included both bombs and power plants to supply nuclear material for nuclear weapons, from 1945 to 1995. A substantial 20% of that expenditure happened between 1945 and 1960, during which much of the nuclear infrastructure and industry was built. In other words, during peak nuclear development spending, the US was shelling out the equivalent of $141 billion (in 2025 dollars) per year.
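For the curious, the back-of-the-envelope arithmetic behind that peak figure works out as follows. This is a quick sketch using only the numbers quoted above; the exact Brookings totals vary slightly depending on the edition you consult.

```python
# Back-of-the-envelope check of the peak nuclear spending figure,
# using the numbers quoted above (Brookings total adjusted to 2025 dollars).
total_nuclear_spend_2025_usd = 10.6e12   # ~$10.6 trillion spent 1945-1995
share_spent_1945_to_1960 = 0.20          # ~20% of that total
years_in_peak_period = 1960 - 1945       # 15 years

peak_annual_spend = (total_nuclear_spend_2025_usd * share_spent_1945_to_1960) / years_in_peak_period
print(f"Peak nuclear spend: ~${peak_annual_spend / 1e9:.0f} billion per year (2025 dollars)")
# -> Peak nuclear spend: ~$141 billion per year (2025 dollars)
```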
This might all sound like a lot, but it bought immense utility. The West’s nuclear stockpile enabled a new-age colonialism that was as lucrative as it was cruel, and the US and other nuclear-armed nations saw a noticeable return on their nuclear expenditure. Likewise, nuclear power became a wildly profitable and economically significant sector by the 1980s and remains so to this day. As such, investing in nuclear technology was arguably the ‘best’ investment the West has ever made, given these immense returns.
AI Technology
Okay, so what about AI? It sure feels like it has developed more quickly than nuclear power. After all, most of us thought it was just a sci-fi technology until 2022. Well, when do you believe the first AI was invented? It was a lot earlier than you might think.
In 1952, Arthur Samuel developed a checkers-playing program that could learn from experience and improve its performance, making it arguably the first example of applied machine learning, the core technique behind modern AI. That is right — AI is old enough to have retired years ago.
In 1957, Frank Rosenblatt developed the Perceptron, the first machine to learn using a simulated neural network, adjusting its own connections to find patterns in data. Artificial neural networks remain a core component of modern AI technology.
In 1964, Joseph Weizenbaum created ELIZA, the first crude AI chatbot, which faked conversation through simple pattern matching.
In 1965, Dendral was created. This AI used “symbolic AI” to identify unknown organic molecules and is one of the earliest examples of AI helping to advance science. The technology Dendral pioneered is still highly important, as symbolic AI techniques have recently been incorporated into LLMs like GPT-5 to reduce their “hallucinations”.
In 1985, Geoffrey “Godfather of AI” Hinton co-created the Boltzmann machine, one of the first AI systems capable of unsupervised machine learning. It helped lay the foundations of the modern neural network technology used by current AI systems.
In 2001, IBM demonstrated an AI stock trader that could outperform humans.
In 2003, Yoshua Bengio and colleagues introduced neural probabilistic language models, neural networks that predict the next word in a sequence. These models powered predictive text, and modern LLMs are heavily based on this approach.
In 2017, Google researchers invented the transformer. It effectively streamlined and scaled up the neural network approach laid out by neural probabilistic language models, built around an “attention” mechanism. All modern generative AI models use this architecture.
In 2018, OpenAI developed GPT-1, its first transformer-based large language model, though it was a research model and never became a public-facing chatbot.
In 2020, AI researchers began to map out the “efficient compute frontier”, which strongly suggested AI development has diminishing returns, meaning there is a practical limit to how far simply scaling it up can go (read more here). A rough illustration of this diminishing-returns curve is sketched just after this timeline.
In 2022, OpenAI released ChatGPT to the public to mixed reviews, though many saw it as substantially better than previous AI chatbots.
In 2023, OpenAI released GPT-4, which was only marginally better than its predecessor, despite being a far, far larger and more expensive model, suggesting the efficient compute frontier was correct.
In 2024, US-based Big Tech spent over $240 billion developing AI and AI infrastructure, a major escalation in expenditure that was expected to power a huge leap forward.
In 2025, OpenAI released GPT-5, which many users and studies found was actually worse than its predecessor, confirming for many that the efficient compute frontier is real. Indeed, most AI projects have seen their improvements grind to a halt since 2024.
In 2025, MIT found that 95% of AI pilots failed to deliver measurable benefits; Harvard Business Review found that all AI operators are running at gargantuan annual losses; and cases of AI-related psychosis seem to be on the rise.
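As promised in the timeline, here is a rough illustration of why the efficient compute frontier implies diminishing returns. This is a toy sketch, not any lab’s actual scaling data: it simply assumes the kind of power-law relationship reported in scaling-law research (model loss falling as compute raised to a small negative exponent), with made-up constants chosen only to show the shape of the curve.

```python
# Toy illustration of diminishing returns under a power-law scaling curve.
# The constants below are invented for illustration; only the *shape* matters:
# every 10x increase in compute buys a smaller absolute improvement than the last.

def toy_loss(compute: float, base: float = 10.0, exponent: float = 0.05) -> float:
    """Hypothetical model loss as a function of training compute (power law)."""
    return base * compute ** -exponent

previous = None
for power in range(20, 27):          # compute from 1e20 to 1e26 "FLOPs"
    compute = 10.0 ** power
    loss = toy_loss(compute)
    gain = "" if previous is None else f"  (gain vs 10x less compute: {previous - loss:.3f})"
    print(f"compute = 1e{power}: loss = {loss:.3f}{gain}")
    previous = loss
```

Run it and the printed gains shrink with every extra order of magnitude of compute, which is the whole problem: the curve never hits zero loss, and each step along it costs ten times more than the last.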
AI vs Nuclear Technology
AI doesn’t have the same singular genesis point that nuclear technology did. But no matter where you define the ‘beginning’ of AI, or when it became ‘mature’, its development has been significantly slower than nuclear technology’s.
For example, it took AI 44 years to go from an experimental analytical tool to a successful stock trader.
It took AI 33 years to go from machine learning basics to ‘usable’ chatbots.
It took 15 years to go from AI predictive text to modern chatbots.
Nuclear technology went from being discovered to killing hundreds of thousands of people and changing the entire global political landscape in just seven years. No matter how you measure AI development, it simply has not had anywhere near that large an impact. Transformers were invented eight years ago, and the technology has not exactly made a big splash. In fact, generative AI is still demonstrably not a productive or profitable tool to use, nor a profitable one to build.
Ultimately, AI has undergone very little fundamental development since 2017. The more recent generative AI models use almost exactly the same architecture and technology as Google did in 2017; they just have far more data, computing power, and refinement behind them. This is why AI improvements are completely stagnant, despite exponentially more cash and power being shoved into them. It’s just more meat being pushed into the same inefficient and inherently limited sausage machine.
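To make the “same sausage machine” point concrete: the core operation of the 2017 transformer, scaled dot-product attention, is only a few lines of maths, and today’s models still revolve around it. Below is a minimal NumPy sketch of that operation, an illustrative toy rather than anyone’s production code; modern systems mostly just run something like this with vastly more parameters, data, and compute behind it.

```python
import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """The core transformer operation from 'Attention Is All You Need' (2017).

    Q, K, V have shape (sequence_length, d); the output is a weighted mix of V,
    where the weights reflect how well each query matches each key.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                       # similarity of every query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax -> attention weights
    return weights @ V                                  # blend the values accordingly

# Tiny demo with random "token" vectors.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))                        # 4 tokens, 8-dimensional embeddings
print(scaled_dot_product_attention(tokens, tokens, tokens).shape)   # -> (4, 8)
```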
And, to make matters worse, AI development is also significantly more expensive than nuclear technology ever was.
In 2024, Big Tech spent over $240 billion on AI development. That is over $100 billion more per year than the US’s nuclear expenditure at its most feverish peak, with inflation taken into account.
And that is just US Big Tech. The entire AI sector is expected to spend $1.5 trillion on AI development in 2025 and $2 trillion in 2026. Yet no generative AI company even has a viable path to a profitable business model, let alone actual profitability, and there is still no measurable benefit to the nations and economies adopting these tools.
So, AI is developing at, at best, half the speed of nuclear technology, and perhaps as little as a tenth of it, depending on where you place the ‘start’ and ‘end’ of each technology’s progress. Yet, going by the figures above, US Big Tech alone is spending roughly 1.7 times more per year on it than the US spent on nuclear technology at its feverish peak, and the global AI sector is spending around ten times more. And unlike nuclear power, the utility and profitability of the technology are pretty much nonexistent.
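Putting those two cost comparisons side by side, again as a rough sketch using only the figures quoted in this article:

```python
# Rough cost comparison using the figures quoted in this article (2025 dollars).
peak_nuclear_per_year = 141e9        # peak US nuclear spend, averaged over 1945-1960
us_big_tech_ai_2024   = 240e9        # US Big Tech AI spend in 2024
global_ai_2025        = 1.5e12       # expected global AI spend in 2025

print(f"US Big Tech AI vs peak nuclear: {us_big_tech_ai_2024 / peak_nuclear_per_year:.1f}x")   # ~1.7x
print(f"Global AI sector vs peak nuclear: {global_ai_2025 / peak_nuclear_per_year:.1f}x")      # ~10.6x
```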
The AI bubble, like the NFT bubble, the crypto bubble, and the dot-com bubble before it, is draped in obscuring lies and propaganda in the form of PR-spun jargon, widely accepted falsehoods, and deceptive narratives. When the basic facts about a technology and the narrative framework we use to understand it are so warped, we lose our capacity to think critically about it. The myth overtakes reality, leaving even the well-informed among us vulnerable. History has made it painfully clear that it isn’t the ones spreading these myths that will have to pick up the pieces when it all comes tumbling down. With all of this in mind, is it any wonder that this bubble has gotten so out of hand?
**While researching this article, I found several brilliant people highlighting this same problem. The best I found was this video by Internet of Bugs — it’s excellent, and I highly recommend the video and the channel.**
Thanks for reading! Don’t forget to check out my YouTube channel for more from me, or Subscribe. Oh, and don’t forget to hit the share button below to get the word out!
Sources: IBM, MPI, National WW2 Museum, The Guardian, SIO, DoE, Fortune, HBR, AI News, Pex, NBS, AHF, Ijert, SSRN, Latent View, Nuclear Newswire, DoD, Futurism, SSON


