
Let me take you back a few weeks to just before Trump got his grubby little orange-tinged fingers into the American government and all hell broke loose. Back then, the truth and objective reality, while heavily eroded and constantly undermined, still meant something and still had influence. I mean, back then, even OpenAI CEO Sam Altman, the second-worst tech-bro after Musk, had to shut down rumours that OpenAI was on the verge of creating an artificial general intelligence (AGI) capable of matching or outperforming a human at any task. He could have easily let the rumour mill swirl, or even fed it to boost his company's value and eventually line his pockets. But he didn't; he knew that the hype around OpenAI had to at least resemble reality to keep its astonishingly over-inflated value. However, all of that is now out the window; reality doesn't matter anymore, and Altman is proclaiming that AGI is right around the corner. What is going on? Well, it's a new age of unreality. Let me explain.
Altman's AGI claims come from a recent blog post. In it, he claims that "systems that start to point to AGI are coming into view," that AGI will be available to everyone within a decade, and that its impact will be monumental. Reading it, you really get the sense of a Dr. Frankenstein figure, desperately trying to warn the world of his terrible, paradigm-shifting creation.
At the same time, the BBC tested the latest AI chatbots, including the very best from OpenAI. It fed the chatbots its own news articles and asked them to summarise the pieces. This relatively simple task is supposedly one of these AIs' primary uses. Unfortunately, the BBC found that every chatbot was utterly useless at it: their summaries were riddled with "significant inaccuracies" and wild distortions.
These findings from the BBC aren't an anomaly. I have covered many similar studies that have found that AI is far too inaccurate to be used without significant human oversight. Not only that, but these studies also detail how we currently have no viable solution to this problem (more on that in a minute).
So, how do we reconcile these two positions? How can OpenAI be unable to make a chatbot that summarises the news well, yet somehow have begun developing the systems for AGI? The gaping void between these two statements is insurmountable.
Simply put, there is no reconciling. Altman’s claims are so far removed from the real world that they come across as terrible satire to anyone with even a vague knowledge of how AI works.
The AIs we have today can never develop into an AGI. I have covered this topic a lot over the past year, so if you want to know the details of why this is true, check out my articles here, here, here, and here. But, in short, the AIs we currently have are just statistical models that use maths to replicate patterns in their training data. What's more, we are already butting up against the limits of what these models can do: we can make them exponentially larger and more complex, but doing so no longer yields significant improvements in their abilities.
Even worse, they can only replicate patterns. There is no way these statistical models can develop even the semblance of intellect, intuition, or a framework of understanding: the faculties that enable us to learn, think, and function incredibly accurately, even in novel situations. We don't yet know how our neurology achieves this; in fact, we aren't even close to figuring it out. But one thing is crystal clear: the technology needed to make an AGI is so fundamentally different from the AI technology we have today that it cannot be thought of as a derivative or development of it. If we ever create it, it will be an entirely new branch of mathematics, programming, and computing.
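To make the point concrete, here is a deliberately toy sketch of what "a statistical model replicating patterns" means. This is not how modern large language models are actually built (they are vastly bigger and use neural networks), but the underlying idea is the same: predict the next word from counted frequencies, with no understanding of what any word means. Everything in the example (the corpus, the function names) is illustrative.

```python
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Return the statistically most likely next word."""
    followers = counts.get(word)
    if not followers:
        # A pattern the model has never seen: it has no intuition
        # or understanding to fall back on, so it simply fails.
        return ""
    return max(followers, key=followers.get)

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" - the most frequent pattern
print(predict_next(model, "dog"))  # "" - unseen word, no pattern to replicate
```

Scale this idea up by many orders of magnitude and you get something like today's chatbots: far more fluent, but still fundamentally echoing patterns from training data rather than reasoning from understanding.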
As a result, there is no way that OpenAI has the systems needed to build an AGI. And there is simply no way such technology will exist in the next decade. This is not some novel take I have plucked out of thin air. A veritable army of computer scientists is now screaming this fact and calling bullshit on claims like Altman’s.
So, why is he doing it? And why is our media listening to him?
Simple: Altman knows OpenAI can’t survive in the real world. It doesn’t make a profit and has no route to ever doing so (read more here). So, he needs to operate as far away from reality as possible. And the Orange Man has given him that ability.
As Trump has taken us from the post-truth era to flat-out denial of reality, Altman has seen an opportunity to widen the gulf between what AI can actually do and what it is perceived to do. I used to think this gulf was an AI stock-market bubble. I no longer believe that is what is happening. It is a crisis of reality, as our economy and industry attempt to shed the real world and move wholly into a speculative, non-existent cyberspace, leaving us behind.
Current science suggests that AGI won't happen for decades, possibly centuries. But Altman doesn't care. He, along with every other billionaire tech-bro, is pushing the world into a new era of unreality in which speculation about AGI's existence is just as good, just as real, and just as valuable as AGI itself. In this new era, the reality of your labour, your value, your intellect, your suffering, and your voice has to be discarded. They are connections to a world in which AGI doesn't exist, so they must be shed to line the pockets of the technocrats.
We are entering a strange new world of AI-driven unreality. And it will cause real psychological horror on a scale humanity has never seen before.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.
Sources: BBC, The Independent, Sam Altman’s Blog, Will Lockett, Will Lockett, Will Lockett, Will Lockett, Will Lockett