
AI will automate or assist almost every aspect of our lives in the coming years, making our jobs and even our personal lives easier and more productive. It’s supposedly the key to unlocking our next great economic boom, and multiple governments have lauded it as a way for the West to counteract the East’s aggressive economic growth. No wonder, then, that the AI revolution is being so heavily pushed, right? Well, no. In fact, Intel just discovered that AI doesn’t actually improve productivity at all…
Though Intel is struggling in the AI infrastructure race against competitors like Nvidia, it is still pushing AI software. Many of its hardware partners are launching AI PCs, which use AI assistants like Microsoft’s Copilot to automate tasks on the machine. As such, Intel has a vested interest in these AIs, as their success could mean more people buying its components.
Unfortunately for Intel, when it analysed how much time AI PCs can actually save people, it didn’t find what it was looking for.
It began with a study of how 6,000 people in Germany, France, and the United Kingdom used their PCs, which found that people lose, on average, 15 hours a week to “digital chores” like writing emails, sorting calendars, transcribing meetings, and managing files. The study identified that many of these tasks could be automated with PC AIs, potentially saving 4 of those 15 hours per week.
However, that saving was purely theoretical. The study didn’t examine how these AIs would actually perform the tasks, or whether they could perform them well, and in practice the reality turned out to be very different.
Intel’s extended study, which tried to measure whether AI actually saves time and boosts productivity, found that “current AI PC owners spend longer on tasks than their counterparts using traditional PCs.” According to the study, users took longer because they spent so much time working out “how best to communicate with AI tools to get the desired answers or response.” The report is also strikingly short on data about how much time users spent monitoring and correcting the AIs’ outputs, an infamous flaw of AI that we will come to in a minute. That omission, combined with Intel’s perplexingly optimistic summary of the study, which concluded that people simply need to be better educated on using these AI tools, suggests that Intel’s conflict of interest is massively clouding its judgement.
And Intel isn’t the only one to find that AI simply doesn’t deliver on its promised productivity gains.
A recent study found that while 96% of executives say they expect AI to dramatically boost productivity, 77% of their employees say AI has increased their workload, and 39% report spending more time moderating the AI’s output than it would take to do the task themselves. This is backed up by IGN, which found that AI can increase productivity by at most 0.1%, and MIT, which found that even in an ideal scenario, AI can only increase productivity by 0.5%.
There are also hard, real-world examples of AI failing to increase productivity. Take Amazon’s Just Walk Out grocery stores. The idea was that computer-vision cameras, shelf sensors, and AI would track which items a customer had picked up, then charge their Amazon account once they left, removing any need for a cashier or self-checkout. The system was hailed as one of the first cases of AI directly replacing human workers and as a way to lower the cost of operating a store. In reality, it was neither. A recent report found that over a thousand remote workers had to be hired to monitor the video feeds and verify around 70% of customer purchases, because the AI so consistently got them wrong. That much labour isn’t cheap, even when it is outsourced overseas, and Just Walk Out ended up significantly more expensive than simply hiring regular cashier staff. As such, Amazon has struggled to sell the system to third parties and has had to switch its own grocery stores to a fancy non-AI self-scan system instead.
Another example is the growing number of software engineers reporting that AI coding assistants are functionally useless. These AIs can write small, specific blocks of code well, effectively replacing a quick search on GitHub and a copy-paste. But getting them to write more significant chunks of code is counterproductive: they get things so profoundly wrong, so frequently, that it takes the engineer longer to debug the generated code than it would to write and debug it entirely themselves. Even Google admits that AI-assisted coding still requires more human time than traditional programming.
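To make that distinction concrete, here is a purely illustrative sketch (my own example, not one drawn from any of the studies cited) of the kind of request these assistants handle well: a tiny, self-contained utility with no project context, the sort of thing you’d otherwise find on GitHub and copy-paste.

```python
# A hypothetical example of the small, well-trodden snippet an AI coding
# assistant reliably gets right: no project context needed, and thousands
# of near-identical versions exist in its training data.
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    if size < 1:
        raise ValueError("size must be at least 1")
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

Ask for anything that depends on a real codebase’s architecture, state, or edge cases, however, and the error rate climbs quickly, and with it the debugging time engineers complain about.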
AI can be an immensely impressive and valuable tool. But it is not a silver bullet for automation or productivity. These models are too limited, rely on questionably sourced data, cost too much to build, and get things wrong far too often to deliver on that promise.
So, why are we being pushed a technology that even the companies pushing it know cannot deliver on its promises?
You could write a handful of PhD theses, or enough books to prop up a ladder, on that topic. The current AI bubble is a product of our cultural, political, and economic situation, and I cannot possibly summarise it in a single article. But, in short, the market doesn’t care about your efficiency, your output, or your quality of life. We live in a postmodern, data-driven economy, and to the people selling these solutions, AI is the best way to harvest your data and extract your “value.” So it doesn’t matter that these AIs don’t work as advertised and actually make life harder for the user; that isn’t their purpose. They exist to perpetuate and deepen the questionable economic situation we find ourselves in.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.
Sources: Intel, The Register, Perforce, Will Lockett