AI is taking over the world. These incredibly powerful computer programs can render numerous jobs obsolete and even automate tedious day-to-day tasks. It seems like the future is here, and if you believe the hype, hordes of people are flocking to adopt this technology, particularly generative AI like ChatGPT, as fast as possible to better their lives. But are they really? A recent study by the Reuters Institute suggests otherwise.
The Reuters Institute surveyed 12,000 people across six countries, including the UK. What they found runs entirely against the AI hype narrative: hardly anyone surveyed uses AI tools regularly. For example, 20% to 30% of people in the countries surveyed had yet to hear of the top AI tools, such as ChatGPT, and only 28% had tried generative AI tools at all. Even among those who had tried generative AI, the vast majority said they had only used it once or twice. The daily-use figures tell the same story: only 1% of those surveyed in the UK and Japan reported using generative AI on a daily basis, alongside 2% in France and 7% in the US.
As such, the report’s lead author, Dr Richard Fletcher, told the BBC that there was a “mismatch” between the “hype” around AI and the “public interest” in it.
These results could point to several different things. Firstly, surveys aren’t always the best way to identify trends in a population, as people often don’t answer them accurately, so this could be a misleading result. That said, the Reuters Institute’s large sample size and straightforward, neutral surveying techniques make this less likely. Secondly, it could mean that much of the population simply hasn’t got around to trying generative AI and developing their skills with it yet. However, that doesn’t explain why only roughly 3% of those who tried AI went on to use it regularly. The third and final possibility is that these AI tools are far less capable and useful than the circulating narratives suggest, and as such, people aren’t using them. Personally, I think this final interpretation is the most likely, as my own experience, common sense, and several significant bodies of research support it.
I have no intention of ever using AI for these articles, but I have investigated using these tools for other projects. In my experience, ChatGPT and other chatbots are remarkably good at seeming to write like a human but are horrific with details and have no knowledge of recent events. As such, their output reads more like a useless, abstract soliloquy than valuable, worthwhile writing. The same is true of AI image generators, although Adobe’s Firefly is, in my experience, by far the most useful of them. Even then, it is only really useful to visual professionals in hyper-specific situations, not the general public.
So, when would the average person use these generative AI tools? They wouldn’t use them socially, such as when messaging friends or family, as these interactions are personal and often revolve around events the AI has never seen and therefore can’t write about. These interactions are also about personal connection, so automating them makes little sense. The same can be said for the images and videos we share socially. At work, most people also need to write or create visual content that revolves around current events, which the AI has not yet been trained on. Furthermore, the AI’s lack of detail and its tendency to hallucinate false facts (ChatGPT-4, for example, has repeatedly stated that Elon Musk died) mean its outputs have to be heavily supervised and edited to be of use in most professional situations. As such, in many cases, it is actually easier to write your own content at work.
This isn’t to say that generative AI is not being used at all. For example, around 49% of newsroom journalists use generative AI tools like ChatGPT. However, it isn’t being used to replace journalists or to completely automate article writing, but rather as a tool to speed up the writing of summaries and bulletins and to help with topic ideation, text correction, workflow efficiency and simple research. This is great for journalists, but few other professions need generative AI in this way.
This aligns with research I have covered before, namely from MIT and the Harvard Business Review, both of which state that AI is too expensive, limited and inaccurate to replace workers or be widely used socially. Instead, they suggest AI should be deployed as specific, restricted tools to augment and support workers, not replace them, as the AI hype claims is happening.
AI is going to change the world. However, the broader media and those pushing the AI hype seem to forget that this technology has limitations. What’s more, as I have covered before (read here), breaking past these limitations might be nearly impossible. As such, it seems the AI revolution will be far slower and more gradual than promised, and it won’t go as far as some hope.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and follow me on Bluesky or X and help get the word out by hitting the share button below.