AI Is Eating Its Own Tail And Biting The Hand That Feeds It
The AI information economy is screwed.

I have called the AI boom a death cult before because it is one in so many different ways. From its total lack of financial sustainability to its horrific environmental impact to the veritable psychopathic psychosis of the tech bros pushing this technology onto us, every decision made is an almost fetishistic attempt to dominate us and prove to Daddy Shareholder that they are still the golden child. But the elephant in the room is the other way AI is spiralling towards destruction. You see, AI is choking to death the data economy, the very foundation the generative AI industry depends on, in two distinct ways.
The “Biting The Hand” Problem
Let’s start with what I’m calling the “biting the hand” problem.
It’s not exactly a secret that AI needs to be constantly fed a metric f**k ton of training data in order to stay up-to-date, which has meant tech companies have been compelled to scrape the internet to source enough. After all, copyright isn’t a thing… right? It’s also no secret that, to justify the insane cost of AI, tech companies have been shoving it down our throats and trying to get us to access services or information through these bots.
For a while, most people seemed to think these problems only materially affected other people, such as professional authors or writers. But, as it turns out, this is a problem that affects us all.
A recent report from Chartbeat and Axios investigated how Google’s AI search summaries were affecting online publishers of all sizes. To no one’s surprise, it turns out they are being utterly mullered.
Small publishers, those with 1,000 to 10,000 daily visits, have experienced a 60% decline in traffic from Google. Medium publishers, those with 10,000 to 100,000 daily visits, saw a 47% drop, and large publishers, those with more than 100,000 daily visits, saw a 22% drop! Another study found that online publishers saw an 80% decline in traffic from AI summaries.
Pew has the answer to what is causing this decline. They found that users are far less likely to click on links when Google’s AI summary appears in the results. In fact, users are twice as likely to click on links when this feature isn’t active!
Why is that bad? Well, even after this monumental downturn, Google Search remains the dominant source of traffic for these publishers.
Just to remind you, the term “publishers” doesn’t just include news websites. It also refers to recipe websites, how-to websites, blogs, and independent voices — basically, anything you read online.
Now, here’s the thing: these publishers need traffic to generate the income that sustains them. Less traffic means less money. Less money means these sites produce less content. That means that these AIs will have less valuable data to be trained on.
Can you see the problem here? AI is biting the hand that feeds it.
Apparently, using AI to replace human connections crushes the human output AI depends upon. Who would have thought?
Because the smaller publications, which tend to be more diverse and independent, are being hit the hardest, this will impact our media landscape and these AIs directly. It will flatten the landscape, making both the written media we consume and the AIs trained on it narrower and more generic by squashing the more niche, diverse, and unique voices. I can’t stress enough that this is bad for both AI and these publications.
However, in an effort to cut costs, maximise shareholder value, or brute force algorithms, a sizeable portion of this written data is AI-generated anyway. In fact, this squashing of small and medium-sized publications is likely going to make AI-generated online content more prevalent. This brings up the second major problem with the AI data economy.
The “Eating Its Own Tail” Problem
The only thing generative AI does is find and replicate statistical trends in giant datasets. But even the best AIs aren’t perfect. After all, a trend can exist statistically but not technically be real. We, humans, have this problem too. Have you ever seen a face in something that doesn’t have one?
This means that weird things begin to happen when you train an AI on AI-generated data. You see, AI-generated content contains tiny, almost indistinguishable trends that human-generated content doesn’t have. That is why we feel we can spot when AI has written something. If you then start feeding this data back into the AI model, it will place more and more weight on these non-human trends. At first, this looks like the AI becoming less capable, but if you keep feeding the AI its own output, it will eventually weight these generated trends more heavily than the human ones, causing the model to collapse and produce gibberish. We call this “model collapse,” and it is a surprisingly well-studied phenomenon.
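To make the mechanism concrete, here is a toy sketch (a deliberately simplified stand-in, not any real AI system): a “model” that just fits a bell curve to its training data and, like real generative models, slightly under-samples rare outputs. Trained repeatedly on its own output, its spread shrinks generation after generation, the statistical analogue of collapsing into gibberish:

```python
import random
import statistics

# Toy illustration of model collapse. The "model" fits a normal
# distribution to its training data; the next generation is then trained
# on content the model generated. Like real generative models, it favours
# typical outputs over rare ones (samples beyond two standard deviations
# are dropped), so each generation the tails of the distribution -- the
# rare, distinctive voices in the data -- get thinner and thinner.

random.seed(0)

def train(samples):
    # "Training": estimate the distribution of the data.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n):
    # "Generation": sample from the model, under-representing the tails.
    out = []
    while len(out) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= 2 * sigma:  # keep only "typical" outputs
            out.append(x)
    return out

# Generation 0: genuine human data.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for gen in range(10):
    mu, sigma = train(data)
    print(f"generation {gen}: spread (std dev) = {sigma:.3f}")
    # The next generation trains only on the previous model's output.
    data = generate(mu, sigma, 1000)
```

Run it and the printed spread decays steadily from roughly 1.0 towards a fraction of that: diversity drains out of the data even though no single step looks dramatic.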
Why does this matter? Well, let’s quickly run through some stats.
According to Axios, by the middle of 2025, over half of the content being posted online was AI-generated.
Online scraping accounted for at least 82% of GPT-3’s training data. We do not know what proportion of the current AI models’ training data was scraped from the web. However, AI companies have claimed they are “running out of data” and have had to resort to scraping low-quality data from the web to fill the gap. So we can safely assume this proportion is at least as high, if not higher.
Ready for another doozy? AI detection tools are terrible! They catch AI content only 57% to 95% of the time. Worse, their false positive rate, where these tools label human-written content as AI, is considerable, with one study even finding a 50% false positive rate for a leading tool. Worse still, these false positives are biased against diversity, disproportionately flagging neurodiverse writers and those who speak English as a second language as AI.
Put simply, AI companies are using considerable amounts of web-scraped data to train their AI. But more than half of the content posted online is AI-generated, and AI detection tools fail to consistently filter out AI content and incorrectly filter out human-made content. This means that today’s AIs are being trained on their own output, or even the output of their predecessor models (which also causes model collapse). This isn’t a hypothetical problem; researchers have found a very real, tangible risk that current AI models are headed toward model collapse.
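To see why imperfect filtering still leaves a contaminated training set, here is a back-of-the-envelope sketch using illustrative rates drawn from the ranges above (the specific numbers are assumptions picked for the example, not measurements from any single study):

```python
# Back-of-the-envelope contamination estimate. Assumed rates, chosen
# from within the ranges quoted above:
ai_share = 0.50        # fraction of scraped content that is AI-generated
detect_rate = 0.80     # fraction of AI content the detector catches
false_positive = 0.20  # fraction of human content wrongly flagged as AI

# What survives the filter into the training set?
ai_kept = ai_share * (1 - detect_rate)               # AI slop that slips through
human_kept = (1 - ai_share) * (1 - false_positive)   # human content not wrongly discarded

contamination = ai_kept / (ai_kept + human_kept)
print(f"AI share of the filtered training set: {contamination:.0%}")
print(f"Human content thrown away by mistake: {false_positive:.0%}")
```

Under these assumptions, one in five documents in the “filtered” training set is still AI-generated, and a fifth of the genuine human writing was discarded along the way. The filter shrinks the problem; it does not remove it.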
There is a proposed solution to this problem: synthetic data. This is AI-generated data designed specifically for training the AI. However, this training method falls short, as it doesn’t improve the models much, exaggerates their flaws, harms real-world performance, and risks model collapse. As such, synthetic data is more of a gimmick than a replacement for genuine, high-quality human-derived training data.
This really is a lesson in not shitting where you eat.
AI companies have polluted the web with a tsunami of AI-generated content. Naturally, the very data these AI models depend on is being contaminated beyond recognition, which contaminates the AIs themselves and sets off a vicious cycle. Not only that, but the sheer volume of AI-generated slop online is drowning out interesting, diverse, and valuable human voices, reducing the internet to a bland monoculture. Of course this is horrific news for us humans, but it ain’t good for these AIs either, as it means there is most likely less “high-quality” human data being posted online than before.
It’s almost like it was a bad idea to unleash an unregulated plagiarism machine that is, at best, a hollow and deeply flawed attempt to mimic humanity onto the world…
Summary
In layman’s terms, the AI information economy is detrimental to everyone, including AI companies. It is crushing and robbing blind the very people producing the data it depends on and is flooding the internet with so much slop that it is at risk of destabilising itself. This is a totally unsustainable situation. Can it be solved? Yes; regulation, enforcement of existing copyright law, and copyright reform could all help. But these solutions involve taking power away from Big Tech, which is becoming harder by the day. It seems we are all trapped in yet another downward spiral. The question is, will we take the steps to escape before it is too late?
Thanks for reading! Everything expressed in this article is my opinion, and should not be taken as financial advice or accusations. Don’t forget to check out my YouTube channel for more from me, or Subscribe. Oh, and don’t forget to hit the share button below to get the word out!

