Amazon Just Proved AI Ain't The Answer YET AGAIN
How long will it take for them to learn this basic lesson?

Nvidia CEO and professional AI glazer Jensen Huang recently claimed that we have already achieved AGI (Artificial General Intelligence). Firstly, that raises serious concerns about his definition of intelligence. Current AI systems are more akin to a deeply hallucinating, plagiaristic sycophant than to any form of coherent intelligence. The toothless, tin-hat-wearing cider-addled man propping up my local pub from 11:00 AM every morning has infinitely more intelligence than these “flatten-the-curve” statistical slop machines. That guy is also infinitely more fun to talk to. But secondly, that simply ain’t happening, Chief! And Jensen would know that if he took a break from counting the billions of dollars he has earned in circular financing and actually looked at generative AI’s capabilities in the real world. You know, where intelligence isn’t some pseudointellectual, speculative bullshit concept but instead critical to real-world results. Take Amazon, for example. For the third time, they have learned the painful lesson that generative AI is not intelligent, can’t replace human intelligence, and isn’t a productivity tool. Well, I say “learned” — what is that fake Einstein quote about the definition of insanity? Something about doing the same thing over and over again, expecting different results?
The Recent “Lesson”
Earlier this month, the Financial Times reported that Bezos’s favourite little monopoly had effectively called a giant emergency meeting of its remaining engineers to try and fix the rapidly increasing number of outages taking Amazon.com down. These aren’t little blips either. A week before this meeting was called, Amazon’s main shopping website was down for six hours! This one outage could have cost Amazon over $490 million in sales, given that $717 billion was spent on Amazon.com in 2025. Let’s just say that the bald man with more in common with Smaug than the rest of humanity wasn’t too happy about that. This meeting was an all-hands-on-deck moment. The engineers were expected to find the source of the problem and fix it.
And guess what the problem was?
Amazon’s own AI…
According to the official line, generative AI was a “contributing factor” in the botched “software code development” that caused these outages. But that is a bit like saying the untimely death of Archduke Franz Ferdinand was a contributing factor to World War I. This reeks of PR spin designed to hide the embarrassment of the own goal that is Amazon’s AI “transition”, particularly when you consider the actual problems causing these outages, the engineers’ solutions to prevent them, and the wider context of Amazon’s recent business decisions. It all points to AI being the culprit.
Take the 13-hour AWS outage incident from December of last year. Last month, the Financial Times reported that Amazon’s own “agentic” Kiro AI coding tool was to blame. Engineers had allowed Kiro to make changes to Amazon’s AWS code and make “autonomous decisions”. As it turns out, Kiro ain’t that clever; it pulled a Musk move and deleted the entire working code environment before recreating it from the ground up with a ton of fatal bugs. In fact, the FT found that Kiro caused outages like this not once, but twice!
Indeed, it seems to be both wild “agentic” AI and AI slop coding that are the culprits behind Amazon’s outages, and the smoking gun is the emergency solution these engineers came up with. Are you ready? Their solution is to require junior and mid-level engineers to ask senior engineers to sign off on any AI-assisted changes. This is tantamount to an admission that AI caused these outages.
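For a sense of how blunt an instrument this kind of sign-off policy is, here is a minimal sketch of what a pre-merge gate like it might look like. Everything here is invented for illustration — the trailer names, the senior list, the whole mechanism — this is not Amazon’s actual tooling:

```python
# Hypothetical pre-merge check: block AI-assisted changes that lack a
# senior engineer's sign-off. All names (the commit trailers, the
# senior list) are made up for illustration.

SENIOR_ENGINEERS = {"alice@example.com", "bob@example.com"}


def needs_senior_signoff(commit_message: str) -> bool:
    """True if the commit declares AI assistance via a trailer line."""
    return any(
        line.strip().lower() == "ai-assisted: yes"
        for line in commit_message.splitlines()
    )


def has_senior_signoff(commit_message: str) -> bool:
    """True if a listed senior engineer has signed off in a trailer."""
    for line in commit_message.splitlines():
        if line.strip().lower().startswith("reviewed-by:"):
            reviewer = line.split(":", 1)[1].strip().lower()
            if reviewer in SENIOR_ENGINEERS:
                return True
    return False


def merge_allowed(commit_message: str) -> bool:
    """A change merges freely unless it is AI-assisted and unsigned."""
    return not needs_senior_signoff(commit_message) or has_senior_signoff(
        commit_message
    )
```

Note what the gate implies: every AI-assisted change, however trivial, now queues behind a small pool of senior reviewers — which is exactly the bottleneck described below.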
But why are these engineers using AI like this? After all, 96% of professional coders explicitly don’t trust AI-generated code. These guys know giving it the keys to the kingdom was a bad idea.
Well, they were basically forced to.
Amazon has laid off thousands of engineers and plans to soon lay off around 30,000 workers, all while their major services, like AWS, expand dramatically. These services simply can’t be run on a skeleton crew, which makes this an obvious attempt to replace workers with AI automation. Indeed, last year, while these layoffs were happening, leaked documents showed Amazon’s plans to replace 75% of its workforce with automation and AI.
In short, these engineers are likely so stretched that they are forced to turn to AI to speed up their output. On top of that, Amazon recently mandated that 80% of its engineers use Kiro at least once a week. This isn’t necessarily a problem, but because they are so stretched, they don’t have the time to check the AI’s outputs, which practically guarantees these fatal mistakes will happen over and over again.
In other words, AI, despite its name, isn’t actually intelligent and is no replacement for genuine human intelligence in the real world. (I hope you are taking notes, Huang.)
But once again, Amazon has also proved AI isn’t a productivity tool either.
Requiring junior and mid-level engineers to obtain a senior engineer’s approval for every AI-assisted change completely kills productivity. Amazon engineers are expected to use AI coding tools like Kiro, so almost every line of code now has to be reviewed and approved by a senior engineer. Being a jumped-up debugger is not part of a senior engineer’s job description! This creates a huge bottleneck for junior and mid-level coders, whose teams are already badly understaffed, and it burdens senior engineers with heavier workloads and scope bloat, distracting them from their main responsibility of ensuring the entire project actually functions on a wider scale. In other words, AI was implemented to make these departments more productive, but that decision led to a steep and damaging decline in quality. So, Amazon’s solution is to make these teams far less productive from top to bottom through enforced micromanagement.
Once Bitten, Twice… Bitten?
In a previous article, I covered an eerily similar situation that took place at Amazon back in October 2025. AWS had totally crapped the bed and briefly took out half the internet. Nearly all of AWS was down for 16 hours straight due to a simple DNS resolution issue, which impacted thousands of businesses, including Medium and Substack. I can vividly remember being unable to log in to either of my accounts that day.
Why did it take so long to fix such a simple yet devastating issue?
A few months prior, Amazon had laid off a significant number of engineers at AWS whose jobs were specifically to resolve these kinds of problems. Officially, these layoffs had nothing to do with Amazon trying to replace workers with AI. But, again, this is not a task that can be completed by such a small skeleton crew, and this was when Amazon was beginning to enforce AI usage on its AWS engineers and deploying autonomous “agentic” coders. I cannot prove it, but it’s kind of obvious they attempted to replace these engineers with AI, and the AI was unable to fulfil the role due to its lack of intelligence, which caused this catastrophic outage.
You’d think such a public and humiliating failure would teach them a lesson, but here we are, just a few months later, and they have made the same mistake again!
Truth be told, they should have learned this lesson all the way back in early 2024.
I wrote about this hilarious failure in one of my previous articles. Do you remember Amazon’s “Just Walk Out” grocery stores? The idea was that facial-recognition cameras, shelf sensors, and AI would track which items a customer had taken and charge their Amazon account when they left, eliminating the need for a cashier or self-checkout. This innovation was hailed as one of the first cases of AI directly replacing human workers and a way to lower the cost of operating a store. But, in reality, it really wasn’t. A report found that over a thousand remote workers had to be hired to monitor the video feeds and verify 70% of the customers’ purchases, given that the AI was consistently making mistakes. This amount of labour isn’t cheap, even if it is outsourced overseas, and Amazon’s “Just Walk Out” AI became significantly more expensive than simply hiring regular cashier staff. As such, Amazon failed to sell the system to third parties, which resulted in the closure of almost all of these stores and fancy non-AI self-scan systems being used as a replacement. Again, the AI isn’t intelligent and can’t reliably perform simple tasks. This is because it is a statistical machine, meaning it will statistically get things wrong. So, the amount of human oversight needed to correct its simple but potentially devastating mistakes is almost always more work than is saved by implementing the AI.
What is the lesson to learn from this? These systems aren’t intelligent, they can’t even replace basic human intelligence, and they aren’t a productivity tool.
The real question is, after a third attempt, do you think Bezos and his band of ravenous executives have the awareness, empathy, or understanding to learn this lesson? That isn’t a leading question; I genuinely want you to answer it for yourself.
They Should Have Known…
You could argue that this is the clash between theory and practice. AI works in theory, in labs, and in controlled conditions, and the only thing Amazon is doing is ironing out the kinks of transitioning from theory to reality. I will happily point out that Amazon could easily test AI in the real world in a controlled and restricted way, rather than unleashing it basically untested and unrestrained on the bedrock of their business, all because some sweater-vest-wearing business consultant thinks it’s the easiest path to buying a third yacht. But here is the thing: generative AI doesn’t work in theory, and we have known that for a while.
Take the Carnegie Mellon University study, which found that even the best “agentic” AIs completely fail basic tasks 70% of the time, thanks to hallucinations and obviously incorrect responses. Or what about the recent study that found that the best current AIs failed 97.5% of realistic real-world freelancing jobs given to them due to AI hallucinations and total failures? What about the University of Waterloo’s research, which found that even the best generative AI coders only have a 75% accuracy rate when tasked with very basic coding tasks? In other words, even basic AI-generated code doesn’t work a quarter of the time! Or, what about research from Veracode, which found that 45% of AI-generated code contained security flaws? Or the study from Coderabbit, which found that AI-generated code has 70% more bugs than human-written code? All of these factors combined explain why a recent Harvard Business Review report discovered that AI is not boosting productivity, but instead intensifying work. Ultimately, AI is more like a burnout machine than a productivity tool. The time saved by using an AI is greatly overshadowed by the time spent micromanaging the little slop-producing plagiarism monster.
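The Waterloo figure is even more damning once you compound it. If each AI-generated change is correct 75% of the time, a short chain of changes is more likely than not to contain a failure somewhere. A quick back-of-envelope calculation (my own arithmetic, not a claim from any of the studies; treating each change as independent is a simplifying assumption):

```python
# Back-of-envelope: probability that a series of AI-generated changes
# ALL work, assuming each change is independently correct 75% of the
# time (the University of Waterloo figure cited above). Independence
# is a simplifying assumption, not a claim from the study.

PER_CHANGE_ACCURACY = 0.75


def all_correct_probability(n_changes: int) -> float:
    """Probability that every one of n independent changes is correct."""
    return PER_CHANGE_ACCURACY ** n_changes


for n in (1, 3, 5, 10):
    print(f"{n:2d} changes: {all_correct_probability(n):.1%} chance they all work")
# Prints roughly: 75.0%, 42.2%, 23.7%, 5.6%
```

By just three changes the odds are already against you, and by ten a bug is a near certainty — which is precisely why “almost works most of the time” is useless for production code.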
Possibly my favourite example was some research from the University of Melbourne. They found that AI only increases productivity in “low-skill” tasks, such as taking meeting notes or providing customer service. Here, they discovered that AI can help smooth the outputs of workers who may have poor language skills or are learning new tasks. For higher-skilled jobs where accuracy is essential, AIs make errors so frequently that the extensive human oversight required to catch them makes the entire effort less productive than not using AI at all. What’s the problem here? Well, the workers who stand to benefit the most from AI — such as “low-skill workers” — don’t possess the skills or awareness to oversee AI and identify and correct its frequent mistakes. So, even though it “improves productivity”, potentially critical errors go unnoticed, which creates an obligation to micromanage its output, meaning it doesn’t improve overall productivity.
And, yes! It does get worse.
You might argue against this point by claiming we are providing these AIs with more data and compute power, which is causing them to improve, meaning they could have already overcome all of these limitations.
Well, not so fast.
As I have covered before, OpenAI’s latest research paper found that “hallucinations” (where the AI gets things wrong) are a fundamental part of generative AI technology and aren’t going away any time soon. They mathematically proved that adding more training data, ensuring perfect training data, and providing the models with more compute power won’t lower their current hallucination rate. In fact, the paper concluded that there are no viable options for improving overall accuracy.
The body of research is very clear — generative AI is not intelligent; it is not reliable; it can’t replace humans; it can’t be widely used as a productivity tool; and this is how it will remain for a long time.
Amazon should have known this from the get-go, and that is exactly why those “in the know” are pointing and laughing at Huang’s AGI remarks.
Summary
So, well done, Jeff; you have fallen on your own multi-billion-dollar artificial sword. The central product of your empire is flickering out like a broken lightbulb, and you have fired all the talent that could fix it because you wanted to cosplay as Tony Stark and J.A.R.V.I.S. (not to mention that your AI has more in common with the senile Holly from Red Dwarf)… Still, I wonder if any of those employees will be willing to come back to clean up your mess after being so crudely tossed out into the cold. I wonder if, instead of chasing speculative value, techbros will learn their lesson and place more value on real, brilliant human intelligence. All I know is I can hope.
Thanks for reading! Everything expressed in this article is my opinion and should not be taken as financial advice or accusations. Don’t forget to check out my YouTube channel for more from me, or Subscribe. Oh, and don’t forget to hit the share button below to get the word out!


I wonder what happens when the AI companies themselves replace all their staff with AI. I think we've already had a sample.
Just recently my company bought several licenses of Claude. But my colleagues and I couldn't register: Claude wouldn't accept our phone numbers. It turned out the problem was worldwide and lasted for at least a week. I'm not sure it is fully fixed as of today, although we have access now.
The explanation from Anthropic was, in effect: our product is so fantastic that we struggle to handle the rapidly growing number of users. Well, indeed, the hype around Claude is at its peak right now. But I think the real reason is different: they have bugs in their registration system and no people to fix them.
So far, AI companies have been rather good at providing a positive user experience: no significant downtime, fast responses, easy access. But as the user base grows and the chase for profitability intensifies, it will only get worse. Enshittification will arrive there as well.
Let him fire the competent people and use AI instead. Let him. This story is delicious.