Will Lockett's Newsletter

Claude Mythos Probably Isn't What You Think It Is

Dangerous AI? Marketing stunt? Or a protection racket?

Will Lockett
Apr 23, 2026
Photo by Sharon Waldron on Unsplash

Just over a week ago, Anthropic announced its Claude Mythos model to the world, and boy, did everyone lose their tiny little marbles over it. It’s easy to see why. Mythos is effectively a cybersecurity bot, able to crawl through code, find vulnerabilities, and exploit them. In just a few weeks, Anthropic claims, Mythos discovered thousands of zero-day vulnerabilities across every major operating system and web browser. Naturally, Anthropic is so worried about the ramifications of such a model being publicly available, and of other companies creating similarly dangerous models, that it is currently launching Mythos only as a ‘preview’ for researchers, government bodies, and major software developers, so they can get ahead of the tidal wave of cyberattacks these models will supposedly cause. But it’s fair to say that, like all AI bros, Anthropic has a solid track record of being all marketing spin and no trousers. So, is this the AI armada we were warned about? Or is this just a marketing stunt?

Why We Should Be Sceptical

It’s fair to say that the timing of Mythos’s launch is so coincidental that it raises some major suspicions.

A few weeks ago, Anthropic’s main product, Claude Code, was being publicly dragged. AMD’s director trashed the coding assistant for getting dumber and lazier. It simply couldn’t do tasks it once excelled at. This could be explained by a recent OpenAI memo, which details how Anthropic is compute-limited, or the recent report that Claude is experiencing regular, damaging disruptions. Quite simply, Anthropic might be dumbing down Claude to cope with the enormous influx of former OpenAI users. Then, because Anthropic has vibe-coded much of Claude’s scaffolding, its source code was leaked, allowing the entire world to see what made it tick! Ironically, for a company that can only exist by stealing more copyrighted works than nearly anyone else, except OpenAI, Anthropic claimed the code was their IP and issued approximately 8,000 copyright strikes to get the leaked code taken down. Not exactly a good look for Anthropic from an ethical, cybersecurity, or competency angle — especially when you consider they are planning an IPO by the end of the year.

So, the fact that they have suddenly developed an unimaginably powerful AI that is too dangerous for us to even try, let alone verify, is a little coincidental, don’t you think? It’s almost as if this is the perfect marketing tool to bury all the major issues and bad press thrown at Anthropic over the past month. And while this is in no way proof that Mythos is purely a swizz, it is at the very least a sign that we need a dose of healthy scepticism here.

Remember: it isn’t just cranks on the internet like me who are urging scepticism. Some of the most influential AI researchers, industry experts, and even Big Tech investors have serious doubts about Mythos.

AI researcher and neuroscientist Gary Marcus stated that Mythos’s risk was “overblown” and that “To a certain degree, I feel that we were played.” Yann LeCun, one of the ‘AI Godfathers’, dismissed Mythos entirely, calling it “BS from self-delusion”, and said that smaller models could already complete the same tasks. Dr Heidy Khlaaf, Chief AI Scientist at the AI Now Institute, noted that the announcement of Mythos used vague language and didn’t include the right metrics to verify Anthropic’s claims, leading her to question whether the announcement was really designed to generate investment without too much scrutiny. David Sacks, arguably one of the most egregious members of the PayPal mafia — who is, in my opinion, an utter wanker and, more importantly for our conversation, a Big Tech investor — maintained that Anthropic has a history of using “scare tactics” to drive hype, meaning we should take its claims with a grain of salt. When one of these guys warns that tech claims are overblown, you know it’s bad!

That is a chorus of deeply informed people across the political and ethical spectrum, all warning that things might not be as they seem with Mythos.

Independent analysis has supported this perspective, too. AISI is one of the organisations that was allowed to preview Mythos and test its capabilities. Though they did consider it a step forward in its ability to perform simulated cybersecurity attacks, they also found it was not as significant a leap as Anthropic claimed.

But there is one glaring anecdotal hole in Anthropic’s narrative.

If Mythos is so outstanding, why didn’t Anthropic use it to stop the Claude leak? If your company were about to become the de facto expert in software vulnerability detection, with a tidal wave of AI-powered cyberattacks swiftly approaching, you’d think the first thing you would do is use that very AI cybersecurity tool to make sure all your own software was fully patched. It doesn’t quite make sense that Anthropic can have a model supposedly as powerful as Mythos and suffer a giant breach at the same time. There are a few possible explanations: Mythos might not be able to accomplish what they claim it can; Claude’s heavily vibe-coded nature could make its bugs and vulnerabilities nearly impossible to fix; or Anthropic doesn’t trust Mythos not to completely ruin Claude. But whatever the truth is, this discrepancy doesn’t look good.

Okay, so we need to be sceptical. We get it. What questions should we be asking?
