Even AI Companies Know Their Models Can't Be Trusted
It's all hype, no trousers.

You could fit every techbro CEO’s colossal, distended ego inside the gaping chasm between what AI promised and what it actually delivers. Every now and then, we get a perfect snapshot of this discrepancy, and internet sleuths have just provided us with another one. Microslop has been shoving Copilot down users’ throats for a while now, heavily pushing it as the future of professional productivity. However, outlets like TechCrunch, TechRadar, PCMag, and others recently broke the news that Microsoft’s Terms of Service (ToS) state that Copilot is for entertainment purposes only, sparking a tsunami of online derision. Unfortunately, it’s so much worse than people realise: the problem isn’t limited to Copilot, and its implications run significantly deeper than you might think.
The Terms of Service
To use Copilot, like most modern software, you have to agree to its ToS. So, let’s start with what Copilot’s ToS actually says. In the “Code of Conduct” section, it states that “Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.” In other words, Copilot isn’t accurate or reliable enough to be trusted with even remotely important tasks or as a source of information, meaning it should be treated more like a toy than a tool.
But Microsoft also doubled down on that “at your own risk” statement. In the “Important Disclosures And Warnings” section, it states that “You agree to indemnify us and hold us harmless (including our affiliates, employees and any other agents) from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of Copilot, including without limitation your use, sharing, or publication of any Prompt, Responses, or Creations, or your breach of these Terms or violation of applicable law.” In layman’s terms, if you use Copilot as a tool and not just for entertainment (you know, like how it is advertised), you can’t hold Microsoft liable for any damage it may cause to you personally or the business you work for; in fact, you agree to cover Microsoft’s legal costs if your use gives rise to a claim against them.
So, let me get this straight — this tool, which has been touted as the next big thing in professional productivity, is so catastrophically unreliable that Microsoft had to not only explicitly state it is just for entertainment purposes but also force users to completely surrender any right to hold them accountable for damages caused by using this unreliable, inaccurate AI as a professional productivity tool. That doesn’t seem right, does it?
Some have pointed out that these ToS only cover Copilot as a personal chatbot and don’t apply to Copilot business tools. But that isn’t really a defence. For one, many businesses still use these “individual” versions of Copilot as a chatbot assistant. For another, people use Copilot as a professional assistant in both their personal and professional lives, because that is essentially how it has been advertised. And finally, these Copilot business tools are built on the same underlying AI models, so they will have similar issues.
And here is the thing: it isn’t just Microslop pulling this legalese bulls**t.
Anthropic, the less-evil AI company, is also doing this. Their consumer ToS (when viewed from a European IP) states that their services are “non-commercial use only” and that users “agree not to use our Services for any commercial or business purposes and we (and our Providers) have no liability to you for any loss of profit, loss of business, business interruption, or loss of business opportunity.” I wonder how many developers vibecoding with Claude Code know that.
To be fair to Anthropic, they are still better than Microsoft. They don’t completely disclaim liability for damage their models may cause — but they do heavily cap it. The ToS states that Anthropic’s “total liability to you for any loss or damage arising out of or in connection with these Terms, whether in contract (including under any indemnity), tort (including negligence) or otherwise will be limited to the greater of: (a) the amount you paid to us for access to or use of the Services in the six months prior to the event giving rise to the liability, and (b) £100.”
This is the exact same schtick Microsoft has pulled. The AI is so inaccurate that it can’t be trusted with meaningful tasks, so they claim it can’t be used for commercial purposes (even though that is primarily how it has been marketed) and then disclaim as much liability as possible for their AI’s actions.
OpenAI is not much better.


