4 Sources
[1]
Microsoft says Copilot is for entertainment purposes only, not serious use -- firm pushing AI hard to consumers tells users not to rely on it for important advice
These might be boilerplate disclaimers, but they contradict the company's ads and marketing. Microsoft has pushed its AI services hard at its user base, especially with the launch of the Copilot+ PC, but it seems that even the company itself does not trust its creation.

According to the Microsoft Copilot Terms of Use, which were updated in October last year, the AI large language model (LLM) is designed for entertainment use only, and users should not use it for important advice. While this may be a boilerplate disclaimer, it's quite ironic given how hard the company pushes people to use Copilot for business and how deeply it has integrated the assistant into Windows 11. "Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended," the document said. "Don't rely on Copilot for important advice. Use Copilot at your own risk."

This isn't limited to Copilot, either. Other AI LLMs have similar disclaimers. For example, xAI says "Artificial intelligence is rapidly evolving and is probabilistic in nature; therefore, it may sometimes: a) result in Output that contains "hallucinations," b) be offensive, c) not accurately reflect real people, places or facts, or d) be objectionable, inappropriate, or otherwise not suitable for your intended purpose."

These may sound like common sense to people familiar with how LLMs work, but, unfortunately, some people treat AI output as gospel, even those who are supposed to know better. We've seen this with Amazon's services, where some AWS outages were reportedly caused by an AI coding bot that engineers let solve an issue without oversight. The Amazon website itself has also been hit with a few "high blast radius" incidents that were linked to "Gen-AI assisted changes," resulting in senior engineers being called into a meeting to resolve the matter.

While generative AI is a useful tool and can indeed increase productivity, it's still just a tool, and it offers no accountability for any mistakes it might make. Because of this, people who use it must be careful to always doubt its output and double-check its results. But even if you're aware of the limitations of current AI technology, humans are susceptible to automation bias, wherein we tend to favor the results that machines produce and ignore data that might contradict them. AI could make this phenomenon more severe, especially as it can create results that look plausible, or even true, at a cursory glance.

Companies in general usually add disclaimers like these to their products and services to protect themselves from lawsuits. But as AI tech companies push their AI services as the ultimate productivity hack, they might be minimizing the risks attached to the use of AI tools just to get customers paying and recoup the billions they've invested in hardware and talent.
[2]
Even Microsoft knows Copilot can't be trusted
Terms admit it is for entertainment only and may get things wrong

A recent surge of interest in Microsoft's Terms of Use for Copilot is a reminder that AI helpers are really just a bit of fun. Despite the last update taking place in late 2025, the document for Copilot for Individuals recently attracted new attention from netizens. It includes this gem: "Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk."

Regular readers of The Register won't be shocked by Microsoft's warning that Copilot gets things wrong and should not be relied on. The company itself has long acknowledged the assistant's limitations. During the London leg of its AI tour, for example, every demonstration of Copilot wizardry came with a warning that the tool could not be fully trusted and that human verification was required. The same applies to any other AI assistant: they can be useful, but their output still needs checking, particularly on anything consequential like medical advice or an investment plan.

As one commenter on Hacker News pointed out, "Anthropic does a somewhat similar thing. If you visit their ToS (the one for Max/Pro plans) from a European IP address, they replace one section with this: 'Non-commercial use only. You agree not to use our Services for any commercial or business purposes and we (and our Providers) have no liability to you for any loss of profit, loss of business, business interruption, or loss of business opportunity.'" (The Register checked this from a US and a European IP and can confirm this is the case.) The commenter added: "It's funny that a plan called 'Pro' cannot be used professionally."

As for Copilot's Terms of Use, they may not be new, but the attention is useful for two reasons. It is a reminder to read the text users so often click through, and it underlines that chatbots such as Copilot are neither companions nor dependable sources of advice. Instead, they are error-prone tools that can be helpful one moment and confidently wrong the next. Some in the tech industry may market AI assistants as though they put a genius in every laptop, but Microsoft's own warning is rather less grand: "It can make mistakes, and it may not work as intended." Copilot for Individuals may be for entertainment purposes only. Microsoft 365 Copilot, meanwhile, can be just as inaccurate, only with fewer laughs. ®
[3]
Even Microsoft knows Copilot shouldn't be trusted with anything important
A recent surge of interest in Microsoft's Terms of Use for Copilot is a reminder that AI helpers are really just a bit of fun. Despite the last update taking place in late 2025, the document for Copilot for Individuals recently attracted new attention from netizens. It includes this gem: "Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk." Regular readers of The Register won't be shocked by Microsoft's warning that Copilot gets things wrong and should not be relied on. The company itself has long acknowledged the assistant's limitations. During the London leg of its AI tour, for example, every demonstration of Copilot wizardry came with a warning that the tool could not be fully trusted and that human verification was required.
[4]
'Copilot is for entertainment purposes only': Even Microsoft's official terms and conditions say you really shouldn't be using its AI at work
* Microsoft has clarified some of the terms and conditions associated with Copilot
* Responsibilities have been shifted onto users of the AI tool
* Despite being for "entertainment purposes," it's still heavily marketed toward workers

In a notable twist, Microsoft has reaffirmed that Copilot is for "entertainment purposes only" and that, if used for work, it should be treated as the first of multiple stages of fact-checking rather than being relied upon. "It can make mistakes, and it may not work as intended," the company wrote. "Don't rely on Copilot for important advice. Use Copilot at your own risk." Though the company very much wants businesses and employees to continue using Copilot for work, there's a clear shift in responsibility to the user here, shielding Microsoft from accusations of providing false information.

Microsoft says "use Copilot at your own risk"

In a roundabout way, Microsoft is effectively admitting to the risk of AI hallucination amid ongoing concerns about copyrighted content, IP ambiguity, and output legitimacy. With this in mind, the company clearly wants us to think of Copilot as a tool, not a decision-maker, and for users to independently fact-check outputs and be cautious with any sensitive, protected data. "You agree to indemnify us and hold us harmless... from and against any claims, losses, and expenses... arising from or relating to your use of Copilot," Microsoft added in another paragraph.

More broadly, the company also notes that prompts and responses may be used to improve Copilot, though enterprise versions have additional protections to safeguard sensitive information. In other words, users retain the rights to their inputs, but Microsoft still has the right to use the data to improve the service.

While Microsoft's efforts to push some responsibility onto users' shoulders have hit the spotlight, it's not the only company with such terms. OpenAI, Google, and Anthropic all state similar advisories in their terms, including user responsibility and no guarantee of accuracy. The shift in responsibility from AI vendor to user is an ongoing change that companies are asserting while the industry still works out what the legal risks could be, but with Microsoft still selling Copilot tools to business users and consumers, it's clearly more a terms-rewording exercise than a total shift in behavior.
Microsoft's Copilot Terms of Use, updated in late 2025, classify the AI tool as for entertainment purposes only, warning users not to rely on it for important advice. The disclaimer contradicts the company's aggressive marketing of Copilot for business productivity and its integration into Windows 11, while similar AI terms and conditions from Google, OpenAI, and Anthropic reveal an industry-wide shift of responsibility onto users.
Microsoft has classified its widely promoted AI assistant as suitable for entertainment purposes only, according to the Copilot Terms of Use updated in late 2025. The document explicitly states: "Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk" [1]. This disclaimer stands in stark contrast to Microsoft's aggressive push for Copilot adoption in business environments and its deep integration into Windows 11 [1]. The terms represent a clear shift in user responsibility for AI output, effectively shielding Microsoft from liability for errors or AI hallucination incidents.
Source: Tom's Hardware
During Microsoft's AI tour in London, every demonstration of Microsoft Copilot came with warnings that the tool could not be fully trusted and that human verification was required [2]. This acknowledgment of generative AI limitations extends beyond marketing events into the legal framework governing the product. The company's terms explicitly state: "You agree to indemnify us and hold us harmless... from and against any claims, losses, and expenses... arising from or relating to your use of Copilot" [4]. This language transfers accountability from the AI vendor to users, a strategic move as the industry navigates uncertain legal terrain around AI-generated content and decision-making.
Source: The Register
Real-world incidents underscore why such disclaimers matter. Amazon experienced AWS outages reportedly caused by an AI coding bot after engineers allowed it to solve issues without proper oversight. The Amazon website also suffered "high blast radius" incidents linked to "Gen-AI assisted changes," requiring senior engineers to intervene [1]. These cases illustrate the tangible risks when organizations treat LLM outputs as infallible.
Microsoft isn't alone in implementing protective disclaimers. The AI terms and conditions from major providers reveal a consistent pattern of liability limitation. xAI acknowledges that artificial intelligence "may sometimes: a) result in Output that contains 'hallucinations,' b) be offensive, c) not accurately reflect real people, places or facts, or d) be objectionable, inappropriate, or otherwise not suitable for your intended purpose" [1]. OpenAI, Google, and Anthropic all include similar advisories emphasizing user responsibility and offering no guarantee of accuracy [4]. Anthropic takes a particularly notable approach: when accessing their terms from a European IP address, users see a section stating "Non-commercial use only. You agree not to use our Services for any commercial or business purposes" for their Max/Pro plans [2]. As one observer noted, "It's funny that a plan called 'Pro' cannot be used professionally" [2]. These disclaimers serve dual purposes: protecting companies from legal claims while acknowledging the fundamental unpredictability of current AI technology.
The entertainment-only classification creates a glaring contradiction with how Microsoft markets Copilot. The company heavily promotes the tool as AI for productivity and has positioned Copilot+ PCs as the future of computing. Yet the same technology marketed as a productivity hack carries terms warning against reliance for important decisions [1]. This disconnect raises questions about whether AI companies are minimizing risks to drive adoption and recoup billions invested in hardware and talent.
Source: TechRadar
The phenomenon of automation bias compounds these risks. Humans tend to favor machine-generated results and ignore contradictory data, and AI could intensify this tendency as it produces outputs that appear plausible at first glance [1]. When users treat AI output as authoritative despite warnings that Copilot can't be trusted, the gap between marketed capabilities and actual reliability becomes dangerous.

The terms clarify that prompts and responses may be used to improve Microsoft Copilot, though enterprise versions include additional protections for sensitive information [4]. Users retain rights to their inputs, but Microsoft reserves the right to leverage this data for service improvement. Organizations deploying AI tools must establish verification protocols, particularly for consequential decisions involving medical advice, investment planning, or critical infrastructure. The shift represents less a change in company behavior than a rewording exercise to manage legal exposure as the industry determines long-term liability frameworks [4].
Summarized by Navi