Curated by THEOUTPOST
On Thu, 26 Sept, 4:05 PM UTC
2 Sources
[1]
The End of AI Hallucinations? Microsoft's 'Correction' Feature
Microsoft recently announced an AI feature called "Correction," which detects and fixes AI hallucinations: cases where AI models produce false or misleading information. The feature is being added to the groundedness detection system in Azure AI Content Safety. Groundedness refers to an AI model's ability to stay in touch with reality, and the addition is meant to ensure that AI gives accurate answers. Correction is specifically engineered to identify incorrect or fabricated statements produced by AI systems in real time and, as its name suggests, to correct that wrongful information and improve the user experience.

Correction relies on grounding documents, a resource base that AI models use as a factual repository, to help ensure that output matches factual data. Whenever the AI system generates a response, the grounding system checks whether the information produced is correct or contains errors. If it detects misinformation, Correction triggers a review and supplies a corrected suggestion. Users can also opt into a reasoning explanation that describes why the AI system flagged specific content as wrong, which helps make the correction process more transparent.
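As a rough mental model of that grounding check, here is a minimal sketch in Python. It is not Microsoft's implementation: the GroundingResult type, the check_groundedness function, and the word-overlap heuristic are all invented for illustration, and a production system would presumably use trained classifiers rather than lexical matching.

```python
# Illustrative sketch of checking a generated claim against grounding
# documents. All names here are hypothetical; Microsoft has not published
# Correction's internals.
from dataclasses import dataclass

@dataclass
class GroundingResult:
    grounded: bool       # True if a source sentence supports the claim
    support: str | None  # the supporting sentence, if one was found
    reason: str          # the optional transparency explanation

def check_groundedness(claim: str, grounding_docs: list[str]) -> GroundingResult:
    """Naive lexical check: a claim counts as grounded if most of its
    content words appear together in some source sentence."""
    claim_words = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
    best_sentence, best_score = None, 0.0
    for doc in grounding_docs:
        for sentence in doc.split("."):
            words = {w.lower().strip(".,") for w in sentence.split()}
            score = len(claim_words & words) / max(len(claim_words), 1)
            if score > best_score:
                best_sentence, best_score = sentence.strip(), score
    if best_score >= 0.8:
        return GroundingResult(True, best_sentence,
                               f"{best_score:.0%} of content words found in a source sentence")
    return GroundingResult(False, None,
                           "no source sentence supports this claim; flag it for correction")

docs = ["Microsoft announced the Correction feature for Azure AI Content Safety."]
print(check_groundedness("Microsoft announced the Correction feature.", docs))  # grounded
print(check_groundedness("Google announced the Correction feature.", docs))    # flagged
```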
[2]
Could Microsoft's new AI feature really correct hallucinations?
Microsoft claims it has a new capability that detects and corrects false or misleading statements from AI.

Microsoft unveiled a new artificial intelligence (AI) feature this week that it says will help to correct models' false statements. The new "Correction" capability will identify AI output inaccuracies and fix them, according to the technology giant. So-called AI hallucinations will be corrected in real time "before users of generative AI applications encounter them," Microsoft said, with a spokesperson calling it a "new first-of-its-kind capability".

The feature works by scanning and highlighting the inaccurate part of a response. It can then generate an explanation of why the segment is wrong and use generative AI to correct the section so "that the rewritten content better aligns with connected data sources," a Microsoft spokesperson said. It is part of Microsoft's Azure AI Content Safety software interface, which can also now be embedded on devices.

AI models are trained on extensive datasets to make predictions, but they can also "hallucinate," meaning they generate incorrect or false statements. This can be due to incomplete or biased training data.

Jesse Kommandeur, a strategic analyst at the Hague Centre for Strategic Studies, compares it to baking a cake without the full recipe: you guess, based on prior experience, what may work. Sometimes the cake comes out well; other times it does not. "The AI is trying to 'bake' the final output (like a text or decision) based on incomplete information ('recipes') it has learned from," Kommandeur said in an email.

There have been many high-profile examples of AI chatbots providing false or misleading answers, from lawyers submitting fake legal cases after using an AI model to Google's AI summaries providing misleading and inaccurate responses earlier this year. An analysis by the company Vectara last year found that AI models hallucinated between 3 and 27 per cent of the time, depending on the tool. Meanwhile, the non-profit Democracy Reporting International said ahead of the European elections that none of the most popular chatbots provided "reliably trustworthy" answers to election-related queries.

Generative AI "doesn't really reflect and plan and think. It just responds sequentially to inputs... and we've seen the limitations of that," said Vasant Dhar, a professor at New York University's Stern School of Business and Center for Data Science in the US. "It's one thing to say [the new correction capability] will reduce hallucinations. It probably will, but it's really impossible to get it to eliminate them altogether with the current architecture," he added.

Ideally, Dhar added, a company would want to be able to claim that it reduces a certain percentage of hallucinations. "That would require a huge amount of data on known hallucinations and testing to see if this little prompt engineering method actually reduces them. That's actually a very tall order, which is why they haven't made any kind of quantitative claim about how much it reduces hallucinations".

Kommandeur looked at a paper Microsoft confirmed was published about the correction feature and said that while it "looks promising and chooses a methodology I haven't seen before, it's likely that the technology is still evolving and may have its limitations".

Microsoft says hallucinations have held back AI models in high-stakes fields, such as medicine, as well as their broader deployment.
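The scan, explain, and rewrite flow described above can be pictured as a small pipeline. The sketch below is a hedged illustration, not Azure AI Content Safety's actual interface: the llm client, its complete() method, and the prompt wording are hypothetical stand-ins for whatever models Microsoft chains together.

```python
# Hypothetical detect -> explain -> rewrite loop, mirroring the flow the
# article describes. The `llm` object and its `complete(prompt)` method are
# invented for this sketch; the real feature exposes a different interface.
def correct_response(response: str, grounding_sources: list[str], llm) -> dict:
    sources = "\n".join(grounding_sources)

    # 1. Scan: ask the model to quote any span unsupported by the sources.
    flagged = llm.complete(
        f"Sources:\n{sources}\n\nResponse:\n{response}\n\n"
        "Quote the exact span not supported by the sources, or say NONE."
    ).strip()
    if flagged == "NONE":
        return {"corrected": response, "flagged": None, "reasoning": None}

    # 2. Explain: generate the optional reasoning shown to users.
    reasoning = llm.complete(
        f"Sources:\n{sources}\n\nExplain briefly why this span is "
        f"unsupported: {flagged}"
    )

    # 3. Rewrite: regenerate the span so it aligns with the grounding sources.
    corrected = llm.complete(
        f"Sources:\n{sources}\n\nRewrite this response, replacing "
        f"'{flagged}' with content the sources support:\n{response}"
    )
    return {"corrected": corrected, "flagged": flagged, "reasoning": reasoning}
```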
"All of these technologies including Google Search are technologies where these companies just continue to make incremental improvements in the product," said Dhar. "That's kind of the mode once you have the main product ready, then you keep improving it," he said. "From my perspective, in the long term, investment in AI can become a liability if the models keep hallucinating, especially if these errors keep leading to misinformation, flawed decision-making etc," said Kommandeur. "In the short term, however, I think the [large language models] LLMs add so much value to the daily lives for a lot of people in terms of efficiency, that the hallucinations are something we seem to take for granted," he said.
Microsoft introduces a groundbreaking AI correction feature designed to address the issue of AI hallucinations. This development promises to enhance the reliability of AI-generated content across various applications.
In a significant leap forward for artificial intelligence technology, Microsoft has unveiled a new AI correction feature aimed at combating the persistent problem of AI hallucinations. This development comes as a response to growing concerns about the reliability and accuracy of AI-generated content 1.
AI hallucinations occur when language models generate false or misleading information, presenting it as factual. This phenomenon has been a major hurdle in the widespread adoption and trust of AI systems, particularly in critical applications where accuracy is paramount 2.
The new feature employs a multi-step approach to identify and rectify potential hallucinations: the system checks each AI-generated response against connected grounding documents, flags and highlights any segment the sources do not support, optionally generates an explanation of why the segment was flagged, and finally rewrites the segment so that it aligns with the source material.
This process aims to significantly reduce the occurrence of false information in AI outputs, enhancing the overall reliability of the technology.
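For developers curious what invoking such a capability might look like, here is a hedged sketch of a REST call in the general style of Azure AI Content Safety's groundedness detection. The endpoint path, API version, header, and every field name are assumptions patterned on Azure's public API conventions, not a verified schema.

```python
# Hedged illustration only: the URL, api-version, and payload fields below
# are assumptions modeled on Azure AI Content Safety's documented style.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder

payload = {
    "domain": "Generic",
    "text": "Paris is the capital of Germany.",  # candidate AI output
    "groundingSources": ["Paris is the capital of France."],
    "reasoning": True,  # request the explanation the feature advertises
}

resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-02-15-preview"},  # assumed version string
    headers={"Ocp-Apim-Subscription-Key": "<key>"},
    json=payload,
)
print(resp.json())  # expected to report ungrounded spans plus reasoning
```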
Microsoft's correction feature has far-reaching implications across various sectors, most notably high-stakes fields such as medicine, where the company says hallucinations have held back both AI models themselves and their broader deployment.
While promising, the technology is not without its challenges: experts note that the current architecture makes hallucinations impossible to eliminate entirely, that Microsoft has made no quantitative claim about how much the feature reduces them, and that the underlying methodology is still evolving and may have limitations.
The announcement has generated significant interest in the tech industry. Competitors are likely to develop similar features, potentially leading to a new standard in AI reliability. As the technology evolves, it could pave the way for more trustworthy and widely adopted AI systems across various domains 1 2.
Microsoft's AI correction feature represents a significant step towards more reliable and trustworthy AI systems. As the technology continues to develop and be tested in real-world scenarios, it has the potential to reshape our interaction with and reliance on AI-generated content across numerous fields.
Reference
[1] The End of AI Hallucinations? Microsoft's 'Correction' Feature
[2] Could Microsoft's new AI feature really correct hallucinations?