Microsoft Correction tool revealed: app finds and fixes AI hallucinations as part of the Azure AI Content Safety API

Written by Dave W. Shanahan

September 25, 2024

Microsoft has introduced a new tool called Correction, designed to automatically revise factually incorrect AI-generated text. The Microsoft Correction tool, part of Microsoft’s Azure AI Content Safety API, uses a two-step process to flag and correct hallucinations, which are false or misleading statements generated by AI models.

Microsoft’s Correction tool

The Correction tool works in two steps: first, a classifier model identifies potentially incorrect or fabricated text spans. If hallucinations are detected, a second step, which combines small and large language models, rewrites the flagged text to align it with verified source material, known as “grounding documents.”

The feature is designed to support developers and users of generative AI in fields such as medicine, where accuracy is crucial.
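To make the two-step flow concrete, here is a minimal sketch of the kind of request a developer might send to the Content Safety service's groundedness-detection endpoint with correction enabled. The field names, `api-version`, and `correction` flag below are assumptions based on the preview API and may differ in a real deployment; the dosage example is invented for illustration.

```python
import json

# NOTE: endpoint path, api-version, and field names are assumptions drawn
# from Azure AI Content Safety's preview groundedness-detection API; check
# the official docs for your deployment before relying on them.
API_VERSION = "2024-09-15-preview"  # assumed preview version

def build_correction_request(text, grounding_sources, task="Summarization"):
    """Build the JSON body that asks the service to flag ungrounded
    spans in `text` and rewrite them against `grounding_sources`."""
    return {
        "domain": "Generic",          # e.g. "Medical" for clinical text
        "task": task,                 # "QnA" or "Summarization"
        "text": text,                 # the AI-generated output to check
        "groundingSources": grounding_sources,  # verified reference docs
        "correction": True,           # request a rewrite, not just a flag
    }

# Hypothetical example: the model's summary contradicts the source note.
body = build_correction_request(
    "The patient was prescribed 500 mg of aspirin daily.",
    ["Discharge note: the patient was prescribed 100 mg of aspirin daily."],
)
print(json.dumps(body, indent=2))
```

The body would then be POSTed to the service's `text:detectGroundedness` route with a subscription key; the response, in the corrected variant, includes a rewritten text aligned with the grounding sources.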

However, experts caution that this approach may not address the root cause of AI hallucinations. Os Keyes, a PhD candidate at the University of Washington, notes that trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water, as it is an essential component of how the technology works.

Moreover, experts warn that the Correction tool may create a false sense of security among users, who might perceive it as a safeguard against erroneous outputs and misinformation. Mike Cook, a research fellow at Queen Mary University, argues that even if Correction works as advertised, it threatens to compound the trust and explainability issues around AI.

What’s next

(Image: AI-generated illustration. This picture is not real.)

Microsoft’s Correction tool is a significant step in addressing the issue of AI hallucinations, which has been a major concern in the AI community. AI hallucinations occur when AI models generate text and other content that appears plausible but is factually incorrect or irrelevant. This happens because AI models work by statistically predicting the next word in a sequence based on patterns learned from extensive datasets.
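This statistical mechanism can be illustrated with a toy model (not a real LLM, which conditions on far more context): a bigram table that always picks the most frequent next word seen in training. Because it optimizes for what usually follows, it can emit fluent text that is confidently false. The tiny corpus below is invented for the example.

```python
# Toy illustration of next-word prediction, not a real language model.
# A bigram table counts which word follows which in a tiny corpus;
# greedy generation then always picks the most common continuation.
from collections import Counter

corpus = ("the capital of france is paris . "
          "the city of lights is paris . "
          "the capital of mars is unknown .")
words = corpus.split()

# Count how often each word follows each other word.
bigrams = {}
for prev, nxt in zip(words, words[1:]):
    bigrams.setdefault(prev, Counter())[nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return bigrams[word].most_common(1)[0][0]

# Complete the prompt "the capital of mars is" one word at a time.
out = "the capital of mars is".split()
while out[-1] != "." and len(out) < 12:
    out.append(predict_next(out[-1]))

# "paris" followed "is" twice in training, "unknown" only once, so the
# model confidently produces a plausible-sounding falsehood.
print(" ".join(out))  # → the capital of mars is paris .
```

The model never "knows" anything about Mars; it only knows that "paris" is the likeliest word after "is", which is exactly the failure mode tools like Correction try to patch after the fact.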

Microsoft’s Correction tool is part of a broader effort to enhance the security, safety, and privacy of AI systems. The company has also introduced new Evaluations in Azure AI Studio, which support proactive risk assessments, and updates to Microsoft 365 Copilot that provide transparency into web queries, helping users understand how search data influences Copilot’s responses.

Correction is a notable step, but as experts caution, it may not address the root cause of AI hallucinations and could create a false sense of security. As AI plays a larger role in our lives, tools and strategies that ensure the accuracy and reliability of AI-generated content will only become more important.




I'm Dave W. Shanahan, a Microsoft enthusiast with a passion for Windows, Xbox, Microsoft 365 Copilot, Azure, and more. I started MSFTNewsNow.com to keep the world updated on Microsoft news. Based in Massachusetts, you can email me at davewshanahan@gmail.com.