
Microsoft introduces new feature which can automatically correct inaccurate AI content

Microsoft has launched a new feature called 'Correction' to automatically detect and rectify false information generated by AI. This tool aims to address the issue of AI hallucinations and improve the accuracy of AI-generated content.

India Today

In Short

  • Microsoft introduces new tool to tackle AI hallucinations
  • The tool, called Correction, is aimed at improving the accuracy of AI-generated content
  • The tool is part of Microsoft's Azure AI Content Safety API

AI chatbots like ChatGPT, Gemini, and Copilot are helping users access information quickly. However, ever since ChatGPT arrived in 2022, a significant issue with generative AI language models has been in the spotlight: their tendency to hallucinate, or present false information as fact. To address this problem, Microsoft has announced a new feature called "Correction," which, according to the company, will automatically detect and rectify false information generated by AI.

This new tool is part of Microsoft's Azure AI Content Safety API. Microsoft states that the new feature is designed to identify and correct factually incorrect or misleading information produced by AI systems. “Correction is a capability in Microsoft Azure AI Content Safety’s Groundedness detection feature that helps fix hallucination issues in real time before users see them,” Microsoft wrote in an official blog post.

What is AI Hallucination?

AI hallucinations happen when large language models generate text or other content that appears plausible but is factually incorrect or irrelevant. This happens because these models work by statistically predicting the next word in a sequence, based on patterns learnt from extensive datasets. Since AI models cannot think independently and rely entirely on the data they are trained on, they lack an inherent understanding of facts and may produce responses that sound accurate but have no grounding in truth.
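To see why fluent text and factual text are not the same thing, consider a deliberately tiny illustration in Python. The bigram counts below are invented for the example; real models are vastly larger, but the failure mode is the same: the most statistically likely continuation need not be true.

```python
# A toy bigram "language model": it chooses the next word purely from
# co-occurrence counts, with no notion of whether the result is true,
# only of whether it is statistically likely. The counts are invented
# for illustration; this is not how Microsoft's models are built.
bigram_counts = {
    "the": {"capital": 4, "model": 2},
    "capital": {"of": 6},
    "of": {"australia": 3, "france": 2},
    "australia": {"is": 5},
    "is": {"sydney": 4, "canberra": 2},  # skewed data favours the wrong answer
}

def next_word(word):
    candidates = bigram_counts.get(word)
    if not candidates:
        return None
    # Greedy decoding: always take the statistically most likely continuation.
    return max(candidates, key=candidates.get)

word, sentence = "the", ["the"]
while word and len(sentence) < 8:
    word = next_word(word)
    if word:
        sentence.append(word)

# Prints "the capital of australia is sydney": fluent and plausible-sounding,
# yet factually wrong -- exactly the failure mode called a hallucination.
print(" ".join(sentence))
```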

To address the issue of hallucinations and the misinformation that can result, Microsoft has introduced its new Correction feature. The feature tackles AI hallucinations in two stages: a classifier model first flags potentially incorrect or fabricated snippets of AI-generated text, and if hallucinations are detected, a second stage, which utilises both small and large language models, attempts to correct those errors by aligning the text with verified source material, known as “grounding documents.”

This process of aligning outputs with grounding documents is what powers the tool. “We hope this new feature supports builders and users of generative AI in fields such as medicine, where developers must ensure the accuracy of responses,” a Microsoft spokesperson explained in an interview with TechCrunch.

Microsoft will allow companies to integrate the new Correction tool with any text-generating AI model, including Meta’s Llama and OpenAI’s GPT-4.
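For developers, using the feature means sending the AI output to be checked, along with the grounding documents to check it against, to the Content Safety API. The sketch below shows roughly what such a REST call could look like in Python. The endpoint path follows Microsoft's public preview documentation, but the api-version string, the exact request fields, and the "correction" flag vary across preview releases and should be treated as assumptions to verify against the current Azure docs; the resource name, key, and texts are placeholders.

```python
import requests

# A minimal sketch of calling Azure AI Content Safety's groundedness
# detection over REST. Endpoint path per Microsoft's preview docs;
# api-version and the "correction" field are assumptions that may
# differ by preview release -- verify against the current Azure docs.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-content-safety-key>"  # placeholder

payload = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "When did Microsoft announce the Correction feature?"},
    # The AI-generated answer to be checked for hallucinations:
    "text": "Microsoft announced the Correction feature in 2021.",
    # The verified source material ("grounding documents"):
    "groundingSources": [
        "Microsoft announced the Correction capability in September 2024."
    ],
    "correction": True,  # assumed flag enabling the rewrite step
}

response = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},  # version is an assumption
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=payload,
)
response.raise_for_status()
# Expected shape: the ungrounded claim is flagged and, with correction
# enabled, a rewritten text aligned with the grounding source is returned.
print(response.json())
```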

Alongside the Correction feature, Microsoft has also introduced a series of updates aimed at enhancing the security, safety, and privacy of AI systems. The company has expanded its Secure Future Initiative (SFI), emphasising three core principles: secure by design, secure by default, and secure operations. This includes the launch of new Evaluations in Azure AI Studio, which support proactive risk assessments, and updates to Microsoft 365 Copilot, providing transparency into web queries to help users understand how search data influences Copilot responses.

Microsoft is also addressing privacy concerns with the introduction of confidential inferencing in its Azure OpenAI Service Whisper model. This feature ensures that sensitive customer data remains secure and private during the inference process, making it particularly beneficial for industries like healthcare and finance, where data protection is paramount.