Curated by THEOUTPOST
On Wed, 25 Sept, 12:06 AM UTC
2 Sources
[1]
Microsoft unveils 'trustworthy AI' features to fix hallucinations and boost privacy
Microsoft unveiled a suite of new artificial intelligence safety features on Tuesday, aiming to address growing concerns about AI security, privacy, and reliability. The tech giant is branding the initiative "Trustworthy AI," signaling a push toward more responsible development and deployment of AI technologies.

The announcement comes as businesses and organizations increasingly adopt AI solutions, bringing both opportunities and challenges. Microsoft's new offerings include confidential inferencing for its Azure OpenAI Service, enhanced GPU security, and improved tools for evaluating AI outputs.

"To make AI trustworthy, there are many, many things that you need to do, from core research innovation to this last mile engineering," said Sarah Bird, a senior leader in Microsoft's AI efforts, in an interview with VentureBeat. "We're still really in the early days of this work."

Combating AI hallucinations: Microsoft's new correction feature

One of the key features introduced is a "Correction" capability in Azure AI Content Safety. The tool aims to address the problem of AI hallucinations: instances where AI models generate false or misleading information.

"When we detect there's a mismatch between the grounding context and the response... we give that information back to the AI system," Bird explained. "With that additional information, it's usually able to do better the second try."

Microsoft is also expanding its efforts in "embedded content safety," allowing AI safety checks to run directly on devices, even when offline. This feature is particularly relevant for applications like Microsoft's Copilot for PC, which integrates AI capabilities directly into the operating system.

"Bringing safety to where the AI is is something that is just incredibly important to make this actually work in practice," Bird noted.
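The detect-and-retry loop Bird describes can be sketched generically. In the sketch below, the groundedness check is a toy word-overlap heuristic and `generate` stands in for any model call; all names are hypothetical and do not reflect the actual Azure AI Content Safety API.

```python
# A minimal sketch of the correction loop the article describes: detect
# response claims unsupported by the grounding context, feed that mismatch
# back to the model, and retry. All names here are hypothetical, not the
# real Azure AI Content Safety API.

def find_ungrounded_claims(response: str, grounding: str) -> list[str]:
    """Naive heuristic: flag sentences sharing no words with the context."""
    context_words = set(grounding.lower().split())
    return [
        sentence
        for sentence in response.split(". ")
        if sentence and not set(sentence.lower().split()) & context_words
    ]

def correct_response(generate, prompt: str, grounding: str, max_retries: int = 2) -> str:
    """Regenerate with feedback until no ungrounded claims remain."""
    response = generate(prompt, grounding)
    for _ in range(max_retries):
        flagged = find_ungrounded_claims(response, grounding)
        if not flagged:
            break
        # Give the detected mismatch back to the AI system, per Bird's description.
        feedback = f"{prompt}\nThese claims were not supported: {flagged}"
        response = generate(feedback, grounding)
    return response
```

In the real feature, the groundedness detector is a trained classifier rather than word overlap, but the feedback-and-retry shape is the same.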
Balancing innovation and responsibility in AI development

The company's push for trustworthy AI reflects a growing industry awareness of the potential risks associated with advanced AI systems. It also positions Microsoft as a leader in responsible AI development, potentially giving it an edge in the competitive cloud computing and AI services market.

However, implementing these safety features isn't without challenges. When asked about performance impacts, Bird acknowledged the complexity: "There is a lot of work we have to do in integration to make the latency make sense... in streaming applications."

Microsoft's approach appears to be resonating with some high-profile clients. The company highlighted collaborations with the New York City Department of Education and the South Australia Department of Education, which are using Azure AI Content Safety to create appropriate AI-powered educational tools.

For businesses and organizations looking to implement AI solutions, Microsoft's new features offer additional safeguards. However, they also highlight the increasing complexity of deploying AI responsibly, suggesting that the era of easy, plug-and-play AI may be giving way to more nuanced, security-focused implementations.

The future of AI safety: Setting new industry standards

As the AI landscape continues to evolve rapidly, Microsoft's latest announcements underscore the ongoing tension between innovation and responsible development. "There isn't just one quick fix," Bird emphasized. "Everyone has a role to play in it."

Industry analysts suggest that Microsoft's focus on AI safety could set a new standard for the tech industry. As concerns about AI ethics and security continue to grow, companies that can demonstrate a commitment to responsible AI development may gain a competitive advantage. However, some experts caution that while these new features are a step in the right direction, they are not a panacea for all AI-related concerns.
The rapid pace of AI advancement means that new challenges are likely to emerge, requiring ongoing vigilance and innovation in the field of AI safety. As businesses and policymakers grapple with the implications of widespread AI adoption, Microsoft's "Trustworthy AI" initiative represents a significant effort to address these concerns. Whether it will be enough to allay all fears about AI safety remains to be seen, but it's clear that major tech players are taking the issue seriously.
[2]
Microsoft Boosts AI Systems Security With Hallucination Correction, Confidential Inferencing
'We all need and expect AI we can trust,' Microsoft EVP and CMO Takeshi Numoto said.

Microsoft introduced a series of new product capabilities aimed at making artificial intelligence systems more secure, including a correction capability in Azure AI Content Safety for fixing hallucination issues in real time and a preview of a confidential inferencing capability in the Azure OpenAI Service Whisper model.

The new capabilities are meant "to help ensure that AI systems are more secure, private and safe," Takeshi Numoto, the Redmond, Wash.-based vendor's executive vice president and chief marketing officer, said in a statement Tuesday.

"We all need and expect AI we can trust," Numoto said. "With new capabilities that further security, privacy and safety, we empower organizations to create AI solutions that are trustworthy and continue to build a future where trust in AI is paramount."

[RELATED: Microsoft: Buying Three Mile Island Nuclear Power Will Help 'Carbon-Free Energy' Goal]

The product updates coincide with a progress report Microsoft released this week for its Secure Future Initiative and news that Microsoft's long-term AI investment now includes buying power from the infamous Three Mile Island nuclear site in Pennsylvania.

The vendor -- which has more than 400,000 partners worldwide -- also detailed the second wave of iteration around its Copilot brand of AI tools earlier this month. During Salesforce's annual Dreamforce conference last week, the AI rival's CEO and co-founder Marc Benioff criticized Microsoft's Copilot strategy and the hallucination rates of other AI offerings.

As part of the new updates, Microsoft has made generally available (GA) Azure confidential virtual machines with Nvidia H100 Tensor Core graphics processing units (GPUs). Microsoft positions these VMs as protecting user data directly on the GPU.
Confidential computing keeps customer data encrypted and protected in secure environments, and confidential inferencing brings that security and privacy to the process of trained AI models making predictions and decisions on new data. In preview is a confidential inferencing capability in the Azure OpenAI Service Whisper model, meant to let users develop generative AI apps that support verifiable end-to-end privacy, according to Microsoft.

Capabilities "coming soon" include transparency into web queries for Microsoft 365 Copilot administrators and users, plus SharePoint Advanced Management updates around data oversharing, according to Microsoft. Transparency into web queries aims to help users understand how web search enhances Copilot responses.

New evaluations in Azure AI Studio are meant to support proactive risk assessments, according to the Redmond, Wash.-based tech giant. The evaluations should help users assess output quality and relevancy, and how often an AI application outputs protected material.

Azure AI Content Safety's Groundedness Detection feature gained a correction capability for fixing hallucination issues in real time, before customers see them, according to Microsoft. Users also gained the ability to embed Azure AI Content Safety on devices that may have intermittent or no cloud connectivity. These capabilities are in public preview.

John Snyder, CEO of Durham, N.C.-based solution provider Net Friends, told CRN in an interview that he looks forward to Microsoft adding more AI-powered security to its Defender for Endpoint product and the extended detection and response (XDR) space, a space where Microsoft partners including Snyder's business participate. "There's so much potential to aggregate all the security alerts and actionable tasks" within Defender, Snyder said.
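Embedding content safety on a device with intermittent connectivity amounts to a fallback pattern: prefer the cloud service when reachable, run a local check otherwise. The sketch below illustrates that pattern with an invented blocklist check; the names and logic are hypothetical, not Microsoft's SDK.

```python
# A hedged sketch of on-device content safety with a cloud-first fallback.
# LOCAL_BLOCKLIST and all function names are illustrative assumptions.

LOCAL_BLOCKLIST = {"attack-string", "exploit-payload"}

def local_safety_check(text: str) -> bool:
    """Offline check: reject text containing any blocklisted token."""
    return not any(term in text.lower() for term in LOCAL_BLOCKLIST)

def check_content(text: str, cloud_check=None) -> bool:
    """Use the cloud service when available; otherwise run the embedded check."""
    if cloud_check is not None:
        try:
            return cloud_check(text)
        except ConnectionError:
            pass  # device went offline mid-request; fall back to the local check
    return local_safety_check(text)
```

A production embedded model would be a small classifier rather than a blocklist, but the routing between cloud and device is the relevant idea.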
Microsoft introduces new AI features aimed at addressing hallucinations, improving security, and enhancing privacy in AI systems. These advancements aim to improve the trustworthiness and reliability of AI applications.
Microsoft has taken a significant step forward in the realm of artificial intelligence by introducing a suite of new features designed to enhance the trustworthiness, security, and privacy of AI systems. These innovations come at a crucial time when concerns about AI reliability and data protection are at the forefront of public discourse [1].
One of the key challenges in AI development has been the issue of "hallucinations": instances where AI models generate false or misleading information. Microsoft's new features aim to tackle this problem head-on. The company has introduced a capability that allows AI models to provide citations for their responses, enabling users to verify the information's accuracy [1]. This feature not only enhances transparency but also builds user confidence in AI-generated content.
Privacy concerns have been a significant barrier to AI adoption in many sectors. Microsoft's latest offerings include "confidential AI" capabilities, which allow organizations to run AI workloads on encrypted data without exposing the underlying information [2]. This advancement is particularly crucial for industries dealing with sensitive data, such as healthcare and finance.
To further improve AI accuracy, Microsoft has developed a feature that grounds language models in real-time data. This capability ensures that AI responses are based on the most current and relevant information available, reducing the risk of outdated or inaccurate outputs [1].
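Grounding a model in current data is typically a retrieval step: fetch the freshest matching documents, then build the prompt from them. The sketch below shows that shape with invented names and a toy ranking that breaks keyword-overlap ties by recency; it is not Microsoft's implementation.

```python
# A minimal, hypothetical retrieval-grounding sketch: rank documents by
# keyword overlap with the query, preferring newer documents on ties,
# then prepend the winners to the prompt as context.

from datetime import date

def retrieve(query: str, documents: list[dict], k: int = 2) -> list[dict]:
    """Rank documents by word overlap with the query, newest first on ties."""
    terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: (len(terms & set(d["text"].lower().split())), d["date"]),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query: str, documents: list[dict]) -> str:
    """Prepend retrieved context so the model answers from current sources."""
    context = "\n".join(d["text"] for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```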
These new features are not just theoretical improvements; they have practical applications for businesses and developers. Microsoft is integrating these capabilities into its Azure AI platform, making them accessible to a wide range of users [2]. This integration allows organizations to leverage advanced AI technologies while maintaining high standards of security and reliability.
Microsoft's latest innovations represent a significant step towards creating more trustworthy and reliable AI systems. By addressing key concerns such as hallucinations, privacy, and data accuracy, the company is paving the way for wider AI adoption across various industries. As these technologies continue to evolve, we can expect to see an increase in AI applications that are not only powerful but also responsible and trustworthy.
© 2025 TheOutpost.AI All rights reserved