Your Web News in One Place

Help Webnuz

Referal links:

Sign up for GreenGeeks web hosting
March 28, 2024 08:34 pm

Microsofts new safety system can catch hallucinations in its customers AI apps




Microsoft logo
Illustration: The Verge



Sarah Bird, Microsoft’s chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren’t hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations “that are plausible yet unsupported,” and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform.


“We know that customers don’t all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks. Customers can then get a...



Continue reading…




Original Link: https://www.theverge.com/2024/3/28/24114664/microsoft-safety-ai-prompt-injections-hallucinations-azure

Share this article:    Share on Facebook
View Full Article

The Verge

The Verge is an ambitious multimedia effort founded in 2011

More About this Source Visit The Verge