Microsoft Introduces Innovative Responsible AI Features to Enhance Trust and Security in AI Systems

Microsoft recently unveiled new Responsible AI features aimed at enhancing trust in AI systems during the AI Tour in Mexico City. With a focus on privacy, safety, and security, these features include risk evaluations and content safety tools. The initiative emphasizes transparency and data control, ensuring responsible AI development.

New Responsible AI Features for Building Trustworthy AI

Microsoft recently announced exciting updates aimed at enhancing trust in AI systems. These features focus on privacy, safety, and security.

What’s New?

During the Microsoft AI Tour in Mexico City, the company introduced its “Trustworthy AI” initiative. This approach emphasizes a commitment to responsible AI development.

“Every AI innovation at Microsoft is grounded in a comprehensive set of AI principles, policies, and standards.”

This initiative includes foundational commitments like the Secure Future Initiative and AI Principles. These frameworks ensure users have control over their data, whether at rest or in transit.

Major Updates

Microsoft unveiled four new capabilities in public preview to enhance the evaluation of generative AI applications:

  • Risk and safety evaluations for indirect prompt injection attacks.
  • Evaluations for protected material (text).
  • Math-based metrics like ROUGE, BLEU, METEOR, and GLEU.
  • A synthetic data generator for non-adversarial tasks.

These tools provide a systematic approach to evaluate AI outputs, moving beyond intuition and sporadic feedback.
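The math-based metrics in the list above (ROUGE, BLEU, METEOR, GLEU) all measure n-gram overlap between a model's output and a reference answer. As a rough illustration of the idea (a toy sketch, not the SDK's actual implementation), the clipped n-gram precision at the core of BLEU can be computed like this:

```python
from collections import Counter


def ngram_precision(candidate: str, reference: str, n: int = 1) -> float:
    """Fraction of candidate n-grams that also appear in the reference,
    with counts clipped to the reference's counts -- the core ingredient
    of BLEU-style scores."""
    cand_tokens = candidate.split()
    ref_tokens = reference.split()
    cand_ngrams = Counter(
        tuple(cand_tokens[i:i + n]) for i in range(len(cand_tokens) - n + 1)
    )
    ref_ngrams = Counter(
        tuple(ref_tokens[i:i + n]) for i in range(len(ref_tokens) - n + 1)
    )
    if not cand_ngrams:
        return 0.0
    # Clip each candidate n-gram's count by its count in the reference.
    overlap = sum(min(count, ref_ngrams[ng]) for ng, count in cand_ngrams.items())
    return overlap / sum(cand_ngrams.values())


score = ngram_precision("the cat sat on the mat", "the cat is on the mat")
```

Here 5 of the 6 candidate unigrams survive clipping, so the score is 5/6. The hosted evaluators compute the full metrics (brevity penalties, multiple n-gram orders, and so on) for you; the point is that these scores are reproducible functions of the text, not subjective judgments.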

What’s Important to Know

A significant change involves migrating evaluators from the promptflow-evals package to the new Azure AI Evaluation SDK. Users should prepare a migration plan to avoid errors related to missing inputs.
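One way to stage that migration is to prefer the new package and fall back to the legacy one while both are in use. The sketch below assumes the evaluator class names (`RelevanceEvaluator` exists in both packages); verify the exact import paths against the official migration guide before relying on it:

```python
def load_relevance_evaluator():
    """Prefer the new Azure AI Evaluation SDK; fall back to the legacy
    promptflow-evals package, or return None if neither is installed."""
    try:
        # New SDK (pip install azure-ai-evaluation)
        from azure.ai.evaluation import RelevanceEvaluator
        return RelevanceEvaluator
    except ImportError:
        pass
    try:
        # Legacy package being retired (pip install promptflow-evals)
        from promptflow.evals.evaluators import RelevanceEvaluator
        return RelevanceEvaluator
    except ImportError:
        return None


evaluator_cls = load_relevance_evaluator()
```

A guard like this keeps CI green during the transition window, but the end state should import only from the new SDK, since the evaluators' expected inputs differ slightly between the two packages.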

“We’re transparent about where data is located and how it’s used, and we’re committed to making sure AI systems are developed responsibly.”

For developers in regulated sectors, data protection remains a priority. New features include:

  • Confidential inferencing.
  • Azure OpenAI Data Zones.

These features address the challenges of processing sensitive data in the cloud while ensuring encryption at all times.

Conclusion

Microsoft’s commitment to responsible AI is evident in these new features. By prioritizing privacy, safety, and security, they aim to build trust in AI systems. Developers and enterprises can now leverage these tools to create applications that meet high standards.

  • Microsoft’s Trustworthy AI initiative promotes a unified approach to privacy and security in AI.
  • New capabilities include risk evaluations and a synthetic data generator for improved AI application outputs.
  • The Azure AI Evaluation SDK replaces the previous promptflow-evals package for better evaluation processes.
  • Azure AI Content Safety introduces features like protected material detection and correction capabilities.
  • Confidential inferencing and Azure OpenAI Data Zones address data protection in regulated sectors.
  • From the Microsoft Developer Community Blog


