Azure AI Studio enhances responsible AI use with a content filtering system built into Azure OpenAI Service and powered by Azure AI Content Safety. The system detects harmful content across four harm categories and four severity levels, and it supports multiple languages. Users can customize the filtering settings to tailor protection against inappropriate content to their applications.

Content Filtering with Azure AI Studio: A New Era of Safety
Microsoft has rolled out an innovative content filtering system within Azure AI Studio, aimed at keeping users safe while they engage with AI products. The Azure OpenAI Service now includes a robust content filtering system that works in tandem with the core models, screening both prompts and completions.
What’s New?
The content filtering system is powered by Azure AI Content Safety. It utilizes an ensemble of classification models designed to detect harmful content. This system categorizes risks into four main areas: hate, sexual content, violence, and self-harm.
Moreover, the filtering operates across four severity levels: safe, low, medium, and high. It supports multiple languages, including English, German, Japanese, and Spanish, though users should note that quality may vary in other languages.
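
In practice, these categories and severity levels surface as annotations on every API response. The Python sketch below shows one way to read them; the endpoint, key, API version, and deployment name are placeholders, not values from the article.

```python
# Minimal sketch: inspecting content filter annotations on a chat completion.
# Assumptions: the endpoint, key, API version, and deployment name below are
# placeholders for your own Azure OpenAI resource.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-4-deployment",  # your Azure OpenAI deployment name
    messages=[{"role": "user", "content": "Tell me about your return policy."}],
)

# Azure OpenAI attaches the filter annotations to the raw payload; dumping
# the typed response to a dict makes them easy to read.
choice = response.model_dump()["choices"][0]
for category, result in (choice.get("content_filter_results") or {}).items():
    # Each harm category reports whether it filtered the content and at
    # which severity level (safe, low, medium, or high).
    print(f"{category}: filtered={result.get('filtered')}, "
          f"severity={result.get('severity')}")
```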
“The default content filtering configuration is set to filter at the medium severity threshold for all four content harms categories.”
Major Updates
Azure AI Studio now allows users to modify content filters to suit their specific application needs. By default, the system filters at the medium severity level, but users can adjust these thresholds to better match their application's risk tolerance.
Notably, only customers approved for modified content filtering have full control. They can configure filters to act only at the high severity level, or disable filtering entirely. Interested users can apply for modified content filters through the Azure OpenAI Limited Access Review.
“Models available through Models as a Service have content filtering enabled by default and can’t be configured.”
What’s Important to Know?
To test the default content filtering, users can navigate to the Chat Playground in Azure AI Studio. Here, they can input various prompts to see how the system responds to potentially inappropriate content.
For instance, when testing with explicit queries, the filtering system identifies and blocks inappropriate prompts and responses. This safeguard is crucial for applications in sensitive sectors like retail customer care.
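
The same test can be scripted against a deployment's API. The sketch below assumes placeholder credentials and relies on two documented behaviors: a blocked prompt fails with an HTTP 400 error whose code is content_filter, and a filtered completion comes back with finish_reason set to "content_filter".

```python
# Minimal sketch: running the playground test programmatically.
# Assumptions: placeholder endpoint/key/deployment; error code
# "content_filter" for blocked prompts and finish_reason "content_filter"
# for filtered completions, per Azure OpenAI's documented behavior.
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

def ask(prompt: str) -> str:
    try:
        response = client.chat.completions.create(
            model="my-gpt-4-deployment",  # your deployment name
            messages=[{"role": "user", "content": prompt}],
        )
    except BadRequestError as exc:
        # The prompt itself tripped the filter before reaching the model.
        if exc.code == "content_filter":
            return "Sorry, that request can't be processed."
        raise
    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # The model's output was withheld by the content filter.
        return "Sorry, the response was blocked by the content filter."
    return choice.message.content
```

Wrapping calls this way lets an application respond gracefully to filtered content instead of surfacing a raw API error to end users.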
Furthermore, users can create custom content filters for an extra margin of safety. Lowering the severity threshold for a category such as sexual content means that material flagged at lower severities is blocked as well. This additional layer of protection is vital for maintaining a safe user environment.
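
Custom filters themselves are configured through the Azure AI Studio interface, but an application can add a comparable client-side layer with the Azure AI Content Safety SDK, which the article notes powers the built-in system. This is a minimal sketch, not the Studio feature itself; the endpoint, key, and the severity threshold of 2 ("low") are assumptions.

```python
# Minimal sketch: an application-side safety layer using the Azure AI
# Content Safety SDK (azure-ai-contentsafety). Assumptions: placeholder
# endpoint/key, and a stricter-than-default threshold of 2 ("low") for the
# sexual-content category.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-content-safety-key>"),
)

def is_blocked(text: str) -> bool:
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    for item in result.categories_analysis:
        # Block anything in the sexual category at severity "low" (2) or above.
        if item.category == TextCategory.SEXUAL and (item.severity or 0) >= 2:
            return True
    return False
```

Calling is_blocked on user input before forwarding it to the model approximates the lowered-threshold behavior described above.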
Conclusion
The new content filtering capabilities in Azure AI Studio represent a significant advancement in responsible AI usage. By prioritizing user safety, Microsoft continues to lead the way in AI innovation.
From the Microsoft Developer Community Blog