Microsoft Begins Public Offering Of ‘Azure AI Content Safety’, New Moderation Tool Uses AI To “Detect Hateful, Violent, Sexual, And Self-Harm Content In Images And Text”
When any given new technology finally reaches the point of commercial availability, it becomes only a matter of time until said invention ends up being abused by the powers that be to suppress communication – and the machine-learning craze that is commonly referred to as ‘AI’ has officially become the latest tool to go down this road.
On May 23rd, Microsoft publicly unveiled their new Azure AI Content Safety tool, a multi-language service which employs “cutting edge AI models” that “can detect hateful, violent, sexual, and self-harm content in images and text, and assign severity scores, allowing businesses to limit and prioritize what content moderators need to review.”
“Azure Content Safety classifies harmful content into four categories: sexual, violent, self-harm, hate,” the Xbox parent company briefly explained of the new product’s functionality.
“It returns with a severity level for each unsafe content category on a scale from 1 – 6,” they continued. “Our AI-powered content moderation solution [also] uses natural language processing techniques to address the meaning and context of language, closely mirroring human intelligence. It can analyze text in both short form and long form.”
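To make that per-category scoring concrete, here is a minimal sketch of what a text-analysis request against the service might look like in Python. The endpoint, API version, key, and response field names below are placeholders drawn from Microsoft’s public documentation rather than from this announcement, so treat them as illustrative assumptions:

```python
import requests

# Placeholder resource values -- substitute your own Azure AI Content Safety
# endpoint and key. These are assumptions for illustration only.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

def analyze_text(text: str) -> dict:
    """Send text to the Content Safety text-analysis endpoint and return the
    per-category severity scores (sexual, violent, self-harm, hate)."""
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json={"text": text},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = analyze_text("Example post to screen before publishing.")
    # The response lists a severity per category; exact field names may vary
    # by API version.
    for item in result.get("categoriesAnalysis", []):
        print(item["category"], item["severity"])
```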
According to Microsoft’s own promotional material, there are two key industries that the company believes can benefit from their new tool: Social media and gaming.
“Social media moves fast, and it’s essential that content moderation tools can act in real time,” wrote the tech giant in regards to the tool’s potential applications for social media. “Azure AI Content Safety can monitor content in posts, threads, and chats to help keep online social communities protected from harmful content.”
Turning to video games, Microsoft boasted that while “content moderation for gaming is challenging due to visual and text-based elements, live chat, and at times violent content, Azure AI Content Safety’s advanced computer vision tools help monitor avatars, usernames, images, and chat, making platforms safer and supporting human moderators.”
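Image checks, such as screening an uploaded avatar, follow much the same pattern as text, with the picture submitted as base64-encoded content. Again, this is only a hedged sketch assuming the request shape shown in Microsoft’s public documentation, with placeholder resource values:

```python
import base64
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-key>"  # placeholder

def analyze_image(path: str) -> dict:
    """Submit a base64-encoded image (e.g. an avatar upload) for moderation
    and return the per-category severity scores."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    response = requests.post(
        f"{ENDPOINT}/contentsafety/image:analyze",
        params={"api-version": "2023-10-01"},
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={"image": {"content": encoded}},
    )
    response.raise_for_status()
    return response.json()
```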
Further, said material also notes that the company has already begun “leveraging Azure AI Content Safety in Microsoft products such as Azure OpenAI, Azure ML, GitHub Copilot, Bing, and more to help detect potentially harmful content.”
In a statement provided to TechCrunch, a Microsoft spokesperson recalled, “Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages.”
“New [AI] models are able to understand content and cultural context so much better,” they explained of the company’s decision to invest in Azure. “They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”
A separate spokesperson further explained, “We have a team of linguistic and fairness experts that worked to define the guidelines taking into account cultural, language and context.”
“We then trained the AI models to reflect these guidelines,” they said, before offering the caveat-slash-advice that “AI will always make some mistakes, so for applications that require errors to be nearly non-existent we recommend using a human-in-the-loop to verify results.”
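That human-in-the-loop advice can be implemented as a simple triage layer on top of the severity scores the service returns. The thresholds in the sketch below are purely illustrative assumptions, not values recommended by Microsoft:

```python
# Hypothetical triage logic: the severity thresholds here are illustrative
# assumptions, not values suggested by Microsoft.
AUTO_BLOCK_THRESHOLD = 6    # block outright at or above this severity
HUMAN_REVIEW_THRESHOLD = 2  # queue for a human moderator at or above this

def triage(categories_analysis: list[dict]) -> str:
    """Route content based on the highest severity returned by the service."""
    worst = max((item["severity"] for item in categories_analysis), default=0)
    if worst >= AUTO_BLOCK_THRESHOLD:
        return "block"
    if worst >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

# Example: a post scored as moderately violent is routed to a moderator.
print(triage([{"category": "Violence", "severity": 3}]))  # -> "human_review"
```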
Per the tech news outlet, the service’s subscription-based pricing plans start “at $1.50 per 1,000 images and $0.75 per 1,000 text records.”
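At those listed rates, screening one million images would come to roughly $1,500, and one million text records to roughly $750.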