Microsoft Begins Public Offering Of ‘Azure AI Content Safety’, New Moderation Tool Uses AI To “Detect Hateful, Violent, Sexual, And Self-Harm Content In Images And Text”

343 Guilty Spark (Tim Dadabo) refuses to let Master Chief (Steve Downes) prevent the Halo installations from firing in Halo 3 (2007), Bungie

Whenever a new technology reaches the point of commercial availability, it is only a matter of time before it ends up being abused by the powers that be to suppress communication, and the machine-learning craze commonly referred to as ‘AI’ has officially become the latest tool to go down this road.

Cortana (Jen Taylor) begins to succumb to rampancy in Halo 4 (2012), Microsoft Studios

RELATED: Calls For The Resignation Of Xbox Head Phil Spencer Arise As ‘Redfall’ Is Slayed By Critics And Gamers

On May 23rd, Microsoft publicly unveiled Azure AI Content Safety, a new multi-language moderation service built on “cutting edge AI models.” Per the company, “it can detect hateful, violent, sexual, and self-harm content in images and text, and assign severity scores, allowing businesses to limit and prioritize what content moderators need to review.”

“Azure Content Safety classifies harmful content into four categories: sexual, violent, self-harm, hate,” the Xbox parent company briefly explained of the new product’s functionality.

“It returns with a severity level for each unsafe content category on a scale from 1 – 6,” they continued. “Our AI-powered content moderation solution [also] uses natural language processing techniques to address the meaning and context of language, closely mirroring human intelligence. It can analyze text in both short form and long form.”
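For illustration, the snippet below is a minimal sketch of what text moderation against the service might look like from Python, assuming the azure-ai-contentsafety SDK exposes ContentSafetyClient, AnalyzeTextOptions, and a per-category result list roughly as shown (the exact response shape can differ between preview and GA releases). The endpoint, key, sample message, and review threshold are placeholders, not anything from Microsoft’s material.

```python
# A minimal sketch of text moderation with the azure-ai-contentsafety Python SDK.
# The endpoint, key, and REVIEW_THRESHOLD below are placeholder assumptions.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Substitute your own Azure resource endpoint and key.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

result = client.analyze_text(AnalyzeTextOptions(text="Example user-submitted message."))

# Each category (hate, sexual, violence, self-harm) comes back with a severity score;
# a business could route anything above a chosen threshold to its human moderators.
REVIEW_THRESHOLD = 4
for item in result.categories_analysis:
    flagged = item.severity is not None and item.severity >= REVIEW_THRESHOLD
    print(f"{item.category}: severity={item.severity}, needs_review={flagged}")
```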

Cortana (Jen Taylor) stumbles upon a firewall in Halo 4 (2012), Microsoft Studios

According to Microsoft’s own promotional material, there are two key industries that the company believes can benefit from their new tool: social media and gaming.

“Social media moves fast, and it’s essential that content moderation tools can act in real time,” the tech giant wrote regarding the tool’s potential applications for social media. “Azure AI Content Safety can monitor content in posts, threads, and chats to help keep online social communities protected from harmful content.”

Turning to video games, Microsoft boasted that while “content moderation for gaming is challenging due to visual and text-based elements, live chat, and at times violent content, Azure AI Content Safety’s advanced computer vision tools help monitor avatars, usernames, images, and chat, making platforms safer and supporting human moderators.”

Further, said material notes that the company has already begun “leveraging Azure AI Content Safety in Microsoft products such as Azure OpenAI, Azure ML, GitHub Copilot, Bing, and more to help detect potentially harmful content.”

Cortana (Jen Taylor) duplicates herself in order to stop the Ur-Didact (Keith Szarabajka) from killing Master Chief (Steve Downes) in Halo 4 (2012), Microsoft Studios

RELATED: Xbox Head Phil Spencer Is Well Aware That “It’s Been Too Long” Since Microsoft Released A Major First Party Title: “In General, We’ve Been A Little Light”

In a statement provided to TechCrunch, a Microsoft spokesperson recalled, “Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages.”

“New [AI] models are able to understand content and cultural context so much better,” they explained of the company’s decision to invest in Azure. “They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

Cortana (Jen Taylor) thanks Master Chief (Steve Downes) for taking a chance on her in Halo 4 (2012), Microsoft Studios

A separate spokesperson further explained, “We have a team of linguistic and fairness experts that worked to define the guidelines taking into account cultural, language and context.”

“We then trained the AI models to reflect these guidelines,” they said, before offering the caveat-slash-advice that “AI will always make some mistakes, so for applications that require errors to be nearly non-existent we recommend using a human-in-the-loop to verify results.”
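The snippet below sketches that human-in-the-loop pattern in the abstract: act automatically only on the clearest scores and queue the ambiguous middle for a person. The dataclass, function name, and threshold values are hypothetical illustrations, not Microsoft’s implementation.

```python
# A hypothetical sketch of the human-in-the-loop pattern the spokesperson describes:
# auto-act only on clear-cut scores and queue everything borderline for a moderator.
# Threshold values and names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModerationScore:
    category: str   # e.g. "hate", "sexual", "violence", "self-harm"
    severity: int   # the article quotes a 1-6 scale

def route(scores: list[ModerationScore],
          auto_remove_at: int = 6,
          human_review_at: int = 3) -> str:
    """Decide what to do with a piece of content given its per-category scores."""
    top = max((s.severity for s in scores), default=0)
    if top >= auto_remove_at:
        return "remove"        # confident enough to act automatically
    if top >= human_review_at:
        return "human_review"  # ambiguous, so a person verifies the result
    return "allow"

print(route([ModerationScore("hate", 2), ModerationScore("violence", 4)]))  # human_review
```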

Per the tech news outlet, Azure’s subscription-based pricing plans start “at $1.50 per 1,000 images and $0.75 per 1,000 text records.”
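As a rough, back-of-the-envelope check against those quoted starting rates, a platform handling two million text records and 100,000 images a month would be looking at a bill on the order of $1,650; the monthly volumes below are hypothetical.

```python
# Back-of-the-envelope cost estimate using the quoted starting rates.
# The monthly volumes are hypothetical examples.
TEXT_RATE_PER_1K = 0.75    # USD per 1,000 text records
IMAGE_RATE_PER_1K = 1.50   # USD per 1,000 images

monthly_text_records = 2_000_000
monthly_images = 100_000

cost = (monthly_text_records / 1_000) * TEXT_RATE_PER_1K \
     + (monthly_images / 1_000) * IMAGE_RATE_PER_1K
print(f"${cost:,.2f}")  # $1,650.00
```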

Cortana (Jen Taylor) bids farewell to Master Chief (Steve Downes) in Halo 4 (2012), Microsoft Studios

NEXT: Minecraft Java Update Introduces Player Chat Reporting, Grants Microsoft Ability To Ban Players From Private Servers
