Microsoft is launching a new AI-powered moderation service called Azure AI Content Safety to detect inappropriate content across images and text.

It understands text in English, Spanish, German, French, Japanese, Portuguese, Italian, and Chinese.

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognised that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson told TechCrunch in an email.

At the company’s annual Build conference, the tech giant’s AI lead Sarah Bird announced, “We are now launching it as a product that third-party customers can use.”
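For third-party developers, the service is reached through a REST API and client SDKs. The snippet below is only a minimal sketch of what a text-moderation call might look like, assuming the Python azure-ai-contentsafety package and a provisioned Content Safety resource; the CONTENT_SAFETY_ENDPOINT and CONTENT_SAFETY_KEY environment variables are placeholders, not values from Microsoft's announcement.

```python
# Hypothetical sketch: scoring a piece of user-generated text with Azure AI Content Safety.
# Assumes the azure-ai-contentsafety package is installed and that the two environment
# variables below point at your own Content Safety resource (placeholders here).
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource-name>.cognitiveservices.azure.com
key = os.environ["CONTENT_SAFETY_KEY"]

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# Ask the service to analyse the text across its harm categories.
response = client.analyze_text(AnalyzeTextOptions(text="Example user comment to moderate"))

# Each entry reports a category (e.g. hate, sexual, violence, self-harm) and a severity score;
# a real moderation pipeline would compare severities against its own thresholds.
for item in response.categories_analysis:
    print(item.category, item.severity)
```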

The TechCrunch report noted that Koo, a microblogging platform, is an early adopter of Azure AI Content Safety and will use the service to tackle moderation challenges, including analysing memes.