As the all-knowing, all-powerful reach of algorithms continues to extend across every aspect of digital media, FACEIT is taking machine learning to a new frontier: toxic game chat. The company has partnered with Google Cloud and Jigsaw (formerly Google Ideas) to build an AI dedicated to rooting out and banning toxic players. It’s already live, and it has banned over 20,000 Counter-Strike: Global Offensive players.

The FACEIT AI is called Minerva, and “after months of training to minimize false positives,” it went into live use on the FACEIT platform in late August. Since then, the AI has issued 90,000 warnings and 20,000 bans for abusive chat and spam, all “without manual intervention.”

“If a message is perceived as toxic in the context of the conversation,” FACEIT explains in a blog post, “Minerva issues a warning for verbal abuse, while similar messages in a chat are flagged as spam. Minerva is able to take a decision just a few seconds after a match has ended: if an abuse is detected it sends a notification containing a warning or ban to the abuser.”
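As a rough illustration of that post-match flow, here is a minimal sketch of a moderation pass over a chat log. It uses Jigsaw’s public Perspective API as a stand-in for Minerva’s classifier; the endpoint is real, but the thresholds, spam heuristic, and function names are assumptions for illustration, not FACEIT’s actual implementation.

```python
import requests

# Stand-in for Minerva's classifier: Jigsaw's public Perspective API.
# Thresholds, spam heuristic, and sanction logic below are illustrative
# assumptions, not FACEIT's implementation.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"   # placeholder credential

WARN_THRESHOLD = 0.8       # assumed toxicity score that triggers a warning
SPAM_REPEAT_LIMIT = 3      # assumed count of repeated messages treated as spam


def toxicity_score(message: str) -> float:
    """Return a 0-1 toxicity score for a single chat message."""
    body = {
        "comment": {"text": message},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(
        PERSPECTIVE_URL, params={"key": API_KEY}, json=body, timeout=10
    )
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def review_match_chat(chat_log: list[tuple[str, str]]) -> dict[str, str]:
    """Run a post-match pass over (player, message) pairs and decide sanctions."""
    verdicts: dict[str, str] = {}
    repeats: dict[tuple[str, str], int] = {}
    for player, message in chat_log:
        # Count near-identical repeats from the same player and flag as spam.
        repeats[(player, message)] = repeats.get((player, message), 0) + 1
        if repeats[(player, message)] >= SPAM_REPEAT_LIMIT:
            verdicts[player] = "spam warning"
        # Otherwise score the message and warn if it crosses the threshold.
        elif toxicity_score(message) >= WARN_THRESHOLD:
            verdicts[player] = "verbal abuse warning"
    return verdicts
```

In this sketch the whole log is reviewed in one batch once the match ends, which mirrors the described behaviour of issuing a warning or ban “just a few seconds after a match has ended.”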

Original source: https://www.pcgamesn.com/counter-strike-global-offensive/csgo-faceit-ban