Activision Blizzard Seeking To Implement AI Moderation And Stricter Rules To Combat Bad Behavior Following Criticism From The Anti-Defamation League

Dr. Neo Cortex (Lex Lang) betrays Crash (Scott Whyte) and Coco (Eden Riegel) via Crash Bandicoot 4: It's About Time (2020), Activision

Activision Blizzard is seeking to implement AI moderation and stricter rules to combat hateful and discriminatory behavior from players, after US lawmakers wrote to major gaming companies over a critical report recently published by the Anti-Defamation League (ADL).

RELATED: Ubisoft And Riot Games Announce Joint Project To Preemptively Stop “Disruptive Behavior” With Trained AI

In the first of two announcements, Caltech revealed they were working with Activision Publishing to develop AI-based moderation. Two Caltech researchers, an AI expert and a political scientist, will collaborate with the publisher on “a two-year research project that aims to create an AI that can detect abusive online behavior and help the company’s support and moderation teams to combat it.”

The pair, Professor Anima Anandkumar and Professor Michael Alvarez, had previously worked together on training AI programs to detect trolling on social media platforms. “This new direction, with our colleagues at Activision gives us an opportunity to apply what we have learned to study toxic behavior in a new and important area—gaming,” Professor Alvarez explains.
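
Neither Caltech nor Activision has published technical details of the planned system, but for readers curious what “an AI that can detect abusive online behavior” might look like in code, the sketch below is a minimal, purely illustrative example: a toy text classifier trained to flag abusive chat messages. Everything in it, from the example messages to the model choice, is an assumption for demonstration only and does not reflect the researchers’ actual methods.

```python
# Purely illustrative sketch: a toy classifier for flagging abusive chat
# messages. This is NOT the Caltech/Activision system, whose methods have
# not been published; the data, labels, and model choice are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labeled chat messages (1 = abusive, 0 = benign).
messages = [
    "gg well played everyone",
    "nice shot, good game",
    "you are worthless, uninstall the game",
    "nobody wants you on this team, quit",
    "anyone want to queue again?",
    "I hope you get banned, idiot",
]
labels = [0, 0, 1, 1, 0, 1]

# TF-IDF features plus logistic regression: a minimal baseline a moderation
# team might start from before moving to larger language models.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score new messages; anything above a chosen threshold could be routed to
# human moderators rather than punished automatically.
for text in ["well played, see you next match", "you're trash, quit the game"]:
    abuse_score = model.predict_proba([text])[0][1]
    print(f"{abuse_score:.2f}  {text}")
```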

Professor Anandkumar admits the research has two important questions that must be addressed: “How do we enable AI that is transparent, beneficial to society, and free of biases?” and “How do we ensure a safe gaming environment for everyone?”

By working with Activision, however, both professors will have access to the company’s data on how online players interact, along with its specialist knowledge.

“Our teams continue to make great progress in combating disruptive behavior,” Activision Chief Technology Officer Michael Vance boasted, “and we also want to look much further down the road. This collaboration will allow us to build upon our existing work and explore the frontier of research in this area.”

In addition, Activision Blizzard declared their “zero tolerance for hate or discrimination in our online communities,” and laid out how they plan to tackle both in their “Playing Our Part: A Welcoming Game Community” blog post.

Citing a recent report from the ADL regarding hate and harassment, Activision Blizzard acknowledges that “bad actors and abusive behaviors across all online platforms are multiplying,” and insists that “inclusivity and safety are top priorities both in our workplace and throughout our player communities.”

RELATED: Study Claims Almost 2,000 “Predatory” Twitch Users Systematically Targeted Over 250,000 Children And Young Teens

Activision Blizzard declares that, following the report, they “have since been engaged with ADL’s team to discuss their findings, have an open dialogue, and share what we’ve been doing to foster a safe online community. We know this is going to be an ongoing effort and welcome collaboration with the ADL.”

“We also reached out because we believe our industry can benefit by engaging with experts who, like us, are working hard to build a healthier and safer online community for all.” Activision Blizzard discloses that their efforts have included a new Code of Conduct for the entire Call of Duty franchise, World of Warcraft’s social contract, Overwatch 2’s Defense Matrix, machine learning for player reports, amongst other measures.

“In the end, we at Activision Blizzard see ourselves as stewards of our online community,” the company pressed. “Yes, we might be a really big community — with hundreds of millions of members — but those of us who create games do so because we share a fundamental aspiration: to have fun, explore creative and immersive worlds safely and without fear, and make friends along the way.”

“Going forward, we will continue to update you on the steps we’ve already taken (and what’s underway) to create safe game infrastructure, provide transparency, enforce safety and enforcement policies, and continue implementing robust tools and resources for players to have positive gameplay experiences,” Activision Blizzard concluded.

As mentioned above, the ADL released their “Hate and Harassment in Online Games 2022” report earlier this month. Its claims include that 77% of “adult online multiplayer gamers” have experienced harassment, a figure the report extrapolates across an estimated 100 million American adult gamers. This is a rise of six percentage points from last year, and the fourth consecutive year-on-year increase. Among 10 to 12 year olds, 70% were also reported to have experienced harassment.

It should be noted those figures cover any form of harassment. For example, while 66% of 13 to 17 year olds reported experiencing some form of harassment, just 46% had encountered trolling or griefing, and 30% or fewer had been called offensive names, suffered identity-based harassment (including of their avatar), or been targeted across multiple sessions.

RELATED: Activision Announces New Code Of Conduct In An Effort To “Combat Toxicity” Across Various Titles In The ‘Call Of Duty’ Series

A breakdown of harassment by game was also shared, including the claim that 68% of Final Fantasy XIV players aged 13 to 17 had experienced harassment, as had 61% of those aged 10 to 17.

These figures seem at odds with the game having won multiple “best community” or “best community support” awards from Destructoid, The Game Awards, and the Golden Joystick Awards, some for multiple years running, including 2022. Suffice it to say, discussions of the report among the playerbase on Reddit and Twitter have been met with skepticism and mockery.

Potential issues notwithstanding, seven Democratic members of Congress co-signed a letter to major gaming companies, citing the report. Besides Activision Blizzard, the letter was also sent to Electronic Arts, Microsoft, Roblox, Sony, Tencent, Valve, Riot Games, PUBG Corp, Ubisoft, Square, Epic, Innersloth, and Take-Two Interactive.

Therein, the lawmakers ask to “better understand the processes you have in place to handle player reports of harassment and extremism encounters in your online games,” and request consideration of safety measures pertaining to anti-harassment and anti-extremism.

They also ask about the size of moderation teams, what investments have been made, and whether the companies would share reports on player punishments on a routine basis. The latter has been requested by the ADL before.

The ADL has ramped up its targeting of video games since 2020. Daniel Kelley, Assistant Director for their Center for Technology and Society, called on the industry to share its data on player harassment, which in turn would require policies and enforcement against hateful content “that are much more robust than they have now.”

NEXT: Science Fiction Author Jon Del Arroz Blasts ‘Warframe’ Developer Digital Extremes For Allegedly Banning Player Denouncing Pedophilic User Chat
