Ubisoft And Riot Games Announce Joint Project To Preemptively Stop “Disruptive Behavior” With Trained AI

Arno Dorian slices the throat of a French guard via Assassin's Creed Unity (2014), Ubisoft

Ubisoft and Riot Games have announced their “Zero Harm in Comms” project, which aims to preemptively stop disruptive behavior with the help of AI.

Ubisoft and Riot Games swear their "blueprint" may help prevent "disruptive behavior" in games

RELATED: Activision Announces New Code Of Conduct In An Effort To “Combat Toxicity” Across Various Titles In The ‘Call Of Duty’ Series

Both companies released their own statements. “Ubisoft and Riot Games have joined forces for a new research project in the pursuit of making online video game spaces safer,” prefaced the statement issued by Ubisoft.

“Removing ‘harm’ – a complex word with many tendrils of meaning – in video game chats is crucial and ongoing work, and their project, Zero Harm in Comms, aims to ultimately develop a more nuanced and robust framework than anything else available to do just that. It’s an important goal, and one both publishers recognize can only be achieved as a team,” the developer admitted.

A selection of Little Legends sit in class with varying degrees of attention via Teamfight Tactics (2019), Riot Games

Ubisoft stated, “At its core, the Zero Harm in Comms research project aims to create a shared database of anonymized data, used to train Ubisoft and Riot’s systems to detect and mitigate disruptive behavior.” The idea arose from conversations between Yves Jacquier (Executive Director for Ubisoft’s La Forge R&D Department) and Wesley Kerr (Head of Tech Research at Riot Games).

Regarding AI, deep learning, and related innovations, Ubisoft declared, “While bonding over shared interests and specifically shared challenges, it became clear that harmful content is a massive one, and both were unsatisfied with the solutions currently available.”

Kali snipes through a small hole in the wall via Tom Clancy’s Rainbow Six Siege (2015), Ubisoft

“We cannot solve it alone,” Jacquier admitted. “We want to build the framework for this, share the results with the community, see how it goes, and bring in more people. […] We agreed that the solutions that we can use today are not sufficient for the kind of player safety we have in mind for our players.”

Kerr added, “We really recognized that this is a bigger problem than one company can solve. And so how do we come together and start getting a good handhold on the problem we’re trying to solve? How can we go after those problems, and then further push the entire industry forward?”

Ziggs gleefully blows up a forest via League of Legends (2009), Riot Games

RELATED: Head Of Xbox Studios Says Developer Behind ‘Fable’ Reboot Is Aiming To Bring The Series Forward “For Today’s Sensibilities”

“Gathered from various chat logs in Ubisoft’s and Riot’s portfolio of games, the data – strings of text – are scrubbed clean of Personally Identifiable Information (PII) and personal information,” the developer clarified.

Ubisoft then claimed, “They are then labeled by behavior – is this totally neutral, for example, or is it racism, or sexism? – and used to train automated systems to better recognize and parse such harmful behavior from the jump.”
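To make that described pipeline concrete, here is a minimal sketch of what scrubbing PII and attaching behavior labels to chat lines could look like. The regex patterns, label names, and record format below are illustrative assumptions, not either company's actual schema.

```python
import re

# Illustrative PII patterns only; a production system would use far more
# comprehensive detection (usernames, addresses, handles, etc.).
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),    # email addresses
    (re.compile(r"\b(?:\d[ -]?){9,12}\d\b"), "<PHONE>"),         # phone-like digit runs
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),        # IPv4 addresses
]

# Example label categories, echoing the "neutral / racism / sexism" framing.
LABELS = ["neutral", "racism", "sexism", "harassment"]

def scrub_pii(line: str) -> str:
    """Replace PII-like spans with placeholder tokens."""
    for pattern, token in PII_PATTERNS:
        line = pattern.sub(token, line)
    return line

def make_training_record(chat_line: str, label: str, game: str) -> dict:
    """Build one anonymized, labeled record for a shared training database."""
    assert label in LABELS
    return {"text": scrub_pii(chat_line), "label": label, "game": game}

print(make_training_record("report 192.168.0.7 he keeps flaming", "harassment", "shooter"))
```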

A crew fends off a pirate ship attack via Skull and Bones (2023), Ubisoft

“The key to the project lies within the sheer volume of data the duo is attempting to gather. With more data, these systems can theoretically gain an understanding of nuance and context beyond key words,” they proposed.

Jacquier noted, “There are key words that can be immediately recognized as bad. However, it’s often much trickier to parse. For example, if you see ‘I’m going to take you out’ in a chat, what does that mean? Is it part of the fantasy? If you’re playing a competitive shooter, it might not be a problem, but if it’s another type of game, the context might be totally different.”
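As a toy illustration of Jacquier's point (and not either company's model), the sketch below contrasts a bare keyword filter with a stub that conditions on game context; the blocklisted phrase and the genre check are assumptions made up for the example.

```python
# Naive keyword filtering flags the phrase no matter where it appears.
BLOCKLIST = {"take you out"}

def keyword_flag(message: str) -> bool:
    """Flag a message if it contains any blocklisted phrase."""
    text = message.lower()
    return any(phrase in text for phrase in BLOCKLIST)

def contextual_flag(message: str, game_genre: str) -> bool:
    """Stand-in for a trained model scoring p(harmful | message, context)."""
    if "take you out" in message.lower():
        # In a competitive shooter this is likely part of the game fantasy.
        return game_genre != "competitive_shooter"
    return False

msg = "I'm going to take you out"
print(keyword_flag(msg))                            # True  (no notion of context)
print(contextual_flag(msg, "competitive_shooter"))  # False (benign trash talk)
print(contextual_flag(msg, "farming_sim"))          # True  (same words, different context)
```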

Annie walks suspiciously away from a blazing fire via Legends of Runeterra (2020), Riot Games

“Of course, it’s impossible to teach AI every possible harmful scenario, but the duo explain that the goal is to fine-tune their systems to look for these specific examples and detect them with high probability,” Ubisoft pointed out. “It’s a start, and one that both companies want to be extremely visible to their players to encourage a more welcoming gaming experience.”
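One plausible reading of “detect them with high probability” is that automated action is only taken when a classifier's confidence clears a threshold, with borderline cases deferred elsewhere. The thresholds and the `score()` stub below are assumptions for illustration, not a described part of the project.

```python
ACTION_THRESHOLD = 0.95  # act automatically only when the model is very confident
REVIEW_THRESHOLD = 0.70  # between the two thresholds, defer to human review

def score(message: str) -> float:
    """Stand-in for a trained classifier returning p(harmful | message)."""
    return 0.98 if "example_slur" in message else 0.05

def moderate(message: str) -> str:
    """Map a model confidence to a moderation outcome."""
    p = score(message)
    if p >= ACTION_THRESHOLD:
        return "remove_message"
    if p >= REVIEW_THRESHOLD:
        return "flag_for_review"
    return "allow"

print(moderate("gg, well played"))            # allow
print(moderate("go back home example_slur"))  # remove_message
```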

Kerr went on to assert, “We want players to know we are taking action on this. That visibility and that communication with the player is going to be critical for them to understand that this is happening in the background. They may not care how it’s happening; they just want to know that things are improving, and things are getting better.”

A huge group of cyclists ride down a forest course on BMX bikes via Riders Republic (2021), Ubisoft

“Jacquier and Kerr have been working on the Zero Harm in Comms project for roughly six months, and plan to share their learnings and potential next steps with the broader industry next year,” Ubisoft revealed.

“Both agree that creating a safer online environment is crucial in an age where everyone is online, and both reiterate the hope that more publishers will come aboard to move beyond the keyword model that has proven insufficient for so long.”

Jacquier added, “It’s 2022. Everyone is online, and everyone should feel safe, period.” Kerr then noted, “This is a great first step and a very large task. We don’t want to go at it alone.”

YouTuber IShowSpeed infamously rants at another player via YouTube

RELATED: Blizzard Entertainment Will Record Voice Chats In ‘Overwatch 2’ And Require Players Provide Phone Number To Prevent “Disruptive Behavior”

The statement issued by Riot Games echoes many of the same points tackled by Ubisoft, further elaborating on this partnership.

“Riot and Ubisoft are teaming up on a tech partnership to develop a database gathering in-game data to better train AI-based preemptive moderation tools that detect and mitigate disruptive behavior in-game,” Riot Games declared, mentioning the preemptive element that wasn’t present in Ubisoft’s statement.

“As members of the Fair Play Alliance, both companies believe that improving the social dynamics of online games will only come through communication, collaboration, and joint efforts across the gaming industry,” the developer insisted. The Fair Play Alliance includes Amazon Games, Blizzard Entertainment, Epic Games, Discord, Twitch, and more.

NOMAD get into a firefight at a checkpoint via Tom Clancy's Ghost Recon Breakpoint (2019), Ubisoft

“With Ubisoft’s wide catalog of popular games and Riot’s highly competitive titles, the resulting database of this partnership should cover a wide range of players and use cases to better train AI systems to detect and mitigate harmful behavior,” they explained in the statement.

Speaking to PC Gamer, Jacquier and Kerr discussed other aspects of the plan, including how the partnership came together once the two were finally able to meet in person following the COVID-19 pandemic, as well as how the project honors data protection laws by ensuring only the absolutely necessary data is utilized. The pair also deferred to the Fair Play Alliance’s definition of “disruptive” behavior, which includes hate speech and grooming.

Neon enjoys her new room via Valorant (2020), Riot Games

The pair were asked, “What is preemptive in this circumstance? The banning of a player as they progressively get more toxic, or just removal of messages before they happen?” Neither Jacquier nor Kerr directly answered the question, however.

“What we want to reach is a situation where any player from any culture, from any age, from any background, in any game has a safe experience,” Jacquier began. “That’s really what we want to aim for. How we get there, there’s no silver bullet. It’s a mix of many different tools.”

Division Agents fend off the Hyenas in a foundry via Tom Clancy's The Division 2 (2019), Ubisoft

“We count on the community, we count on promoting positive play, we count on the supporting team, customer support and everything,” Jacquier rambled. “And we count on such prototypes. Now, talking only about the prototype, it all falls down to whether the results will be reliable enough to simply delete a line because we’re confident enough that it doesn’t work, and tag the player with whatever rules.”

“We don’t know yet,” the Ubisoft La Forge executive director relented. “It’s way too soon. What we want to do is to make the tool, a tool that is as reliable as possible, and then see what’s the best usage of this tool in the entire toolbox.”

The Star Guardians stand triumphantly via League of Legends (2009), Riot Games

“Yeah, I think that’s exactly it, and I want to double down on it: the outcome of this is that we’re able to detect these things far better,” Kerr continued. “How we or how our product teams choose to integrate that into the system to protect the players, they’ll work on different features and teams.”

The Head of Tech Research at Riot Games concluded, “But I think using the AI as a super strong signal that they can trust and rely on to actually take action is going to be the key to being preemptive.”

NEXT: Square Enix President Declares The Company Needs To Target The Global Market To Succeed: “Japanese Market Is No Longer Sufficient”
