
Why the Video Games Industry Needs a Responsible Moderation Revolution

Author: Leah MacDermid, Trust & Safety Content Writer at Keywords Studios
Date Published: 17/04/2024

In the digital age, the importance of content moderation in fostering safe, healthy, and thriving online communities cannot be overstated. But despite the consensus among both players and industry professionals that moderation is essential, the current, outdated model of content moderation on games platforms has proven insufficient.

This system has allowed harmful content to proliferate, exposed content moderators to excessive amounts of damaging material, and left the most vulnerable users open to encountering child sexual abuse material (CSAM), extremism and hate speech, and content endorsing suicide and self-harm.

It's clear: we need a revolution — a transformation towards what Trust & Safety at Keywords Studios calls "Responsible Moderation."

The Flaws in Our Current Moderation Model

Every day, moderators sift through heaps of digital content, a task as never-ending as it is psychologically traumatising. This relentless exposure to harmful content is unsustainable and unethical.

Meanwhile, instead of leveraging technology to proactively scan content before it goes live (an impossible task for moderators alone, who are only human), the industry has placed the responsibility on users to report toxic behavior, creating an environment where the most harmful content can slip through the cracks until it's too late.

And it’s only getting worse. 

In its 2023 Toxicity in Multiplayer Games Report, Unity found that the share of players who report witnessing or experiencing toxic behavior increased from 68% in 2021 to 74% in 2023.

Similarly, at Keywords Studios, our moderators have seen a 168% increase in real-life threats over the last two years. 

Players have had enough, and they are looking to games studios to find a better way to moderate content. In the same Unity report, 81% of multiplayer gamers say that protecting players from toxic behavior should be a priority for game developers.

A better moderation solution isn’t just the right thing to do. It’s good for your business, too.

In fact, startling new research from Take This shows that 1 in 5 players spend less money in game spaces due to hate and harassment. 

Responsible Moderation — a radical rethinking of what it means to keep our interactive video game spaces secure — is Keywords Studios’ response to player requests for safe game spaces. 

It means leveraging AI and human intelligence (HI) in harmony to detect real-life threats with precision and speed, all while shielding our moderators from the most harmful content on the internet.

AI + HI = The Future of Collaboration

At the core of Responsible Moderation is the fusion of AI and HI. This combination significantly enhances the efficiency and accuracy of content moderation systems.

AI excels at scanning vast amounts of data quickly, identifying patterns, and flagging content that falls within predefined harmful categories. Where AI lacks nuance, human intelligence shines. Humans understand context, nuance, and the subtleties of language and behavior in ways AI currently cannot.

By leveraging both, we can create a system where the most egregious content is swiftly removed before it reaches users or burdens moderators, and only content that truly requires the human touch is escalated for review.
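To make that division of labour concrete, here is a minimal, hypothetical sketch of such a triage step in Python. The category names, confidence thresholds, and classifier output are illustrative assumptions, not a description of any specific Keywords Studios or partner system.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    AUTO_REMOVE = "auto_remove"    # egregious content, blocked before it goes live
    HUMAN_REVIEW = "human_review"  # ambiguous content, escalated to a moderator
    PUBLISH = "publish"            # no signal of harm detected

@dataclass
class AIVerdict:
    category: str      # hypothetical labels, e.g. "csam", "real_life_threat", "harassment", "none"
    confidence: float  # 0.0 to 1.0, produced by the AI classifier

# Categories where high-confidence detections are removed automatically,
# so neither players nor moderators are exposed to them.
AUTO_REMOVE_CATEGORIES = {"csam", "real_life_threat", "extremism"}

def triage(verdict: AIVerdict,
           auto_threshold: float = 0.95,
           review_threshold: float = 0.50) -> Action:
    """Decide what happens to a piece of user content after AI pre-screening."""
    if verdict.category in AUTO_REMOVE_CATEGORIES and verdict.confidence >= auto_threshold:
        return Action.AUTO_REMOVE
    if verdict.category != "none" and verdict.confidence >= review_threshold:
        # Nuanced or lower-confidence cases keep a human in the loop.
        return Action.HUMAN_REVIEW
    return Action.PUBLISH

# A confident extremism flag never reaches players or moderators,
# while a borderline harassment flag goes to a human reviewer.
print(triage(AIVerdict("extremism", 0.98)))   # Action.AUTO_REMOVE
print(triage(AIVerdict("harassment", 0.62)))  # Action.HUMAN_REVIEW
```

The design point is simply that the automated layer absorbs the highest-confidence, most harmful detections, while everything ambiguous is routed to a human.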

Some people in the industry are concerned that technology will advance so quickly it could replace human moderators. We don't share that concern. Humans will always be essential to moderation because, at its core, moderation is human. The nuanced nature of human communication and the evolving landscape of online interaction mean that AI will never replace the need for human moderators. Beyond handling complex content, humans will always be needed to develop moderation strategies, advocate for essential safety measures, and train AI models.

AI + HI is such a powerful moderation approach that we have united with technology providers Modulate and ActiveFence, and the research organisation Take This, to form The Gaming Safety Coalition. The coalition is dedicated to advancing the safety, integrity, and wellbeing of online games communities. Our first order of business as a coalition was to write the collaborative whitepaper The Future of Content Moderation in Gaming: A Unified Approach With AI and Human Touch. In it, we explore the benefits and challenges of combining artificial intelligence and human intelligence in your content moderation workflow, supported by a case study featuring Among Us VR. 


Real-Life Threat Detection and Escalation

The second pillar of Responsible Moderation involves the near real-time detection and escalation of content that represents real-world threats. This is especially important in the wake of legislation like the EU Digital Services Act and the UK Online Safety Act, both of which focus on removing illegal content in a timely manner.

The surge in online threats like CSAM, extremist content, and self-harm or suicide content (whether the content endorses such activities or consists of users threatening to harm themselves) has amplified the need for timely identification and escalation to authorities. 

This is where the AI + HI approach becomes indispensable — and potentially lifesaving. 

Unlike human moderators, who have limited capacity, AI can detect and flag this dangerous content in real time, drawing attention to time-sensitive issues swiftly and efficiently.

With the assistance of AI, human moderators can concentrate on their crucial role — collecting details about the flagged content, confirming its potential harm, and escalating the matter to law enforcement and other authorities when necessary.
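As a purely illustrative sketch of that hand-off (the severity labels, queue structure, and function names below are assumptions, not any platform's actual workflow), a time-sensitive AI flag might be pushed onto a priority queue so that the most severe, longest-waiting items reach a human reviewer first:

```python
import heapq
import time

# Lower number = more urgent. These severity labels are illustrative assumptions.
SEVERITY = {"csam": 0, "credible_threat": 0, "self_harm": 1, "extremism": 2}

escalation_queue = []  # heap of (severity, time_flagged, category, content_id)

def ai_flag(category: str, content_id: str) -> None:
    """The AI layer pushes a time-sensitive detection onto the review queue."""
    severity = SEVERITY.get(category, 3)
    heapq.heappush(escalation_queue, (severity, time.time(), category, content_id))

def next_for_human_review():
    """A moderator pulls the most urgent, longest-waiting item, verifies the harm,
    and, if confirmed, escalates to law enforcement or bodies like NCMEC."""
    if not escalation_queue:
        return None
    severity, flagged_at, category, content_id = heapq.heappop(escalation_queue)
    return {"category": category,
            "content_id": content_id,
            "waiting_seconds": time.time() - flagged_at}

ai_flag("extremism", "msg-1042")
ai_flag("credible_threat", "msg-1043")
print(next_for_human_review()["category"])  # the credible threat is reviewed first
```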

Along with employing technology to augment human work, platforms must build robust and battle-tested processes, establish relationships with law enforcement, and collaborate with organisations like NCMEC and INHOPE. 

At Keywords Studios, we are proud to share that, due to our expertise in escalating cases to the FBI, we have established a direct line of communication to their team that handles cyber cases. We believe that cultivating these relationships with law enforcement agencies is instrumental in creating safer video game communities, not to mention complying with legislation.

Moderators: The Secret Superheroes of the Internet

Perhaps the most transformative aspect of Responsible Moderation is the reframing of how we view and treat our superhero content moderators. The role of a content moderator can be profoundly challenging and demanding. Moderators can be faced with graphic and disturbing content, which can significantly erode their mental health and overall wellbeing. In fact, recent studies have drawn parallels between moderators and first responders, such as police officers and EMTs, who regularly encounter indirect trauma.

Unfortunately, this reality has been ignored by too many for too many years.

Under Responsible Moderation, we see moderators for what they are — digital first responders. We recognise their analytical and linguistic skills, leverage their insights and expertise, provide robust wellbeing and resilience support, and equip them with AI-powered tools to pre-screen potentially harmful content, thus reducing their exposure to the most distressing material. This not only makes their work less psychologically taxing but also allows them more space to apply their human judgment where it is most needed.


A Responsible Moderation Call to Action

As video game platforms continue to grow and evolve (the Metaverse and VR pose new challenges that we must be ready to face), the stakes only get higher. The spread of dangerous and illegal content not only undermines the integrity of our game spaces but also poses real risks to players' and moderators' mental and physical wellbeing. It's a challenge that requires the collective effort of all of us — Trust & Safety professionals, game developers, content moderators, policymakers, technology providers, and players alike.

The journey towards Responsible Moderation will not be without its challenges. It requires a commitment to challenge previously held assumptions about the roles of technology and humans in moderation, an embrace of innovative thinking, and, most importantly, empathy and compassion. 

By adopting a Responsible Moderation approach, we can protect not only our players and communities, but also the superhero moderators who work tirelessly behind the scenes. 

This is not just a moderation revolution; it's a movement towards building a more inclusive, safe, and responsible internet for everyone.