Blocks
Combi Grid Block
Thumb Grid - White
Thumb Grid
Explore our studios around the world
Large Feature Content
Award Grid Block
Best Quality Assurance and Localisation Service Provider
Company of the Year
Best Voice-Over
Outstanding Service Provider
Best Studio
General Enquiries
Masonry Grid Block
Feature Grid Block
Create
Globalize

Ashley Liu


Bertrand Bodson


Dan McCormick


Elodie Powers


Frederic Arens


Joe Binnion


Jon Hauck


Andrew Kennedy


Nicolas Liorzou


Rob Kingston


Romina Franceschina


Rhonda Cottingham


Tony Grigg


Trina Marshall

The acquisition was a stepping stone into QA engineering and automation.
Benefits
The engagement with ArenaNet marked the Keywords FQA team's first step into the Seattle area.
The Keywords team in Seattle is now at the forefront of quality assurance engineering, automation, and tool building at Keywords Studios.
Tagline
Text and Media text right
Lorem ipsum dolor sit amet, consectetur adipiscing elit. In sed augue a urna viverra vestibulum ut a sem. Cras tempor nibh purus, nec faucibus ligula hendrerit at. Integer turpis urna, aliquet elementum nibh vel, porttitor mollis ante. Integer et massa vitae massa scelerisque volutpat ac vitae ligula. Aliquam erat volutpat. Nulla facilisi. Nullam euismod lorem ac maximus interdum. Morbi arcu neque, posuere at risus vitae, bibendum egestas erat. Integer nec varius felis.

Tagline
Text and Media background
Lorem ipsum dolor sit amet, consectetur adipiscing elit. In sed augue a urna viverra vestibulum ut a sem. Cras tempor nibh purus, nec faucibus ligula hendrerit at. Integer turpis urna, aliquet elementum nibh vel, porttitor mollis ante. Integer et massa vitae massa scelerisque volutpat ac vitae ligula. Aliquam erat volutpat. Nulla facilisi. Nullam euismod lorem ac maximus interdum. Morbi arcu neque, posuere at risus vitae, bibendum egestas erat. Integer nec varius felis. Lorem ipsum dolor sit amet, consectetur adipiscing elit. In sed augue a urna viverra vestibulum ut a sem. Cras tempor nibh purus, nec faucibus ligula hendrerit at. Integer turpis urna, aliquet elementum nibh vel, porttitor mollis ante. Integer et massa vitae massa scelerisque volutpat ac vitae ligula. Aliquam erat volutpat.
Nulla facilisi. Nullam euismod lorem ac maximus interdum. Morbi arcu neque, posuere at risus vitae, bibendum egestas erat. Integer nec varius felis. Lorem ipsum dolor sit amet, consectetur adipiscing elit. In sed augue a urna viverra vestibulum ut a sem. Cras tempor nibh purus, nec faucibus ligula hendrerit at. Integer turpis urna, aliquet elementum nibh vel, porttitor mollis ante. Integer et massa vitae massa scelerisque volutpat ac vitae ligula. Aliquam erat volutpat. Nulla facilisi. Nullam euismod lorem ac maximus interdum. Morbi arcu neque, posuere at risus vitae, bibendum egestas erat. Integer nec varius felis.

The Keywords Studios Solution
The arrangement delivered value that Gearbox could not have found elsewhere. By partnering with Keywords art studios across Asia, including Mindwalk and Lakshya Digital, Liquid was able to scale its services rapidly, taking on a significantly larger volume of work while lowering costs and reducing the number of in-house art directors Gearbox needed to support the effort. The collaboration also enabled Gearbox to refine its internal systems. By keeping Liquid highly specialised while bringing experienced specialist artists and managers "into their team", Gearbox gained new ways to raise the overall standard of its production.
Text block
Lorem | Lorem | Ipsul Dolor Sit |
Lorem ipsum dolor sit amet, consectetur adipiscing elit. In sed augue a urna viverra vestibulum ut a sem. Cras tempor nibh purus, nec faucibus ligula hendrerit at. Integer turpis urna, aliquet elementum nibh vel, porttitor mollis ante. Integer et massa vitae massa scelerisque volutpat ac vitae ligula. Aliquam erat volutpat. Nulla facilisi. Nullam euismod lorem ac maximus interdum. Morbi arcu neque, posuere at risus vitae, bibendum egestas erat. Integer nec varius felis. | Lorem ipsum dolor sit amet, consectetur adipiscing elit. In sed augue a urna viverra vestibulum ut a sem. Cras tempor nibh purus, nec faucibus ligula hendrerit at. Integer turpis urna, rdum. Morbi arcu neque, posuere at risus vitae, bibendum egestas erat. Integer nec varius felis. | Integer turpis urna, aliquet elementum nibh vel, porttitor mollis ante. Integer et massa vitae massa scelerisque volutpat ac vitae ligula. Aliquam erat volutpat. Nulla facilisi. Nullam euismod lorem ac maximus interdum. Morbi arcu neque, posuere at risus vitae, bibendum egestas erat. Integer nec varius felis. |
Large Feature Content
Text List block - vertical
Text List block
Lorem ipsum dolor sit amet, consectetur adipiscing elit. In sed augue a urna viverra vestibulum ut a sem. Cras tempor nibh purus, nec faucibus ligula hendrerit at. Integer turpis urna, aliquet elementum nibh vel, porttitor mollis ante. Integer et massa vitae massa scelerisque volutpat ac vitae ligula. Aliquam erat volutpat. Nulla facilisi. Nullam euismod lorem ac maximus interdum. Morbi arcu neque, posuere at risus vitae, bibendum egestas erat. Integer nec varius felis.
Text List block
Lorem ipsum dolor sit amet, consectetur adipiscing elit. In sed augue a urna viverra vestibulum ut a sem. Cras tempor nibh purus, nec faucibus ligula hendrerit at. Integer turpis urna, aliquet elementum nibh vel, porttitor mollis ante. Integer et massa vitae massa scelerisque volutpat ac vitae ligula. Aliquam erat volutpat. Nulla facilisi. Nullam euismod lorem ac maximus interdum. Morbi arcu neque, posuere at risus vitae, bibendum egestas erat. Integer nec varius felis.
Text List block
Lorem ipsum dolor sit amet, consectetur adipiscing elit. In sed augue a urna viverra vestibulum ut a sem. Cras tempor nibh purus, nec faucibus ligula hendrerit at. Integer turpis urna, aliquet elementum nibh vel, porttitor mollis ante. Integer et massa vitae massa scelerisque volutpat ac vitae ligula. Aliquam erat volutpat. Nulla facilisi. Nullam euismod lorem ac maximus interdum. Morbi arcu neque, posuere at risus vitae, bibendum egestas erat. Integer nec varius felis.
File block
Lorem ipsum dolor sit amet
filter title
Real-Life Threat Detection and Escalation
The second pillar of Responsible Moderation involves the near real-time detection and escalation of content that represents real-world threats. This is especially important in the wake of legislation like the EU Digital Services Act and the UK Online Safety Act, both of which focus on removing illegal content in a timely manner.
The surge in online threats like CSAM, extremist content, and self-harm or suicide content (whether the content endorses such activities or consists of users threatening to harm themselves) has amplified the need for timely identification and escalation to authorities.
This is where the AI + HI approach becomes indispensable — and potentially lifesaving.
Unlike human moderators who have limited capacity, AI can detect and flag this dangerous content in real-time, drawing attention to time-sensitive issues swiftly and efficiently.
With the assistance of AI, human moderators can concentrate on their crucial role — collecting details about the flagged content, confirming its potential harm, and escalating the matter to law enforcement and other authorities when necessary.
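The AI + HI flow described above can be sketched as a simple triage function. This is a minimal illustration, not Keywords Studios' actual pipeline: the threshold values and the idea of a single `threat_score` from an AI classifier are assumptions for the example.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real systems tune these per content type
# and jurisdiction; they are assumptions for this sketch.
ESCALATE_THRESHOLD = 0.9   # near-certain real-world threat: fast-track
REVIEW_THRESHOLD = 0.5     # uncertain: queue for standard human review

@dataclass
class Item:
    content_id: str
    threat_score: float  # assumed output of an AI classifier, 0.0-1.0

def triage(item: Item) -> str:
    """Route content: AI flags in real time, humans confirm and escalate."""
    if item.threat_score >= ESCALATE_THRESHOLD:
        # Time-sensitive: surface immediately so a human moderator can
        # confirm harm and escalate to law enforcement if warranted.
        return "priority_human_review"
    if item.threat_score >= REVIEW_THRESHOLD:
        return "standard_human_review"
    return "no_action"
```

The key design point is that the AI never escalates on its own; it only prioritises the queue, leaving confirmation and the decision to contact authorities with human moderators.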
Along with employing technology to augment human work, platforms must build robust and battle-tested processes, establish relationships with law enforcement, and collaborate with organisations like NCMEC and INHOPE.
At Keywords Studios, we are proud to share that, due to our expertise in escalating cases to the FBI, we have established a direct line of communication to their team that handles cyber cases. We believe that cultivating these relationships with law enforcement agencies is instrumental in creating safer video game communities, not to mention complying with legislation.
Moderators: The Secret Superheroes of the Internet
Perhaps the most transformative aspect of Responsible Moderation is the reframing of how we view and treat content moderators. The role of a content moderator can be profoundly challenging and demanding. Moderators can be faced with graphic and disturbing content, which can significantly erode their mental health and overall wellbeing. In fact, recent studies have drawn parallels between moderators and first responders such as police officers and EMTs, who regularly encounter indirect trauma.
Unfortunately, this reality has been ignored by too many for too many years.
Under Responsible Moderation, we see moderators for what they are — digital first responders. We recognise their analytical and linguistic skills, leverage their insights and expertise, provide robust wellbeing and resilience support, and equip them with AI-powered tools to pre-screen potentially harmful content, thus reducing their exposure to the most distressing material. This not only makes their work less psychologically taxing but also allows them more space to apply their human judgment where it is most needed.
A Responsible Moderation Call to Action
As video game platforms continue to grow and evolve (the Metaverse and VR pose new challenges that we must be ready to face), the stakes only get higher. The spread of dangerous and illegal content not only undermines the integrity of our game spaces but poses real risks to players' and moderators' mental and physical wellbeing. It's a challenge that requires the collective effort of all of us — Trust & Safety professionals, game developers, content moderators, policymakers, technology providers, and players alike.
The journey towards Responsible Moderation will not be without its challenges. It requires a commitment to challenge previously held assumptions about the roles of technology and humans in moderation, an embrace of innovative thinking, and, most importantly, empathy and compassion.
By adopting a Responsible Moderation approach, we can protect not only our players and communities, but also the superhero moderators who work tirelessly behind the scenes.
This is not just a moderation revolution; it's a movement towards building a more inclusive, safe, and responsible internet for everyone.
