The Swiss Army Knife Approach to AI
When people talk about AI in game art, the conversation often gets stuck on the model. Which model is best? Which one is newest? Which one makes the nicest image? I think that misses the bigger opportunity.
What matters more is building a workflow that is model-agnostic by design. One where tools can be swapped, tested, combined or replaced without breaking the whole pipeline.
That is the mindset we have been exploring at Keywords Studios across a range of research and development projects.
We like to describe this as a Swiss Army knife approach to AI. Not because every tool should do everything, but because real production needs flexibility. Different tasks need different combinations of LLMs, 2D, video, audio and 3D tools, sometimes open-source, sometimes closed-source. The important thing is not loyalty to a model. It is building a system that stays useful as the technology changes.
What does this approach look like in practice?
Let’s take one of the most common AI art experiments as an example: character creation.
A modular AI workflow can start with just one image or a small set of images. But before you generate anything useful, you need high-quality input. That means curating and captioning the reference material properly, so the system understands what matters: the shape, the outfit, the materials, the style.
Then different tools handle different jobs. One helps shape the overall look. Another creates extra angles and views. Another generates poses and emotional expressions. Another helps add texture or movement.
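As a sketch of what "different tools handle different jobs" can look like in code, here is one way to make stage boundaries explicit so any single backend can be swapped without touching the rest. This is a minimal illustration, not a production pipeline; the stage names and dummy backends are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

# A generation "stage" is just a named, swappable function that
# takes a job context dict and returns an updated one.
@dataclass
class Stage:
    name: str
    run: Callable[[dict], dict]

@dataclass
class Pipeline:
    stages: list[Stage] = field(default_factory=list)

    def swap(self, name: str, new_run: Callable[[dict], dict]) -> None:
        """Replace one stage's backend without touching the rest."""
        for stage in self.stages:
            if stage.name == name:
                stage.run = new_run
                return
        raise KeyError(f"no stage named {name!r}")

    def execute(self, context: dict) -> dict:
        for stage in self.stages:
            context = stage.run(context)
        return context

# Example: a character-creation pipeline with placeholder backends.
pipeline = Pipeline(stages=[
    Stage("look",  lambda ctx: {**ctx, "look": "concept sheet"}),
    Stage("views", lambda ctx: {**ctx, "views": ["front", "side", "back"]}),
    Stage("poses", lambda ctx: {**ctx, "poses": ["idle", "run"]}),
])

result = pipeline.execute({"reference": "hero_01.png"})
```

The point of the structure is that `swap("views", ...)` lets you replace one model behind one job while every other stage, and the production logic around them, stays intact.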
This matters because most pretrained models come with their own built-in habits. They are not neutral. You might have heard of FLUX chin, a recurring facial trait that often appears in FLUX-generated faces. That may sound minor, but it becomes a real issue when you train on synthetic data, because small artefacts can start spreading through the whole dataset. That is why generating across multiple models matters: it diversifies the data and reduces the risk of all your training data inheriting the same visual quirks from a single source.
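One simple way to act on this is to spread generation jobs across several models so no single model's artefacts dominate the synthetic dataset. The sketch below uses a deterministic round-robin assignment; the model names are placeholders, not specific tools.

```python
from itertools import cycle, islice
from collections import Counter

# Placeholder identifiers for whichever image models are in rotation.
MODELS = ["model_a", "model_b", "model_c"]

def assign_models(num_samples: int, models: list[str]) -> list[str]:
    """Round-robin generation jobs across models so each contributes
    an equal share of the synthetic dataset."""
    return list(islice(cycle(models), num_samples))

counts = Counter(assign_models(300, MODELS))
# Counter({'model_a': 100, 'model_b': 100, 'model_c': 100})
```

In practice you might weight the split by model quality or cost instead of splitting evenly, but the principle is the same: source diversity is a dataset property you can control deliberately rather than leave to habit.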
We have gone through the same trial-and-error process with other pipelines too, from storyboarding to 3D creation and gameplay trailer generation. You can find the list of tools we tested below. But even as I write this, it is probably already 10 minutes out of date. That is exactly why the workflow matters more than the model.
How to set up model-agnostic workflows?
Make the production logic the stable part of the system.
At Keywords Studios, we think in terms of stable blocks of work that remain useful even if an AI model changes. These blocks contain the following information to make them transferable across different generations of models:
- First, a clear definition of done: it needs to be unambiguous and specific. Use measurable values like scale, shape, file format, localization, folder path and so on.
- Second, a shared record of what the workflow already knows: that includes brand identity, brand rules, reference assets, training data, legal documents, version history and the decisions made along the way. This record needs regular updating.
- Third, a clear process for how work moves forward and gets validated: think about how the work actually gets done. Where are the handoff points? What can be validated automatically? Where does human review still matter most?
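The three ingredients above can be sketched as a small data structure, just to show that a "block of work" is concrete enough to encode. The field names and checks here are illustrative assumptions, not a Keywords Studios specification.

```python
from dataclasses import dataclass, field
from typing import Callable

# A "block of work" stays stable even when the underlying model changes.
@dataclass
class WorkBlock:
    name: str
    # 1. Definition of done: measurable, automatable checks.
    done_checks: dict[str, Callable[[dict], bool]]
    # 2. Shared record of what the workflow already knows
    #    (brand rules, reference assets, version history, ...).
    knowledge: dict = field(default_factory=dict)
    # 3. Whether a human must still sign off even if all checks pass.
    needs_human_review: bool = True

    def validate(self, asset: dict) -> dict[str, bool]:
        """Run every automatic check; any failure flags the asset for rework."""
        return {name: check(asset) for name, check in self.done_checks.items()}

# Example: a character turnaround with hypothetical acceptance criteria.
block = WorkBlock(
    name="character_turnaround",
    done_checks={
        "resolution": lambda a: a.get("width", 0) >= 2048,
        "format":     lambda a: a.get("format") == "png",
        "views":      lambda a: len(a.get("views", [])) >= 3,
    },
)

report = block.validate(
    {"width": 2048, "format": "png", "views": ["front", "side", "back"]}
)
```

Because the checks describe outputs rather than tools, the block keeps working unchanged when the model that produces the asset is swapped out.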
For me, stable blocks of work and model-agnostic workflows are the real opportunity.
Not chasing a single breakthrough tool, but building adaptable workflows that help teams create high-quality work, stay on brand and move faster without losing control.