Lessons from Project KARA
This article shares actionable lessons from Project KARA, a real-world R&D initiative using Generative AI (GAI) in game development, with insights for team leads aiming to achieve similar results.

We’re all feeling the pace of change right now. With new tools and techniques arriving rapidly, Research & Development (R&D) in game development has never been more critical.
Project KARA gave us a chance to really dig into what it takes to run a meaningful R&D effort inside a real production environment.
We used Electric Square’s ‘Detonation Racing’ as our playground: a live case study to test how GAI could slot into actual workflows. And while the tech was exciting, what we really learned was how to run R&D in a way that creates results people can build on.
Here’s what worked for us and what we’d do again.
1. Anchor the project in a clear, applied goal
Right from the beginning, we made a conscious decision: KARA wasn’t going to be a blue-sky innovation project. We didn’t want a slide deck full of theoretical possibilities. We wanted something you could play.
So, we gave ourselves a very real challenge: take an existing game, ‘Detonation Racing’, and remaster it using GAI across key pipelines such as 3D art, animation and lighting. That gave us a concrete goal to build towards and a clear finish line to aim at.
This focus kept us honest. Every experiment, every tool we tried, had to prove its value against a real production task.
Takeaway: Don’t keep your R&D in a sandbox. Tie it to something real and measurable. That’s the only way to understand the trade-offs and the value.
2. Structure around workstreams, not features
One of the early decisions that paid off for us was how we split up the work. Instead of organising around features or deliverables, we structured KARA into distinct workstreams.
Each team built its own GAI-infused pipeline independently. This gave people room to focus, experiment, and iterate fast, without waiting on anyone else to move first.
Takeaway: Break your R&D down by discipline, not feature. Let teams go deep, own their outcomes and build in parallel.
3. Document every pipeline
One thing we got right was documenting everything. For every experiment we ran, we captured both the traditional (human-only) pipeline and the new GAI-infused version side by side.
That dual view was essential. It gave us a way to benchmark where we were gaining time, where quality held up (or didn’t), and where AI made a difference. It also made it crystal clear which steps still needed a human touch and which ones could be sped up without sacrificing control.
For character animation, AI mocap tools gave us a fast first pass but required human polish in Maya. Without that comparison, we might have overestimated how much time we were saving or underestimated the artist’s role in making it shippable.
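Capturing that dual view doesn’t need heavy tooling. A lightweight structured record per run is enough; here’s a minimal sketch in Python (the fields and numbers are illustrative, not our actual schema):

```python
from dataclasses import dataclass

@dataclass
class PipelineRun:
    """One benchmarked run of a task through a pipeline."""
    task: str                      # e.g. "idle animation, first pass"
    pipeline: str                  # "traditional" or "gai-infused"
    hours_to_first_pass: float     # time to a first usable result
    hours_of_human_polish: float   # cleanup/editing needed afterward
    shippable: bool                # did quality hold up under review?

    @property
    def total_hours(self) -> float:
        return self.hours_to_first_pass + self.hours_of_human_polish

# Log both versions of every experiment, side by side.
traditional = PipelineRun("idle animation", "traditional", 6.0, 2.0, True)
gai_infused = PipelineRun("idle animation", "gai-infused", 0.5, 3.0, True)

saving = 1 - gai_infused.total_hours / traditional.total_hours
print(f"Time saved: {saving:.0%}")  # ~56% on these made-up numbers
```

The point isn’t the code; it’s that every run, human-only or AI-assisted, gets recorded against the same fields so the comparison stays honest.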
Takeaway: Don’t just track your wins, track the process. Compare your AI workflows directly against the traditional ones. You’re not trying to replace people; you’re trying to give them better tools.
4. Start with manual iteration, then build tools
At the start, we kept things scrappy, and that turned out to be a real advantage. We leaned on existing tools like Midjourney for mood boarding and Stable Diffusion to explore lighting ideas.
No custom setups, just creative misuse of what was already out there. But as we kept going, patterns started to emerge. We found ourselves doing the same things repeatedly.
That’s when we knew it was time to stop hacking and start building. One example: we turned a repeatable lighting workflow into a proper Unity plugin that let us use ChatGPT to configure directional lights, fog, and post-process effects based on a reference image.
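The plugin itself lived in Unity, but the core loop is simple to sketch: show the model a reference image, ask it to return lighting parameters as JSON, then map those onto the scene. Below is an illustrative Python version against the OpenAI API; the model name, prompt, and parameter keys are assumptions for the sketch, not our production plugin code:

```python
import base64
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def lighting_from_reference(image_path: str) -> dict:
    """Ask the model to describe a reference image as engine lighting params."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "Describe this image's lighting as JSON with keys: "
                    "sun_rotation_euler, sun_color_rgb, sun_intensity, "
                    "fog_color_rgb, fog_density, bloom_intensity."
                )},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# The returned dict then drives the directional light, fog, and
# post-processing settings on the Unity side.
params = lighting_from_reference("reference.png")
```

Keeping the model’s job to “emit structured parameters” rather than “touch the scene directly” meant the engine-side code stayed deterministic and easy to review.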
Takeaway: Don’t over-engineer at the start. Play around, spot what sticks, then turn your scrappy wins into scalable tools.
5. Use cost and time KPIs, not just visual fidelity
It’s easy to get wowed by visuals, especially when you’re working with shiny new AI tools. But we knew from the start that this wasn’t enough. We had to track what mattered to production: time, effort, and cost.
So, for every pipeline we tested, we looked beyond just how the output looked:
- How long did it take to get a first usable result?
- How complicated was the setup?
- How much cleanup or editing was needed afterward?
- Did the fidelity hold up under real scrutiny?
- What was the cost in tools and people’s time?
That gave us a much clearer picture of real value. In one case, the GAI-assisted lighting setup cut down manual config by 78%. In another, we took a debris modelling task that normally took 8 hours and got it down to 2, just by layering AI into the workflow.
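The arithmetic behind those KPIs is deliberately simple; the discipline is in recording it for every pipeline. A quick sketch using the debris example above (the artist rate and tool cost here are assumed figures for illustration):

```python
# Debris modelling, from above: 8 hours manual vs 2 hours with AI layered in.
manual_hours, gai_hours = 8.0, 2.0
artist_rate = 50.0          # assumed hourly cost (illustrative)
tool_cost_per_asset = 1.5   # assumed inference/licence cost (illustrative)

hours_saved = manual_hours - gai_hours
reduction = hours_saved / manual_hours
net_saving = hours_saved * artist_rate - tool_cost_per_asset

print(f"{reduction:.0%} less artist time, saving {net_saving:.2f} per asset")
# -> 75% less artist time, saving 298.50 per asset
```

Once numbers like these exist for every pipeline, comparing AI-assisted workflows against traditional ones stops being a matter of opinion.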
Takeaway: Define and track operational KPIs. “Looks better” is subjective; time saved is not.
6. Design for human-AI collaboration, not replacement
Let’s be clear: AI didn’t replace anyone on the team. It didn’t write our briefs, make final design calls, or polish animations to shippable quality. What it did do was take the boring, repetitive parts off people’s plates, so they could focus on the good stuff.
Often, AI acted like a junior teammate, throwing out ideas fast, helping us explore options, and speeding up early feedback loops. But the human eye, taste, and experience still made the final call.
Takeaway: Don’t expect AI to replace your team. Give it the grunt work and let your people do what they do best: create, curate, and craft.
Final thoughts
Project KARA wasn’t just an experiment with AI. It became a blueprint for how to run R&D that delivers. We didn’t just tinker for curiosity’s sake; we delivered a playable product grounded in applied R&D.
To summarise, this is what worked for us:
- Set a clear, production-facing goal
- Break the work into domain-specific pipelines
- Compare traditional workflows with AI-infused ones
- Document everything and share it as you go
- Track real KPIs like time, cost and iteration cycles
- Automate after you understand the value
- Treat AI as a collaborator, not a replacement
The future of R&D is applied, not abstract. With the right structure, your experiments won’t just generate insight.
They’ll generate results.