This blog series summarizes the TestExpo VIP event that took place in a movie theatre in Stockholm on October 10th, as well as the reflections and discussions that resulted from the event. The event was divided into four main sections – Behind the Scenes, Close Up, Special Effects, and Blockbuster – all inspired by the film culture of the venue:
How has the game development industry approached this topic? Theoretical design models have long been established here, such as MDA (Mechanics, Dynamics, Aesthetics) and its successor DDE (Design, Dynamics, Experience) (see the article on MDA and DDE). The game industry has long worked towards evoking the right emotions in players. While games tend to aim for emotions like excitement or fear, the software development sector also aims to elicit emotions, though more often feelings of safety and trust. An example is Region Västerbotten’s vision to develop IT systems that generate “More Time for Care,” which connects to the feeling of being close to the patient in order to provide the right treatment.
I had a conversation with Jens Bergensten, the lead designer at Mojang for Minecraft, where he shared an interesting perspective. He mentioned that they consider themselves “data-informed” rather than “data-driven.” As Jens explained:
“…even if we know that 80% of our players are right-handed, we can’t optimize the game solely for them; it would make us become a game for only right-handed people.”
It’s interesting how Jens describes their approach to data, preferring not to be bound by it. But I also find it intriguing how he speaks about the system, the development team, and the users as an “us”: “it would make us become a game.” How many people in the development sector see themselves as ‘one’ with a system that encompasses the development team, the code, and the users?
Listen to Jens Bergensten’s talk on Swedish Radio (Swedish only) about how the game industry handles feedback from users and the challenge of finding the balance in what “feels right”:
“…the exchange with the audience is valuable, but in the end, you have to choose a path — and stand by it.”
In this discussion, AI experts Karl Fridlycke and Karl Kardemark from Sogeti join the conversation. Karl Fridlycke shares a story about a tester’s disappointment when an AI model produced incorrect output while generating a test script from a large set of use-case documents.
AI is built on statistical models, which sometimes produce errors; it is not an absolute truth-teller and is known to “hallucinate.” Karl Kardemark explains that a significant challenge in developing AI-based functions is limiting hallucinations, and that the industry is still at an early stage in this regard.
Karl Fridlycke adds that AI-induced hallucinations can, on the other hand, be beneficial: they can spark new thoughts, let us view a problem from a different perspective, and ultimately even serve as a driver for innovation. When preparing a system for future challenges, we want to test it with a high variance of stimuli, and this is perhaps where hallucinations can be advantageous – for example, when generating test data.
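To make “high variance of stimuli” a little more concrete, here is a minimal sketch of mutation-based test-data generation. The baseline record, field names, and mutations are all invented for illustration – this is not from the talk, and a hallucinating model could play the role of the `mutate` function:

```python
import random
import string

# A valid, "expected" input that the system under test handles today.
# The fields are hypothetical examples.
BASELINE = {"patient_id": "19430201-1234", "age": 81, "note": "routine check-up"}

def mutate(value):
    """Return a deliberately surprising variant of a field value."""
    mutations = [
        lambda v: None,                                 # missing data
        lambda v: "" if isinstance(v, str) else -v,     # empty / negated
        lambda v: str(v) * 50,                          # absurdly long
        lambda v: "".join(random.choices(string.printable, k=20)),  # noise
    ]
    return random.choice(mutations)(value)

def generate_variants(baseline, n=10, fields_per_variant=2):
    """Produce n high-variance test cases by mutating random fields."""
    variants = []
    for _ in range(n):
        case = dict(baseline)
        for field in random.sample(list(case), k=fields_per_variant):
            case[field] = mutate(case[field])
        variants.append(case)
    return variants

for case in generate_variants(BASELINE):
    print(case)
```

The point is not the specific mutations but the breadth: inputs the designers never anticipated, which is exactly what a hallucination can contribute.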
I give an example I have heard of from the testing of autonomous vehicles: presenting an unexpected situation, like a goat on a moped, and then observing in a safe, simulated environment how the algorithm handles the unexpected scenario. This can, in turn, provoke an unexpected reaction from the algorithm, deepening our understanding of its reasoning.
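As a toy illustration of such scenario injection, here is a sketch of a perception/planning loop meeting an out-of-distribution object. The object classes and the fallback behavior are invented for this example and do not describe any real vehicle stack:

```python
# Toy perception/planning loop: what happens when an object
# outside the training distribution enters the scene?
KNOWN_CLASSES = {"car", "pedestrian", "cyclist", "truck"}

def perceive(scene_object):
    """Classify an object; anything unfamiliar becomes 'unknown'."""
    return scene_object if scene_object in KNOWN_CLASSES else "unknown"

def plan(perceived):
    """Decide a maneuver based on the perceived class."""
    if perceived == "unknown":
        # The interesting part: how does the system degrade?
        return "slow_down_and_hand_over_to_safety_driver"
    return "continue_with_standard_avoidance"

# Inject the unexpected stimulus in a safe, simulated environment.
for obj in ["pedestrian", "cyclist", "goat_on_a_moped"]:
    print(obj, "->", plan(perceive(obj)))
```

What we learn from the experiment is less the verdict itself and more how the algorithm arrives at it when its assumptions break.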
Jesper Thureson from Tricentis, which develops testing platforms and tools for testers and developers, notes that despite years of focus on test automation, testing in the industry remains slow. The intention of Tricentis’ platforms is to accelerate the feedback needed to deeply understand our systems. To make this possible, Tricentis offers various AI-powered features to help us test more efficiently and responsibly, with greater awareness of how we handle customer data.
Jesper believes that the future role of testers will resemble that of a supervisor for autonomous AI agents that investigate the system and its capabilities. This role will require deep human domain knowledge of the system in order to guide these agents appropriately, similar to how we work with LLMs today: the better the questions we ask, the better the answers we get.
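A rough sketch of what such a supervision loop could look like, assuming a stubbed-out agent; the function names and the review step are hypothetical and do not reflect any actual Tricentis feature:

```python
# Hypothetical human-in-the-loop supervision of a testing agent.
# The "agent" is a stub; in practice it might be an LLM-backed tool.

def agent_explore(instruction: str) -> list[str]:
    """Stand-in for an autonomous agent probing the system under test."""
    return [f"observation for: {instruction}"]

def human_review(instruction: str, findings: list[str]) -> bool:
    """The supervisor applies domain knowledge to accept or reject findings."""
    return len(findings) > 0  # placeholder for real human judgment

backlog = [
    "explore the login flow with expired credentials",
    "probe the booking API with overlapping time slots",
]

while backlog:
    instruction = backlog.pop(0)
    findings = agent_explore(instruction)
    if human_review(instruction, findings):
        print("accepted:", findings)
    else:
        # Better questions give better answers: refine and re-queue.
        backlog.append(instruction + " (specify expected error codes)")
```

The human stays in the loop not to write the tests, but to decide which questions are worth asking and which answers are worth trusting.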
But in this complex dialogue between humans and machines, both human imperfection and machine-induced hallucinations will inevitably cause disruptions in the cycle. From an engineer’s perspective this will of course feel frustrating, while from an artistically creative perspective it is precisely this that makes the work interesting. This duality will become more prominent the more complex our system development tasks become – much like watching Karl and Karl, where one sees AI hallucinations as harmful and the other sees something positive emerging from them. They respect each other’s perspectives during the discussion and try to understand each other’s arguments, knowing that listening to the other side does not mean automatically abandoning one’s own position. One does not have to be right and the other wrong; rather, the difference in approach is itself something positive, and the conversation, with its tension, can strengthen our understanding as a group.
See the full SPECIAL EFFECTS section of the event here (youtube.com)