
The TestExpo VIP Stockholm – Close Up

Dec 3, 2024
Fredrik Scheja

This blog series summarizes the TestExpo VIP event that took place in a movie theatre in Stockholm on October 10th, as well as the reflections and discussions that resulted from the event. The event was divided into four main sections – Behind the scenes, Close Up, Special Effects and Blockbuster – all inspired by the film culture of the venue.

In this part of the event, I have the pleasure of speaking with Terese Östberg, test lead at the Swedish Transport Administration (Trafikverket). Together, we explore various perspectives on her role as test lead for a team working on systems that provide forecasts and traffic information for train traffic in society.

Essentially, Terese’s work involves providing valuable information about the system to support various development-related decisions. To accomplish this, she relies heavily on test automation to enhance her testing capabilities.

However, Terese explains that she doesn’t see herself as a technical person. The test automation first came into play thanks to the empathy within the development team. The developers understood that it wasn’t feasible for Terese to work in a windowless basement to test the system in the only available testing environment.

As a result, a framework was set up around the system using SpecFlow, enabling thousands of automated tests written in the Gherkin language. Following BDD (Behavior-Driven Development), these high-level scripts describe the system’s behavior in plain language and automatically exercise the system. The documented tests serve both as a representation of the system’s capabilities that is understandable to humans and as machine-readable, executable test code.
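As a rough sketch of what such a script can look like, here is a hypothetical Gherkin scenario in the style that SpecFlow executes; the feature, steps, station names, and data are invented for illustration and are not taken from Trafikverket’s actual test suite.

    # Hypothetical example – feature, steps, and data are invented for illustration
    Feature: Display of train departures
      Travellers shall see the current status of a train at a station.

      Scenario: A train with no reported deviations is shown as on time
        Given train "2045" is scheduled to depart from "Stockholm C" at "14:32"
        And the train has reported no deviations
        When a traveller views the departure board for "Stockholm C"
        Then train "2045" is listed with departure time "14:32" and status "On time"

In SpecFlow, each Given/When/Then line is bound to a step definition in code, which is what makes the plain-language description executable against the system.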

This approach turned out to be successful. Terese explains that the real challenge in testing is getting everyone to understand the system’s multifaceted and complex behavior. This becomes harder as systems grow more complex, and because the people around the system are naturally different, “inherently irrational and uniquely strange.” The task is to help everyone reach a shared basic understanding, and the tester becomes a key figure in making the system comprehensible and manageable so that meaningful discussions can take place.

I’ve also experienced difficulties in my career with helping people involved in system development understand the relevance of different types of information. For example, error reports that don’t hit the mark are often met with the response, ‘Yes, that’s how we chose to build the system’ — even when it clearly impacts end-users in a negative way.

Testing a system involves bringing up various perspectives, different ways of looking at the system, so that we can use this information to make informed decisions. But if we can’t convey different types of reports in a way that gives the recipient a sense of coherence – meaningful, understandable, and manageable – we miss the opportunity to truly connect through our narratives about the system.

An example script Terese shows involves a train indicating that it is a few minutes late and how the system should handle generating a new forecast. If we can’t weave this into a narrative that we as a group deeply understand, we risk losing the opportunity to maintain constructive discussions about what this perspective means for the system and its users as development continues.
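In hypothetical Gherkin form, such a scenario might read roughly like the sketch below; the train number, times, and forecast rule are my own invention, not the actual script Terese showed.

    # Hypothetical example – numbers, times, and rules are invented for illustration
    Feature: Forecasts for delayed trains

      Scenario: A train reporting a minor delay triggers a new arrival forecast
        Given train "527" has a planned arrival at "Göteborg C" at "09:15"
        When the train reports that it is running 4 minutes late
        Then a new arrival forecast of "09:19" is published for "Göteborg C"
        And the updated forecast is shown in the traffic information channels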

Considering the complex nature of modern systems, it’s no longer sustainable for one person to understand only ONE test script that applies to ONE isolated part of the code, realizing just that ONE capability. Ola Berg highlights in his upcoming book that ‘source code is decision-making’. The code that builds a system is interwoven in complexity, and it’s no longer feasible for a single person to retain it all within their limited cognitive capacity.

Therefore, we need to focus on our collective ability to view the system from different perspectives, discuss interpretations of the system, and accept that it’s okay to embrace each other’s viewpoints. This approach is very similar to how people discuss interpretations of art, where we gain meaningfulness, comprehensibility, and manageability both as individuals and as a group through sharing perspectives with each other. A meaningful narrative unites us as a group and provides us with a deeper understanding.

However, we are not trained to think or express ourselves this way. This is reflected in my conversation with Terese during the event, which might seem chaotic and unprepared. Yet, it also demonstrates how we can start moving towards these types of conversations, which are necessary to train our abilities – skills that won’t develop without focused practice.

Describing a system’s capabilities and behaviors in test scripts can be viewed as representing the system, perhaps in a way that mirrors protagonist-antagonist narratives. This approach gives people working with the system a sense of connection, describing how the system can provide deeper meaning and comprehensibility in a complex world without oversimplifying it, while also making the situation manageable, so we can act on the information available without becoming paralyzed.

Recently, there has been a lot of hype around AI and its potential to further enhance our ability to solve complex development tasks. Can AI help us write meaningful test scripts and ultimately create narratives about the system’s ability to generate value?

Cecile shares that, as a creative writing instructor at Linnaeus University last summer, she noticed that several submitted texts felt hollow and devoid of deeper meaning. This raised suspicions that they might have been machine-generated; an experienced author can recognize such texts by their lack of deeper meaning and of the “human imperfection” that is clearly missing in current AI.

Terese points out that she finds it unlikely that an algorithm could perform her job, since it heavily involves handling “irrational people behaving uniquely strangely,” which is how she, interestingly, describes “human imperfection.” She feels that the team’s way of working is simply logical and right because these uniquely strange qualities of individuals are embraced and respected, allowing everyone’s perspective to be valued within the team’s collective perspective.

But how far has AI developed in this area? Can AI replicate the “humanly imperfect”? Does AI have the capacity to generate value around emotional intelligence in the future?

Watch the entire CLOSE UP part of the event here (YouTube.com)

About the author

Consultant | Sweden
Testing is the acts we do to ease our curiosity while we develop the things we love. I create models that enable faster understanding of complex matters for better judgments on our journey towards authenticity.
