
Do we know what our app is supposed to do?

Bart Vanparys
June 08, 2020

Our success depends on the systems that we build and run. So, we need to be confident that these systems do what they should do.

We explore, we design, we plan, we build, we test, we operate. All of it with one objective in mind: ensure that our solutions do what they should do. But knowing “what the system should do” is not always straightforward.

The first step is simply to remember what the system was supposed to do in the first place.

We create new features in a creative burst of exploration, design, and building activities. During the iteration, we communicate intensively within the team. This allows us to ensure the quality and value of the increment. And then we start the next cycle and the next…

Of course, our users rely on the features that we built in earlier cycles. So, we need to make sure that adding or changing features doesn’t break them. Therefore we do regression testing. But the sprints in which we built those features already feel like a long time ago. And the information about these features (so clear when it was discussed with the team at the time) might not be available anymore.

We risk forgetting what the system is supposed to do. This is especially true for complex systems with many features and moving parts. It becomes even more problematic when people leave the team and take the knowledge of those features with them.

How do we keep this knowledge without going back to producing extensive requirements and design documentation? Those documents take a lot of time to produce, go out of date quickly, and are very hard to maintain. Alternative approaches are:

  • Set up a tool-based requirements management system in which information about features and system capabilities can be managed. Ideally, this is integrated with tooling that links them to (regression) test suites.
  • Explore graphical representations of the system and its features. This includes architecture diagrams but also mindmaps and process diagrams. A picture sometimes does save a thousand words…
  • Use model-based design with dedicated tools that focus on the core features and system logic. These models make change impact analysis much easier, and documentation can be generated after each update to the model. Through model-based testing, we can even generate new test scenarios automatically to reach adequate test coverage.
  • Consider using the regression test suite as a reliable oracle for what the system is supposed to do. Especially through Acceptance Test-Driven Development, the test scenarios can grow into an ideal way to express the expected system features and behavior, supported by examples (see the sketch after this list).
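As a minimal sketch of that last point, an acceptance test can state the expected behavior as a concrete example in Given/When/Then form. The `ShoppingCart` class and its discount rule below are hypothetical, invented only to illustrate the structure; in a real ATDD setup such scenarios would typically live in a BDD tool (for example Cucumber or pytest-bdd) and be written together with the business.

```python
# Hypothetical acceptance test: the expected behavior expressed as an example.
# The ShoppingCart domain is invented for illustration only.


class ShoppingCart:
    """Toy domain object standing in for a real application feature."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        subtotal = sum(price for _, price in self.items)
        # Business rule under test: 10% discount on orders of 100 or more.
        return subtotal * 0.9 if subtotal >= 100 else subtotal


def test_order_of_100_or_more_gets_a_10_percent_discount():
    # Given a cart with goods worth 120
    cart = ShoppingCart()
    cart.add("keyboard", 80)
    cart.add("mouse", 40)

    # When the total is calculated
    total = cart.total()

    # Then a 10% discount is applied
    assert total == 108
```

Run with pytest, such scenarios double as living documentation: the test name and the Given/When/Then steps record what the feature is supposed to do long after the sprint in which it was built.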

“What the system is supposed to do” is no longer stable and predictable

A second problem is that our apps might be used in different and unexpected ways in real life. Actual users will find creative ways to use the features that we build. These features will be integrated and combined with other systems and services to support new business operations. Our initial user stories (and tests) no longer cover how the system is actually used.

If we break those real-world usage patterns, users will complain. Even if all our initial user stories still work, users will experience it as a quality degradation.

So we need to learn how “what the system is supposed to do” changes under real usage. This need grows as we add machine learning and similar features, where it becomes increasingly difficult to dictate in advance what the system is supposed to do. The system is supposed to learn and adapt to do things in new and better ways.

Learning how our system is being used requires setting up analytics: gathering information about actual usage patterns, feature usage statistics, patterns in production data…
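As a minimal sketch of such feature-usage analytics, assuming a hypothetical app that simply counts how often each feature is exercised: real set-ups would normally rely on dedicated product-analytics or observability tooling rather than a hand-rolled counter.

```python
# Hypothetical sketch: counting feature usage to learn actual usage patterns.
from collections import Counter
from datetime import datetime, timezone

usage_counts = Counter()
usage_log = []  # raw events, e.g. to be shipped to an analytics pipeline


def track_feature(feature_name, user_id):
    """Record one usage event for a feature."""
    usage_counts[feature_name] += 1
    usage_log.append({
        "feature": feature_name,
        "user": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })


# Simulated traffic standing in for real production events.
for _ in range(50):
    track_feature("search", user_id="u1")
for _ in range(3):
    track_feature("export_csv", user_id="u2")

# Feature usage statistics: which features carry the real load?
for feature, count in usage_counts.most_common():
    print(f"{feature}: {count} uses")
```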

This information complements our knowledge of the initial intent of the system with actual usage. It can be fed into the models that we created. It can also be used to strengthen our regression test suite.

Interesting initiatives are underway that feed production analytics into existing automated test suites. Through combinations of analytics and AI, existing test automation is adapted in line with actual usage patterns.
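One simple form of that idea is to re-prioritize the regression suite by production usage. The snippet below is a hedged sketch: the usage counts and the feature-to-test mapping are invented, and a real initiative would typically plug this into the test runner or a test-impact-analysis tool rather than print a list.

```python
# Hypothetical sketch: re-prioritizing regression tests by production usage.

# Usage statistics as they might come out of the analytics set-up above.
feature_usage = {"search": 15230, "checkout": 4120, "export_csv": 37}

# Mapping of features to their regression tests (normally derived from
# traceability data in the test or requirements management tool).
tests_by_feature = {
    "search": ["test_search_basic", "test_search_filters"],
    "checkout": ["test_checkout_happy_path"],
    "export_csv": ["test_export_csv_format"],
}

# Run the tests covering the most heavily used features first, so the suite
# gives the fastest feedback on the behavior users rely on most.
prioritised = sorted(
    tests_by_feature,
    key=lambda feature: feature_usage.get(feature, 0),
    reverse=True,
)

for feature in prioritised:
    for test_name in tests_by_feature[feature]:
        print(f"run {test_name}  (feature '{feature}' used "
              f"{feature_usage.get(feature, 0)} times in production)")
```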

The reward is a much higher degree of continuity and assurance that “the system does what it’s supposed to do”.

About the author

Director | Belgium
Bart graduated as Commercial Engineer (KU Leuven, Belgium) and has worked in IT consultancy since 2000. In 2011, he joined Capgemini Belgium where he took a lead role in building the Testing & Quality Management practice. He is currently supporting organizations in building testing & quality engineering capabilities.
