Moving beyond the hype in AI and machine learning?


Expectations are high, but application is yet to come to fruition. We’re talking about artificial intelligence (AI) and machine learning, as discussed in the 2020-2021 World Quality Report from Capgemini and Sogeti, in partnership with Micro Focus, published on November 5, 2020.

There’s a general buzz of excitement at the potential for using AI and machine learning in quality assurance (QA) and testing, just as there was last year. Yet, while our WQR survey findings reveal some evidence of supervised learning, a core part of machine learning (ML), being used to make quality engineering smarter, we’re not seeing the maturity required to show visible results.

Several questions arise for those of us watching the evolution of AI and ML in quality assurance. For example, are we using it as a tool to do something we already do, only better? Or to change what we do altogether? Or simply to carry on as we were, but using a machine instead of a human – in which case, where is the value?

Yet, despite these questions, the future looks bright for AI and ML in this area. Almost nine out of ten respondents (88%) in this year’s WQR survey said that AI was now the strongest growth area of their test activities. Looking ahead, we see a primary goal: preventing defects before they even occur. Think about it. The ability to prevent a defect without having to run tests in the first place. That’s smart.

AI and ML use cases

For now, though, use cases include things like automated root cause analysis, which 58% of this year’s WQR survey respondents rated as extremely or highly relevant. Having said that, we’re inclined to think this reflects aspiration more than actual application at present.

While new use cases for AI and ML are only now emerging, some organizations are ahead of the game. We cite one multinational bank that has been using machine learning to analyze customer usage, seeing which features work best for people. That knowledge is then fed back into the bank’s development strategy.

Elsewhere, we’ve seen organizations running analytics on production incidents and run-time application logs both to conduct a deep intelligent what-if analysis and to predict future quality, as well as to prescribe necessary development and testing activities.
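To make the idea concrete, here is a minimal sketch of this kind of analysis. It is an illustration only, not the tooling any particular organization described to us: it simply counts production incidents per application module and flags modules above a threshold as candidates for extra testing attention. The module names and log structure are invented for the example.

```python
from collections import Counter

def rank_modules_by_risk(incident_log, threshold=2):
    """Count production incidents per module and flag any module whose
    incident count meets the threshold as a candidate for extra testing."""
    counts = Counter(entry["module"] for entry in incident_log)
    ranked = counts.most_common()          # modules sorted by incident count
    flagged = [module for module, n in ranked if n >= threshold]
    return ranked, flagged

# Hypothetical production incident log (fields and names are made up).
log = [
    {"module": "payments", "severity": "high"},
    {"module": "payments", "severity": "low"},
    {"module": "login", "severity": "high"},
    {"module": "payments", "severity": "medium"},
]

ranked, flagged = rank_modules_by_risk(log)
```

A real system would of course feed far richer signals (severity, recency, code churn) into a trained model rather than a simple count, but the principle is the same: let production data prescribe where testing effort goes next.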

And, of course, there’s the use of AI for the generation and management of test data. Here we see it being used to identify test coverage gaps compared to real user experience patterns. It also supports regulatory compliance and ethical use of data when it’s used to create synthetic data, for example to comply with GDPR data privacy rules.
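As a simple illustration of the synthetic-data idea (a sketch, not the approach of any specific WQR respondent), the snippet below generates fake customer records with a seeded random generator: the test data is realistic in shape but contains no real personal information, which is the property that helps with GDPR compliance.

```python
import random
import string

def synthetic_customers(n, seed=42):
    """Generate fake customer records for testing. All values are
    randomly generated, so no real personal data enters the test set."""
    rng = random.Random(seed)  # seeded for reproducible test runs
    records = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        records.append({
            "id": i,
            "name": name.title(),
            "email": f"{name}@example.com",  # reserved example domain
            "age": rng.randint(18, 90),
        })
    return records

customers = synthetic_customers(3)
```

Seeding the generator means the same records come back on every run, so tests stay deterministic while still exercising realistic data shapes.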

Testing ‘of’ or ‘with’

In assessing the state of AI and machine learning in quality assurance, one question that keeps coming up is whether we are using it as a tool to aid QA and testing, or whether we’re assessing the QA of the intelligent machine itself. There is a significant difference between the two. It is particularly difficult to assess the QA of AI, especially when it is continually learning, because you don’t know what the expected outcome is. And, as we point out in the report, there are challenges in achieving holistic coverage of AI systems – for instance, bias in AI.

In general, we expect the benefits to accrue initially when AI and ML are used as a tool to aid QA. For example, to predict patterns, identify the behaviors of coders (not as ‘big brother’ as that might sound), and find indicators of good code and bad code.
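To give a flavor of what “indicators of good code and bad code” might mean in practice, here is a deliberately naive sketch (an illustration, not a real quality model): it computes simple static signals, such as function length and nesting depth, and flags code that may warrant review.

```python
def naive_code_indicators(source):
    """Very naive static indicators: long code and deep nesting are weak
    signals that a piece of code may need review (illustration only)."""
    lines = [l for l in source.splitlines() if l.strip()]
    # Indentation width serves as a rough proxy for nesting depth.
    max_indent = max((len(l) - len(l.lstrip())) for l in lines) if lines else 0
    return {
        "lines": len(lines),
        "max_indent": max_indent,
        "needs_review": len(lines) > 50 or max_indent >= 16,
    }

snippet = "def f(x):\n    if x:\n        return 1\n    return 0\n"
report = naive_code_indicators(snippet)
```

An ML-based approach would learn such indicators from labeled history (for example, which files later produced defects) rather than hand-coding thresholds, but the output is the same kind of signal: a prioritized hint about where quality risk sits.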

Changing skills

Finally, as with any new or emerging technology, ensuring you have the skills to maximize its value is a challenge. In the case of AI, it’s not just about what the technology can do, but about how it can be incorporated into the overall software development lifecycle. This is something to watch going forward. And it’s interesting to note a divergence in how AI and ML change the skills needed from QA and test professionals in the different countries covered by the report. For instance, the greatest overall area of need this year was identified as software development engineer in test (SDET) skills, mentioned by over a third (34%) of respondents. However, in the Netherlands, it was an issue for only 5% of respondents, while in the UK, Belgium and Luxembourg, the figures were over 70%.

Get in touch

If you’d like to hear more about our findings relating to AI and machine learning in quality assurance, please get in touch with me or Andrew Fullen.

Rik Marselis


Rik Marselis is principal quality consultant at Sogeti in the Netherlands. He has assisted many organizations in improving their IT processes, establishing their quality and testing approach, and setting up their quality and test organizations, and he has acted as quality coach, QA consultant, test manager and quality supervisor. Rik draws on more than 40 years of experience in systems development, quality and testing to bring fit-for-purpose solutions to our clients. He focuses on three major tasks:

* Consultancy on quality engineering and testing in the broadest sense (quality and test policy, project startup, process improvement, coaching, second opinions, etc.)
* Developing and giving training courses for both novice and experienced testers (Rik is an accredited trainer for TMAP, TPI and ISTQB certification training courses)
* Research and development of the quality engineering and testing profession.

Rik has contributed to over 20 books on quality and testing, five of them as a main author and five as project leader. His most recent book in the TMAP body of knowledge is “Quality for DevOps teams”. Rik is a much-appreciated keynote speaker and workshop host at conferences, having presented in over 15 countries.

More on Rik Marselis.
