Whenever anyone asks me to explain Cognitive QA and the tools we’re developing at Sogeti, it makes me think about the discovery that, over the past 20,000 years, the average volume of the human brain has decreased from 1,500cc to 1,350cc. Professor Gerald Crabtree, paleoanthropologist John Hawks and cognitive scientist David Geary are amongst the scientists who’ve written research papers on the subject. There has been much debate in the scientific world, with little firm conclusion, about whether this means our intellect is in decline and we are entering an “Idiocracy” (like the movie), or whether it doesn’t, in fact, affect our intelligence at all.
Whatever the truth is, one thing is for sure: technology has advanced to such a degree that, throughout the development lifecycle, we are consistently required to do more with less, and to do it faster, far beyond the capability of the human brain. This phenomenon has led to an explosion in the popularity of AI and cognitive tools. We’ve moved fairly quickly from a world where, as testers, we were working on 2-year projects and monolithic 3-month releases, to a requirement for releases every week or even every day.
Once there was time for a test lead to look at numbers and data and make educated guesses based on experience and trends, but we no longer have that luxury. With the advent of continuous testing, the time to think has evaporated, and our human brains need the help of Artificial Intelligence, cognitive tools, and automation; these tools need to be constantly updated to support our decision-making. At present these tools are not “AI” in the pure sense of the word; they are built on very clever algorithms, routines and applications, and, at Sogeti, we prefer the term “Cognitive QA”. Based on 50 years of testing and QA experience, we feed the patterns, processes, and trends into machines which can see new patterns emerging that the human brain fails to see; machines that don’t get derailed by shiny distractions or make errors in the way a person might when repeating the same monotonous task over and over.
We’ve most recently been working with Amazon Alexa’s open source API to optimise our testing. For example, it’s often the case that I need to run a regression pack tonight, and I know the last run took 8 hours but I only have 6 hours to complete the task. In these circumstances, I can instruct our cognitive system, with Alexa as the audio trigger, to prioritise the most relevant and valuable tests and compress the run to 6 hours without diminishing its efficacy. As the development began in our Spanish offices and the tool is still in the development and testing phase, I’ve had to develop (a still rather unconvincing) Spanish accent. That aside, the other beauty of this cognitive tool is that I can ask it to run a task without having to stop the equally important task I’m working on. I’ve grown really quite fond of this Cognitive QA tool and caught myself, just this week, wondering if I could find a rather fetching fluffy onesie for it on the interwebs.
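To make the compression idea concrete, here is a minimal sketch of one way such a run could be squeezed into a smaller window: rank tests by an assumed value-per-minute score and greedily fill the time budget. The test names, durations, and value scores below are invented for illustration; this is not Sogeti’s actual algorithm.

```python
# Hypothetical sketch: pick the highest-value subset of a regression
# pack that fits a 6-hour (360-minute) window, greedily by value density.

def compress_run(tests, budget_minutes):
    """Greedy selection by value-per-minute until the budget is spent."""
    ranked = sorted(tests, key=lambda t: t["value"] / t["minutes"], reverse=True)
    selected, used = [], 0
    for t in ranked:
        if used + t["minutes"] <= budget_minutes:
            selected.append(t["name"])
            used += t["minutes"]
    return selected, used

# Invented example pack totalling 10 hours of runtime.
regression_pack = [
    {"name": "login_flow",      "minutes": 90,  "value": 9},
    {"name": "checkout",        "minutes": 120, "value": 10},
    {"name": "report_export",   "minutes": 150, "value": 4},
    {"name": "profile_editing", "minutes": 60,  "value": 6},
    {"name": "legacy_imports",  "minutes": 180, "value": 3},
]

chosen, total = compress_run(regression_pack, budget_minutes=360)
```

A real system would of course derive the value scores from defect history and code-change data rather than hard-coding them, but the shape of the decision is the same: spend the limited hours where they buy the most risk coverage.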
Barry’s Bionic Eye for Detail
Sogeti is currently providing several clients with an end-to-end Managed Test Service. This means I need to review several test strategy drafts from several different authors. By hand, each review takes 3–4 hours. I found that the first drafts I was reviewing shared a lot of common shortcomings, so I decided to introduce an automated initial check of each draft, prior to my critical intervention, to save time and ensure consistency and quality. The Cognitive QA tool that I’ve developed performs a series of common checks, makes simple recommendations, and emails them to the author to be implemented before the strategy is returned to me for my personal review and comments. Some of the scores the tool uses include:
- Estimated reading time, based on 200 words per minute, to determine whether the first draft is too long and wordy to present to the client and test team.
- The presence of essential keywords, such as “risks”, “issues” and “non-functional testing”.
- The quantity of information on each topic; if there is a significant amount of writing in one area, the tool may suggest pulling that section out as a separate document and addendum to the main strategy.
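The checks above can be sketched in a few lines of code. The thresholds, keyword list, and recommendation wording below are illustrative assumptions, not the actual rules baked into the tool.

```python
# A hedged sketch of a first-pass draft review: reading time at 200 wpm,
# required keywords, and section balance. All thresholds are invented.

REQUIRED_KEYWORDS = ["risks", "issues", "non-functional testing"]
WORDS_PER_MINUTE = 200
MAX_READING_MINUTES = 45   # assumed upper bound for a presentable first draft
MAX_SECTION_SHARE = 0.4    # assumed: flag a section over 40% of the total text

def review_draft(sections):
    """sections: dict of section title -> body text. Returns a list of recommendations."""
    full_text = " ".join(sections.values()).lower()
    word_counts = {title: len(body.split()) for title, body in sections.items()}
    total_words = sum(word_counts.values())
    recommendations = []

    # 1. Estimated reading time at 200 words per minute.
    minutes = total_words / WORDS_PER_MINUTE
    if minutes > MAX_READING_MINUTES:
        recommendations.append(f"Draft takes ~{minutes:.0f} min to read; consider trimming.")

    # 2. Essential keywords that should appear somewhere in the strategy.
    for kw in REQUIRED_KEYWORDS:
        if kw not in full_text:
            recommendations.append(f'Missing expected topic: "{kw}".')

    # 3. Balance: suggest splitting out a disproportionately large section.
    for title, count in word_counts.items():
        if total_words and count / total_words > MAX_SECTION_SHARE:
            recommendations.append(f'Section "{title}" dominates the draft; consider an addendum.')

    return recommendations
```

In practice the output would be formatted into the email sent back to the author, but the core is just a checklist run mechanically and consistently over every draft.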
It’s a powerful tool that we find extremely useful and, in case you’re wondering, I’ve named it “Barry” after our extremely talented and efficient Delivery Director, who has an exceptional eye for detail.
It’s Been Emotional
We’re developing a whole suite of these cognitive tools to ensure that the team is always at its most productive and efficient. Another of my favourites, currently in development, is a tool designed to understand how we make decisions. For example, it examines the test completion reports that determine whether we can move on to the next stage or go live. Using APIs from Microsoft, Google and Amazon, the tool takes pairs and triplets of words in the report and gauges their sentiment and emotional connotation. It then grades the report on a scale from negative to positive. If the report seems pessimistic, we can examine the project in detail to determine what the challenges are and why the author has taken a negative view. If, for example, the report carries negative sentiment and the conclusion is “don’t go live”, then the outcome reinforces the sentiment. If the report seems positive but the conclusion is not to go live or progress, then we need to look into the apparently mixed message for more information, and to see why this conclusion has been drawn.
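The mismatch check at the heart of this can be illustrated with a toy version. For brevity this sketch scores single words against a tiny hand-rolled lexicon, rather than the word pairs and triplets scored via the Microsoft, Google and Amazon APIs that the real tool calls; every word list and threshold here is an assumption.

```python
# Illustrative sketch: crude sentiment score for a report, then a check
# that the tone and the go/no-go conclusion point the same way.

NEGATIVE = {"blocked", "failed", "unstable", "slipped", "defects"}
POSITIVE = {"passed", "stable", "complete", "resolved", "successful"}

def report_sentiment(text):
    """Crude score in [-1, 1]: positive minus negative hits over total hits."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def check_consistency(text, go_live):
    """Flag reports whose tone and conclusion disagree, as described above."""
    score = report_sentiment(text)
    if score > 0 and not go_live:
        return "mixed message: positive tone but no-go -- investigate"
    if score < 0 and go_live:
        return "mixed message: negative tone but go -- investigate"
    return "consistent"
```

A consistent pairing (negative report, no-go conclusion) passes quietly; a positive report paired with a no-go decision is exactly the “mixed message” case the tool escalates for human attention.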
As we grow our understanding of human decision-making in this way, we can start to apply the logic to build rules. These rules can then be programmed into our cognitive computer systems, enabling them to make ever more complex decisions based on, and mimicking, the thought processes of our human Test Managers. This, in turn, frees our testers up to do the work that (for now, at least) only a human can do.