
IS AI NOW SMART ENOUGH TO INCORPORATE “CHURCHILLIAN LOGIC” AND HELP US BREAK THE CYCLE OF REPEATING POOR DECISIONS OR OUTCOMES?

August 15, 2025
Alistair Gerrard

It is often said that the most famous quotations from the Twentieth Century can be attributed to either Winston Churchill or Oscar Wilde. Yet the quote which came to mind to spark this piece was in fact by philosopher George Santayana (or Jorge Agustín Nicolás Ruiz de Santayana y Borrás in his native Spain!), and it goes like this:


Those who cannot remember the past are condemned to repeat it.


This original quote has spawned a few variations over time and, it seems, several misattributions. Those are not for analysis or discussion here; instead I want to focus on the meaning of the original quote.

At its simplest level, this quote embodies the notion of looking to the past to avoid repeating previous mistakes. That safeguard can take the form of historical awareness, of cultural and collective memory, or, on a more personal level, of learning and growing from our own experiences and mistakes.

If we apply this lesson to software testing through the lens of artificial intelligence, there is a possibility we can avoid time-consuming or costly issues before they arise. Sogeti's Cognitive QA was powered by AI to provide just this functionality. By learning from previous patterns and behaviours, Cognitive QA was able to pre-empt defects and help focus test efforts where they were needed most to ensure success.

Cognitive QA was therefore able to bring historical learning to Quality Engineering and Testing activities, using pattern recognition over time to build an understanding of past events. It could even identify the events which led to particular outcomes, using causal inference to direct our attention to the items that most urgently need it: items which often appear on track at the current point in time, yet which, when viewed through the lens of patterns in historical data, have a higher probability of soon becoming issues than items which already appear to be veering off track.
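To make the idea concrete, here is a minimal sketch in Python of how historical patterns might be used to rank current work items by defect risk. The feature names, the data, and the model choice (scikit-learn's random forest) are my own illustrative assumptions, not Cognitive QA's actual implementation.

```python
# Hedged sketch: predicting defect risk from historical patterns.
# All features and data below are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: one row per module per release.
# Features: recent code churn, defects in the previous release, test coverage (%).
history_X = [
    [120, 4, 55],
    [ 10, 0, 92],
    [ 85, 2, 60],
    [  5, 0, 88],
    [200, 6, 40],
    [ 30, 1, 75],
]
history_y = [1, 0, 1, 0, 1, 0]  # 1 = a defect surfaced after release

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(history_X, history_y)

# Current release: a module that looks "on track" today may still carry
# elevated risk once historical patterns are taken into account.
current = {"billing": [95, 3, 58], "search": [12, 0, 90]}
for name, features in current.items():
    risk = model.predict_proba([features])[0][1]
    print(f"{name}: predicted defect risk {risk:.0%}")
```

Even this toy version captures the key behaviour described above: the ranking is driven by resemblance to past failures, not by how healthy an item looks today.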

A core enhancement to systems based on historical learning has been the introduction of agentic systems, capable of acting independently based on goals, but also of learning from feedback and adapting their behaviour accordingly. Such systems use reasoning or planning to pursue specific objectives.
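The sketch below illustrates that loop in its simplest form: an agent repeatedly chooses a testing strategy, observes the payoff, and updates its estimates, gradually adapting towards the most effective behaviour. The strategy names and payoff figures are invented assumptions, and the simulated feedback function stands in for a real environment.

```python
import random

# Hedged sketch of an agentic feedback loop: act, observe, adapt.
actions = ["regression_suite", "exploratory", "performance"]
value = {a: 0.0 for a in actions}   # learned estimate of each action's payoff
counts = {a: 0 for a in actions}

def feedback(action):
    # Stand-in for the real environment: defects found per unit of effort.
    payoff = {"regression_suite": 0.6, "exploratory": 0.8, "performance": 0.3}
    return payoff[action] + random.uniform(-0.2, 0.2)

for step in range(100):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=value.get)
    reward = feedback(action)
    counts[action] += 1
    # Incremental mean update: this is where behaviour adapts to feedback.
    value[action] += (reward - value[action]) / counts[action]

print("converged on:", max(actions, key=value.get))
```

This is only the skeleton of an agent; real agentic systems add explicit goals, planning, and richer state, but the act-observe-adapt cycle is the same.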

The real power of AI can be unleashed when historical learning and agentic systems are combined. We can test hypotheses and strategies against historical data sets. Major events like Black Monday (October 19, 1987), 9/11, and the 2020 COVID-19 outbreak could be reimagined with different proactive and reactive strategies, allowing us to assess how alternative responses might have changed the outcomes. On a smaller scale, IT projects such as warehouse automation, a new insurance platform, or a new airport terminal could serve as material to learn from, preventing the repetition of expensive and reputationally damaging mistakes. And, referring back to the wise words of George Santayana, such systems can identify analogous scenarios and predict future outcomes, giving us advance warning and enabling us to test different measures to avoid repeating an undesirable outcome. All from a system which will also learn and adapt as more data is made available for it to consume and process.
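As a toy illustration of this kind of "what-if" replay, the sketch below re-runs a recorded demand timeline under two alternative response strategies and compares the outcomes. The timeline, the capacity model, and both strategies are entirely hypothetical.

```python
# Hedged sketch: counterfactual replay of a historical scenario.
# The recorded timeline: (day, demand observed in the historical record).
timeline = [(1, 100), (2, 180), (3, 400), (4, 950), (5, 700)]

def reactive(capacity, demand):
    # Scale up only after demand has already exceeded capacity.
    return capacity * 2 if demand > capacity else capacity

def proactive(capacity, demand):
    # Scale well ahead of the observed trend, before a breach occurs.
    return max(capacity, demand * 3)

for strategy in (reactive, proactive):
    capacity, outages = 200, 0
    for day, demand in timeline:
        if demand > capacity:
            outages += 1          # the system failed on this day
        capacity = strategy(capacity, demand)
    print(f"{strategy.__name__}: {outages} day(s) of outage")
```

Replayed over the same history, the reactive strategy suffers outages that the proactive one avoids; scaled up to real data sets, this is the mechanism by which alternative responses to past events can be evaluated.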

There remain challenges to overcome, though. Historical data may incorporate bias which skews learning. Early voice-controlled assistants, such as Alexa and Google Assistant, struggled to recognize female voices because they had initially been trained largely on male voices. In 2016, Microsoft's Tay chatbot was shut down after it posted increasingly inflammatory and offensive messages, having allegedly been "trained" by a sustained attack which exploited the bot's habit of learning how to reply from its interactions with people.

And so, as AI evolves (or is being evolved?) from its recent surge in popularity, I believe it will be able to learn from vast datasets, but also to learn to prefer acceptable behaviour and outcomes over unacceptable ones. At this stage of AI's development, though, we remain obligated to steer that development in the right direction. The risk of not doing so is reputational damage, which could have serious consequences for an organization, even after only a very short period of time.

At Sogeti we have demonstrated our dedication to getting this right, both by adapting our testing methodologies to include testing AI, and in the development of our AI-based accelerators, where our focus on controlling the inputs and outputs provides confidence in the quality of the results.

About the author

Managing Consultant | UK
Alistair leads technical and non-functional testing initiatives, combining deep technical expertise with strategic thinking to drive project delivery and team leadership. His role spans supporting pre-sales, mentoring testers, and shaping testing strategy at Sogeti UK.
