A tester’s thoughts on Automation and AI: 3

Eva Holmquist
May 18, 2021

There is a common belief that AI makes unbiased decisions. In reality, the kind of narrow artificial intelligence that exists today is far from unbiased; see for instance Richard Fall’s articles, When AI “goes bad” and Algorithms and Bias in the Criminal Justice System, which discuss this very issue. There is also a great article by Rahul Bhargava on the need to shift focus from the learning to the teaching aspect of machine learning. Because that’s the thing: no machine learns in a vacuum. They learn in our world, which is full of bias. It’s also humans who choose which training data to use and what criteria to base decisions on. There are many evident examples of bias in systems with learning capabilities: racist Twitter bots, recruitment systems that only choose male applicants, and systems that predict black defendants have a higher risk of recidivism than they actually do. I wonder how many biased systems there are that we haven’t discovered…

This means that when we test systems with learning capabilities, it’s important that we test for these aspects as well. Is the system behaving with unacceptable prejudice? And because it continues to learn, we also need to monitor it to catch biased behaviour; a minimal sketch of what such a check could look like follows below. How do you suggest we do this?
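One possible starting point is to treat a fairness metric as a test oracle. The sketch below is a minimal Python illustration, not a definitive method: it assumes a classifier we can call and test data where each record carries a group label. The predict stub, the field names and the 0.1 threshold are all illustrative assumptions. In the spirit of demographic parity, it computes the selection rate per group and fails when the gap between groups exceeds the agreed limit.

    from collections import defaultdict

    # Assumed threshold: the maximum accepted selection-rate gap between groups.
    SELECTION_RATE_GAP_LIMIT = 0.1

    def predict(record):
        # Stand-in for the system under test, e.g. a recruitment model that
        # returns True when a candidate is shortlisted. Hypothetical logic.
        return record["years_experience"] >= 5

    def selection_rates(records):
        # Share of positive predictions per group.
        totals = defaultdict(int)
        positives = defaultdict(int)
        for record in records:
            totals[record["group"]] += 1
            if predict(record):
                positives[record["group"]] += 1
        return {group: positives[group] / totals[group] for group in totals}

    def check_demographic_parity(records):
        rates = selection_rates(records)
        gap = max(rates.values()) - min(rates.values())
        assert gap <= SELECTION_RATE_GAP_LIMIT, (
            f"Selection-rate gap {gap:.2f} exceeds {SELECTION_RATE_GAP_LIMIT}: {rates}"
        )

    if __name__ == "__main__":
        # Deliberately skewed sample data, so the check fails and flags the bias.
        test_records = [
            {"group": "A", "years_experience": 6},
            {"group": "A", "years_experience": 8},
            {"group": "B", "years_experience": 6},
            {"group": "B", "years_experience": 2},
        ]
        check_demographic_parity(test_records)

The same kind of check could be scheduled against samples of live traffic, which is one way to address the monitoring question: a system that keeps learning needs its fairness metrics re-evaluated continuously, not just at release.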

About the author

Senior Test Specialist | Sweden
Eva Holmquist has more than twenty-eight years of professional IT experience, working as a programmer, a project manager, and at every level of the testing hierarchy, from tester to test manager.
