Never Mind Why! Let’s get an AI

Andrew Fullen
August 13, 2018

Reading through the Quality Characteristics report (https://www.sogeti.com/globalassets/global/downloads/reports/machine-intelligence—quality-characteristics.pdf), it struck me that every time we create a new technology to help us fix a problem, many of the same problems – knowledge gaps, hype, high cost/low return, and products rushed to market before they or the market are ready – come back to haunt us. And they always bring new “friends” tagging along as well.

Now we have a situation in which an AI can produce answers that we cannot validate, because it has been learning or seeing things that we cannot see or learn. This is both a problem and an opportunity: while the answer may be correct for the AI, it is not necessarily correct for the user or the business. We could be asking the wrong question based on what we expect, or selecting data that supports our preconceptions – facial recognition algorithms have turned out to be very good at recognizing the people who work on facial recognition, because they used their own photos too often!

Already I am seeing organizations implement an AI first and only then think about what they can do with it. Often they want to use it as a direct replacement for an existing system, and they want the same answers they already get. But there is so much more that can be done with an AI. AIs can learn. They can process far more data than we can, open that data up, and give us insights and patterns we have never seen before. In exchange, they give us “unexpected results” – not as errors, but as legitimate, valid outcomes.

For many people involved in IT, especially in testing, “expected results” is the mantra. But what do we do when we don’t know what the expected results should be? How do we trust that the machine is telling us what we need to know, rather than what we think we need to know? (One practical answer from the testing world is sketched at the end of this post.)

To make the best of an AI, it all comes down to the data and how good it is. Look at the case of IBM Watson’s oncology system. It has recently been reported that Watson was making very unsafe medical suggestions (https://nation.com.pk/29-Jul-2018/ibm-s-ai-suggested-unsafe-treatment-for-cancer-patients). No patients were at risk, because the hospitals identified the flaw in the recommendations. Investigations by the hospitals and IBM showed that the problem lay not with the Watson approach but with the data it was trained on. Rather than real patient records, it had been fed hypothetical cases generated by doctors associated with the project, and, like all of us, those doctors brought their own experiences and biases to the data they created. That in turn took Watson down paths it would never have taken had it been trained on real-world, non-synthetic data. The quality and the realness of the data were critical.

If you want to see more about how to address this and other related issues, check out https://www.sogeti.com/globalassets/global/downloads/reports/machine-intelligence—quality-characteristics.pdf
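So what can a tester do when there is no “expected result” to assert against? One technique that fits this situation is metamorphic testing: instead of checking an exact output, you check relations that should hold between the outputs of related inputs. The sketch below is a minimal illustration of the idea; `predict_risk` is a hypothetical stand-in I have invented for an opaque model, not any system mentioned above.

```python
# Minimal metamorphic-testing sketch: we cannot say what the "expected
# result" of the model is, but we can assert relations any sane model
# should satisfy. predict_risk is a toy stand-in, not a real system.

def predict_risk(age_years: float, income_gbp: float) -> float:
    """Toy placeholder for an opaque ML model; returns a score in [0, 1]."""
    score = 0.5 + age_years / 200.0 - income_gbp / 200_000.0
    return max(0.0, min(1.0, score))

def test_monotonic_in_income() -> None:
    # Relation: raising income, all else equal, must never raise the risk.
    base = predict_risk(age_years=40, income_gbp=30_000)
    richer = predict_risk(age_years=40, income_gbp=60_000)
    assert richer <= base, "risk score rose when income rose"

def test_stable_under_tiny_perturbation() -> None:
    # Relation: a negligible change in input must not swing the outcome.
    a = predict_risk(age_years=40.0, income_gbp=30_000.00)
    b = predict_risk(age_years=40.0, income_gbp=30_000.01)
    assert abs(a - b) < 0.01, "model is unstable to a 1p income change"

if __name__ == "__main__":
    test_monotonic_in_income()
    test_stable_under_tiny_perturbation()
    print("All metamorphic relations held.")
```

The point is that the relations themselves encode domain knowledge, so the tests stay meaningful even when no one can say in advance what the “right” score is.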

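The Watson story is also easy to reproduce in miniature. The sketch below – entirely invented data, with scikit-learn as my choice of library rather than anything from the article – trains a classifier on “synthetic” cases that encode their authors’ bias, then scores it on “real” cases where that bias does not exist.

```python
# Sketch of how biased synthetic training data misleads a model.
# All data here is invented for illustration; it is not the Watson data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cases(n: int, bias: float) -> tuple[np.ndarray, np.ndarray]:
    """Two features; `bias` controls how strongly feature 1 tracks the label."""
    y = rng.integers(0, 2, size=n)
    f0 = y + rng.normal(0.0, 1.0, size=n)          # genuine signal
    f1 = bias * y + rng.normal(0.0, 1.0, size=n)   # spurious "signal"
    return np.column_stack([f0, f1]), y

# "Hypothetical cases written by the project's own doctors": feature 1
# looks highly predictive because the authors, consciously or not,
# wrote it that way.
X_synth, y_synth = make_cases(2_000, bias=3.0)

# "Real-world cases": feature 1 carries no signal at all.
X_real, y_real = make_cases(2_000, bias=0.0)

model = LogisticRegression().fit(X_synth, y_synth)
print(f"accuracy on synthetic data: {model.score(X_synth, y_synth):.2f}")
print(f"accuracy on real-world data: {model.score(X_real, y_real):.2f}")
# The gap between the two scores is the cost of training on data that
# encodes its authors' expectations rather than the world.
```

The model looks excellent on the data it was trained on and noticeably worse on data from the real distribution – exactly the failure mode the hospitals caught.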
About the author

Head of Technology and Innovation | Sogeti UK
Andrew is the Head of Technology and Innovation for Sogeti UK, having joined the group back in 2009. In this role, he has worked with major clients across both government and private sectors.
