
When AI “goes bad”

Richard Fall
January 16, 2019

The benefits of the wider use of Artificial Intelligence (AI) are clear to the businesses that are rushing to make use of it.

We see chatbots replacing humans in customer service roles, reducing operating costs for the businesses that deploy them.

We see personalized selections on social media, entertainment channels, and e-commerce sites that tailor the offerings to the user’s behaviors, providing a more targeted selection of items.

We see it when we use Google Maps to get directions to a destination.

These are among the more visible uses of AI–visible in the sense that, if we take a moment to think about the “man behind the curtain”, we know that there must be an AI component producing the results we see. Netflix recommendations are a good example of this.

There are, unsurprisingly, more “hidden” uses of AI–systems in which AI is a component but of which we, as beneficiaries, are unaware.

One such use: in the United States, a widely used system provides sentencing recommendations to judges, based on machine learning intended to produce a sentence that “best fits the crime and the criminal”. I have written about the perils of this system here.

These “hidden” uses of AI are intended to benefit us–for instance, by removing the human component of decision-making, which so often falls prey to human fallibility. But these uses, while in the main serving to provide safety, sometimes do exactly the opposite.

The law of unintended consequences applies to AI decisions just as much as to human decisions.

A case in point: late last year, Lion Air Flight 610, operating out of Indonesia, crashed shortly after takeoff from Soekarno–Hatta International Airport in Jakarta, killing all 189 souls aboard.

This October 29, 2018 crash could have been just another case of human error, either by the pilots or the maintenance crew.

But, upon investigation, it became clear that this disaster was a result of the “bad side” of AI.

The failure of a single Angle of Attack (AoA) sensor on the aircraft caused the flight control system to conclude that the aircraft was pitched up too steeply, and it took corrective measures, pitching the nose down. Older flight control systems would simply have warned the crew about the condition and left them to take corrective action, without direct intervention by the system itself.

However, the flight control system of the Boeing 737 MAX 8 was the newest available, and included an AI component that took a more active approach to the problem.
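The design difference is easy to see in outline. The sketch below is purely illustrative–the function names, thresholds, and single-sensor logic are my own assumptions, not Boeing’s actual implementation–but it shows how a system that intervenes automatically on the basis of a single AoA reading can be driven to a nose-down command by one faulty sensor, while an alert-only or cross-checked design leaves the decision with the crew.

```python
# Illustrative sketch only: names, thresholds, and logic are assumptions,
# not the actual 737 MAX flight control implementation.

AOA_LIMIT_DEG = 15.0  # hypothetical angle-of-attack threshold


def alert_only_response(aoa_deg):
    """Older-style design: warn the crew and leave corrective action to them."""
    if aoa_deg > AOA_LIMIT_DEG:
        return "WARN: high angle of attack -- crew decides what to do"
    return "no action"


def automated_response(aoa_sensor_a, aoa_sensor_b=None):
    """Intervening design: commands nose-down trim on its own.

    If it trusts a single sensor, one bad reading is enough to trigger
    a nose-down command even when the aircraft is actually flying normally.
    """
    if aoa_sensor_b is not None and abs(aoa_sensor_a - aoa_sensor_b) > 5.0:
        # Cross-check: the two sensors disagree, so fall back to warning the crew.
        return "WARN: AoA sensor disagreement -- no automatic intervention"
    if aoa_sensor_a > AOA_LIMIT_DEG:
        return "COMMAND: pitch nose down"
    return "no action"


# One failed sensor reads 25 degrees while the other reads a normal 5 degrees:
print(automated_response(25.0))        # single-sensor design -> nose-down command
print(automated_response(25.0, 5.0))   # cross-checked design -> warning only
```

The real system was far more involved than this, of course, but the underlying point stands: the more authority an automated system takes for itself, the more its sensor inputs and its override paths matter.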

As a result, when the Lion Air crew attempted to bring the nose of the aircraft back up, they were fought by the AI-based flight control system which, acting on erroneous sensor data, “knew better” than the crew what action to take.

The end result: after an 11-minute battle, the flight control system “won” and the plane dove into the Java Sea, without the flight crew ever fully understanding the problem.

“We still don’t know the exact reason the pilots of that fatal flight couldn’t disable the smart system and return to manual control. It looks as if the sensors were off, instigating the downward spiral. A report by the Federal Aviation Administration in 2013 found that 60 percent of accidents over a decade were linked to confusion between pilots and automated systems.”
(“The Deadly Soul of a New Machine”, New York Times)

The last sentence in that quote should be alarming: the majority of airline accidents were not due to human error, per se, but to a battle of wills between humans and machines. This is not what the designers of AI-based systems intend.

None of this is intended to suggest that we are better off without AI in our lives–quite the contrary. It is meant as a reminder that, as AI inevitably becomes a larger component of the systems we interact with every day, whether its presence is obvious or hidden, we should remember that there are always humans who are ultimately affected by those systems. And that, at least in some cases, humans may be in a better position to make decisions–until the day, if ever, that AI systems are fully capable of providing unalloyed benefit and can be entrusted with human safety without human intervention.

That day, I submit, is still far off, and until then we should be wary of assuming that AI systems will never “go bad”.

About the author

National Solutions Architect | United States
Richard has been a practice lead in the Digital Transformations (formerly Mobility) practice at Sogeti for 2-1/2 years, originally in the Des Moines office and now in the Minneapolis office. In that role, he has led major architecture efforts at a number of Sogeti clients, resulting in creative solutions to difficult problems and winning client loyalty and business.
