The Dark Side of AI

Mar 7, 2022
Steve De Smet

No, this blog is not about the next Skynet or a potential uprising of our robot overlords. Nor is it about bias in AI or the other well-known flaws linked to its use. This blog shines a light on a lesser-known topic: what if AI becomes an impediment in your development lifecycle instead of an accelerator?

Regardless of which type of Artificial Intelligence we’re talking about, we cannot ignore that ‘AI’ has been a hot topic across a variety of sectors for the past decade. AI is great, AI is the future, everything will be faster, better, and more optimized with AI. It is regularly over-hyped and has, on occasion, become the goal instead of the means to an end.

Sometimes AI can be a great accelerator, sometimes it is a gimmick at best – and in some instances, it can become a burden on your development pipeline.

If you know The Avengers, you know we’re still quite a while away from (mass-)producing Jarvis-like AIs: an all-knowing, self-learning and ever-evolving AI that could very easily pass the Turing test.

Rather, in our day-to-day lives and our client assignments, we work with ‘dumb’ AIs – AIs designed to do one specific thing or a specific set of things: chatbots with predefined answers to recurring questions, algorithms on e-commerce websites (“We see you bought product A, so you’ll probably also like product B”), smart home applications, and so on.
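
To illustrate how narrow such an AI really is, the “bought product A, so you’ll probably also like product B” logic can be as simple as counting which products appear together in past orders. The sketch below is purely illustrative – the products, order data, and function are made up, not taken from any real shop:

```python
# A minimal, hypothetical sketch of a "customers who bought A also bought B"
# recommender based on simple purchase co-occurrence counts.
from collections import Counter
from itertools import combinations

# Hypothetical order history: each order is the set of products bought together.
orders = [
    {"coffee beans", "grinder"},
    {"coffee beans", "mug"},
    {"grinder", "mug"},
    {"coffee beans", "grinder", "filter papers"},
]

# Count how often each pair of products appears in the same order.
co_occurrence = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(product: str, top_n: int = 3) -> list[str]:
    """Return the products most often bought together with `product`."""
    scores = Counter({b: n for (a, b), n in co_occurrence.items() if a == product})
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("coffee beans"))  # e.g. ['grinder', 'mug', 'filter papers']
```

Useful, certainly – but it does exactly one thing, and nothing more.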

Therein lies the danger: if not properly evaluated and investigated, the very thing your AI is good at might become a risk for further development. Simply put, it could very well be that developing your new functionality is quick and trivial, while it takes your AI a lot longer to learn and adapt.

A practical example of this issue came with the launch of Microsoft’s flagship title ‘Halo Infinite’. For Halo Infinite, 343 Industries developed AI bots that roam around the map and shoot you on sight to score points. They even added difficulty tiers, up to the point where the highest-difficulty bots could easily compete with the average human player. A prime example of an AI doing a specific task, and doing it well. Sounds awesome, right?

Now here is the kicker: where earlier Halo installments – without AI bots – launched with a full spectrum of game modes, Halo Infinite went live rather barebones. While Microsoft has been silent on the topic, rumours are that the additional game modes already exist and were easy to implement – but it is taking more time than expected to teach the bots to adapt to these other modes. Where a bot previously only had to walk and shoot, it now also has to reason about the game objective and act accordingly; something that is proving far more painstaking than the functional development of the new modes themselves.

This example is not the only one where ‘the AI devil’ is in the details. In 2016, Elon Musk promised that a driverless Tesla would drive from LA to New York in 2017. To this day, that has not happened. Some accidents have obviously caused bad publicity, but the main reason for the delay seems to be that it is a lot harder than anticipated to teach the AI to deal with all the exceptions and complex decisions it faces in the real world.

When designing traditional software products or systems, quality attributes are usually considered to evaluate the solution: ‘how scalable is my solution?’, ‘how easily can it be maintained?’, ‘what is the degree of reliability?’, and so on.

As more and more services and products are likely to include some form of AI, it is worth adding an AI fit-for-purpose check as a quality attribute as well. In keeping with the shift-left mentality, properly evaluating your AI solution is as important as any of the traditional quality attributes: an AI component that is not properly evaluated at the beginning can result in large or expensive issues further downstream.
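
To make that concrete, here is a minimal sketch of what such a check could look like alongside the traditional attributes, assuming a team that scores each attribute during design review; the attribute names, scores, and thresholds are hypothetical.

```python
# A minimal sketch of a shift-left quality gate that treats "AI fit-for-purpose"
# as just another quality attribute. Names, scores, and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class QualityAttribute:
    name: str
    score: float      # 0.0 (poor) to 1.0 (excellent), assessed during design review
    threshold: float  # minimum acceptable score before development continues

    def passes(self) -> bool:
        return self.score >= self.threshold

# Traditional quality attributes, plus an AI fit-for-purpose check evaluated
# just as early as the others.
attributes = [
    QualityAttribute("scalability", score=0.80, threshold=0.70),
    QualityAttribute("maintainability", score=0.90, threshold=0.70),
    QualityAttribute("reliability", score=0.75, threshold=0.70),
    # Does the AI component actually fit its purpose, and can it keep up with
    # the pace of functional change planned for the product?
    QualityAttribute("ai_fit_for_purpose", score=0.50, threshold=0.70),
]

failing = [a.name for a in attributes if not a.passes()]
if failing:
    print(f"Design review flags: {failing}")  # e.g. ['ai_fit_for_purpose']
else:
    print("All quality attributes meet their thresholds.")
```

The mechanics are not the point; the point is that the AI question is asked at the same moment, and with the same weight, as the questions about scalability, maintainability, and reliability.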

About the author

SogetiLabs Country Lead | Belgium
Steve is a strong advocate of Quality Engineering throughout all phases of the SDLC. With almost a decade in Digital Assurance & Quality Engineering, he has built up experience in various roles within the craft: Test Analyst, Test Manager, Program Quality Manager, and more.
