If you cannot hold it accountable, don’t let it make decisions.

Mark Huss
Apr 25, 2024

Like many others, I am increasingly concerned about the behavior of AI and our more advanced algorithms. While efficiency gains from these technologies may lead to job loss, that is an inherent consequence of technological advancement. The more pressing concern is the role of AI in determining outcomes and making decisions, particularly decisions that affect people's lives.

The Trolley Problem is a classic example of the dilemma algorithms (of which AI is one example) will face. It essentially boils down to: do you let the AI-driven trolley kill one person or many? The flaw in this framing is that it assumes an algorithm should ultimately be making the decision.

What the Trolley Problem overlooks is whether we should allow an algorithm to be in a position to make that final decision at all.

Throughout our lives, we are taught and trained to be good decision makers. A significant part of being a good decision maker is understanding that there are consequences for poor decisions: you will be held accountable.

So, how do you teach an algorithm to be accountable? It lacks self-awareness. It doesn't experience loss or pain when it errs. It feels no satisfaction when it succeeds, nor can it be rewarded for a job well done. Moreover, you can't simply instruct an algorithm about right and wrong by feeding it words. To a machine, that input is merely a collection of data points, devoid of life, feelings, or impact.

Given these considerations, what should we do? 

Firstly, we must prevent AI and algorithms from making final decisions that impact human lives, and ensure that a knowledgeable human makes the decision instead. This applies to areas like healthcare claims decisions, clinical therapies, care pathways, and weapon firing decisions. When these decisions are made, they must be based on a comprehensive understanding of all the data and the potential consequences.
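In software terms, this is the human-in-the-loop pattern: the algorithm may recommend, but a named human must approve before anything consequential happens. Here is a minimal sketch in Python; the `Recommendation` class and `human_approve` callable are illustrative names I am assuming, not any particular product's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    """An algorithm's suggested outcome -- never a final decision."""
    case_id: str
    suggestion: str
    confidence: float

def decide(rec: Recommendation,
           human_approve: Callable[[Recommendation], bool]) -> str:
    """Gate every consequential outcome behind an explicit human decision.

    `human_approve` stands in for a real review step: a clinician,
    claims adjuster, or operator looking at the full case data.
    The algorithm's output is only ever an input to that review.
    """
    if human_approve(rec):
        return f"{rec.case_id}: {rec.suggestion} approved by human reviewer"
    return f"{rec.case_id}: {rec.suggestion} overridden by human reviewer"

# Even at 97% model confidence, the human makes the call.
rec = Recommendation(case_id="claim-001", suggestion="deny", confidence=0.97)
print(decide(rec, human_approve=lambda r: False))
```

The design point is that there is no code path from recommendation to action that bypasses `human_approve`: accountability stays with a person who can be named.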

Secondly, we need to legally assign responsibility, accountability, and liability to both the creators and the users of algorithms and AI. Ensuring transparency, auditability, and immutability of every step, every piece of logic, and every item of data leading up to a decision is essential.
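One common way to get that auditability and immutability is a hash-chained, append-only log: each entry commits to the one before it, so any after-the-fact edit is detectable. The sketch below is one illustrative approach, not a prescription; class and field names are my own assumptions.

```python
import hashlib
import json

class AuditLog:
    """Append-only decision log with hash chaining.

    Each entry's hash covers its content plus the previous entry's hash,
    so silently editing or deleting any step breaks verification.
    """
    GENESIS = "0" * 64

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._prev_hash = self.GENESIS

    def record(self, step: str, data: dict) -> None:
        """Append one step (e.g. model score, data used, human sign-off)."""
        entry = {"step": step, "data": data, "prev": self._prev_hash}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; False means the log was tampered with."""
        prev = self.GENESIS
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = {k: e[k] for k in ("step", "data", "prev")}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A regulator or auditor can then replay every step, piece of logic, and item of data that led to a decision, and prove none of it was rewritten afterward. Production systems would typically add signatures and write the chain to storage outside the operator's control.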

This doesn’t mean we should avoid using algorithms/AI altogether. Their advantages are too significant to ignore. However, we must ensure that critical decisions affecting humans are made by humans. If individuals or entities choose to ignore this principle, they should be held accountable.

About the author

Mark Huss

Insights & Data Regional Practice Director and National Healthcare Leader | USA
Mark has more than 25 years of experience building and leading IT organizations and architecture and engineering groups. With a background in linguistics and AI, he is a believer in the power of technology to better the world, improve companies, and make things better for employees.
