
Musings on Ethical Systems and Robotics

Sogeti Labs
July 30, 2021

Would you trust a robot to make a decision for you? No . . . Yes? Maybe?
Should you trust a robot to make a decision for you? . . . It depends.

As advances continue to be made in the worlds of robotics and artificial intelligence, humans look to take advantage of this technology to help understand complex problems and assist in making decisions. When it comes to AI making decisions, there are many who will jump at the chance to scream Skynet and tell us how AI will take over the world and destroy humanity in very dramatic fashion. Personally, I find it a bit difficult to take those claims seriously when Siri still can’t figure out my accent. Nonetheless, there are prominent figures within the industry who have expressed their concerns about the future of AI. Even if they don’t believe that some sort of rogue AI will become the enemy of humanity, the general concern about the potential capabilities of AI remains. As such, the general consensus seems to be that humans should implement some form of ethical framework, resulting in what are dubbed ethical robots.


Most recently, I had the pleasure of reading The Dark Side of Ethical Robots, a paper published at the 2018 AIES conference. The premise of the paper is that the entire notion of ethical robots has an inherent flaw; namely, that the construction of “ethical robots also inevitably enables the construction of unethical robots” [1]. The authors show through a series of experiments that the programming of an autonomous robot can be altered with relative ease to behave in undesirable ways, resulting in hyper-competitive or aggressive behavior. They further state that adopting IEEE “human” standards [2] alone does not prevent the potential for harm, nor is it guaranteed to provide the benefits desired. Despite all the work being done by various governing bodies to propose ethical standards, there is always the threat of malicious actors who care not for rules and regulations. Furthermore, this says nothing of undesired behaviors arising from technical negligence, software bugs, hardware failure, and more. In light of this, the authors conclude that perhaps it is not so wise to embed ethical decisions within robots, especially those deemed critical to real-world safety.

While I personally agree with the findings of the paper, I think that avoiding embedded ethical decisions altogether is, from a practical perspective, not feasible. A machine such as an autonomous vehicle is going to be making decisions; that’s simply the nature of an autonomous machine. For the sake of argument, let us contrive a scenario in which there are no “correct” decisions. How should a machine act? (For that matter, how should a human?)

Herein lies the difficulty of the problem: ethics is relative. What I believe to be the best¹ decision may be the wrong decision to you. Based on whatever value system we use, the outcome can be quite different. Who is to say that the values one person favors are better than the ones favored by someone else?

Further increasing the difficulty of evaluating a value system is the fact that “best” is relative not just between individuals but also varies with circumstance. Are we in a state of wartime or peace? Are we facing a drought or a famine? Our environment plays a crucial role in our decision-making and provides not just a foundation but also the context we use to make decisions.

As if that weren’t difficult enough, laws and regulations differ in each country. It would be very difficult, likely bordering on impossible, to build an ethical framework that respects the potentially conflicting laws and regulations of each country and region.

The question I want to leave you with is this: should we have ethical frameworks? If so, who should write the guidelines, what should they contain, and how should they be enforced? If not, how do we deal with the advances in AI and their potential consequences with regard to autonomous decision-making? In a future post, I hope to explore these ethical questions and provide an overview of ethical frameworks as they are being utilized today, as well as their implications, both good and bad.

__________________________________________________________
¹ Note: The term best here is in itself also relative. The great irony is that what I believe to be the best overall decision to make can also be considered the worst ethical decision.

References


[1] Dieter Vanderelst and Alan Winfield. The Dark Side of Ethical Robots. 2018. URL: https://www.aiesconference.com/2018/contents/papers/main/AIES_2018_paper_98.pdf
[2] IEEE. IEEE SA – The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. 2016. URL: https://standards.ieee.org/industry-connections/ec/autonomous-systems/index.html

About the author

SogetiLabs gathers distinguished technology leaders from around the Sogeti world. It is an initiative explaining not how IT works, but what IT means for business.
