The 3 Laws of Robotics
A long time ago, especially in IT terms, the American writer Isaac Asimov created the three laws of robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
They sound pretty good. However, Asimov also wrote dozens of novels and stories that demonstrate the flaws in such laws and how easily they can be manipulated to dramatic effect. That’s fine when you’re a writer, but I’ve seen a lot of people suggest that these laws (more like rules, guidelines or suggestions really) be used to control robots and AIs in the real, modern world.
What’s good for a writer isn’t always so good for the rest of us. Asimov found a lot of loopholes (he would have made a good software or security tester), but we want something more viable, fitting for the 21st century.
So where do we start?
7 Key Values
I think the secret is to move away from 1s and 0s and focus on the nuances that make life interesting (in a good way). Something that turns IT from a two-dimensional game of “yes” or “no” into something more human. Or at least humane. The answer came to me when reviewing one of our annual training manuals:
Those are Sogeti’s 7 key values. They are a strong way to navigate the complexities of the 20xxs. To work with humans, an intelligent computer system that understands what we need must also understand those words and what they mean to the people it deals with.
It’s part of being human. It’s part of being social. It’s part of being more than just a yes or a no, a one or a zero.
Be happy. Check out our report on digital happiness and the value it can bring to your business here.
Want to be happy? Retain Sogeti.