Skip to Content

A Chatbot Must Tell It is a Chatbot

Rik Marselis
September 18, 2017

chatbotRecently I started investigating how to test a chatbot. As an example, I put a couple of teams to work to try testing a simple chatbot on Facebook messenger. This chatbot assists people with finding travel options. In the beginning, I warned them this same travel company also has a chat-function with a real person. In order to be sure not to bother the real people, the first question to the chatbot was “are you a chatbot?”. The chatbot didn’t understand the question and asked to what city we wanted to travel.

If I think about dilemmas that come with robotics (of which chatbots are part, even though a chatbot doesn’t have any physical appearance) I often think back to Isaac Asimov’s stories about robots. You’ve probably heard of him because he introduced the three laws of robotics (mind you, back in 1942, long time before robots became feasible).

In 1974 Lyuben Dilov added a fourth law which is “A robot must establish its identity as a robot in all cases.”

Obviously, the travel-booking chatbot that we were testing didn’t have this law implemented.

Moreover, it also wasn’t aware of Nikola Kesarovski’s fifth law “A robot must know it is a robot”.

These laws become very important when robots will be so good that it’s difficult to distinguish them from people.

For us when testing the travel-booking chatbot, we cannot prove from a chat perspective if it is a robot. Our finding is a lack of intelligence. For example when we said we wanted to travel home, the bot didn’t ask where our home is but simply tried to find a way to travel to a little town called Home (it’s next to San Diego in California!).

But for more sophisticated chatbots, like Siri or Alexa, we really would like to have these robot-laws implemented so that testers (and everybody else actually) can make sure whether they are dealing with a human or a robot.

Which of course will trigger a new dilemma: How can you test if it is speaking the truth? You can try the Turing-test to see if you are dealing with a human or a computer. But actually some people fail the Turing-test and some computers have already passed the Turing-test. So how can we test that???

My call for action is that anyone creating intelligent machines implements these robot-laws. Testers can use the five laws as a checklist for the first basic test.

I’d like to take this opportunity to add a new robot-law: “A robot must always tell the truth!” Although I realize this will be very hard to implement because nowadays truth seems to depend on someone’s perspective…

For the moment my conclusion is: We need to find out much more about testing chatbots. It’s going to be hard to test these cognitive algorithms in our digital era. But it’s also going to be fun!

 

 

About the author

Quality and Testing expert | Netherlands
Rik Marselis is principal quality consultant at Sogeti in the Netherlands. He has assisted many organizations in improving their IT-processes, in establishing their quality & testing approach, setting up their quality & test organization, and he acted as quality coach, QA-consultant, test manager and quality supervisor.

    Comments

    One thought on “A Chatbot Must Tell It is a Chatbot

    1. Are the asimows robot laws really relevant? Are they the best we can do?
      Aren´t they just a part of a sci-fi story?
      Aren´t they old?
      Aren´t they designed to have at least one flaw to create a good story?
      I agree that the new robots should have control mechanisms, but I don´t think that those are formulated by Asimov.

    Leave a Reply

    Your email address will not be published. Required fields are marked *