When it comes to technology, I am a bit of a perfectionist. I expect gadgets not just to do the work they are designed for, but also to be simple, intuitive, and seamlessly integrated into our lives. Let me explain what I mean. For a moment, close your eyes and imagine yourself soaking in the bathtub after a long, tiring day. Everything's perfect: the water is just the right temperature, and all your work for the day is done. Just then, you remember a song you'd like to listen to. It's in the Spotify app on your phone. What do you do? Do you get out of the bath to fetch it? Even if your phone is close at hand, you risk getting soap and water on it as you navigate to the song. And what if you want to text someone on Facebook Messenger while you're in the bath?
It is in situations like these that I find myself irritated with the current state of technology. I mean, think about it: shouldn't there be technology that lets you interact with your phone without getting out of the bath or picking it up with soapy hands? Or think about how you read and answer messages. Every time a message arrives, you have to pull the phone out of your pocket, bring the screen near your face, and tap out a reply, all the while staring at the screen! This way of interacting with our machines is unnatural and limiting: it is a case of human beings adapting their behavior to support machines instead of the other way around. Personally, I hate it.
However, things are changing. Remember how you once had to toggle switches to turn on the lights in a room? In an unfamiliar room, you had to fumble along the walls searching for the switch. Today, thanks to motion-sensing technology, there are many homes and offices where you can just walk in and the lights switch on automatically. This is what I mean by a natural and efficient interaction: one in which technology is integrated into our daily lives and supports human activity with a minimum of fuss. There is now an entire emerging field, Human-Machine Interaction, which studies our interactions with machines in order to simplify them. Thanks to a number of technologies, such interactions are becoming ever simpler. These technologies are:
- Cognitive Services: Cognitive Services are technologies that help a machine perceive commands from its external environment, and more specifically from a human during human-machine interactions. You can think of them as analogous to our senses: touch, smell, taste, vision, and hearing, which help us perceive and observe the external environment. In much the same way, Cognitive Services allow computers and machines to take inputs from the environment. An easy, everyday, and highly natural example is the facial recognition that lets most of the latest laptops log users in, bypassing the need to type a username and password as was the norm earlier. Another example is Amazon Echo, which lets you talk to your machine instead of having to type. Cognitive Services is a hot new area, and over the last few years some of the biggest technology companies, such as Facebook, Microsoft, and Google, have been focusing on it. Today, Microsoft offers APIs that help machines perform a range of tasks, from natural-language understanding to video analytics. A number of projects are already underway that push the boundaries of how we interact with machines. For instance, Elon Musk's Neuralink aims to let humans interact with machines just by thinking. Now, what could be more natural than that? We even have technologies that allow machines to smell bad breath and taste food.
- Omni-channel: Omni-channel is very much a buzzword today, and it basically refers to the multiple channels through which we interact with machines. Where earlier we had only desktops and laptops, today we have PCs, tablets, laptops, smart glasses, smart earbuds, and smartwatches, as well as devices like Amazon Echo and Google Home. Most companies planning new products now have to design for the whole host of channels through which customers might interact with their offering. Taken together, the advances in Cognitive Services and the growth of omni-channel interaction are giving rise to more and more devices that are not just seamlessly integrated into our environments but also interact with us in a very natural manner.
- Artificial Intelligence: If Cognitive Services are the senses and omni-channel is the body, then Artificial Intelligence is the brain of the machine. Developments in Artificial Intelligence have reached a point where computers can understand context and teach themselves over time, so they can be better at a task tomorrow than they are today. All of this is improving the quality of our interactions with machines, giving rise to a much more human experience.
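To make the senses-body-brain analogy concrete, here is a minimal, entirely hypothetical sketch (not any real product's API) of how the three layers fit together: a "cognitive" layer perceives raw input, an "AI" layer decides what to do, and the same logic is exposed over whatever channel the user happens to be on.

```python
def perceive(raw_input: str) -> str:
    """Cognitive-service layer: turn raw input into a normalized signal."""
    return raw_input.strip().lower()

def decide(signal: str) -> str:
    """AI layer: map the perceived signal to an action."""
    if "lights on" in signal:
        return "turn_lights_on"
    if "play" in signal:
        return "start_music"
    return "ask_clarification"

def handle(raw_input: str, channel: str) -> str:
    """Omni-channel layer: the same 'brain' serves every device."""
    action = decide(perceive(raw_input))
    return f"[{channel}] -> {action}"

print(handle("Lights ON please", "smart-speaker"))
print(handle("Play my bath playlist", "smartwatch"))
```

The point of the sketch is the separation of concerns: because perception and decision-making are independent of the channel, adding a new device (a smartwatch, a speaker, a car) means adding a channel, not rebuilding the brain.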
Artificial Intelligence capabilities have increased exponentially over the last couple of years. Some believe this will develop to a point where we can no longer tell whether we are interacting with a machine or a human. Just recently, Google's Artificial Intelligence program AlphaGo beat the world's No. 1 player at Go, the ancient Chinese board game. This is considered a landmark in the evolution of Artificial Intelligence, since Go is an extremely complex game, with a reputedly incomputable number of possible moves and a heavy reliance on intuition, instinct, and the ability to learn.
So how can these three technology trends come together to improve our interactions with machines? One of the best examples is the chatbot. Chatbots use Cognitive Services to take in our requests and Artificial Intelligence to process them, and they can be reached through many different channels.
At the moment, most chatbots focus on a single function: an insurance company, for example, might have a chatbot that takes in your claims and answers your queries. The main difference between today's chatbots and those of a few years ago is that it used to be obvious you were talking to a machine. Those earlier chatbots operated within very narrow parameters, so if you said something they could not recognize, they were unable to help you. Today, thanks to AI and Cognitive Services, chatbots have evolved to the point where it is sometimes hard to tell whether you are talking to an actual customer service representative or a machine.
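The difference in rigidity is easy to demonstrate with a toy sketch. Below, a hypothetical "old-style" bot matches exact phrases, so any rephrasing is a dead end, while a slightly more flexible bot scores intents by keyword overlap and degrades more gracefully. The insurance phrases and intent names are invented for illustration; real systems use statistical language models rather than keyword sets.

```python
OLD_SCRIPT = {
    "file a claim": "Sure, let's start your claim.",
    "check claim status": "Your claim is being processed.",
}

def old_bot(message: str) -> str:
    # Narrow parameters: anything outside the exact script fails.
    return OLD_SCRIPT.get(message.lower(), "Sorry, I don't understand.")

INTENTS = {
    "file_claim": {"claim", "file", "accident", "report"},
    "claim_status": {"status", "update", "progress"},
}
REPLIES = {
    "file_claim": "Sure, let's start your claim.",
    "claim_status": "Your claim is being processed.",
}

def new_bot(message: str) -> str:
    words = set(message.lower().split())
    # Score each intent by keyword overlap and pick the best match.
    best = max(INTENTS, key=lambda intent: len(INTENTS[intent] & words))
    if INTENTS[best] & words:
        return REPLIES[best]
    return "Could you rephrase that?"

print(old_bot("I was in an accident and need to file a claim"))  # dead end
print(new_bot("I was in an accident and need to file a claim"))  # recognized
```

Even this crude overlap scoring handles a sentence the scripted bot cannot, which is the essence of why modern chatbots feel less mechanical.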
Thus, in the near term, I predict we will see a lot more chatbots across many different areas of our lives. As the above technology trends mature and consolidate, companies across sectors are likely to adopt them, and chatbots may also become part of our homes: a single chatbot and identity shared across many different devices.
However, it is important to appreciate that chatbots are extremely useful in some contexts and poorly suited to others. In my view, chatbots will mostly be used for short, direct interactions. Anything more complex, and we may still want a human.
That, however, is just the short term. In the longer term, there is no doubt about the direction in which we are headed: a future in which machines, and our interactions with them, are seamlessly integrated into the ways we live and work. Today, smart homes have to be specially equipped with sensors and similar technologies; in the future, such technologies will likely be built into new buildings from the ground up.
I would love to hear your thoughts on the trajectory human-machine interaction might take in the future. If you have any opinions or ideas on this, please share them in the comments section below.