The Future of Human/Computer Interface Raises Disturbing Issues
In this age of “always on” social media, we can share the events of our daily lives with almost anyone in the world at the press of a few keys.
This sharing is made even easier with the use of the speech-to-text capabilities that are built into our mobile devices (and now even our desktop devices).
Think it, speak it, and whatever your thoughts, they’re shared with your family, your best friends, the world at large if you so choose.
We all know the downsides of such easy communication: that uncle who mindlessly posts his less-than-tasteful views on politics; the social media star who gains fame by posting vapid blogs and shameless selfies.
But, for most of us, the need to gather our thoughts and marshal our social filters, combined with the time it takes to get those thoughts out in coherent form, acts as a kind of safety valve that prevents us from simply spewing our innermost thoughts at random.
Seeing that “Submit” button there, waiting to be pressed before your writing goes out to the wide world, gives most of us just enough pause to prevent the disaster of over-sharing.
Not only that: whatever is in our heads but not committed to text cannot be mined by advertisers or others for corporate use.
The sanctity of our minds is the ultimate form of protection against others.
Until now, that is.
Facebook has just announced that it is working on a technology that will permit you to “type directly from your brain”. The goal of this technology, still in its infancy, is to develop non-invasive sensors that turn your thoughts directly into text, without the intervention of your fingers, your vocal cords, or any other physical intermediary.
This may sound like the stuff of science fiction, but advances in functional magnetic resonance imaging (fMRI) have already made it possible to detect areas of the brain that “light up” when specific images, words, or scenes are recalled. With more powerful processing and inexpensive sensors, the next step is surely the ability to detect “ideas” and translate them into words, given proper training (of both the user and the software).
However, once the physical is removed from the “idea to written word” creation chain, a whole host of potential problems arises.
Do we, as humans, have the ability to hide certain thoughts from such a system? Experience, and recent studies, suggest that this is very hard. The mechanisms our brains provide to filter inappropriate thoughts before they reach our fingers or our vocal cords operate “downstream” of the thought-creation process. If the “neural keyboard” works upstream of this filtering mechanism, then anything we think is accessible to such a system.
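The upstream/downstream distinction can be made concrete with a toy sketch. Everything here is invented for illustration: the `[private]` marker, the function names, and the filtering rule are my own, and no real neural interface works this way.

```python
from typing import List, Optional

def social_filter(thought: str) -> Optional[str]:
    """The downstream filter: suppress thoughts we flag as private."""
    return None if thought.startswith("[private]") else thought

def typed_output(thoughts: List[str]) -> List[str]:
    """What reaches the keyboard: only thoughts that pass the filter."""
    return [t for t in thoughts if social_filter(t) is not None]

def neural_tap(thoughts: List[str]) -> List[str]:
    """A sensor tapping in upstream of the filter sees every raw thought."""
    return list(thoughts)

thoughts = ["Nice weather today.", "[private] what I really think of my boss"]
print(typed_output(thoughts))   # only the shareable thought
print(neural_tap(thoughts))     # everything, private or not
```

The point of the sketch is simply where the tap sits: `typed_output` models today's world, where the filter runs before anything leaves us, while `neural_tap` models a sensor wired in before the filter ever gets a chance.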
In my opinion, there seem to be at least two major problem areas that such a technology would present:
- The lack of a filtering mechanism would likely result in far more inappropriate material being shared in social contexts. Do you really want your boss to know what you think of them?
- Current legal models assume that what happens in your mind can stay there, inviolate. So what happens when a company uses this technology to “hear what you’re thinking”, even though you have chosen not to share it, and uses those thoughts to make advertising decisions, for instance?
The first problem might be resolved by learned mechanisms that filter thoughts as they’re being created, allowing us to think what we will but affirmatively choose what to share. It’s unclear whether this is even possible (can you not think about a white elephant as you read this sentence?) or how well it would work.
The second problem is likely to require a legislative solution: for example, that a company is never allowed to use your thoughts, in any way, without your express permission. Not that this would be easy; attempts to prevent ISPs from accessing and using the data that flows over their networks, even your most private data, have recently come to an ignominious end.
Surely we can rely on companies to police themselves? As Facebook has said:
But in order to ensure that Facebook is only translating the thoughts you want to share, the company says it will need to build new sensor technology that can better detect brain activity at lightning speed.
(Let’s ignore the fact that building a “new sensor technology” has not a whit to do with ensuring privacy, but let’s take Facebook at its word that they intend to honor the privacy of your thoughts.)
How well has self-policing worked in other cases, where the cost of the controls, in terms of lost revenue, eventually overcomes the self-enforced regulatory model? Not well, I’d argue.
So, what to do?
I have no easy answer, just a firm suggestion that we begin to consider these issues and craft thoughtful responses to them before we arrive at the moment of truth.
That moment being the day when your thoughts are no longer entirely your own.