We have talked about Google Glass before: there are some really positive articles out there, and some that go as far as to claim the zombie metaphor is the appropriate one for describing the use of this new wearable device. Enough opinions; only a few people have actually tried it on. But as of yesterday, I am one of those (lucky) few. Yesterday I attended a Google Glass event with Engadget's editor-in-chief Tim Stevens. The event was a smart move by a few entrepreneurs who created some buzz around their new company by getting Tim to the Netherlands. I got to play around with Glass for a while and talked about possible applications with people from the optician industry, a beer brand and a few other companies. Here's my experience.

The first thing that surprised me was how seamlessly the thing fits on your head. I am not a regular glasses wearer myself, but it didn't feel awkward at all; it's light and it doesn't block your vision in any way. Good to know: the current Explorer editions of Glass are a year old, and somewhere in the Google X lab there are newer versions that are probably even lighter, with a better camera and so on.

My two big takeaways:

1. The current version is really limited (yes, we know that already). You can search, get directions, take a picture or record a video, and send a message. You need to follow the pre-programmed commands or Glass simply doesn't listen. That is the main flaw at the moment: you can only give commands, but the device really becomes interesting once you can ask it questions (walking from work to the bus and asking what time my bus leaves, for example). The touchpad interaction also takes some getting used to: the touch surface feels the same as the rest of the side of Glass, so you will be tapping in the wrong places for a while. And opening up a full API will of course give Glass a huge boost in applications.
Then there's the price: as a consumer product, this thing shouldn't cost more than $600 at first, and $300 within two to three years.

2. Yes, this might be the new iPhone. It really feels like such a moment to me. The iPhone paved the way for a new type of information behavior. A smartwatch feels to me like just a smartphone whose experience moves from your pocket to your wrist. Glass, on the other hand, really feels like a new paradigm of information experiences (although wearable devices already have a long history): a device that is more out of your way (we can stop tapping a piece of LCD) and can be more anticipatory and contextual. That said, I do feel the wrist is a more suitable place than your face, especially from a mass-adoption standpoint.

Glass vs smartphones: less emasculating?

According to Sergey Brin, Glass is less emasculating than a smartphone, mainly because we don't have to stare down at a screen all the time. We talked about this with Tim, and he can more or less subscribe to that argument. With a phone, every vibration and sound prompts you to take the device out of your pocket to judge whether the incoming information is relevant enough to check. Glass lets you filter out the crap and focus only on the important messages. But those messages will have the same immediacy as they do on smartphones, meaning you're going to zone out of everything you're doing and read the message.

Future applications

We dreamed up some applications for future iterations of Glass. Here are some of the ideas:
- Real-time translation
- Augmenting and sharing museum experiences
- Shopping: real-time credit info as you pay, etc.
- Gaming: yes, this could be an awesome immersive gaming experience
- Instruction videos
- Contact management: think of Highlight on steroids
- Becoming your own virtual personal assistant
- Telepresence and virtual displays