Having information projected by a heads-up display (HUD) into one's vision is an intriguing idea. Google's consumer-targeted device Glass deserves credit for pushing the boundary, and hopefully the product will ship.
Many things come to mind when one thinks about the implications of day-to-day use of such a technology, among them privacy and dorky looks. However, considering smartphones and their near-universal social acceptance, success for Glass is more a matter of functionality and lasting coolness.
The apps Google has shown so far haven't convinced me that I need Glass. The sport-focused scenarios make sense since they showcase the hands-free benefit of Glass. However, in most other scenarios I am happy to use my phone to check the time, take a photo, or record a video.
It is an interesting exercise to think about what features characterize a useful app on Glass.
At first glance one could assume that Glass, as a location- and context-aware device, would be a good fit for a traditional map application. The limited capabilities of the HUD make this questionable. I believe that functionality such as a virtual compass that communicates location information associated with objects is more appropriate.
A more interesting scenario is looking up knowledge about the objects one is dealing with as part of a task, i.e., searching for information about people, places, things, etc. Glass would be super useful as a context-aware knowledge lookup service.
Have you ever been at a conference and walked around wondering who all these people are, or struggled to remember a face or name? Glass could recognize attendees for you and in doing so enhance the conference experience and lower the barrier to networking (versus pointing your smartphone at the face of an attendee to achieve the same).
Glass, with its context-aware, hands-free information lookup, could help identify objects in your environment.
- If you are an avid mushroom hunter, Glass could be indispensable in identifying whether one of your prized discoveries is tasty and edible or not.
- If you are hunting for your next house, Glass could provide you with information about the neighborhood on the spot, based on a real-estate database such as Zillow. No cumbersome map-based research after coming home.
- If you are at a used car auction or a yard sale hunting for bargains, Glass could offer you value estimates and supporting information based on eBay's database of past person-to-person transactions. No more buyer's remorse.
An app would understand the task that needs to be accomplished, identify the things that are relevant in Glass's video and audio streams, match those things against a database of information, and display the most relevant information to the user. That would be cool.
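The lookup flow above can be sketched in a few lines. This is a toy illustration, not a real Glass API: the `detect` callback and the dictionary-backed knowledge base are stand-ins for whatever object recognition and data services such an app would actually use.

```python
# Minimal sketch of a context-aware lookup pipeline.
# `detect` (object recognizer) and `knowledge_base` are hypothetical
# stand-ins for real Glass services.

def lookup_pipeline(frame, knowledge_base, detect, max_results=3):
    """Identify objects in a video frame and collect matching facts."""
    results = []
    for obj in detect(frame):                  # e.g. "mushroom", "house"
        for fact in knowledge_base.get(obj, []):
            results.append((obj, fact))
    return results[:max_results]               # only the most relevant hits

# Toy usage: a dict stands in for a real database, a lambda for a detector.
kb = {"mushroom": ["Likely Boletus edulis; edible (example entry)"],
      "house": ["Estimated value: $450k (example entry)"]}
hits = lookup_pipeline("frame-1", kb, lambda frame: ["mushroom"])
```

The key design point is the cap on results: a HUD can only show a line or two, so ranking and truncation matter far more than on a phone screen.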
Many objects in our environment are in plain sight but are obscured by a noisy environment and are found only by accident, if at all. Serendipitous discovery describes a background process that facilitates chance encounters with objects that have location information.
Glass is the ideal device to create serendipitous encounters with objects in our environment based on context such as location, motion, audio, video, etc. Scenarios include the discovery of a historic place, of new products while shopping, or of an interesting person while strolling around.
Serendipitous encounters require a new type of personal recommendation that takes into account not only a person's history of interests and encounters but also introduces a form of randomness. The challenges in developing such a system should be obvious; too much distraction or too many irrelevant recommendations will cause frustration and discontinued usage.
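One simple way to blend personal history with randomness is an epsilon-greedy rule: usually recommend the highest-interest nearby object, but occasionally pick one at random. This is just one possible sketch of the idea; the function names and the interest-score model are my own assumptions.

```python
import random

def serendipitous_pick(candidates, interest_score, epsilon=0.2, rng=random):
    """Pick a nearby object to surface on the HUD.

    With probability epsilon, choose at random (the serendipity part);
    otherwise choose the object with the highest personal interest score.
    `interest_score` is a hypothetical model of the user's history.
    """
    if not candidates:
        return None
    if rng.random() < epsilon:
        return rng.choice(candidates)           # chance encounter
    return max(candidates, key=interest_score)  # usual personalization
```

Tuning `epsilon` is exactly the frustration trade-off described above: too high and the HUD shows noise, too low and nothing surprising is ever discovered.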
Another interesting app for Glass would be one that forms groups and facilitates coordination using hyper-awareness, the awareness created by real-time communication across different locations.
Such micro-coordination would allow a group to form spontaneously and converge, using Glass as the application that coordinates time, place, and details of an event in a fluid and non-interrupting way.
The Glass app would infer context from a user's device, such as location, activity, audio, video, etc., and automatically share it with others in the group.
The key would be the proper use of the HUD to update each member of the group in a moment-to-moment fashion about current context and location convergence without interrupting ongoing social activities.