Occasionally we see a technology demonstration that makes it very clear we are living in the future. One such example is technology being developed by the human-computer interaction researchers at Microsoft Research.
A recent patent, “THREE-DIMENSIONAL OBJECT TRACKING TO AUGMENT DISPLAY AREA”, describes a computer using a camera to track the movement of fingers or styli on the surface in front of the device, thereby allowing interaction across a larger area than the device itself would allow.
The abstract reads:
In some examples, a surface, such as a desktop, in front or around a portable electronic device may be used as a relatively large surface for interacting with the portable electronic device, which typically has a small display screen. A user may write or draw on the surface using any object such as a finger, pen, or stylus. The surface may also be used to simulate a partial or full size keyboard. The use of a camera to sense the three-dimensional (3D) location or motion of the object may enable use of above-the-surface gestures, entry of directionality, and capture of real objects into a document being processed or stored by the electronic device. One or more objects may be used to manipulate elements displayed by the portable electronic device.
The weakness of that approach is, of course, that only the area directly in front of the device can be tracked.
Microsoft Research Senior Researcher Eyal Ofek has, however, found a way to overcome that limitation: astonishingly, by tracking the reflections of your fingers in the lenses of ordinary dark sunglasses.
The system uses the device's standard front-facing camera and needs to be calibrated for the curvature of the glasses, but it can even track gestures in mid-air.
We present a novel approach for extending the input space around unmodified mobile devices. Using built-in front-facing cameras of unmodified handheld devices, GlassHands estimates hand poses and gestures through reflections in sunglasses, ski goggles or visors. Thereby, GlassHands creates an enlarged input space, rivaling input reach on large touch displays. We introduce the idea along with its technical concept and implementation. We demonstrate the feasibility and potential of our proposed approach in several application scenarios, such as map browsing or drawing using a set of interaction techniques previously possible only with modified mobile devices or on large touch displays. Our research is backed up with a user study.
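That calibration step hints at the geometry involved: a sunglass lens acts as a curved convex mirror, so working out where a reflected fingertip actually sits means tracing camera rays as they bounce off a surface of known curvature. The sketch below is purely illustrative and is not taken from the GlassHands paper; it models the lens as a spherical mirror (the `radius` value stands in for an assumed calibration parameter) and computes where a ray from the camera reflects.

```python
import numpy as np

def reflect_ray(origin, direction, center, radius):
    """Trace a ray from `origin` along `direction` onto a sphere
    (our toy stand-in for a curved lens) and return the hit point
    and the reflected direction, or None if the ray misses."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    # Quadratic for ray-sphere intersection: t^2 + 2*b*t + c = 0
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    if disc < 0:
        return None  # ray misses the lens entirely
    t = -b - np.sqrt(disc)  # nearest intersection
    if t < 0:
        return None  # sphere is behind the camera
    hit = origin + t * d
    n = (hit - center) / radius          # outward surface normal
    r = d - 2 * np.dot(d, n) * n          # mirror-reflection formula
    return hit, r

# A ray aimed straight at the lens apex reflects straight back:
camera = np.array([0.0, 0.0, 0.0])
lens_center = np.array([0.0, 0.0, 5.0])
hit, r = reflect_ray(camera, np.array([0.0, 0.0, 1.0]), lens_center, 1.0)
# hit is (0, 0, 4) and r is (0, 0, -1)
```

Once the curvature is calibrated, inverting this mapping lets the system estimate where the reflected hand must be in the space around the device, which is presumably why calibration is required per pair of glasses.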
See the technology, called GlassHands, demonstrated in the video below.
We may, of course, never see something like this in an actual product, but the mere fact that it is possible gives one shivers.
Read more about GlassHands at Microsoft Research here.