At this point, we at least know that Google’s augmented reality glasses are capable of snapping a photo. However, we do not have much of an idea of how the UI might work beyond what is in the initial concept video. Our sources previously indicated that Google was using a “head tilting to scroll and click” method for navigating the user interface. Today, we get a look at how the company is experimenting with alternative input methods for the glasses, thanks to a patent recently granted by the United States Patent & Trademark Office and detailed by PatentBolt.
According to the report, the highlight of the patent is how Google’s glasses could work with hand gestures. The patent describes various hand-wearable markers, such as a ring, an invisible tattoo, or a woman’s fingernail, which could be detected by the glasses’ IR camera to “track position and motion of the hand-wearable item within a FOV of the HMD.” In other words, the wearable marker, in whatever form factor, would allow the glasses to pick up hand gestures. The report also noted that multiple markers could be used to perform complex gestures involving several fingers or both hands:
Google states that various functions and applications, as well as various forms of user input and sensory data from ancillary wearable computing components, could provide rich and varied experiences and utility for a wearer of the HMD… Recognition of a known pattern of motion could accordingly be used to identify a known hand gesture, which in turn could form a basis for user input to the HMD. For example, a particular gesture could be associated with a particular command, application, or other invokable action on the HMD.
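The pipeline the patent sketches out (camera tracks the marker’s position frame by frame, a recognizer matches the motion path against known patterns, and a match triggers a command) could look something like the following. This is purely an illustrative sketch, not code from the patent; all gesture names and thresholds here are our own assumptions.

```python
# Hypothetical sketch of the patent's gesture pipeline: an IR camera
# reports marker positions per frame; the motion path is matched against
# known gesture patterns, and a match maps to a command on the HMD.
# Gesture names, commands, and thresholds are illustrative assumptions.

from math import hypot

# Known gestures, expressed as coarse motion-direction sequences.
GESTURE_COMMANDS = {
    ("right",): "next_card",
    ("left",): "previous_card",
    ("down", "right"): "dismiss",  # an L-shaped stroke
}

def directions(points, min_step=5.0):
    """Collapse a track of (x, y) marker positions into coarse directions."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if hypot(dx, dy) < min_step:
            continue  # ignore small jitter between frames
        step = "right" if dx > 0 else "left"
        if abs(dy) > abs(dx):
            step = "down" if dy > 0 else "up"
        if not dirs or dirs[-1] != step:  # merge repeated directions
            dirs.append(step)
    return tuple(dirs)

def recognize(points):
    """Return the command for a tracked marker path, or None if unknown."""
    return GESTURE_COMMANDS.get(directions(points))

# A rightward sweep of the marker maps to the "next_card" command.
print(recognize([(0, 0), (20, 0), (40, 0)]))  # → next_card
```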
Other takeaways from the patent include: a small touch-sensitive button or touchpad on the side of the glasses; support for Wi-Fi and 3G/4G cell networks; sensors including gyroscopes, accelerometers, GPS chips, and magnetometers; and orientation sensors such as a theodolite.
- Google patents design for Project Glass(es) (9to5google.com)