The next iteration of Google Glass is already in the works, but little information has surfaced so far about what the device's hardware will be like. Google has focused much of its attention on the Glass at Work program over the last couple of years, and it's no secret that the device has found its best use cases in specific workplace applications. What will that mean for the direction Google takes with the device's hardware in the future?
A newly-published patent might give us an idea, and it might involve a new way to get information from the wearable display device based on where you’re looking.
There’s nothing overwhelmingly groundbreaking about it, but eye-tracking technology is definitely something that Google might be considering for the next version of Glass. Having to control Glass with voice and tap gestures can be cumbersome for a device that’s supposed to get out of the way and make its wearers’ lives easier, and eye-tracking might be just what Glass needs to make a wearable heads-up display practical in many situations.
It’s what I hoped for when Google first showed off Project Glass:
Imagine being able to walk down the street, glance at a restaurant that you’re walking by, and have Glass immediately provide you with quick heads-up information about the location. Yelp reviews, phone numbers, and breakfast menus could be a glance away, and Glass eye-tracking could make it easier to get that information. You wouldn’t have to tap or speak; all you would have to do is take a look.
This is exactly the technology that Google has claimed in US patent 9,001,030, granted on April 7, 2015. The system uses a camera embedded in the device itself to take photos of the user’s eye via reflective prisms. The device would have two illumination sources that light up the eye in specific ways (labeled 136 in the patent figures), and the captured image would then be used to determine where in the scene the eye is looking. The current version of Glass can detect very subtle eye gestures like winks, but this patent aims to take that tech to a whole different level.
The patent details three different “paths.” The first is the display path, which shows how the eye sees what is being projected (like the Google Glass display); the second is the ambient path, which shows where the ambient (background) light comes from; and the last is the built-in eye-tracking path. As the patent figures detail, the eye-tracking path relies on a camera embedded in the device itself (labeled 124) and uses the reflective prisms—the same reflective prisms that are used to show the display—to take a photo of the eye.
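As a rough illustration only, a two-illuminator setup like the one described is commonly used for glint-based gaze estimation: the illuminators produce two reflections (glints) on the cornea, and the offset between the pupil center and those glints indicates where the eye is pointed. The patent doesn't spell out this math, so the coordinates, calibration gains, and function names below are all invented for the sketch:

```python
# Hypothetical sketch of glint-based gaze estimation; not from the patent.

def estimate_gaze(pupil, glint_a, glint_b, gain=(1.0, 1.0)):
    """Estimate a 2-D gaze offset from one eye image.

    pupil   -- (x, y) pupil center in image pixels
    glint_a -- (x, y) reflection of the first illuminator
    glint_b -- (x, y) reflection of the second illuminator
    gain    -- per-axis calibration factors (found by having the
               user look at known on-display targets)
    """
    # The midpoint of the two glints serves as a reference point on the
    # cornea that is fairly tolerant of small head movements.
    ref = ((glint_a[0] + glint_b[0]) / 2, (glint_a[1] + glint_b[1]) / 2)
    # The pupil-to-reference offset, scaled by calibration, approximates
    # where on the display or scene the eye is directed.
    return ((pupil[0] - ref[0]) * gain[0], (pupil[1] - ref[1]) * gain[1])

# A pupil sitting exactly on the glint midpoint gives a near-zero offset,
# i.e. the user is looking straight at the calibrated reference point.
print(estimate_gaze((100, 80), (98, 82), (102, 78)))  # (0.0, 0.0)
```

Real systems refine this with per-user calibration and 3-D eye models, but the core idea is just this pixel-offset mapping.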
Google bashes existing eye-tracking technologies in this patent, saying that they cause unnecessary bulk:
In some uses of a heads-up display, it can be useful to know what part of the scene a user is viewing. One way to accomplish this is through eye-tracking technology, but existing eye-tracking technologies have some disadvantages. Among other things, existing eye-tracking technologies use an optical path separate from the optical path used for the display, making the heads-up display more bulky and complex and less streamlined.
Google then goes on to detail how the eye-tracking technology will use images of the user’s eye to determine what part of the scene they’re looking at:
As the user sees ambient light from [the] scene, [the] camera captures one or more images of the user’s eye. In an embodiment with a secondary camera that captures an image of [the] scene, [the] computer can use the eye tracking data and scene images to tell what part of [the] scene the user is focused on, and can use the additional data, such as the user’s location established via GPS, for example, to provide information to the user about the scene they are looking at.
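To make that gaze-plus-GPS idea concrete, here is a toy sketch (my own, not from the patent) of how a gaze bearing combined with a GPS fix could pick out which nearby place the user is looking at. The point-of-interest list, function names, and the simple flat-earth bearing math are all assumptions for illustration:

```python
import math

# Hypothetical sketch: combine a GPS fix with a gaze bearing to guess
# which point of interest the user is looking at. All names and data
# are invented for illustration.

def bearing_deg(origin, target):
    """Approximate compass bearing from origin to target, both (lat, lon),
    using a flat-earth approximation good enough for nearby points."""
    dlat = target[0] - origin[0]
    dlon = (target[1] - origin[1]) * math.cos(math.radians(origin[0]))
    return math.degrees(math.atan2(dlon, dlat)) % 360

def poi_in_gaze(user_pos, gaze_bearing, pois, fov=15.0):
    """Return the name of the first POI within +/- fov/2 degrees of the
    gaze bearing, or None if nothing lies along the line of sight."""
    for name, pos in pois:
        # Smallest signed angle between the POI bearing and the gaze.
        diff = abs((bearing_deg(user_pos, pos) - gaze_bearing + 180) % 360 - 180)
        if diff <= fov / 2:
            return name
    return None

pois = [("Cafe", (37.4221, -122.0841)), ("Bookstore", (37.4219, -122.0855))]
user = (37.4220, -122.0850)
# Gazing along the bearing toward the cafe should pick out the cafe.
print(poi_in_gaze(user, bearing_deg(user, pois[0][1]), pois))  # Cafe
```

A shipping system would rank candidates by distance and match against the scene camera's image as well, but this captures the lookup the patent hints at.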
Interestingly, the patent also shows a device that has two heads-up displays, rather than the current generation model’s single display in front of the right eye. There’s no real evidence to suggest that this is what the company plans to do with the next version of Glass, but it’s interesting nonetheless.
Many problems with this configuration, safety chief among them, have been noted over the years, so I personally don’t expect that Google will solve those problems by the time the second-generation device is announced. That said, there’s definitely a possibility that Google plans to take a more virtual-reality-like approach with the upcoming iteration, in which case dual displays would make more sense.
Patents are always tricky, because corporations can and do patent endless technologies that they never use. But once in a while, a newly published patent might just give us an idea of where a company might be headed with one of its products.
Google says that it plans to introduce the next generation of Glass before the end of the year, but it wants to make sure that the device is a marketable, finished product first. Google is done testing the device out in the open, so for now, these patents are the only real clue we have as to where the Mountain View company aims to go with the future of its heads-up wearable display.