Google first introduced us to Project Soli last year as miniature radar hardware that allows gesture control of devices. Earlier this year, it somehow managed to squeeze the tech into a smartwatch. A research team at the University of St Andrews in Scotland has now expanded Soli’s smarts, allowing the radar to identify objects as well as gestures, building it into a device it calls RadarCat.
“We have used the Soli sensor, along with our recognition software, to train and classify different materials and objects, in real time, with very high accuracy […] Our studies include everyday objects and materials, transparent materials and different body parts.”
While this work was previewed at Google I/O earlier this year, the team has now made the full paper available, together with a longer video, below …
The team said that while Soli needs to be taught to recognize each object, this is not as big a problem as you might think, as it explained to The Verge.
Professor Aaron Quigley compares it to music CDs: “When you first started using them, you put in the CD and it would come up with the song list. That information wasn’t recorded on the CD, but held in a database in the cloud, with the fingerprint of the CD used to do the lookup.” Once the information has been introduced to the system, says Quigley, it can be easily distributed and used by anyone. “And the more information we have about various radar fingerprints, the more we can generalize and make inferences about never-before-seen objects.”
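To make the lookup analogy concrete, here is a minimal sketch in Python. The `FINGERPRINT_DB` table, its feature values, and the `lookup` helper are all hypothetical stand-ins, not anything from the Soli SDK or the RadarCat paper; it simply shows the idea of matching a measured fingerprint against a shared database of known ones.

```python
import numpy as np

# Hypothetical shared lookup table mapping known radar "fingerprints"
# (feature vectors) to object labels. Names and values are invented
# for illustration only.
FINGERPRINT_DB = {
    "aluminum sheet": np.array([0.91, 0.12, 0.40]),
    "steel sheet":    np.array([0.88, 0.35, 0.22]),
    "glass (empty)":  np.array([0.15, 0.70, 0.55]),
    "glass (full)":   np.array([0.10, 0.95, 0.48]),
}

def lookup(fingerprint: np.ndarray) -> str:
    """Return the label of the closest known fingerprint, much like
    matching a CD's fingerprint against an online track database."""
    return min(FINGERPRINT_DB,
               key=lambda label: np.linalg.norm(FINGERPRINT_DB[label] - fingerprint))

print(lookup(np.array([0.89, 0.30, 0.25])))  # closest entry: "steel sheet"
```

Once an object’s fingerprint is in such a database, every device doing lookups benefits, which is the point of Quigley’s CD comparison.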
The system works by detecting not just the physical shape of an object, but also its internal structure and rear surface. This allows it to tell different materials apart, such as steel and aluminum. It is sufficiently accurate that it can tell whether it is looking at the front or back of a smartphone, or whether a glass is empty or full.
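As a rough illustration of how such a system could be trained, the sketch below fits an off-the-shelf random-forest classifier on made-up radar feature vectors. The feature names and numbers are invented; the team’s real pipeline extracts far richer features from the raw Soli signal, as detailed in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors standing in for processed radar returns:
# [surface reflection strength, signal penetration, rear-surface response].
# All values are invented for illustration.
X_train = np.array([
    [0.90, 0.05, 0.30],  # steel: strong reflection, little penetration
    [0.85, 0.10, 0.45],  # aluminum: similar surface, different internals
    [0.20, 0.80, 0.60],  # empty glass: much of the signal passes through
    [0.15, 0.60, 0.20],  # full glass: the liquid absorbs part of the signal
])
y_train = ["steel", "aluminum", "glass (empty)", "glass (full)"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify a fresh, unlabelled return; with these toy numbers the
# nearest training example is the steel one.
print(clf.predict([[0.89, 0.06, 0.31]]))
```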
You can watch the video below and download the full paper if you’d like to know more.