Search by Image lets users input an image, and Google then offers images and search results related to it. Users can submit an image via drag-and-drop, by uploading a file, or by entering a URL. The Knowledge Graph, meanwhile, is a new technology that lets Google return results for the concepts behind a query and the links between them, rather than just the literal query terms.
Software Engineer Sean O’Malley explained the integration on Google’s Inside Search blog today:
With the recent launch of the Knowledge Graph, Google is starting to understand the world the way people do. Instead of treating webpages as strings of letters like “dog” or “kitten,” we can understand the concepts behind these words. Search by Image now uses the Knowledge Graph: if you search with an image that we’re able to recognize, you may see an extra panel of information along with your normal search results so you can learn more. This could be a biography of a famous person, information about a plant or animal, or much more.
Google wants to improve its image search. Previously, uploading a photo of a specific type of flower would return generic flower results; now, Google will try to identify the exact type. Google will also surface the most recent content in search results, which is especially helpful for news images.
“Finding more information about an image is the most common use of Search by Image. Very often this information is found on websites that contain either your image or images that look like it,” O’Malley wrote. “We’ve made recent improvements to our freshness, so when photos of major news stories start appearing on the Internet, you can often find the news stories associated with those photos within minutes of the stories being posted. We’ve also expanded our index so you can find more sites that contain your image and information related to it.”
Search by Image with Knowledge Graph was touched upon at the Google I/O developers conference last week, when Director of Google Apps Product Management Clay Bavor showed off Google Drive and Chrome for iOS and Android. The Mountain View, Calif.-based company’s effort to improve Search by Image results also surfaced in a recent New York Times piece about Googlers who built a simulation of the human brain that identifies cats in YouTube videos.
Researchers created “one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.” More specifically, Google fed the “brain” 10 million images taken from YouTube videos of cats. The network eventually assembled a composite digital image of a cat from the general features it extracted across the millions of images it identified. Google noted the method could eventually prove useful in image search, speech recognition, and language translation.
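The unsupervised principle described above — a network discovering features from unlabeled images on its own, with no one telling it what a “cat” is — can be illustrated with a toy sketch. This is not Google’s system: it is a minimal single-hidden-layer autoencoder in Python/NumPy, and all of the data, dimensions, and hyperparameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, hidden=8, lr=0.5, epochs=200):
    """Train a tiny autoencoder to reconstruct its own unlabeled input.

    The hidden layer is forced to find a compact set of features that
    describe the data -- learning structure without any labels.
    """
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden))   # encoder weights
    W2 = rng.normal(0, 0.1, (hidden, d))   # decoder weights
    losses = []
    for _ in range(epochs):
        H = sigmoid(X @ W1)                # encode: hidden "features"
        Y = sigmoid(H @ W2)                # decode: reconstruction
        err = Y - X
        losses.append(float(np.mean(err ** 2)))
        # backpropagate through both sigmoid layers
        dY = err * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= lr * (H.T @ dY) / n
        W1 -= lr * (X.T @ dH) / n
    return W1, losses

# 200 unlabeled 16-"pixel" binary images, each a copy of one of
# four hidden prototype patterns the network has never been told about
prototypes = (rng.random((4, 16)) > 0.5).astype(float)
X = prototypes[rng.integers(0, 4, 200)]

W1, losses = train_autoencoder(X)
print(f"reconstruction MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

As training proceeds, reconstruction error falls because the hidden units converge on the prototype structure hidden in the data — a scaled-down analogue of features emerging from millions of unlabeled YouTube frames.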
- Google explains what its Panda algorithm is looking for when it ranks search results (9to5google.com)
FTC: We use income earning auto affiliate links.