Google this week introduced a new set of Google Lens technologies that can visually identify objects in the real world.
Google CEO Sundar Pichai explained during this week's I/O keynote how the firm has made major strides in computer vision technology, and how these advances will impact a variety of its products and services. Google Photos is an obvious example: The app will soon be able to remove unwanted elements from photos in a way that previously required experience with powerful tools like Photoshop.
But just as impressive, Google has also announced a new "initiative" called Google Lens. Yes, I know this sounds like Office Lens at first, both because of the name and its vision-based functionality. But Google Lens is actually a lot more like Samsung's Bixby Vision, and provides general-purpose visual recognition. (It's also sort of a successor to Google Goggles.)
“Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information,” Pichai explained. “We’ll ship it first in Google Assistant and Photos, and it will come to other products.”
In a demo, Pichai showed how Lens might work: You point your phone's camera at an object, and Google Lens tells you what it is. In his example, he used a flower, and Google Assistant explained what kind of flower it was. But another demo got an even bigger cheer: You can point the camera at the sign-in information on the back of a wireless router, and Google Assistant will recognize the SSID and password and then actually sign the device into the network.
More impressive to me is Google Lens's ability to identify locations, like stores, out in the world: You will be able to point your phone at a store or other location, and a Google Maps card for that location will pop up, letting you learn more instantly.
Google Lens capabilities will be added to Assistant and Home “in the coming months,” Google says.