What is Google Lens and will it be useful?

Google is wrapping things up over at Google I/O 2017, and we wanted to create a separate post detailing one of the new features Google will begin integrating across some of its popular services. Towards the beginning of Google’s keynote, the tech giant’s CEO, Sundar Pichai, announced a new initiative called Google Lens.

As described by Sundar Pichai, Google Lens “is a set of vision based computing capabilities” that will provide users with information about what they’re looking at. The functionality takes advantage of years’ worth of deep learning that has improved visual recognition. Using a device’s camera, you’ll be able to point at something and be presented with information about what’s recognized. The technology will also collect bits of information such as location to help provide more accurate results. By knowing where you are, Google suggests, Google Lens will be more precise in determining what’s in the image.

The technology is coming soon and will be available across several of Google’s platforms, starting with the Google Assistant and Google Photos. So how does it work? Opening the Google Assistant and tapping the feature’s icon will bring up the camera within the voice assistant window. All you’ll have to do is point the camera at the object you want to learn about, and Google will automatically detect the subject and recognize what it is using image recognition and location data.
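To make the image-plus-location idea more concrete, here is a minimal, hypothetical sketch in Python. Nothing here comes from Google: the classifier output, the re-ranking bonus, and the “nearby places” data are all assumptions, used purely to illustrate how knowing where you are could tip the scales between two similar-looking guesses.

```python
from dataclasses import dataclass

@dataclass
class Guess:
    label: str
    score: float  # classifier confidence between 0.0 and 1.0

def rerank_with_location(guesses, nearby_place_types):
    """Nudge up any guess that matches the kinds of places around the user."""
    reranked = []
    for g in guesses:
        bonus = 0.15 if g.label in nearby_place_types else 0.0
        reranked.append(Guess(g.label, min(1.0, g.score + bonus)))
    return sorted(reranked, key=lambda g: g.score, reverse=True)

# Hypothetical classifier output: the storefront could be a bookstore or a cafe.
camera_guesses = [Guess("bookstore", 0.48), Guess("cafe", 0.45)]
nearby = {"cafe", "restaurant"}  # assumed data from location services

best = rerank_with_location(camera_guesses, nearby)[0]
print(f"Most likely subject: {best.label} ({best.score:.2f})")
```

In this toy example, the location hint flips the result from “bookstore” to “cafe,” which is the kind of precision boost Google says location awareness enables.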

If Google Lens properly identifies what’s being shown, it will even suggest related information based on what you’re looking at. For example, it could provide a link to learn more about a particular flower on Wikipedia. Or, if you point your phone at the front of a restaurant, Google could bring up shortcuts to helpful details such as the restaurant’s menu, ratings and website. These are just a few examples of how the feature will work; more actions will be supported as well, such as adding calendar events and listening to recommended songs, all without ever leaving the app. And in the future, it could handle more complex tasks for you automatically, such as connecting to a Wi-Fi network when its network name and password are shown to the camera.
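To illustrate the “recognized subject leads to quick actions” idea, here is a tiny hypothetical mapping in Python. The categories and action labels are our own assumptions for demonstration, not how Lens is actually implemented.

```python
# Assumed categories and shortcuts, loosely based on the examples above.
SUGGESTED_ACTIONS = {
    "flower":     ["Open Wikipedia article"],
    "restaurant": ["Show menu", "Show ratings", "Open website"],
    "song":       ["Listen to track"],
    "event":      ["Add to calendar"],
}

def actions_for(subject_category):
    """Return quick actions for a recognized category, with a generic fallback."""
    return SUGGESTED_ACTIONS.get(subject_category, ["Search the web"])

print(actions_for("restaurant"))  # ['Show menu', 'Show ratings', 'Open website']
print(actions_for("statue"))      # ['Search the web']
```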

As you can tell from the examples Google mentioned at its developer conference, Google Lens is well aware of what surrounds you. It can also be contextually aware of what’s being shown visually: not only could it detect what’s pictured, but also an action you may want to take with it. The same technology has been adopted in Google Photos. Two years ago, it became able to recognize a person’s face and group all the photos of that person neatly together. Now, thanks to machine learning, Google Photos is also capable of determining emotion, location and more, all from the same image. This is the driving force that has made Google Lens possible. Best of all, it doesn’t stop here. The more the service is used, the more Google Lens will learn, much like the Google Assistant’s voice technology, meaning it will be able to perform faster, more accurately and in greater detail.

In a few months, the Google Assistant will get a whole lot better through enhanced computer vision in the form of Google Lens. It will allow users to have natural conversations and ask follow-up questions about what they’re looking at, with the camera rather than typed words acting as the input. Point the camera at a sign in a foreign language, and Google Lens will instantly translate it for you through visual context. The primary purpose is to eliminate some of the hassle involved in switching between apps and typing everything out.
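Here is a self-contained sketch of the “point at a sign and translate” flow. Real Lens uses proper OCR and neural translation; in this sketch the OCR step is faked with a constant and the translation is a toy phrasebook lookup, purely to show the two-stage recognize-then-translate idea. All names are hypothetical.

```python
PHRASEBOOK = {"sortie": "exit", "poussez": "push", "tirez": "pull"}

def recognize_text(image_bytes):
    # Stand-in for an OCR step; in this sketch it simply returns a fixed word.
    return "sortie"

def translate(word, phrasebook):
    # Toy word-for-word lookup; real translation is far more sophisticated.
    return phrasebook.get(word.lower(), word)

sign_text = recognize_text(b"fake-image-data")
print(f"'{sign_text}' -> '{translate(sign_text, PHRASEBOOK)}'")  # 'sortie' -> 'exit'
```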

Google’s vision-based computing capabilities found in Google Lens will also spread across more of the company’s other services. Outside of the Google Assistant, Google Photos will be among the very first to embed the technology. In Google Photos, Google Lens will allow you to discover more about images you’ve already taken, and take action on them. Assuming you’ve given Google permission to store where photos were taken, Google Lens will be that much more accurate. Much like Google Now on Tap, tapping the new Google Lens button set to arrive in Google Photos with a future update will scan the image presented on-screen and give more information about an object such as a building or painting. Google will even detect more than one subject in each image, which is nice for panoramas and other photos shot at a larger scale. Screenshots work too, and Google Lens is also aware of text inside images. That means if there’s an address or phone number in a picture, Google Lens will pick it up and let you store that information on your phone, all without saying anything or leaving the app.
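A rough sketch of that last idea: once text has been pulled out of a photo or screenshot, simple pattern matching can surface actionable details such as a phone number. The OCR output below is hard-coded and the regular expression is only illustrative, not how Google does it.

```python
import re

# Pretend this string came out of an OCR pass over a photo or screenshot.
ocr_text = "Joe's Pizza - 123 Main St - Call (555) 010-4242 for delivery"

# Illustrative US-style phone pattern; a production system would be far stricter.
phone_pattern = re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

match = phone_pattern.search(ocr_text)
if match:
    # At this point an app could offer "Add to contacts" or "Call this number".
    print("Phone number found:", match.group())
```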

In a nutshell, some of these features aren’t entirely new or groundbreaking; they merely create simpler ways to get things done on the fly, and that’s the whole purpose of AI. Samsung’s virtual assistant, Bixby, already has a feature called Bixby Vision, which is similar to Google Lens in that it tries to detect objects shown visually and provide information about them when you need it. Amazon had a similar technology back in the day, and there are quite a few third-party apps that offer the same functions. Not to mention Apple, which will likely be the last to join the AI party later this year. Still, what we’re seeing out of Google has already blown past what competitors are offering: Google Lens is simple, always connected, and accessible on millions of devices with no installation even required (on Android). The way Google Lens performs is entirely different, and it will improve with time and usage. Google also plans to bring the same functionality to other core services. For now, we’ll be patiently waiting for these features to roll out over the next few months. And if you came here for the question in the title, the answer is yes; in our opinion, Google Lens will indeed be useful.


[See more of the latest Google I/O 2017 coverage]

SOURCE [Google Keyword]
