At Google I/O, AI Takes the Forefront

The main takeaway from this year’s Google I/O was the company’s push to put AI at the forefront of its apps and services. In service of its core mission of organizing the world’s information, Google is putting the power of machine learning into the hands of the masses.

Democratizing AI

While Google certainly had a few exciting product announcements, the reveal of a cloud computing service that provides access to a powerful AI chip, called a TPU (Tensor Processing Unit), may prove to be the most disruptive. Once the service launches toward the end of the year, any business or developer will be able to access Google’s data centers and build software with TensorFlow (Google’s open-source software for running neural nets) on these new processors. These TPUs will be able to both run and train neural nets, greatly reducing the time and input needed to create applications for image and speech recognition, translation, robotics, and much more. Additionally, Google announced the Google Assistant SDK, which will allow manufacturers to build the Google Assistant into their devices.

[Image: Google’s TensorFlow accelerator for AI]
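To make that concrete, here’s a minimal sketch of what building and training a small neural net with TensorFlow looks like, using its Keras API. The data and layer sizes below are placeholders, not anything Google demonstrated; the point is that the same code can run on a CPU, a GPU, or, through the new cloud service, a TPU.

```python
import numpy as np
import tensorflow as tf

# Placeholder training data: 1,000 examples of 20 features each with
# binary labels. A real workload would load an actual dataset.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,))

# A small feed-forward neural net defined with the Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Training runs on whatever accelerator is available -- CPU, GPU, or,
# on Google's cloud service, a TPU.
model.fit(x_train, y_train, epochs=5, batch_size=32)
```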

Putting this power into the hands of individuals and third parties will help drive the trend of AI-based applications and further home in on what the major use cases of this technology will be. As always, empowering consumers to create the products they need is a huge driver of large-scale adoption.

Google’s Assistant at the Center of Its Home

Google announced a number of AI improvements to its Home and Assistant that may drive a profound shift in how we connect and explore. Lately, there has been a lot of talk about how our fundamental interactions with computers are changing. Namely, voice and vision have quickly become some of the primary ways we interact with our devices. Google is using its Home, and by extension its Google Assistant, to push this trend forward.

The goal of the Google Assistant, or any digital assistant for that matter, is to simplify all the technology in your life and help get things done. Interactions with these assistants must be natural, and they should be able to happen wherever is most convenient for you. That’s why, on top of its natural language processing (NLP) capabilities, Google is offering the ability to type directly to the Assistant for those instances where you’re just not comfortable talking out loud. Additionally, iOS users will now be able to download a Google Assistant app, in what appears to be a direct shot at Siri. For consumers who live in the Google-verse but love the simplicity of an iPhone, this may completely change how they access mobile computing.

Google emphasized the shift from reactive assistance to proactive assistance, something recently demonstrated by features in Facebook’s M virtual assistant. To that end, Google announced that proactive assistance is coming to the Google Home. Not only will the Assistant know what’s on your calendar; it will, for example, analyze traffic ahead of your events and notify you if it thinks you should leave early. It’s all about getting out in front of consumers before they even realize they need assistance.
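Google hasn’t detailed how this works under the hood, but the core logic is easy to sketch: compare an event’s start time against a live travel estimate and fire a notification once the departure window opens. Everything below is illustrative, and the function and parameter names are invented.

```python
from datetime import datetime, timedelta

def should_notify(event_start: datetime,
                  travel_estimate: timedelta,
                  buffer: timedelta = timedelta(minutes=10)) -> bool:
    """Return True once the user needs to leave soon to arrive on time.

    travel_estimate would come from a live traffic source in practice;
    here it is simply passed in.
    """
    leave_by = event_start - travel_estimate - buffer
    return datetime.now() >= leave_by

# Example: a 3:00 PM meeting with 45 minutes of current traffic.
meeting = datetime.now().replace(hour=15, minute=0, second=0, microsecond=0)
if should_notify(meeting, timedelta(minutes=45)):
    print("Heads up: leave now to make your 3:00 PM meeting.")
```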

Assistant vs. Echo

In an effort to keep up with Amazon’s progress, Google has opened its Assistant to support payments. While purchases through Alexa are largely confined to Amazon’s own ecosystem, Google is opening its platform to the broader market. Using the Google Assistant, consumers can now order food, products, and services without installing an individual app, setting up an account, or re-entering payment information. For a brand, that reduction in friction is very valuable for new-customer acquisition and retention. If your company is primarily consumer-facing, it’s imperative that your customers be able to reach and interact with you on this platform and in that way.
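To illustrate why that matters for a brand, here’s a hypothetical fulfillment handler for a voice-ordering action. All of the names and request/response shapes below are invented for illustration; the real platform APIs are more involved. The point is that identity and payment come from the platform, not from an app install or signup flow.

```python
import uuid

def charge_with_stored_payment(user_id: str, item: str) -> dict:
    """Stand-in for the platform's payment call (invented for illustration)."""
    return {"order_id": uuid.uuid4().hex[:8], "user": user_id, "item": item}

def handle_order(request: dict) -> dict:
    """Hypothetical fulfillment handler for a food-ordering voice action."""
    item = request["parameters"]["item"]   # e.g. "large pepperoni pizza"
    user = request["user_id"]              # identity supplied by the Assistant

    # The platform holds the payment method, so the brand never collects
    # card details or forces a separate account signup.
    receipt = charge_with_stored_payment(user, item)
    return {"speech": f"Done! Your {item} is on the way. "
                      f"Order number {receipt['order_id']}."}

print(handle_order({"user_id": "u123",
                    "parameters": {"item": "large pepperoni pizza"}})["speech"])
```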

[Image: Google Home]

Last week, we were introduced to the Echo Show, Amazon’s version of a voice-plus-visual interface. Google recognizes that visuals are an important complement to its Assistant, but it has opted to use consumers’ existing screens instead of giving them a new one. In practice, you might interact with your Google Home and get visual responses directly on your phone or Chromecast-enabled television. Other examples include directions sent to your phone, in-depth weather on your TV, voice-activated Netflix/YouTube playback, and much more. Voice plus visual is a huge part of how we as consumers will interact with our devices in the future.
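A rough sketch of that routing decision, with invented names: voice-only answers stay on the speaker, while anything visual gets pushed to a linked screen.

```python
def route_response(response: dict, linked_screens: list[str]) -> str:
    """Decide where a Home-style device should send its answer.

    Purely illustrative: voice-only answers stay on the speaker, while
    visual payloads (maps, video, forecasts) go to a linked screen such
    as a phone or a Chromecast-enabled TV.
    """
    if response.get("visual") and linked_screens:
        return linked_screens[0]   # e.g. prefer the phone, then the TV
    return "speaker"

print(route_response({"visual": "directions_map"}, ["phone", "living_room_tv"]))
# -> phone
```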

One last note on this comparison: Google’s Assistant will now be able to call any mobile or landline number in the U.S. or Canada. Users have the option to link their existing number or stay private. The Home can recognize multiple voices, so if my dad asks it to call ‘Mom’, it will call his mother (my grandma). However, if I ask it to call ‘Mom’, it will call, well, my mom. For those who said the home phone is dead, this is the second home-based communication device we’ve seen in the past two weeks. It will be interesting to see whether consumers find these features easy enough to use to drive large-scale adoption.
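The ‘Mom’ example implies a simple rule worth spelling out: the Assistant resolves a spoken name against the recognized speaker’s own contacts. A toy illustration, with invented names and numbers:

```python
# Illustrative only: how multi-voice support lets one Home resolve the
# same spoken name to different contacts depending on who is speaking.
CONTACTS = {
    ("dad", "mom"): "+1-555-0101",   # Dad's mother (Grandma)
    ("me",  "mom"): "+1-555-0102",   # my mom
}

def resolve_contact(speaker: str, spoken_name: str) -> str:
    """Look up a contact in the recognized speaker's own address book."""
    return CONTACTS.get((speaker, spoken_name.lower()), "unknown")

print(resolve_contact("dad", "Mom"))  # -> +1-555-0101 (Grandma)
print(resolve_contact("me", "Mom"))   # -> +1-555-0102
```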

Computer Vision

When it comes to image recognition, Google claims its neural nets are now more accurate than humans, and it’s backing that up with the release of Google Lens. Rolling out first in the Assistant and Photos, Google Lens is a set of vision-based computing capabilities that lets users get on-demand, location-based information just by pointing the camera at an object, Wi-Fi router, restaurant, menu, brochure, and much more. After you take the shot, Google Lens works with Photos to store and identify the information in your pictures.

Essentially, what this boils down to is that Google Lens is an entirely new way to activate Google’s primary business function: search. Take a picture of a restaurant and immediately get reviews, hours, a menu, and even reservation options. Take a picture of a sign in a different language, and Google will translate it on the spot and then let you pick the conversation up in the Google Assistant. For a company built on helping people discover information, Google just made it that much easier to get relevant info about what’s around you in daily life.

[Image: Google Lens]
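Lens itself is a consumer feature rather than a public API, but Google’s Cloud Vision API exposes the same kind of image understanding to developers. Here’s a minimal sketch using the google-cloud-vision Python client; it assumes API credentials are configured and that a local storefront.jpg exists.

```python
# Requires the google-cloud-vision package and configured credentials.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("storefront.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask for labels and any text (e.g. a sign or a menu) in the photo.
labels = client.label_detection(image=image).label_annotations
texts = client.text_detection(image=image).text_annotations

for label in labels[:5]:
    print(f"{label.description}: {label.score:.2f}")
if texts:
    print("Detected text:", texts[0].description)
```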

In the grand scheme of things, all of these AI and vision-based technologies are building toward an always-on, contextually relevant AR platform. Today’s announcements around positional tracking for standalone VR are just the beginning of what we’ll be able to accomplish in the immersive-experience space. Eventually, by combining the power of visual and language processing with Google’s immense treasure trove of consumer data, it wouldn’t be a stretch to think that Google Glass 2.0 might be able to do what it originally promised. However, a name change may be in the cards.
