AR Steals the Show at WWDC

Virtual tape measures, Memoji, tongue detection, and more were all part of Apple’s AR updates announced at this year’s WWDC (Worldwide Developers Conference). As part of ARKit 2, developers will be able to upgrade their experiences to include improved face tracking, realistic rendering, 3D object detection, persistent experiences, and shared experiences.

The first few of those updates make for better-looking experiences, whereas the latter two completely shift the paradigm of what is possible in an AR environment. Multiplayer experiences will be big for gaming, allowing multiple users to interact with each other in their own synthetic layer of reality. Apple will surely aim to bring this functionality to other aspects of its software – one can only imagine shared data visualization, product demonstrations, educational experiences, and much more.
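Shared experiences rest on ARKit 2’s ability to capture a session’s world map and hand it to another device. The sketch below is a rough illustration of that flow, assuming ARKit 2’s ARWorldMap API with MultipeerConnectivity as the transport; the function names are illustrative rather than Apple’s.

```swift
import ARKit
import MultipeerConnectivity

// Rough sketch: capture the current world map and send it to connected peers,
// so every device resolves anchors against the same shared coordinate space.
func shareWorldMap(from arSession: ARSession, over mcSession: MCSession) {
    arSession.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true) else {
            print("World map unavailable: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
    }
}

// On a receiving device, relaunch tracking anchored to the shared map.
func joinSharedExperience(_ session: ARSession, withReceivedMapData data: Data) throws {
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```

Once both devices share the same map, anchors added on one should appear in the same physical spot on the other.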


Persistent experiences will turn AR applications from one-off use cases into environments that offer an incentive to return. An easy way to visualize this: if you were to hang up a virtual painting one day, you could return the next day and that painting would still be there. Unless, of course, your son or daughter went into that same virtual environment and decided to draw all over it.
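Persistence leans on the same world map, archived to disk rather than sent over the network. A minimal sketch, again assuming ARKit 2’s ARWorldMap API; the save location and helper names are illustrative.

```swift
import ARKit

// Rough sketch of persistence: archive today's session so the same anchors
// (e.g. that virtual painting) can be restored the next time the app launches.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true) else { return }
        try? data.write(to: url, options: .atomic)
    }
}

func restoreWorldMap(into session: ARSession, from url: URL) throws {
    let data = try Data(contentsOf: url)
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```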

Memoji are Apple’s take on the personalized avatar, following Samsung and many multiplayer games before them. In this case, however, Apple is letting you bring your Memoji into the camera – overlaying your virtual avatar onto your physical body. It’s somewhat of a Snapchat-like effect, which should lead to more native sharing within the Messages app.
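Memoji itself isn’t scriptable by developers, but the face tracking that drives it is exposed through ARKit. Below is a minimal sketch of reading blend-shape coefficients from the TrueDepth camera, assuming ARKit’s ARFaceAnchor blend shapes (including the tongueOut coefficient behind the tongue-detection demo); the class name is illustrative.

```swift
import ARKit

// Minimal sketch: read face-tracking blend shapes (TrueDepth camera required).
final class FaceExpressionReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // Coefficients run from 0.0 (neutral) to 1.0 (fully expressed).
            let smile = faceAnchor.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
            let tongue = faceAnchor.blendShapes[.tongueOut]?.floatValue ?? 0
            if tongue > 0.5 {
                print("Tongue out (\(tongue)); smile: \(smile)")
            }
        }
    }
}
```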


With all these AR updates, Apple needed to make it easier for creatives to develop content for them. It did this through a new file format for AR called USDZ. Developed in partnership with Pixar, the new format will make it easier to create and share AR concepts. Better still, Adobe will be integrating USDZ support into its suite of Creative Cloud applications. Developers will be able to natively edit AR designs and objects within software they already know and love.
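On the consumption side, a USDZ asset can be dropped straight into an ARKit scene. A rough sketch, assuming SceneKit’s scene loader accepts .usdz files on iOS 12; the asset name and helper function are illustrative.

```swift
import ARKit
import SceneKit

// Rough sketch: load a bundled .usdz asset and attach it to an AR scene.
func placeModel(named name: String, in sceneView: ARSCNView) {
    guard let url = Bundle.main.url(forResource: name, withExtension: "usdz"),
          let scene = try? SCNScene(url: url, options: nil) else { return }

    // Wrap the asset's nodes in a single container so it can be positioned as one object.
    let container = SCNNode()
    scene.rootNode.childNodes.forEach { container.addChildNode($0) }
    container.position = SCNVector3(0, 0, -0.5)  // half a meter in front of the session's origin

    sceneView.scene.rootNode.addChildNode(container)
}
```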

To cap it off, Apple made a few nods to web-based AR integrations, which could entirely change how AR content is consumed. Within Safari, virtual products will soon be instantly viewable in a user’s physical environment, and that capability will eventually evolve into full-on AR experiences, negating the need for one-off apps and reducing the friction involved in accessing them. AR has long been hyped as the next great computing platform, and while we’re still not there yet, Apple has certainly provided some exciting updates as we work towards that day.

Implications

For brands and marketers, AR can seem like a shiny object that looks cool but doesn’t necessarily deliver value. As more and more users gain the native ability to access an augmented world, and developers are given more tools to create the objects that inhabit it, that will no longer be the case.

It will soon be table stakes for ecommerce sites to offer AR views of their products, and experiences will become inherently personalized as live data is brought in. AR is an extremely exciting category, and it will soon be an integral part of digital experiences as computing moves from the mobile phone into the space around us.

At Google I/O, AI Takes the Forefront

The main takeaway from this year’s Google I/O was the company’s push to put AI at the forefront of its apps and services. In order to focus on its core mission of effectively organizing the world’s information, Google is putting the power of machine learning into the hands of the masses.

Democratizing AI

While Google certainly had a few exciting product announcements, the reveal of a cloud computing service that provides access to a powerful AI chip, called a TPU, may prove to be the most disruptive. Once the service launches towards the end of the year, any business or developer will be able to access Google’s data centers and build software with TensorFlow (Google’s open source software for running neural nets) on these new processors. These TPUs will be able to both run and train neural nets, greatly reducing the time and input needed to create applications for image and speech recognition, translation, robotics, and much more. Additionally, Google announced the Google Assistant SDK, which will allow manufacturers to build the Google Assistant into their devices.

[Image: Google’s TensorFlow accelerator for AI]

Putting this power into the hands of individuals and third parties will help drive the trend of AI-based applications and further home in on what the major use cases of this technology will be. As always, empowering consumers to create the products they need is a huge driver of large-scale adoption.

Google’s Assistant at the Center of Its Home

Google announced a number of internal AI improvements to its Home and Assistant that may profoundly shift how we connect and explore. Lately, there has been a lot of talk about how our fundamental interactions with computers are changing. Namely, voice and vision have quickly become some of the primary ways we interact with our devices. Google is using its Home, and by extension its Google Assistant, to drive this trend forward.

The goal of the Google Assistant, or any digital assistant for that matter, is to simplify all the technology in your life and help you get things done. The interactions with these assistants must be natural, and they should be able to occur wherever is most convenient for you. That’s why, on top of its natural language processing (NLP) capabilities, Google is offering the ability to type directly to the Assistant for those instances where you’re just not comfortable talking out loud. Additionally, iOS users will now be able to download a Google Assistant app, in what appears to be a direct shot at Siri usage. For consumers who live in the Google-verse but love the simplicity of an iPhone, this may completely change how they access mobile computing.

Google did not downplay the importance of the shift from reactive assistance to proactive assistance, which has recently been demonstrated by features in Facebook’s M virtual assistant. Google touched on this by announcing that proactive assistance will be coming to the Google Home. Not only will the Assistant know what’s on your calendar; it will also, for example, analyze traffic ahead of your events and notify you if it thinks you should leave earlier. It’s all about getting out in front of consumers, before they even think they need assistance.

Assistant vs. Echo

In an effort to keep up with Amazon, Google opened up its Assistant to support payments. While Alexa’s capabilities are largely tied to Amazon’s own products, Google is able to open its Assistant up to the wider market. Using the Google Assistant, consumers can now order food, products, and services without installing an individual app, setting up an account, or re-entering payment information. For brands, this reduction in friction is very valuable for new customer acquisition and retention. If your company is primarily consumer facing, it’s imperative that your customers be able to reach and interact with you on this platform and in that way.

[Image: Google Home]

Last week, we were introduced to the Echo Show, Amazon’s version of a voice-plus-visual interface. Google recognized that visuals are an important complement to its Assistant, but has opted to use consumers’ current screens instead of giving them a new one. In practice, one might interact with their Google Home and then get visual responses directly on their phone or Chromecast-enabled television. Other examples include directions sent to your phone, in-depth weather on your TV, voice-activated Netflix and YouTube playback, and much more. Voice plus visual is a huge part of how we as consumers will interact with our devices in the future.

A last note on this comparison: Google’s Assistant will now be able to call any mobile or landline number in the U.S. or Canada. Users have the option to link their existing number or stay private. The Home can recognize multiple voices, so if my dad asks it to call ‘Mom’, it will call his mother (my grandma). However, if I ask it to call ‘Mom’, it will call, well, my mom. For those who said the home phone is dead, this is the second home-based communication device we’ve seen in the past two weeks. It’ll be interesting to see whether consumers find these applications easy enough to use to drive large-scale adoption.

Computer Vision

When it comes to image recognition, Google claims that its neural nets are now more accurate than humans, and it’s backing that claim up with the release of Google Lens. First rolling out in the Assistant and Photos, Google Lens is a set of vision-based computing capabilities that let users get on-demand, location-based information just by pointing the camera at an object, a Wi-Fi router, a restaurant, a menu, a brochure, and much more. After taking the shot, Google Lens works with Photos to store and identify the information in the pictures.

Essentially, what this boils down to is that Google Lens is an entirely new way to activate Google’s primary business function: search. Take a picture of a restaurant and immediately get reviews, hours, a menu, and even reservation options. Take a picture of a sign in a different language, and Google will translate it on the spot and then let you pick the conversation up with the Google Assistant. For a company that was built on helping people discover information, Google just made it that much easier to get relevant info on what’s around you in your daily life.

[Image: Google Lens]

In the grand scheme of things, all of these AI and vision-based technologies are building up to an always-on, contextually relevant AR platform. Today’s announcements around positional tracking for standalone VR are just the beginning of what we’re going to be able to accomplish in the immersive experience space. Eventually, by combining the power of visual and language processing with Google’s immense treasure trove of consumer data, it wouldn’t be a stretch to think that Google Glass 2.0 might be able to do what it originally promised. However, a name change may be in the cards.