A big difference between the Alexa ecosystem and that of Google’s Assistant is how skills (or, in Google’s case, actions) are accessed. Google doesn’t require you to enable a specific action, or even invoke it by name, for a request to be completed – simply ask a question and the Assistant will query its database to see if there is a supporting intent.
Today, reports are coming out that Alexa will now recommend skills to its users when it finds that it cannot complete a specific action. This means we’re inching closer to the day when we no longer have to hear something along the lines of “Hmm, I don’t know that” or “I’m still learning,” as Alexa seems to be opening the door to learning from third-party directed information.
Long term, Amazon is moving toward a decentralized intelligence model. Users will be empowered to train their devices in ways that make sense to them, not just in the ways Amazon envisions. If Alexa can learn about and recommend third-party skills, it will also be able to get a better sense of how we use those skills. In turn, Alexa could eventually offer predictive assistance, understanding when it should step in before we even know we need it.
With over 15,000 skills and counting, one of the biggest challenges for brands and developers is discovery. Previously, that meant discovery through search and browsing in the Amazon skill store. With today’s announcements, we now understand the importance developers and designers will have in optimizing skill titles and descriptions so Alexa can better understand when to recommend a skill.
When creating a skill, it’s now important to test the various requests users might make that correspond to your skill. Doing so will help you determine whether Alexa is already recommending a skill or attempting to answer the question on its own. A thorough audit like this is a great way for brands to find white space in the Amazon voice ecosystem.