A year ago, Google announced its Assistant personal digital assistant technology and the Assistant-powered Home appliance. This year, both products are being significantly upgraded.
Both products are key to Google’s version of this “ambient computing” future I’ve been discussing lately, where you can interact with technology using normal conversations. For this to work, Google Assistant needs to be available everywhere you are, so Home is a key part of the strategy, too, since you can place these devices around your home.
Google Assistant is a no-brainer, of course, but a year ago I felt that Google Home would likewise be quite successful: After all, Google already dominates mobile computing and consumer cloud services. That didn’t happen: The initial version of this product seemed incomplete when it finally shipped in late 2016. And I returned mine without even opening the box.
But credit Google with quickly improving Assistant and Home with new skills and capabilities. Even before Google I/O, these products had improved in major ways; in fact, Google has added over 50 new features since the product launched. But Google also put Home and Assistant front and center at this week’s I/O conference. And things have improved dramatically yet again.
During the Google I/O keynote address, CEO Sundar Pichai revealed that the company’s speech recognition error rate has fallen below 5 percent. And this technology has gotten so good that Google Home can now support multiple users, providing a personalized experience for everyone.
Later in the keynote, Google VP of engineering Scott Huffman provided a lot more detail, noting that Assistant is now available on over 100 million devices (most of which are phones, of course). And 70 percent of the queries that Assistant handles are now natural conversations, not just the keyword searches that are more typical with Google Search. And as we see with other personal digital assistants, like Cortana, many queries are follow-ups, which continue a conversation.
Huffman walked through a number of new features coming to Google Assistant this year. Among them:
Support for typed queries. One of the issues with conversational computing, of course, is that it’s not always comfortable or possible to speak out loud. So Google is adding the ability to type to Assistant on the phone.
Google Lens support. Assistant will also work with Google Lens so it can have a conversation about what you’re looking at. In a neat demo, Huffman showed how Assistant can use Lens to understand what the camera sees and then act on it conversationally.
Available on iPhone. In keeping with Google’s desire to make Assistant more ambient—that is, available everywhere—the firm has ported the solution to the iPhone.
Available everywhere (coming soon). And speaking of “everywhere,” Google is going to help others bring Assistant to any product via a new Google Assistant SDK. “Speakers, toys, drink-mixing robots,” whatever. Sony, JBL, Panasonic, LG, Anker, Polk, Bang & Olufsen, and many others have already signed up. Hey, Sonos. Seriously.
Support for many more languages. And here is the area where I really hope Microsoft is paying attention: Starting this summer, Assistant will roll out in French, German, Brazilian Portuguese, and Japanese on Android and iPhone. By the end of 2017, Assistant will also support Italian, Spanish, and Korean.
More skills in more places. Like Amazon Alexa, Google Assistant is extensible with third-party skills—which Google calls actions—that dramatically enhance this solution over time. So there are obviously more skills becoming available all the time, but the big news here is that these skills are now available on Android and iPhone in addition to the Home appliance.
Transactions. Google is rolling out what it calls a complete system for transactions so that you will be able to buy things from Assistant now too. In a demo, the presenter used Assistant to set up a food delivery from Panera using an interesting conversational transaction.
Home automation integration. Assistant already supports smart home devices, and now there are over 70 companies that have signed on to integrate their products with this technology.
From here, Google moved on, naturally, to Google Home, its own home-based smart device. And as with Assistant, it had some interesting announcements about new Google Home features and functionality. These include:
Broader availability. This summer, Google Home will become available in Canada, Australia, France, Germany, and Japan. (It’s in the US and UK now.)
Proactive assistance. Starting this summer, Google Home will automatically notify you when something important happens. These cues can be visual—a light indicator on the device to prompt you to ask what’s up—but will expand over time.
Hands-free calling. As with Amazon Echo, Google Home will soon support free hands-free calling, offering an interesting replacement for a land line. You will even be able to link your mobile number so that outgoing calls are identified correctly.
More entertainment. Spotify is already available on Google Home if you have a paid subscription, but now the free version of the service is exclusively available on Google’s device. Google is also adding support for Soundcloud and Deezer and, more important, is bringing Bluetooth support to all existing Home devices. That means you will be able to play any audio from your Android handset or iPhone. Google is adding partners like HBO NOW, Hulu, HGTV, and many others as well, so you can direct Chromecast-based playback from the Home device.
Visual responses. Google Home doesn’t have a screen, so Google is integrating its hands-free functionality with the smartphone screens we already have. Now you will be able to see visual results on that screen—or on a tablet or your TV, using a Chromecast—that accompany the voice responses to your conversations. One great example: You can ask for directions and the results will appear on your phone screen.
This stuff is super-impressive. And it really highlights the danger of Microsoft’s wait-and-see approach to this technology.