Deep Dive: The AI Innovations Across Google’s 2017 Devices Lineup

Posted on October 8, 2017 by Paul Thurrott in Android, Cloud, Mobile, Music + Videos, Smart Home with 52 Comments

This year, Google is taking a bite out of Apple.

I previously made the case that AI is how Google will win in devices. Now it’s time to get a lot more specific.

As you may recall, Google announced a new lineup of smartphones, Chromebooks, smart speakers, and other devices this past Wednesday. Many have criticized the new products as being responses to the competition, or just bland. This view is wrong-headed. Instead, the search giant was upfront about these devices’ collective, AI-based advantages. And this differentiation, I feel, is key: Google will use AI to win the next wave of personal computing.

That’s quite a claim, I know. And we’ll need to wait and see how Google’s various products and services perform in the market before we know whether this opinion holds up. But in the meantime, we can examine how Google is applying AI specifically to each of its newly announced devices. This is helpful, I think, to understand how serious Google is about using its core strength in the cloud to help advance its goals on the client. And to widen the gap between itself and Apple, and any other would-be competitors.

And there is a ton of information to look at here.

“[We] are radically rethinking how computing should work,” Google CEO Sundar Pichai said, opening the devices event. “In an AI-first world, computers will adapt to how people live their lives, rather than people having to adapt to computers.”

(Google is also using AI and machine learning to advance its core web services and mobile apps, of course. And many Google advances will make their way to third-party solutions via Android. Here, however, I’m focusing specifically on the “Made by Google” devices that the company just announced.)

“AI-first” allows people to interact with computers and other devices in a natural and seamless way, using conversations, gestures and vision. AI-first is also ambient, a term you’ve probably heard me use a lot in the past year or so. This means it is available to you everywhere, not just on a certain device. It is also contextual so that it understands you, your location, and your environment to give you the information you really need at the right time. And it is adaptive, learning and improving over time.

Here’s how Google is applying these techniques in its newly announced products, which it correctly describes as “radically helpful.”

Google Home and Google Assistant

For 2017, the Google Home hardware lineup is expanding past the original device to include a smaller and cuter Google Home Mini as well as a bigger Google Home Max, with its (apparently) superior sound. These two new products are aimed at filling out the product line and hitting new price points and audio performance, respectively. They’re about style and warmth.

So there’s nothing uniquely AI about the new Home devices per se, other than the Smart Sound feature for Max that tailors the speaker’s sound to individual rooms and even to the content you’re listening to. (Apple is doing this too, with HomePod.) But given the nature of this product family, there’s a lot going on here, AI-wise.

In fact, Google Assistant and the Google Home smart speakers it drives are, perhaps, the most obvious example of how this firm is using its AI expertise to improve real world products. And in just its first year in the market, Google Home has improved at a scale that is almost hard to fathom: It can now answer over 100 million additional questions.

The interactions you have with this device, or with Google Assistant generally, are of course very natural: You just speak normally and, in many cases, engage in a conversation. And now you can do so in far more places: Google has also worked to bring Google Assistant and Home to more countries, and to more languages, over the past year.

“Now, bringing the Assistant to people all around the world is no easy task,” Google’s Rishi Chandra noted, as he described the firm’s decade-long work in this area. “We had to make sure we could understand people of different age groups, genders, and accents. So we trained the Assistant at a scale that only Google could, with over 50 million voice samples from hundreds of different ambient environments.”

Google Assistant now features the best voice recognition in the market. And, unique among the entries in this field, it can recognize individual voices. So when I ask Google Home for my schedule, it gives me my schedule, not my wife’s. And when she asks the device for her reminders, she gets hers, not mine. This capability also works with hands-free calling, a feature that debuted first on Google Home: When you ask to call someone named “Paul,” it will be a Paul in your address book.

“An assistant can only truly be useful if it knows who you are,” Chandra said. And Google Assistant is the only assistant that offers this very important feature. It’s a huge differentiator.
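To make this concrete, speaker identification is usually framed as comparing a voice embedding computed from the incoming audio against embeddings enrolled per user. Here’s a minimal sketch of how that routing might work; the profile names, vectors, and threshold are all invented for illustration and say nothing about Google’s actual implementation:

```python
import math

# Hypothetical enrolled voice profiles: user -> voice embedding.
# Real systems derive embeddings from enrollment phrases via a neural net.
PROFILES = {
    "paul": [0.9, 0.1, 0.3],
    "stephanie": [0.2, 0.8, 0.5],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify_speaker(embedding, threshold=0.8):
    """Return the best-matching enrolled user, or None if nobody matches."""
    best_user, best_score = None, 0.0
    for user, profile in PROFILES.items():
        score = cosine(embedding, profile)
        if score > best_score:
            best_user, best_score = user, score
    return best_user if best_score >= threshold else None
```

Once the speaker is identified, the device can pull that user’s calendar, reminders, and contacts rather than a shared pool.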

And Google Assistant and the devices it powers are not standing still: They’re improving over time. Two key changes that just became available are tied to routines, which let the Assistant carry out multiple actions with a single command.

So Google Assistant now supports more routines—including such things as coming home in the evening and going to bed—and more actions.

In an example provided by Chandra, you might create a routine called “Good morning” that turns on the lights, starts the coffee maker, and fires up your daily briefing on the speaker(s) of your choice. Google Home has also picked up a “find my phone” feature that will ring your smartphone if you can’t find it. Just say, “OK, Google, find my phone.” (Yes, it works with that voice recognition functionality, ensuring that it will ring your phone, and not your wife’s.)
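Conceptually, a routine is just a lookup table that maps one trigger phrase to an ordered list of actions, each dispatched in turn. A toy sketch; the action names are made up, and this implies nothing about how Google actually stores routines:

```python
# Hypothetical routine table: one spoken trigger fans out to many actions.
ROUTINES = {
    "good morning": ["lights.on", "coffee_maker.start", "speaker.play_briefing"],
    "goodnight": ["lights.off", "thermostat.night_mode", "alarm.set"],
}

def run_routine(trigger, dispatch):
    """Look up a routine by trigger phrase and dispatch each action in order."""
    actions = ROUTINES.get(trigger.strip().lower(), [])
    return [dispatch(action) for action in actions]
```

In a real system, `dispatch` would call out to each device’s API; for a quick look at what fires, something like `run_routine("Good morning", print)` does the job.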

Google Assistant is also improving its support for smart home devices: It now supports over 1,000 different products from over 100 different companies. It can also interact with these devices more intelligently, letting you use simpler and more natural language commands like “make it warmer” (as opposed to setting a particular thermostat to a particular temperature). Google also talked up its Nest-branded smart home products at the event; see below.
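The trick with a command like “make it warmer” is that it’s relative: the assistant has to resolve it against the device’s current state rather than against a fixed target. A sketch of that resolution step, with an invented two-degree step size and a toy thermostat dict:

```python
# Hypothetical resolution of relative vs. absolute thermostat commands.
def apply_command(command, thermostat):
    """Adjust a toy thermostat dict in response to a natural-language command."""
    command = command.strip().lower()
    if "warmer" in command:
        thermostat["target_f"] += 2   # invented step size
    elif "cooler" in command:
        thermostat["target_f"] -= 2
    elif command.startswith("set it to "):
        # Absolute command: take the trailing number as the new target.
        thermostat["target_f"] = int(command.rsplit(" ", 1)[-1])
    return thermostat
```

The point is that the natural-language layer sits above per-device state, so the same phrase works no matter which thermostat (or brand) is behind it.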

Google Home is also picking up a new feature called Broadcast that lets you send audio messages to every Google Home device in your home. For example, “OK Google, broadcast that it’s time to leave for school.” And to further its usefulness for families, Google is integrating linked accounts for kids under 13 with Google Home. And it has improved the Assistant’s voice recognition to include children so that it can understand them too.

“We’re introducing over 50 new experiences with the Google Assistant to help kids learn something new,” Chandra explained. “Explore new interests, imagine with story time, share laughs with the whole family.” He then provided a few examples from his own family: “OK Google, play musical chairs,” “OK Google, beat-box me,” “OK Google, let’s play space trivia,” “OK Google, tell me a story,” and so on.

Yes, there will always be the complaint that Google’s technologies cross some line between useful and creepy, but that’s the point. This is an area where Apple is simply too sheepish to tread, and it doesn’t have the technical acumen to pull it off anyway. It’s as much a contributor to Google’s ongoing success as the actual technology.

But in this specific case, one can imagine complaints about Google raising our children or whatever other nonsense. As Chandra notes, though, these experiences take kids away from solo experiences attached to screens, and let them interact with each other, and with parents, in a group. It’s healthier than giving a kid an iPhone.

Google is partnering with Disney to bring that firm’s many entertainment experiences—like Mickey Mouse and Star Wars—to Google Home. And more broadly, it is opening up Assistant actions so that any third party can bring their own family- and kid-based experiences to the platform as well.

Nest

Google-owned Nest is unsurprisingly upping its game when it comes to Google Assistant integration. Nest recently (and ahead of the Google event) shipped six new hardware products, each of which combines machine learning and modern, thoughtful hardware design.

Nest’s Yoky Matsuoka provided a few examples.

For example, using Nest Cam in tandem with Google Home and Chromecast, you can keep an eye on the security of your home using just your voice. A command like “OK Google, show me the entry way” will be received by Google Home and the video from the Nest Cam will be streamed to the Chromecast attached to your TV. (You can also save a clip of the Nest Cam stream with “OK Google, save this clip” or similar.)

The Nest Hello video doorbell, meanwhile, will use Google’s facial recognition technologies to recognize people who are at the front door. So when the doorbell rings, it will broadcast through any Google Home devices and tell you who it is (if that person was recognized): “Aunty Suzie is at the front door.”

Finally, using the Google Assistant routine improvements I noted above, you can now include actions for Nest products too. So when you create a routine like “Goodnight,” it can include arming the home security system and turning on home monitoring cameras in addition to turning off lights, setting the thermostat, setting an alarm, reminding you about your first appointment the next day, and whatever else. Pretty impressive.


Pixelbook

Google’s newest Chromebook, the Pixelbook, is a “4-in-1,” or a convertible PC, as we’d call it in the Windows world. And it’s interesting on a number of levels. But from an AI perspective, the Pixelbook provides one major leap forward for all laptop-kind: It is the first Chromebook with Google Assistant built-in. It even adds a dedicated Assistant key to the Chrome OS keyboard for the first time. That way, you can access Assistant by typing instead of speaking, something that may be more acceptable in laptop-style productivity situations.

That stuff is obvious. But Pixelbook also offers unique Assistant interactions via the optional Pixelbook Pen.

“Just hold the Pen’s button and circle an image or text on the screen, and the Assistant will take action,” Google’s Matt Vokoun explained. “When you’re browsing through a blog, discovering a new musician, you can circle their photo, and the Assistant will give you more information about them. From there, you can go to their Instagram page, their YouTube channel, listen to their songs, and more.”

As with the little-used Cortana integration in Microsoft Edge on Windows 10, the Assistant can also be used to do research: Circle a word and get a definition and other information.

Pixel 2 and Pixel 2 XL

Google’s latest smartphone push rightfully received a lot of attention this week. But the big news, of course, was how the search giant will use AI to differentiate these products from what Apple, Samsung, and others sell.

“The playing field for hardware components is leveling off,” Google’s Rick Osterloh explained. “Smartphones [have] very similar specs: Megapixels in the camera, processor speed, modem throughput, battery life, display quality. These core features are table stakes now. Moore’s Law and Dennard scaling are ideas from the past. It’s going to be harder and harder for [companies] to develop exciting new products each year because that’s no longer the timetable for big leaps forward in hardware alone. And that’s why we’re taking a very different approach at Google.”

He then reiterated the company mantra that “the next big innovation will occur at the intersection of AI, software, and hardware.” So while smartphones can reach specs parity, Google’s devices will always have the edge because of the unique AI-based advances that it alone can deliver to users at scale.

The first generation Pixel handsets were the first smartphones to include Google Assistant. But they also revolutionized the end-to-end photos experience for users, thanks to a superior (in fact, best in market) camera with automatic HDR and video smoothing, free cloud-based storage for full-sized photos taken with the device, and a simple and elegant Photos app and service with instant search and an ever-growing list of features.

For Pixel 2, Google has done what it needed to do, hardware-wise, to make what it feels is a competitive device. For this discussion, of course, what I’m concerned with is the AI-based innovations only. And there are a number of them, above and beyond the obvious advancements to Google Assistant like Broadcast and the new routines and actions noted previously.

The first, however, is related to Google Assistant: On Pixel 2, you can squeeze the device as you hold it to more easily (and perhaps more naturally) summon the Assistant. There’s no need to say “OK, Google.”

The new Pixels include an integrated Shazam-like feature called Now Playing that is available from the always-on display: Just glance at the display, and you will see the name of the artist and the currently-playing song. Interestingly, this one uses on-device machine learning, and not a cloud service, which is a curiously Apple-like way of doing things. If you tap the song name on the display, Google Assistant fires up so you can learn more, add the song to a playlist in your preferred music service, or watch the video on YouTube.
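On-device song matching is typically done with audio fingerprinting: hash short windows of the signal and look for overlap with a compact local database, so no audio ever leaves the phone. A toy version of the idea; real fingerprints are built from spectral features, and the song titles and numbers here are placeholders, not Now Playing’s actual catalog or algorithm:

```python
# Toy audio fingerprint: hash fixed-size windows of the signal.
def fingerprint(samples, window=4):
    """Return the set of non-overlapping sample windows as hashable tuples."""
    return {tuple(samples[i:i + window])
            for i in range(0, len(samples) - window + 1, window)}

# Stand-in for the compact on-device song database.
SONG_DB = {
    "Redbone - Come and Get Your Love": fingerprint([1, 3, 2, 5, 4, 4, 6, 1]),
    "Daft Punk - Harder Better Faster": fingerprint([7, 7, 2, 2, 9, 1, 1, 3]),
}

def match(samples, min_overlap=1):
    """Return the best-matching song title, or None if nothing overlaps."""
    fp = fingerprint(samples)
    best, best_overlap = None, 0
    for title, song_fp in SONG_DB.items():
        overlap = len(fp & song_fp)
        if overlap > best_overlap:
            best, best_overlap = title, overlap
    return best if best_overlap >= min_overlap else None
```

The appeal of this design is exactly what the always-on display needs: the lookup is cheap enough to run continuously, and private by construction.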

Google is also bringing at-a-glance functionality to the Pixel 2 home screen, starting with calendar data. But commute and traffic information, flight status, and more are coming soon.

But the most startling AI-related advance related to the new Pixel 2s is an app called Google Lens. It will ship in preview form this fall on the Pixels and then will be made available to other Android devices in the future.

“Google Lens is a way to do more with what you see,” Google’s Aparna Chennapragada said during the devices presentation.

At a basic level, Google Lens works like other apps that try to understand the live world view that’s available via your smartphone’s camera. (For example, you can use Google Translate to view a menu in, say, Japanese, and see a live translation on the display in a sort of augmented reality view.) But Google Lens, of course, goes much further.

In a demo, Chennapragada showed how Google Lens could read phone numbers and email addresses from a flyer, which is useful. But it can also be used to call that number or email that address. It also works for mapping to physical addresses.

In another demo, Google Lens was used to identify the artist behind a print hanging framed on the wall. “Now you can just Lens it,” she said. She then used Google Lens to identify and learn more about a movie, a book, an album, and, most impressively, a Japanese temple in a personal photo from a trip five years ago.

“There are a lot of things happening under the hood, all coming together,” Chennapragada said.

Thanks to major breakthroughs in deep learning and vision systems, Google Lens can work in tandem with millions and millions of items stored by Google Search to understand what you’re looking at. And Google’s Knowledge Graph, with its billions of facts about people, places, and things, is called on to provide more information. This is exactly the type of thing that only Google can do effectively. And while it is still early days for visual recognition, Google’s track record on general search and voice recognition is well-established.
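The pipeline, as described, has two stages: a vision model turns pixels into a label, and the Knowledge Graph turns that label into facts. Here’s the shape of it with both stages stubbed out; the classifier and the graph entries are obviously fake stand-ins, not Google’s services:

```python
# Stand-in for the Knowledge Graph: label -> facts.
KNOWLEDGE_GRAPH = {
    "golden gate bridge": {"type": "landmark", "city": "San Francisco"},
    "shiba inu": {"type": "dog breed", "origin": "Japan"},
}

def classify(image_pixels):
    """Stand-in for a deep vision model: pretend bright images are the bridge."""
    return "golden gate bridge" if sum(image_pixels) > 10 else "shiba inu"

def lens(image_pixels):
    """Label the image, then enrich the label with knowledge-graph facts."""
    label = classify(image_pixels)
    facts = KNOWLEDGE_GRAPH.get(label, {})
    return {"label": label, **facts}
```

The split matters: the hard vision problem and the hard facts problem are solved by separate systems, which is why a better graph makes the same camera smarter.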

Google also uses AI to help improve the Pixel 2 cameras, as it did with the previous-generation devices. For this generation, the firm is adding a Portrait Mode feature that requires only a single camera lens—most smartphones need two to do this—to separate the subject from the background and create a compelling bokeh effect. The firm used over a million photos to train the machine learning algorithms that make this functionality possible, Google’s Mario Queiroz said. Also, Portrait Mode works on both cameras, unlike with other smartphones.
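The single-lens trick reduces to this: an ML model predicts a per-pixel subject mask, then the phone keeps masked pixels sharp and blurs everything else before compositing. A miniature version with a 1-D "image" and a hand-supplied mask standing in for the learned segmentation:

```python
def box_blur(pixels):
    """Blur a 1-D image by averaging each pixel with its immediate neighbors."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

def portrait(pixels, mask):
    """Keep subject pixels (mask True) sharp; composite a blurred background."""
    blurred = box_blur(pixels)
    return [p if m else b for p, b, m in zip(pixels, blurred, mask)]
```

In the real feature the mask is what the million-photo training run buys you; everything after that is ordinary image compositing.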

Accessories: Pixel Buds and Clips

While many of Google’s announcements this past week were spoiled by leaks, two were not. Both were for devices that are accessories for Pixel or other Android-based smartphones.

The first is a new pair of wireless headphones called Google Pixel Buds. They work like many other wireless headphones, of course. But with two wrinkles.

“When you pair your Pixel Buds to your Pixel 2, you get instant access to the Google Assistant,” Google’s Juston Payne noted. This enables voice control of various features like playing music, sending a text, or getting walking directions. “All while keeping your phone in your pocket,” he added. “It can also alert you to new notifications and read you your messages.”

And then he dropped the bomb. This is inarguably the most impressive thing that Google announced that day.

“Google Pixel Buds even give you access to a new, real-time language translation experience,” he said. “It’s an incredible application of Google Translate powered by machine learning. It’s like having a personal translator by your side.”

The live demo of this functionality is incredible to watch: Payne speaks in English to a Pixel Buds-wearing Swedish speaker. The Buds translate his speech into Swedish so she can understand it, and she then replies in Swedish. Her Pixel 2 smartphone speaks her words to Payne, translated to English. And as with the Babel fish from The Hitchhiker’s Guide to the Galaxy—which, by the way, is science fiction—a real and natural conversation occurs. It’s incredible.

The Pixel Buds provide real-time language translation functionality in 40 different languages.
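Under the hood this is a pipeline, not magic: speech-to-text on the speaker’s side, machine translation in the middle, text-to-speech on the listener’s side. A skeleton of one conversational turn, with every stage stubbed out (the two-entry phrasebook is a fake stand-in, not Google Translate):

```python
# Toy phrasebook standing in for a machine-translation service.
PHRASEBOOK = {
    ("en", "sv"): {"hello": "hej"},
    ("sv", "en"): {"hej": "hello"},
}

def translate(text, src, dst):
    """Translate known phrases; pass unknown text through unchanged."""
    return PHRASEBOOK.get((src, dst), {}).get(text.lower(), text)

def converse(utterance, speaker_lang, listener_lang):
    """One turn of the conversation: return what the listener hears."""
    text = utterance                                   # stand-in for speech-to-text
    translated = translate(text, speaker_lang, listener_lang)
    return translated                                  # stand-in for text-to-speech
```

Each direction of the demo is just this turn with the language pair flipped, which is why only one side needs the Buds: the phone can run the whole pipeline and play the result out loud.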

Finally, Google also showed off a new camera accessory called Google Clips. It’s basically a mini GoPro-type device that you can place in a room or space, or clip onto a child or pet, and have spontaneous scenes automatically recorded for you. Now, you can be part of the moment, and not just a bystander or family historian.

Google Clips looks fun. But the big news is its use of AI.

“Google Clips starts with an AI engine at the core of the camera,” Payne explained. “When you’re behind a camera, you look for people you care about. You look for smiles. You look for that moment that the dog starts chasing his tail. Clips does all of that for you. Turn it on, and it captures the moment … And it gets smarter over time.”

From a privacy perspective, all of the machine learning happens on the device itself (again, like Apple would do). Nothing leaves the device until you decide to share it.

Just as impressive, to me, is that Google was able to fit such ostensibly powerful machine learning capabilities into such a small device. Payne describes it as a “supercomputer.”

But then that’s Google in a nutshell: The supercomputer in a room full of normal computers.

And while I’m sure that Apple, Amazon, Microsoft, and others will be able to match some parts of what Google is doing here, it’s not clear to me that any of them can ever do it all. In fact, I’m sure they cannot. And that’s why this is all so impressive: Not any single announcement, but rather the weight, the scope, of it all.




Comments (53)


  1. maethorechannen

    I have my doubts about AI (or at least what's currently being touted as AI) really being the future of computing for one reason - the dismal record of these products getting released beyond the borders of the USA. When they do escape the confines of America it's usually limited to a small set of secondary markets with a third rate set of features. I just don't see how something so geographically restricted can really be considered to be the future of anything.

    Take the Pixel Buds. Translates 40 languages. Available in 6 countries.

  2. VancouverNinja

Pixel Buds - again, nothing new here. Microsoft already does real-time translation via Skype; MS’s demo of instant real-time Mandarin translation was mind-blowing, and it was demoed a few years ago already. Why hasn’t MS embedded this anywhere? And it’s useless unless both parties are wearing them. This is Google Glass all over again - useless.

Wearable camera that captures life’s precious moments - geez, that’s original. The twist: AI; the result: fail. No one is going to feel comfortable with people wearing cameras recording everything. It may be good for cops, but not for society overall.

Google continues to run at the windmill with other people’s ideas, and its take on them is weak; they lack fundamental need, viability, or acceptability. I see only a “let’s throw as much spaghetti at the wall and see if we can get some to stick” mentality. Google has too much money and not enough disciplined leadership.

I continue to say it: AI is nascent and it is too early to be declaring a dominant player, especially after goofy product launches like these.

    • Paul Thurrott

      In reply to VancouverNinja:

      Your reaction to Pixel Buds is ... nothing new here? Really?

    • prettyconfusd

      In reply to VancouverNinja:

      As seen in the demo: if the other person doesn't have pixel buds (or a phone capable of translating for them) your phone does all the work. Whatever they say is translated and played back in your ear and whatever you say is played back by your phone so they can understand you. Both people do not need a Pixel or Pixel buds.

      You're right that Microsoft demoed this a couple years ago and it was proper Star Trek level stuff. But what have they done with it since? Literally nothing. It hasn't appeared in Skype, it's not appeared in Windows, Cortana can't do it...

      Even as a dyed in the wool Microsoftie I have to admit Google's take on this is super impressive.

      For a while Google absolutely were just seeing what sticks but the past couple of years have been a revelation for them. Untethered from all the other stuff once Alphabet was formed Google really do seem to have turned a corner and while I wanted Microsoft to be the one to connect the dots it's clear they have simply resigned themselves to just not even attempting to compete, leaving Google with a wide open field.

      And this week they showed just how much they're taking advantage of the situation by leapfrogging Apple, Amazon, and Microsoft. Impressive and slightly terrifying stuff, haha!

  3. Tony Barrett

    As any developer will tell you - it's all in the software. Hardware is pretty much equal now, as was said. A small change here and there, but human hands are only so big. Software, and in this case, AI *is* the future, but to do that, you need your own platform to make it shine. You need a global infrastructure, developers on board, massive cloud back end, brand loyalty and consumers at the front end who will buy into it all. Google have all this in place - more than anyone else, so unless they make a calamitous mistake somewhere (a'la Microsoft - all the time), then they will lead this race to the next generation. Amazon will fight all the way, but they don't have the mobile presence to make Alexa shine. MS have big ideas, but are notoriously bad at executing them, and now don't have a mobile presence either. Apple are missing some key pieces, and keep relying on hardware for all their profits - and there's a problem brewing there.

    • Jorge Garcia

      In reply to ghostrider:

      IMO, you are EXACTLY right. The smart money is on Google, and this is 100% their race to lose. Amazon is going to offer up a very credible alternative, but too many people are going to wise up to their end-game. I don't think what I'm about to describe will ever happen, but I can almost imagine a future where Google keeps making its services so indispensable that one day they decide to use the nuclear option and "pull" their software from Apple products, thereby getting a certain large percentage of people to involuntarily leave their iPhones behind so as to not be left out of the loop. An inconceivable notion in 2017, I know, but at one time it was inconceivable that Atari and SEGA would not be around to challenge Nintendo.

  4. Stooks

    Wow fanboy alert! Did I just read an ad for Google?

    Which has higher sales....Chromebooks, smart watches or home assistants like Google home? I am going to say the home assistants are in last place.

    Let us not forget that Google is a ad company with 80+% of their revenue and profit coming from the sales of target ads. With all their "Cloud" power they are still in 3rd place behind Amazon and Microsoft when it comes cloud hosting.

I think what Google does in the AI world in 2017 is way better than anyone else, but all of them are still gimmicky at best. Everyone I know has a smartphone and has for many years now. None of them, while around me, ever uses AI. I have tried, and all of them are just too slow and not as accurate as typing into search or opening contacts/favorites or whatever.

    Will AI be so good to be used by everyone in the future....possibly but I think it is way too early.

What is probably going to happen sooner than AI becoming common is some serious government regulation and new privacy laws. It is happening in Europe now. After the Equifax breach, I think the US government is finally getting the fact that new regulations/laws need to be seriously considered. People are losing faith in Silicon Valley, with weekly data breaches. Google will be a big fat target of any government regulation, and that can only impact their AI/data gathering.

    • Paul Thurrott

      In reply to Stooks:

      Yeah, it's a 3000 word advertisement. Sigh.

    • rob4jen

      "Which has higher sales....Chromebooks, smart watches or home assistants like Google home? I am going to say the home assistants are in last place."

      I think there is an undercurrent of momentum for assistants that's about to explode, possibly as soon as this Christmas. My wife's supper club was over last week and a group of 40-something women were pretty much in awe of my wife's use of the Google Home. They called me in to do a 15 minute demo and Q&A on it. To a one, they expressed interest in buying one - especially with the low $50 entry price of the Home Mini and the color options (coral, mainly) now available. And these are not at all tech-focused women.

      If my experience is broadly relevant, we could see many more of these things in people's homes soon.

      • Stooks

        In reply to rob4jen:

We have one, an Echo. We all thought it was great at first. Asking it about the weather, finding all the Easter eggs, connecting it to our Nest devices; my wife even connected her Outlook dot com calendar to it briefly. We have Amazon unlimited music as well, so it is a good fit.

        After the initial wow factor wore off people in my house are checking their phones for the weather and only using the Echo for music. Only my youngest son really uses it now and to be honest only when his friends are over so they can "play" with it.

Of your group of 40-somethings, I doubt more than 10 will actually buy one. Of those, how many will really use it to even a minimum of its potential? To fully use most of these devices, you have to be fully in their ecosystems. Google Home is not going to work great if you use iCloud/Apple Music or Outlook dot com/Office 365. Yes, if you are all in on Google it will work the best.

People need (mostly) a computer and a smartphone. After that, it is degrees of convenience. Things like tablets, smartwatches, and AI assistants. Tablet sales have dived and probably never will sell like they did in the past. Smartwatches, mostly because of current tech, have leveled off. I think you are right that these assistants will be the 2017 holiday electronic gizmo gift like tablets and smartwatches were in the past. I doubt this will be the case in 2018.

  5. RobertJasiek

    Surely AI is important and currently Google is relatively good at it. E.g., AI might help to find images and sparing the user most of the currently necessary time for websearching them. E.g., AI might create images not available in search. Same for other kinds of data.

However, there is more to AI than just possibly great help. AI involves data protection and, even more important, freedom of will. We must distinguish between offline and online AI services - privacy and integrity of data are or are not easily protected against surveillance and abuse by the companies. Is AI our slave, or are we the slaves of AI and the companies offering it? This fundamental question has the greatest implications for the freedom of peoples and mankind.

    AI used wisely can be a great help: play a game against a super-strong opponent at any time; find / get images immediately instead of having to spend endless time for the search / creation. AI used carelessly can end in mankind's extinction when AI is attached to self-replicating, uncontrollable machines.

Soon, it will be irrelevant what Google can do and does; what matters is that we, the peoples and mankind, keep control over AI. Innovations are irrelevant, but ethical questions are central to what AI will be to us.

  6. Jules Wombat

This is rather narrow. Google may dominate home/domestic intelligence and hence AI, but in business, Microsoft absolutely owns the means to exploit enterprise-level data. Hence Graph, Office, Microsoft Analytics, LinkedIn, etc. integration. Microsoft AI services will succeed in business and enterprise applications, and that revenue could be as great as, or greater than, domestic/consumer use.

BTW, what has Mickey Mouse done in the last forty years?

    • Tony Barrett

      In reply to Jules_Wombat:

      MS are going to double-down on the enterprise. It's what they know, and with things like software assurance, it's money for nothing for them. The problem is, the enterprise isn't going to lead the AI revolution, in which case MS will never, ever lead the pack again. Doing well in Enterprise is nice, but it's not a good footing for the next 30 years.

I can see Microsoft's overall market share continuing to shrink as Windows falls away and more and more people use only mobile. MS are throwing everything they can think of at Windows 10 to keep it interesting, but it doesn't get away from the fact that consumers will never look at Windows like they look at their iOS/Android phone.

  7. timothyhuber

    Thanks Paul. This type of coverage and analysis is what makes me happy to subscribe to your site. I'm very interested to see how this plays out.

    We have a couple Google Home devices: one in the kitchen and one in my office. My wife has been very reticent to use them at all. When I demonstrated using it to make a phone call she insisted she would never use it. It wasn't until I had her use it to find her phone last week that she showed some interest. I caught her using it as timer a couple days later. The more natural the interactions with these devices, the easier they will be for people to engage with.

    • wolters

      In reply to timothyhuber:

      I've considered converting from Echo to Google Home. My main hangup with Google Home is voice commands for music. Amazon Echo is so much better at it. On the echo, I can say "Shuffle My Playlist R.E.M." and it will do it...Google home gets confused. If I tell the echo to Shuffle all of my music, it does whereas Google home will not. I do hope they improve this.

      • timothyhuber

        In reply to wolters:

        Yes, using voice to start music playing is a bit of a weak point. It gets confused very easily and I usually end up using my phone to cast.

        I'm using Chromecast Audio devices throughout the house and those are fantastic. To gain those features, I'm willing to overlook the voice issues while they figure it out.

        I'm also running into challenges with multiple users and third-party smart home devices, like Hue lighting. Things work fine on the Home, but only with my phone, not my wife's. Granted, she wouldn't use it as much, but it's the little things that require workarounds that stop her from fully embracing things.

  8. skane2600

    The bar for what is considered "AI" today seems to have dropped below ground.

  9. Payton

    All of this sounds pretty much like how those of us who grew up on Star Trek think that computers should operate. The problem for me is that the way this is implemented (other than a couple of places where, "for privacy," things happen locally) seems to allow Google to build an incredibly rich, detailed, and intimate picture of our lives, families, habits, likes, etc., all stored on their servers and theirs to do with what they will. Just where did those millions of pictures used for training their AI come from, anyway? Google is, first and foremost, not a search company or an AI company, but an advertising company, and this information is their product. No telling where they will go with it or what they might do with it in the future. Let alone who else might gain unauthorized access to it.

    Don't get me wrong, I want my computers to be able to do all this stuff. I just want it to be MY computers that know me so well, not Google's.

  10. nbplopes

    The article is mostly a reiteration of Google's presentation. The tone ... everything. Hummm.

    I was expecting a real world experience on this.

    The thing I liked the most from a technical perspective is voice differentiation. A key capability to enable voice interfaces on shared devices.

    As for the rest of the AI stuff ... I need to check if this stuff works well enough. My experience is that if wrong answers or non-responsive behavior occurs regularly ... I stop using it. It's like a mouse regularly getting stuck or a keyboard that ...

    Lovely. lovely.

    • Paul Thurrott

      In reply to nbplopes:

      This is what they announced. I'm looking forward to using it and seeing how it works in real life.

      • nbplopes

        In reply to paul-thurrott:

        Looking forward to reading about your experience.

        You are spot on in clearly identifying where Google is heading in terms of investment ... AI, AI, and more AI. It's believable because, considering the company's history, they seem to have a clear vision deep from the gut. It is not just opportunism; they will really invest loads of money, loads and loads, until they get it working really well. Unlike some other company I know that clearly approaches things with an opportunistic mindset ... the result is far less satisfactory ... should I say ... frustrating?

  11. Michael Rivers

    I wonder how many of the Google Assistant things (aside from the squeeze to activate) will make it to my old Pixel XL 1. It would be kind of crappy if that stuff is only available on the new hardware, or if they make us wait a long time for it.

  12. Todd Northrop

    Perhaps the worst thing about Google's assistant seems like something stupid -- you have to say "OK Google". It is the most unnatural, awkward way to start a conversation with a computer. You are basically speaking to a company, not an assistant. (Imagine saying "OK Microsoft" to invoke Cortana.)

    You can't even say "OK Google" without the words stumbling out of your mouth half the time.

    There may be some who respond with typical hipster snark to this, but it's a real problem.

  13. tbsteph

    Looks like Google has found an Assistant.

  14. chrisrut

    Paul, read this with near joy... Been waiting for decades. See "Forbidden Planet."

    It's what I expected from Microsoft... Still hard to believe every big player hasn't figured out this confluence of software, hardware, and AI... But, anyway, wow...

  15. chrisrut

    "An assistant can only truly be useful if it knows who you are." Exactly. And I expected this from MS. It is the key to authentication beyond our obsolete password-based technologies. It could be the "big bang" that leverages Google into the enterprise.

    • wright_is

      In reply to chrisrut:

      It can't replace passwords, because it is identity, not security. Voice, fingerprint, face, etc. are identity; if they can be forged, they are useless as password replacements, because you cannot change them! They should be seen purely as username replacements.

  16. mike moller

    Paul - can you please do something to control those damned webinar ads from Veeam that keep overlaying every page of your stories.

    It's absolutely maddening and its intensity really borders on spam.

    I even went to the trouble of registering for their webinar yesterday in the hope that might suppress this persistent annoyance. Guess what? It's done it on three more pages today!

    I'm reaching the point of giving up on

  17. Hifihedgehog

    Much if not all of the examples of AI features Thurrott posted are not exclusive to Google in the slightest. Yet he prefaces his article as if they were. That sounds very Apple-like, I am afraid. In this regard, his posts are becoming more and more clickbait-ish in nature. The sad thing is he has come to actually believe his extreme claims. Like they say, pride comes before the fall.

    • Paul Thurrott

      In reply to Hifihedgehog:

      I call out the handful of things that are Apple-like in that they use on-machine AI. But as noted, Google is one of only a few firms that has both deep-seated AI expertise in the cloud and the dominant devices play. Some other firms only have pieces of that.

      This is an assessment, not clickbait. There is no such thing as a 3,000-word clickbait article. Sorry if that's too obvious.

    • Stooks

      In reply to Hifihedgehog:

      "his posts are becoming more and more clickbait-ish in nature"

      Indeed and his love of all things Google is getting old fast. I don't come here for Google coverage as there are much better places for that.

  18. mortarm

    >...include children so that it can understand them too.

    Hopefully, there will be provisions for restricting voice commands.

    >...voice-free calling

    Great for mutes, I suppose.

    >“Aunty Suzie is at the front door.”

    "Aunty Jin" would've been a nice nod to Wings' "Let 'Em In."

    >...compelling bokeh effect.

    I assume you're referring to depth-of-field, as well.

Leave a Reply