Google Pixel 9 is at the Nexus of Hardware, Software, and AI (Premium)

Gemini says hi, on the Pixel 9 Pro XL

When it comes to the Pixel 9 series, it’s a bit cheap to write something like, “It’s all about the software” or even, “It’s all about the AI,” even though there’s plenty–perhaps too much–to discuss there. But these bon mots are even more pointless than usual because they assume either that the hardware is so lackluster that we need to focus on something else, or that the hardware is so good we can simply move on to the software and AI sides of the discussion.

Pixel makes doing either … complicated. But ultimately, it’s about all three. Hardware, software, and AI.

Hardware

If we’re being honest, 15 years after Google started making its own smartphones, the only certainty is uncertainty. There have been extreme highs and lows. If we limit this discussion only to the Tensor era–meaning Pixel 6 series and newer–the highs and lows are a bit less extreme, thankfully, indicating that overall quality is up. But there have been issues. I don’t go into a new iPhone worried about this kind of thing, but I’ve been bitten by Pixel too many times to not be a bit apprehensive.

The first two Pixel Pro models of the Tensor era–predecessors of today's Pixel 9 Pro XL–were compelling but flawed, with lower pricing than their Apple and Samsung flagship competitors. More specifically, the Pixel 6 Pro had an unreliable in-display fingerprint reader paired with useless facial recognition, some display issues, oxymoronically slow fast charging, and other problems. The Pixel 7 Pro fixed some, but not all, of those issues, and it retained the same stellar $899 starting price while unfortunately also retaining the ridiculous curved display edges and too-slow fast charging. It wasn't until the Pixel 8 Pro that Google got it right, for the first time delivering the complete package, with faster charging, reliable and secure facial and fingerprint recognition, and other improvements. The starting price had gone up by $100 to $999, but the only notable downside was its lackluster battery life.

The Pixel 9 Pro XL raises prices yet again, to $1099, and I assume Google would justify that by pointing out that it now sells a smaller Pixel 9 Pro that retains the $999 starting price. But the pricing advantage is gone, and that makes for a more perilous buying decision. The iPhone 15 Pro and Pro Max start at the same $999 and $1099 price points, and while Samsung's Galaxy S24 flagships are priced higher, they're always on sale, they come with more base storage, and Samsung offers terrific trade-in values. Whatever the math, Google is now competing head-to-head with the market leaders, and no matter how good the Pixel 9 series is, I can't say it's earned this pricing structure. It still feels like a risky bet to ask the typical consumer to make.

My Google case arrived today

On paper, anyway. But as I observed last night while installing apps and configuring the Pixel 9 Pro XL, Google has achieved something interesting here. The device is overtly premium, and even more overtly iPhone Pro-like, with its flat glossy metal sides and their subtly curved edges, its high-quality matte back, and a terrific-looking camera bump that's as functional as it is attractive. Even the antenna-friendly gaps and cutouts on the edges look like those on an iPhone Pro. From almost any angle–other than directly from the front, where the small cutout camera hole betrays its heritage, or directly from the back, where the iconic, centered camera bar does the same–this looks and feels exactly like an iPhone Pro.

Or at least it did until I put the case on it.

This is uncanny, and I have to go back to the original Pixels with their me-too, iPhone-like designs to find a Google phone that so closely mimics what is clearly its biggest competitor and inspiration. I realize there are only so many ways to create a slab of glass and metal, and that similar devices are going to look similar. But this isn’t just similar, it’s almost exactly the same. So I can only conclude that it’s deliberate. And that I couldn’t care less. Just as I wanted the Surface Laptop 7 to be “a MacBook Air, but running Windows 11,” it’s reasonable to believe there are Android fans, probably many of them, who are envious of Apple’s hardware but want nothing to do with iOS or Apple’s walled garden. So this makes sense: Pixel has always played the same role in phones as Surface does in the PC market.

Software

And it’s not just the hardware: That’s true of the software, too. In the same way that Surface provides that pure Windows 11 experience, unadorned by crapware and extraneous apps that PC makers need to survive, Pixel provides, if not a “pure” Android experience, at least an optimized Android experience that delivers “the best of Google” in ways that its partners will not. You may or may not like Samsung devices, for example, but even its biggest fans will immediately see the unfortunate side effects of how that company’s strategic aims diverge from those of Google. That relationship is messy.

In any event, yes, at some point, we must discuss the software. And that software, in 2024, includes a growing selection of AI functionality, spread throughout the system more thoroughly than Microsoft has managed with Copilot+ PCs, but no less haphazardly. But there's a big difference between Android and its competitors, Windows and iOS: These additions don't represent a sea change, as they do in Windows 11 and soon will in iOS 18.1 and beyond. What some people are forgetting, based on the reviews I've seen so far, is that Google has been doing this all along. That is, AI has been central to the Pixel experience from Day One.

It wasn't always marketed that way, of course. But Google was performing its computational photography magic from the beginning. (Before then, really. Just look at the photo quality we got from the Nexus 6P a year earlier.) That first Pixel was hindered by image processing performance problems that Google later solved with custom camera hardware–perhaps the company's first-ever use of on-device AI, as we would call it today–and then, later still, no longer needed that hardware at all.

The Pixel 2 XL delivered a superior camera experience, yet again, but was undermined by unrelated display issues. The big advance there was Google's first foray into on-device AI with its Pixel Visual Core image processor: By offloading image processing from the CPU and GPU, Google could improve performance and efficiency and perform machine learning (ML) functions–what we'd now just call AI, for better or worse–even when offline.

Backed by this advance, the Pixel 3 XL forged ahead with a single rear camera a year later, again delivering a terrific experience thanks to computational photography. (Oddly, that phone featured two front-facing cameras. Google has always been quirky, and sometimes where it chooses to focus feels a bit off-center.) Unfortunately, Apple and Samsung were starting to catch up by this point with their camera features. The Pixel’s low-light performance was still superior, but Apple had better panorama features and Samsung offered a solid (for the day) portrait mode with software-based bokeh.

Not coincidentally, this is when things got interesting. The following year, Google released its first-ever A-series phones, which were less expensive but still delivered on the "best of Google" promise by overcoming the limitations of the hardware via software, much of which we'd call AI-powered today. The Pixel 3a XL remains one of my favorite Pixels of all time, a high among the highs, if not the apex in terms of value.

The Pixel 4 XL was a curious misstep, and another example of the firm's sometimes skewed focus. But the most compelling thing about this device, as always, was the camera. And its launch marked the first time that Google so overtly touted its computational photography expertise, with Marc Levoy delivering what I called "a master class in which an expert at the height of his field descends from Mount Olympus to educate the masses and, in this case, explain why what Google was then doing was best-in-market."

AI

The Pixel 4 XL wasn't just notable for photography. With this release, Google really pushed how it was using AI broadly throughout its phones. It included a Pixel Neural Core chipset, replacing the Pixel Visual Core chip in previous handsets with a more general ML/AI accelerator that went beyond imaging but could still work offline, a kind of predecessor to today's NPUs. It was used specifically for new audio features in Google Assistant and, new to this product, the Recorder and Live Transcribe apps. Notably, these features were marketed much as local, on-device AI via an NPU is today.

By the time the Pixel 5 arrived in 2020, Google's AI aspirations had run into the cold, hard wall of reality. This phone launched with Call Screen and Hold for Me, interesting AI-based features related to the phone's, um, phone functionality. And expanding its silicon design capabilities–it also made its own TPM-like Titan security chip by this point–Google was designing an AI-forward processor for its phones, but was running into problems. Using mid-tier Qualcomm processors, as it had for its low- and mid-range Pixel 4a and Pixel 4a 5G handsets, was one thing. But using one in an alleged flagship didn't sit well with customers, nor did its continued use of the same basic phone design. (I ended up sitting out that generation.)

Still, the software was interesting. Google had kept expanding into more "helpful" phone features–Personal Safety with car crash detection, automatic song detection, and more–that set its phones apart from the competition. And many of these used on-device and cloud-based AI. The Pixel 5a continued down the same path as its predecessors–same tired design, same tired camera lens, same mid-tier processor, but with a low price–but by this point, the list of helpful features Google had added to Pixel had grown considerably. And Google was augmenting Pixel each quarter with new Feature Drops that added even more new features, many of them AI-based.

The first Feature Drop had arrived in 2019, for the Pixel 4 series. The second added better software-based color pop and depth effects in photos, and the third added Adaptive Battery improvements. Then things escalated: A December 2020 Feature Drop brought Hold for Me to more handsets, added AI-based image editing capabilities to Photos, introduced Adaptive Sound and Adaptive Connectivity, and improved Adaptive Battery. Pixel users got Smart Compose in messaging apps in March 2021, and then astrophotography videos that June.

Tensor brings it all together

And then the Pixel 6 series happened, belatedly introducing the Google Tensor processor, with its integrated CPU, GPU, and NPU capabilities–a long-awaited perfect storm of hardware, software, and AI. Android, suddenly, had gotten a lot better. And Pixel Android was better still, with "ungodly accurate" Google Assistant voice typing, improved spam and call screening capabilities, Direct My Call, Call Decline, real-time Live Translate throughout the phone, and a lot more.

Tensor was interesting–and still is–mostly because Google specifically optimized these chips for AI rather than for general performance. Day-to-day performance was fine–and still is–but not at the level found on flagship Qualcomm Snapdragon chips, and there are still questions about whether this approach was the right one. But Google hasn't backed down from this path, and it has released a new Tensor chip with each passing Pixel generation.

The Pixel 8 Pro arrived in late 2023 with the Tensor G3 and a host of new AI features, including web page summaries, Call Screen improvements, and more. But the big news was that it would be the first smartphone to ship with an on-device small language model (SLM), called Gemini Nano. This let the phone's NPU run inference against a local, offline-capable AI model instead of relying on a cloud model, with its inherent latency and availability limitations. And it powered new features like Magic Compose in Messages, Summarize in the Recorder app, and Smart Reply in Gboard. What started with a custom chip evolved into an improved custom chip, then an NPU, and now an NPU with an on-device SLM.
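
To make the on-device part concrete, here's a minimal sketch of how an Android app might run a prompt against Gemini Nano locally, assuming Google's experimental AI Edge SDK (the com.google.ai.edge.aicore package); the class and parameter names below come from that experimental API and could change as it matures, so treat this as illustrative rather than definitive. The key point is that the generateContent() call runs through AICore on the device itself, with no network round trip.

```kotlin
import android.content.Context
import com.google.ai.edge.aicore.GenerativeModel
import com.google.ai.edge.aicore.generationConfig

// A minimal sketch, assuming the experimental Google AI Edge SDK
// (com.google.ai.edge.aicore). Inference runs through AICore against
// the on-device Gemini Nano model, so no cloud round trip is needed.
suspend fun summarizeOnDevice(appContext: Context, transcript: String): String? {
    // Configure local generation; these knobs mirror cloud-side generation
    // settings, but everything below executes on the device.
    val config = generationConfig {
        context = appContext     // required by the on-device runtime
        temperature = 0.2f       // low temperature: stay close to the source text
        topK = 16
        maxOutputTokens = 256
    }

    val model = GenerativeModel(generationConfig = config)

    // This suspend call never leaves the device, so it works offline
    // and adds no network latency.
    val response = model.generateContent(
        "Summarize the following recording transcript:\n$transcript"
    )
    return response.text
}
```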

Google had initially restricted Gemini Nano and those new features to the 8 Pro because the non-Pro Pixel 8 didn't have enough RAM to provide a good experience, though it eventually relented. But with the Pixel 9 series, Gemini Nano has evolved to be multimodal, supporting images, sounds, and spoken language, and not just text. And so all of the Pixel 9 series models ship with much more RAM than before to ensure that each can use this new functionality.

And the resulting improvements are particularly interesting, with the list of on-device AI capabilities now growing rapidly as well. The Pixel 9 series supports Add Me and Super Res Zoom Video in the Camera app, Pixel Studio for image generation, a Pixel Weather app with AI forecasts, Pixel Screenshots for finding information you've squirreled away in screen captures, and more (plus Gemini Live, which uses the cloud). It's reasonable to expect coming Feature Drops to expand on these capabilities and bring them to more devices.

On the other side of the fence, Apple is of course prepping Apple Intelligence in its usual slow-boil way. This approach has its advantages; indeed, I respect the way Apple is handling it, given how chaotic AI is with Microsoft, OpenAI, and Google. But AI will remain an inherent advantage for Google, Android, and Pixel: Google has been at this for so long, and has integrated so much AI into its ecosystem, that its AI can no longer be viewed as a set of individual features, but rather as part of the fabric. There are disadvantages, yes, and Google's approach is perhaps too aggressive. But that's the choice we have before us: full-featured and a bit chaotic with Google, or measured and limited with Apple.

Which, when you think about it, parallels the basic value proposition of each ecosystem. The more things change, the more they remain the same. You can live in the wild west or in the walled garden. Your choice.
