As Google Pushes the Final Pixel 4 Update, a Look Back at What Went Wrong (Premium)

I was transfixed by Google’s Pixel 4 launch presentation, which featured an incredible lecture on smartphone photography by distinguished engineer Marc Levoy. Unfortunately, this event is where it all went south for Pixel, both as a brand and technologically.

How instrumental was Levoy in making Pixel the best smartphone platform for photography? Well, he led the Google Research teams that created computational photography wins like HDR+, Portrait Mode (which somehow created background blur with a single lens), and Night Sight, among other accomplishments. These are technologies that competitors later raced to duplicate, often taking years to rise to the quality that Pixel had previously achieved.

But the problem for Levoy---and for Pixel---is that he made one major blunder that set the platform back at a crucial moment. It reminds me of a Steve Jobs decision in the very early 2000s to choose DVD over CD-R for the iMac at a time when the world wanted to create mix CDs: he chose telephoto over ultra-wide for the Pixel 4’s second lens.

Both decisions are classic examples of bad timing, but Apple, at least, scrambled to fix its mistake. Google? Not so much. In fact, the mistake set Google back so badly that the next generation and a half of Pixel handsets---including the Pixel 4a, Pixel 5, and Pixel 5a with 5G---featured no flagship-class products and didn’t move the needle on photography at all, giving Apple and Samsung the time they needed to push their own flagship smartphones into the lead, photographically. Pixel has never recovered.

To be fair, Pixel had more than its fair share of issues---hardware and software alike---before Levoy’s crucial mistake. But photography was the one thing that kept beleaguered Pixel fans---including me---coming back again and again to the otherwise buggy products.

Google’s computational photography prowess was described as the platform’s “secret sauce” at the Made by Google ’19 launch event at which the Pixel 4 debuted. And Levoy’s appearance at this event was incredible, a master class in which an expert at the height of his field descends from Mount Olympus to educate the masses and, in this case, explain why what Google was then doing was best-in-market.

The key to that success, of course, was computational photography, the “software-defined camera,” as Levoy called it. And this success pointed very clearly to how Google’s advances in AI and other computer sciences reaped real-world benefits for its customers. Benefits that, in this case, they could see with their own eyes. The first three generations of Pixel smartphones, for example, handily outperformed the competition, despite Pixel sticking with a single lens---for the most part; the Pixel 3 XL actually had two front-facing lenses for some reason---while others started adding additional lenses. Google’s software was better than its competitors’ otherwise superior hardware.

There’s...
