Ask Paul: August 11 (Premium)

Welcome to Macungie, PA!

Happy Friday! Here’s an incredible set of reader questions, many tied to important debates, to get the weekend started a bit early.

Theories about the future

AnOldAmigaUser asks:

On FRD this morning you were discussing Brad’s daughter having to use a Chromebook. You mentioned that kids growing up with Google apps would expect to use them at work, and that the Office Web apps would not really be equivalent in their eyes.

Yes.

I would argue that what young people expect and what they will get in the corporate world are two different things. Any company that has been using Office for 40 years or so is not going to change. I would also say that for the vast majority of users, Google Workspace apps and Office Online apps are equivalent, because the vast majority of users do not take advantage of the bits at the bleeding edge where one or the other has advantages.

I’m kind of surprised by how much debate there is around this, but of course we all have our own theories about what will happen in the future, and these theories are based on our own experiences. I’ve expressed this and related sentiments many times. In my timeframe, I witnessed some fascinating transitions in IT: the advent of personal devices (especially the iPod at first) with USB connections, which triggered old-school IT pros to literally superglue USB ports shut; the resulting “consumerization of IT” movement; the transition from heavy-touch, on-prem PC and device management to light-touch and then fully cloud-based Mobile Device Management (MDM) solutions like Intune; the Bring Your Own Device (BYOD) movement, which combines personal and work data on a single device with selective remote wipe capabilities; the rise of Google Workspace (Docs, etc.) and inexpensive, lightly managed ad-hoc apps (Notion, Slack, etc.) to counter what’s now called Microsoft 365; the Work From Home (WFH) push during the pandemic; the resulting hybrid work scenarios we now enjoy/endure; and so on. And so every time someone says to me, Paul, you don’t get it, IT will never allow that (whatever “that” is), I almost have to laugh, because IT has been allowing more and more for over two decades. And my belief, my theory, based on this experience, is that this will never change. That is, IT is all about change. It has to be, because technology is all about change. All IT can do is manage it as best it can and adapt to the times.

With regard to what young people use and expect, and what may or may not happen to them in the workforce of the future, consider Microsoft Teams. One might argue that today’s Fortune 500 is so wrapped up in Microsoft 365 that it would never even consider using Google Workspace or those many other ad hoc app/service solutions. But then Slack was such a big deal that Microsoft had to create Microsoft Teams, which would never have happened otherwise, because chat-based collaboration—wait for it—is a technology that young people expected and required of their workplaces. In startups and other new or small companies, there is no precious Microsoft solution that they require or care about, and as those companies grow, they may stick with what they know or they may migrate to Microsoft 365, I guess. We’ll see, and it will be company-specific. Again, we all have theories.

But anyone arguing that Microsoft is simply too entrenched in the enterprise productivity market is correct today, but … things change. That’s my point. And Microsoft dodged a bullet with Teams, sure, but it may have also screwed the pooch with Teams, because what that thing is today is a classic Microsoft over-thought platform that’s huge and complicated. It’s exactly what these smaller companies do not want, and while it technically meets one need of that younger generation (chat-based collaboration), the app as it stands today is nothing like the thin, light, express-purpose apps that young people expect because they grew up in the mobile age. So we’ll see. Those young people will soon be decision-makers. And it’s not radical for them to suggest that they can save money and simplify things by looking elsewhere. Those around them of the same age will likely agree that Microsoft, the new IBM, is yesterday’s world, and that they need something more modern. Again, it’s my theory, based on my experiences.

But all that ignores my question, which is, with all the “features” being added to Edge, in an effort to make it a platform, do you think that Microsoft might be thinking of offering a “Windows 12X”? I can’t help but think that they would love to have a device where Edge and Bing can’t be overridden and all the default (and only) settings are the ones Microsoft would like customers to choose. I also have to believe that they would love to have an option to sell hardware and software to the K-12 market, and without a “Chromebook-simple” system, they cannot.

As an outside observer, I’ve always been fascinated by how Microsoft reacts to things happening with its customers and competitors, and to things happening in the industry generally, especially when they happen without Microsoft’s involvement. (My book Windows Everywhere can be viewed through this lens, as Windows as a product is an ongoing series of reactions to such things, and examples where Microsoft led the way somehow are rare and mostly historical by this point.) And the rise of simple, mobile-oriented competing platforms like ChromeOS falls into this category. Google, a provider of some of those low-cost, ad-hoc, and mobile- (and web-) based solutions I mention above, sees the world through that lens, because of course it does. And so ChromeOS is something that could have only come from that company.

Microsoft’s reaction, not just to ChromeOS, but to mobile platforms like iOS (which begat iPadOS and device-based tablets that can now work like simple laptops) and Android, has been an ongoing series of freak-outs and bad decisions. The Sinofsky regime was so freaked out by the first iPad that it mauled the Windows user interface with unnecessary touch-first user interfaces that made no sense on the billion-plus desktop and laptop PCs out in the world at the time. The push to Arm-based processors (well, the two pushes, in Windows 8 and then 10) was more rational and a good hedge-betting move, but it has not panned out yet. (“Wait ’til next year” is the ongoing refrain.) The whole S-mode debacle was stupid because it didn’t respect the need for exceptions, where some Win32 apps are/were so critical that Microsoft needed to let users and IT “allow-list” them and then lock down the system, which is such an obvious need I don’t even know where to start. And Windows 10X, which would have used containers to segregate the old from the new, was a great idea theoretically, but it was apparently impossible at the time and was abandoned. By the time you get to Windows 11, you see a complete capitulation: Microsoft can’t make Windows work well as a device-like platform, so it will just make it look prettier and sort of device-like. I like it, but this is just lipstick on a pig.

Right in the middle of all this churn, Brad and I sat in the audience of a Microsoft education event (the one where they introduced Surface Laptop, a PC far too expensive for the education market) and Terry Myerson showed off Microsoft’s answer to the simplicity of ChromeOS management: USB keys. Teachers were expected to walk from PC to PC after each class, plug in these USB keys, and reset the PCs for the next class. All day. Every day. Basically manual labor. That this didn’t meet what ChromeOS was capable of at the time was obvious. That it was a non-starter was also obvious. It was … embarrassing, frankly.

Previous to all this, Microsoft reacted to Linux on netbooks by resuscitating Windows XP (as Starter Edition) because its then-current release, Vista, was too resource intensive to run well on those low-end PCs. That was a radical, all-hands-on-deck response. But Microsoft’s more recent response to the device-first, mobile-first movement has been a series of incoherent and out-of-touch changes that rarely made sense. This may be a leadership issue. It may be a times change kind of thing. I don’t know. But let’s think about this rationally.

On the one hand, ChromeOS meets a real need. It gives cash-strapped educational institutions a cost-effective technology solution that really works, is easy to use, and is easy to manage. Microsoft has not once met those needs.

On the other, ChromeOS has limited reach compared to the broader PC and devices markets. It has never broken out of its tiny market share and maybe never will, despite efforts by Google and various hardware makers to enhance capabilities (Android apps, for example) and sell premium, expensive Chromebooks.

So the question here is what the long-term impact is. Will those who grow up on Google products and services expect that stuff in the workplace later? Or will they have such a bad/limited experience with their schools’ crappy Chromebooks that they will actively seek to never have to use such a thing again? There’s an argument to be made for each. Maybe both are correct. I don’t know.

But that leads to whether Microsoft needs a Windows whatever-X that is thin and light and maybe containerized, or an EdgeBook-type competitor for Chromebooks. And … we can debate that. I really don’t have the answer. Just that body of experience, which, in this case, tells me that nothing they’ve done so far has worked, but Chromebooks have likewise never reached mainstream success. Maybe this was much ado about nothing. But maybe not.

Xbox strategy mistakes and you

madthinus asks:

You and Brad speculated that the queues on Cloud Gaming are due to the GPUs being used for AI. Considering that Cloud Gaming runs on Xbox consoles, I think it is safe to say that this is not the case, but it got me thinking: During the FTC trial/discovery, information surfaced that Microsoft prioritized Cloud blades above Series X consoles. With Sony declaring no supply shortages and also going on a discount rampage in the US and UK, it appears that Microsoft has some supply constraints again. Their earlier pandemic move to secure more chips clearly helped them stock their data centers, but now they have insufficient supply to support the demand for consoles and their own services. Is this another little Xbox storm just as their biggest title is about to drop?

It’s pretty clear that Xbox Cloud Gaming has not panned out the way Microsoft hoped. I feel like this was predictable, that the bandwidth and lag/latency issues that dog all cloud-streaming providers would of course impact Microsoft too, and I don’t see this going away: someone with a relatively low-bandwidth connection can download a purchased or Game Pass-based game and then play it normally. But that person will never stream games effectively. As such, Cloud Gaming is just a perk of one of the four versions of Game Pass. It’s a bullet point on a features list, and something that subscribers use to sample games but not play them.

This was a strategic mistake on Microsoft’s part … unless, of course, it wasn’t. Among my many theories is this notion that the future of Xbox is software and services, and not hardware, and that while it’s tough for fans to see Microsoft losing the console wars and apparently not even trying anymore, this may be the right or even best outcome for the platform.

And while Microsoft won’t say that explicitly—it would tank the current ecosystem and probably cause many fans to flee for good—its protracted battle to acquire Activision Blizzard at such great expense speaks very much to my theory, as it sets up Microsoft as a tier-one game publisher across all of the major gaming markets—console, PC, and mobile—and completes that vision of meeting customers (gamers, in this case) where they are. So instead of only or mostly catering to Xbox console owners, now they will cater to gamers more generally. And as/if the revenues increase, keeping that struggling console family alive will likely seem less pressing. Or necessary.

As with all things, we’ll see. But Microsoft has never been successful in selling consoles. I think it’s fair to say they never will be.

Platform ideology

jrzoomer asks:

Hi Paul, when it comes to iOS and Android, there are many parallels with Mac and Windows. Like Windows, Android is the larger user base, with a large 3rd party ecosystem, more customizable, less proprietary, and in general with more affordable options. I know that these are features that attract you to Windows as opposed to the Mac, but based on your writing, why do you not feel similarly as strongly when discussing iOS and Android (or correct me if I’m wrong)?

Generally speaking, I’ve always preferred the open Android model to Apple’s closed garden approach, and for the same reasons that I prefer Windows. Choice is good, etc. But there are some major differences between PCs/Macs and mobile devices that make this a bit more nuanced. For example, Apple never restricted how Mac users got apps, and now cannot, so that platform is every bit as open as Windows from a software perspective, though of course the hardware is now Apple-only. (Fortunately, Apple makes terrific hardware.) And Linux is there for anyone who wants the ultimate in freedom and can handle the technical difficulties.

The problem with mobile is that Apple, having lost the PC war, did something that I don’t think any other company would have done in a similar position: instead of making it an open system like the PC and Mac, which might normally have the effect of improving app availability, it locked it down. And because its devices were so good and so popular, this strategy, which flies in the face of logic and inevitability, worked. It f#$%ing worked. And so Google, that blatant copier of ideas, simply copied that app model in Android. And here we are, stuck in a duopoly of overly expensive and arbitrarily-invented app store fees.

Regardless of my personal issues with this world, we have two choices. One, from Apple, that tends to be more restrictive in every way (personalization, etc.), and one, from Google, that tends to offer more choice, both in hardware and in customization/personalization. And different types of people flock to one or the other. Apple fans will rightly claim that Apple’s insular ecosystem has cross-device integration benefits, while critics will just as correctly point out that these benefits come at the expense of choice and result in lock-in. Android fans like the choice of hardware, with a variety of devices at different price points, while Apple fans will point out that they only get the best of the best anyway, so who cares? This is all very subjective. There are pros and cons to each. At least there is some choice.

For me, I go back and forth. It’s not random, I guess, but rather based on whatever experiences I’m having at the moment, or what new releases bring to the table that I might be interested in. For example, I value photography above all else in mobile, but that’s not enough to get me to use Samsung flagships because I hate almost everything else about that whole experience. I love Pixel’s clean UX and the camera capabilities are always terrific, but I’ve also had reliability and performance issues that undercut the good stuff. Apple has made some meaningful camera improvements, and there is a certain minimalism to the whole iPhone experience that I like. But there is also the whole Apple thing there that I very much do not like. Nothing is exactly right for my needs. So I vacillate.

Thinking about this now, I’m wondering if maybe this dynamic is not that unusual. That is, it’s impossible to “like” companies like Apple or Google if you know anything about them. This is true across Big Tech. (Just look at Twitter now. Gross.) But you have to look the other way because they are the only options (in this case), and so you evaluate them based on ideology (and maybe emotion) but have to be pragmatic too. We all need a smartphone. Which things are really the most important to you? And which bad things can you overlook? This is very subjective, and everyone will do the math (or whatever) and arrive at different conclusions. My issue is that nothing adds up, and so I try different things. I’ve never hit on something that is just perfect (for these use cases) the way Windows is. And no one platform has been so bad that I can just ignore it. So round and round we go.

(And before anyone says that Windows Phone was that perfect thing, it really wasn’t. Windows Phone offered tremendous improvements over iPhone and Android in the beginning, but many of its best ideas never amounted to anything, and app availability was always its Achilles’ heel. Over time, its disadvantages became too much to overcome.)

Faster is

wright_is asks:

With all the advances in technology, why aren’t computers faster than 30 years ago?

Because they don’t have to be: the software we use targets modern PCs, and there is no need anymore for performance optimization in anything other than games. Back in the day, when hardware resources were expensive and we often had to make do with less, software makers were forced to optimize in ways that are no longer needed. I wrote recently about the Atari VCS, for example, and that sad little system is still getting new games today, which is incredible. But the 8086, 80286, etc. world of the past came and went quickly, as did subsequent generations, and now it’s all just evolutionary. It really doesn’t have to be better, which is why modern platforms are more focused on things like battery life and performance-per-watt.

My wife owned an 80286-based IBM PS/1 computer that I reference from time to time. This thing ran MS-DOS and apps like WordPerfect 5.1 wonderfully. But I borrowed a copy of Windows 3 from a friend, installed it on her PS/1, and couldn’t believe what I experienced: it was incredibly slow, and far worse than I think anyone would believe. When I clicked a menu in an app, for example, the outline of it would first draw, very slowly. Then the little lines between some of the menu items would draw, again slowly. And then the actual words of each menu item would slowly draw from top to bottom. Some menus would literally take a minute or so to fully render. It was unusable.

In that world, Windows was ahead of the mainstream hardware capabilities of the day. You needed a high-end 80386-based PC with more RAM to run the system effectively. But today, even the lowliest computers we have, including terrible educational PCs, run Windows better than that. We can complain about the performance because of our expectations, but that’s because our expectations are based on many generations of powerful chipsets and PCs. Hardware is not the bottleneck it used to be.

OK, they are “faster” in terms of clock speed and number of operations, but they aren’t actually faster in terms of user experience. The user experience hasn’t really changed that much, but a P90 running Windows 95 and Word would boot up in a fraction of the time, and Word would start in under a second.

Yep. But they would also be insecure and instantly hacked the second they were put online. They don’t support all the hardware advances that have come since, like USB 2+ and Thunderbolt, high-DPI displays, and so on. They are of their era.

Today, the operating system and applications have exploded in volume, but not in outward functionality, so that even though the hardware is theoretically faster, actually performing the same tasks as 30 years ago – writing a document, for example – is actually slower.

Hm. I’m not sure about that. But I do agree that Word 95 would technically meet most of my needs today and, yes, would be faster than whatever latest version of Word I’m using now. But I don’t “wait” on Word, and I’ve never wondered why it starts slowly (as it doesn’t) and it can keep up with my typing (something that was an issue in the distant past, likely due to lackluster hardware I owned).

Maybe the better way to think of this is that we have mainstream PCs today that can handle more advanced tasks than were possible years ago, like video editing, 3D gaming, and so on. Our range of options has increased and become more sophisticated.

I am currently dealing with a problem on a terminal server: the Start Menu doesn’t work. It is a known problem, and you need to remove the offending registry entries under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\RestrictedServices\AppIso\FirewallRules – you need to remove around 40,000 of the nearly 300,000 entries! Where the “incorrect” entries came from, I have no idea. But it takes RegEdit around 2.5 hours to load the key, another 1.5 hours to delete the entries, and another hour or so to load the “good” entries back in – I managed to export the key from a working server. But 300,000 entries! And why should firewall rules affect the Start Menu?!? The export file is around 307MB in size!

Well. 🙂 This is a different kind of performance problem, in a way. We’ll never know what impact Microsoft’s move from text-based INI files to a database-like Registry had (and maybe continues to have), but this was an almost ideological decision (callback time) by an NT team that was trying very much to be the anti-UNIX, and while I agree with a lot of what they did, they might have been blind to some key ways in which UNIX got it right. It’s yet another thing to debate, I guess.
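Incidentally, the export from a working server suggests a faster path than hand-deleting 40,000 values in RegEdit: diff the two exports and generate a .reg file that deletes only the extras. Here is a minimal, hypothetical Python sketch of that idea; the function names are invented, and it assumes both exports have already been decoded to text (real .reg exports are UTF-16):

```python
# Hypothetical sketch: compare a broken server's exported FirewallRules key
# with a known-good export, then emit a .reg file that deletes only the
# extra values. regedit's `"name"=-` syntax deletes a single named value.
import re

# Matches lines of the form "ValueName"=... in a .reg export
VALUE_RE = re.compile(r'^"([^"]+)"=', re.MULTILINE)

def value_names(reg_text):
    """Collect every value name defined in a .reg export (as decoded text)."""
    return set(VALUE_RE.findall(reg_text))

def deletion_reg(broken_text, good_text, key_path):
    """Build a .reg file that removes values present only in the broken export."""
    extras = sorted(value_names(broken_text) - value_names(good_text))
    lines = ["Windows Registry Editor Version 5.00", "", f"[{key_path}]"]
    lines += [f'"{name}"=-' for name in extras]  # "=-" means: delete this value
    return "\n".join(lines) + "\n"
```

Importing the generated file with regedit or reg.exe should be far faster than deleting the values interactively, though I would try it on a non-production server first.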

Maybe you need a quantum computer! 😉

It’s lost its Edge

sabertooth920 asks:

Has Edge failed to deliver on its earlier potential? Granted, Microsoft made the switch to Chromium, but today’s Edge is a far cry from what Spartan was supposed to be. It’s not a bad browser, per se, but what (if any) are the compelling reasons to use Edge over any other browser?

The name Spartan is interesting. I’m sure the Edge team promoted its connection to the Halo games at the time. But it also hints strongly at a minimalist feature set, which I would very much prefer to the bloated nonsense that is Edge today. This is yet another area of debate—this Ask Paul is full of this kind of thing for some reason—but where some people prefer this kitchen sink approach, I, and I assume others, would rather have a stripped-down thing to which we can add the features we want. And those additions should come via extensions, which should work on any Chromium-based browser.

(Most of you are probably not familiar with this product, but that’s how Visual Studio Code works: the editor itself is fairly spartan, but you can add whatever functionality you want—and there are tons of choices—via extensions. If you’re a Flutter developer, for example, you simply add the Flutter extension (which also adds the Dart extension to better support that language). If you’re a web developer, there are any number of extensions related to whatever framework you may be using and much more. And so on. This comparison is even more apt because you can also sync your Code settings to your MSA or GitHub account, and these extensions will be auto-installed on other PCs when you sign in, just as with the browser, giving you your personalized environment every time.)

That said, in today’s world where we’ve pretty much standardized on the Chromium rendering engine (again, a debate, but I feel very strongly that this was the right decision), one key way that browsers can differentiate themselves is via their unique functionality. And this is the direction Microsoft chose, for better or worse. I think it was for the worse.

As I’m sure you all know, I use Brave, and this is the model I wish Microsoft had followed. Brave is a spartan browser whose key advantages are that it strips out all the horrible Google stuff and provides a secure and private experience by default (something Microsoft claims to do but does not, which is the ultimate betrayal). But I don’t see why we can’t have some middle-ground approach where Edge could basically be Brave, meaning secure and private by default, but with a nicer UI and some key, well-considered functional additions. More features could be added at the user’s choice via some first-run UI where you would pick and choose, and then that full feature set would appear on other PCs due to settings sync. This is so obvious to me, and I just don’t understand or agree with the direction Microsoft went.

Wilbur Chocolate vs Hershey Chocolate…what say you?

I don’t eat chocolate or even like it that much, but when I do, I lean toward high-quality dark chocolate. I’m of the opinion that Hershey doesn’t make anything high-quality. In fact, I think their milk chocolate tastes like plastic.

It’s the Edge of chocolates!

How high is too high?

madthinus asks:

Streaming services keep raising prices. Where do you see the walk-away point?

So many thoughts.

First, yes. But it will vary by service, and the importance of any given service is subjective.

This is very much like the smartphone discussion above and is tied to the general discussions we’ve had about making decisions, which are rarely simple black/white things but rather some series of weighted sub-decisions where the answer will vary by person because we all value different things differently.

Before all the recent price hikes, I offered up one general strategy that I have not actually tried myself, though I was delighted to learn that two of my long-time friends are now doing this: just pick one service (in this case a streaming video service) and use it for a month or two until you’ve binge-watched everything you want. And then move on to a different service, doing the same. You could rotate between whatever number of services over whatever number of months, always paying for just one each month, and then repeat, as there will always be new shows over time.

This is harder or impossible with other types of services. Music, for example. But an interesting idea.
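The arithmetic behind the rotation strategy is simple. A quick sketch, with entirely invented service names and prices (real prices change constantly):

```python
# Rotation math with made-up monthly prices; the services and prices
# are illustrative only.
services = {"Service A": 15.49, "Service B": 15.99,
            "Service C": 13.99, "Service D": 14.99}

# Paying for everything, every month:
all_year = 12 * sum(services.values())

# Rotating one service per month means each of the four is active
# for only three months of the year:
rotating = sum(3 * price for price in services.values())

print(f"All four, all year: ${all_year:.2f}")
print(f"Rotating one/month: ${rotating:.2f}")  # a quarter of the cost
```

With four services, rotation cuts the annual bill to a quarter; with more services in the rotation, the savings grow further.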

Regarding price hikes, many will recall that when YouTube TV started, it was only $35 per month, an incredible bargain that made switching from cable TV almost a no-brainer. But over time, the prices kept creeping up because of the channel bundle providers (“content creators”), and now it starts at $73 per month, more than double the original cost. One might argue that there’s more value there now—more channels, obviously, but also better availability and quality—but many complained that this thing that was previously a no-brainer was now no better than cable TV.

But that’s not true or fair: YouTube TV is an online service, so you can subscribe to and unsubscribe from it at will and, if you want, again and again. Cable TV is still much more difficult to get into and out of. I agree that the price hikes there are terrible, but that central advantage is still there too. (For example, you may only want to watch football in a live TV sense, so you could just use YouTube TV or some other service during the season and not pay for it otherwise.)

Today, many complain that the price hikes we see across online services are problematic because of the economy or whatever. And they see some irony/hypocrisy in a service like Disney+ announcing price hikes after two straight quarters of subscriber losses, arguing that perhaps this service should be cheaper for existing subscribers, not more expensive. (Disney’s CEO has long argued that there is more value to this service than was expressed by its low price, so we’ll see how that works out. But he is right to investigate password-sharing blockers, as that’s just money being thrown away on Disney’s part.)

Looking at music, Spotify argued for years that $9.99 was unsustainable as a price point because it didn’t cover their costs (both infrastructure and royalty payments). But $9.99 was this magic price point for consumers, sort of like 99 cents was for songs for a while there. And moving past that was difficult because Apple subsidizes its competing service, which makes it hard to charge more and retain customers. But prices did finally go up—Spotify Premium starts at $10.99 now—and these services offer more expensive tiers for families, plus lower-cost or free tiers for students and others.

I don’t like to see all these price hikes, but it’s perhaps fair to claim that video streaming services like Netflix, Max, Hulu, and so on were likewise under-valued, and that the millions and millions of dollars that these companies invest in original content has to come from somewhere. And when the market reaches its natural size, with little room for growth, the obvious strategy is to start earning more per customer. Disney is pretty explicit about that—its earnings statements communicate its earnings per subscriber per service in great detail—but it’s true of all services. And, really, all Big Tech companies. Apple’s services push is all about making more revenue per existing customer, right? Microsoft is trying this in Windows 11 with crapware bundling, by pushing premium services (Xbox Game Pass, Microsoft 365, OneDrive paid tiers, etc.), and by directing users to Edge, MSN, Bing, and its advertising engine. Etc. This is really about the market being mature.

So given all this, how much is too much?

It varies by service. And by the individual. And if we’re thinking about this clearly, we will do some hard math based on our real-world use of these services and their cost.
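What that hard math might look like, as a sketch: divide each service’s monthly price by the hours you actually use it. Every number below is invented, and the $1-per-hour cutoff is an arbitrary assumption on my part:

```python
# Cost per hour actually used, per service; all figures are made up.
usage = {
    "Service A": {"price": 15.49, "hours": 30},
    "Service B": {"price": 14.99, "hours": 25},
    "Service C": {"price": 13.99, "hours": 2},
    "Service D": {"price": 10.99, "hours": 0},
}

CUTOFF = 1.00  # arbitrary: over $1 per hour used feels hard to defend

for name, u in sorted(usage.items(), key=lambda kv: kv[1]["hours"], reverse=True):
    # A service with zero hours of use has infinite cost per hour
    per_hour = u["price"] / u["hours"] if u["hours"] else float("inf")
    verdict = "keep" if per_hour < CUTOFF else "cull?"
    print(f'{name}: ${u["price"]:.2f}/mo over {u["hours"]} h -> ${per_hour:.2f}/h ({verdict})')
```

However you set the cutoff, the services you barely touch float to the top of the cull list immediately, which is the whole point of the exercise.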

I am a worst-case scenario here (or, for the industry, a best-case scenario) because I pay for a lot of services but only use some of them “fully” (or to some defendable degree) each month. And it’s one of those things I almost don’t even want to confront. There are lots of excuses, too. I have kids who watch/use these services, for example, and they don’t live at home, so I don’t even know how much they use each. But with prices going up, it’s time to cull.

Some of this is more nuanced, too. I barely ever watch anything on Amazon Prime Video, for example. But that’s part of Amazon Prime and we do use that service a lot, so we’re not getting rid of it.

I’ll be looking at this soon. I pay for a Disney+/Hulu (no ads)/ESPN+ bundle, and while my wife and I actually watch a lot of Hulu (for binging TV shows at lunch), we never use the other two services. So I will pull Hulu out of the bundle and get rid of Disney+ and ESPN+. We do watch Max sometimes, but not enough. So that’s on the way out soon too.

Apple TV+ is inexpensive and I’ll likely keep it. Netflix is, to me, the key video service, and would be the last one standing if I went on a major downsizing. Each requires some thinking.

And then there’s music. I pay for YouTube Music (which gives me YouTube Premium, which is key). My kids and wife use Spotify, so we pay for that too. They will not use YouTube and I will not use Spotify. Each of these services has had or will soon have price hikes.

We can probably lump cloud storage in here too. My wife and I both pay for extra Google storage for photos. We pay for Microsoft 365 Family and use that storage for work. My kids and I use iCloud for device backups and so I pay for that too. Can I kill any of that? Honestly, no.

But that’s me (and is only a partial list). And what a mess it is.

Google v. Microsoft

oasis21 asks:

With your take on Google’s Project IDX using Code OSS and Google not acknowledging Microsoft/VSCode, it got me thinking – what’s with the beef between Microsoft and Google? Why don’t they get along? I think an article exploring the historical relationship of these companies would make for an interesting read. I’d love to hear your take on this interesting piece of history.

There is this notion of institutional memory, and in Google’s case it’s that the company was born in an era in which Microsoft was still dominant, and so Google’s co-founders, Larry Page and Sergey Brin, were worried that the software giant would do to it what it had done to so many other innovative startups and kill it before it had a chance to succeed. Page and Brin are pretty much long gone at Google (though there are ridiculous stories of these now out-of-touch individuals becoming more hands-on again now that Google has an AI perception problem), and Microsoft is today a very different company, but the mentality lives on. And you can see it in all kinds of things.

Everyone probably knows, for example, that not only did Google refuse to support Windows Phone, but that it blocked third-party YouTube clients from working on that platform. More recently, Google introduced a product called Play Games for PC in the wake of Microsoft’s Windows Subsystem for Android (WSA), and when their PR reached out to me, they were very explicit that this offering, then mysterious, was “a standalone Windows PC application built by Google” that had nothing to do with WSA. Over time, we came to understand that it was an Android emulator, just like WSA, and that rather than working with Microsoft on this, they chose to simply do their own thing.

What’s interesting is that Microsoft, generally speaking, has evolved to be very open to working with Google. Its partnership on Chromium, for example, got some feel-goods from the Google underlings in that part of the company, but never from executives or decision-makers. And more recently, Google has taken to quietly adopting some Microsoft-created features in Chrome, with little or no acknowledgment. This is what happened with Project IDX, of course, but we’ll see if the messaging evolves as it becomes more public.

I am probably naïve on this topic. But I would prefer a world in which the companies we rely on respected each other and partnered where appropriate. And in this case, explicitly, I see an opportunity for two big companies with common enemies (like Apple) to link up where it makes sense, and in a bigger way than the one-sided relationship on browser technology. For example, Google could put its PWAs for Gmail, Google Calendar, Photos, and whatever else in the Microsoft Store for Windows 11. It could bring Google Play Store to WSA. The two could work together so that Google Nearby Share wasn’t a standalone app but was instead integrated directly into the Nearby Sharing feature in Windows. Google Photos support could come to the Windows Photos app. And so on. These changes would benefit Google, not hurt it. And they would benefit the companies’ many shared customers.

But this won’t happen, because Google still has this institutional fear and hatred of Microsoft. And while Microsoft cannot hurt Google’s mobile offerings, and Google is not competitive with Windows, the two are very much at odds in the cloud. And maybe that’s the issue. Microsoft dominates in productivity (Microsoft 365 vs. Google Workspace) overall. Google dominates in search/advertising. Each wants a piece of the other’s pie. And … I don’t know. In a weird way, Microsoft’s AI push was perhaps an “I told you so” moment for Google, something that justified its previous two decades of paranoia. I mean, come on. Bing is still ridiculous, and Microsoft’s investment in OpenAI does not give it exclusive access to anything … but OpenAI very much does rely on Azure.

And so round and round we go. Again. It’s too bad.
