
Yet another strategy shift for Copilot may not seem that interesting, but today’s news arrived with a belated nod to an inconvenient truth: many customers don’t trust Microsoft, and they’re leery of AI. And the speed at which the software giant has jammed AI into every nook and cranny of its ecosystem isn’t just disconcerting, it’s scary. It feels like Microsoft has been making it up as it goes, casting aside the understandable concerns we have about privacy and security.
This ignorance was clear during Microsoft’s Copilot+ PC announcement this past May. When Yusuf Mehdi launched into his starry-eyed description of Recall, a feature that would use AI to “access virtually anything you have ever seen on your PC,” I immediately saw how many in the Windows community would react to this incursion. Detailing why this was safe was key to answering the critics, and I assumed Mehdi would do just that. But after a demo, he simply said that Recall was built with “responsible AI principles,” which is meaningless, and “aligned with [Microsoft’s] standards,” which could be construed as malicious, given the company’s chaotic push to dominate the AI era at all costs.
It wasn’t all bad. Microsoft wouldn’t use Recall data to train AI, he said, and that data would stay on the PC where it would be private and secure and under the user’s control. But distrust of Microsoft is so widespread that these claims were mocked–surely, it was only a matter of time before that data moved off the PC, by design or because of hacks–and then undermined when hackers bolted Recall onto non-Copilot+ PCs and reported that this system wasn’t as secure as Microsoft promised.
I remain critical of that episode for all the right reasons, but sitting in the audience during the announcement that day, not yet privy to the bad news to come, I was worried. Microsoft wasn’t just ignoring the distrust it had earned. It didn’t even seem to understand how its customers felt about it and AI. Quite the opposite.
What happened, happened. Microsoft delayed Recall, delighting critics who hoped that perhaps it would never resurface. Robbed of their marquee new feature, Copilot+ PCs launched with few compelling AI features and were unfairly criticized for compatibility issues that won’t impact most users. And the summer progressed with no real news about Recall or when or how Copilot+ PCs would improve.
Today, we have answers.
Six months after launching this new platform, Microsoft recently announced the new schedule for Recall, which will indeed ship soon in preview and with some privacy and security upgrades that will do little to silence its biggest critics. Today, it announced unique new features for Copilot+ PCs, like Click to Do, AI-powered File Explorer search, and super resolution in Photos. And for what feels like the 17th time, it is updating the Copilot app in Windows 11 and elsewhere–for all users–with a new look and feel, and new capabilities.
And whatever. Aside from Recall, which, despite the criticisms, will greatly benefit mainstream users, the Copilot+ PC and Copilot updates feel like more of the same: iterative changes delivered in the hope that maybe something here will finally resonate with customers. Maybe they will.
But the bigger deal here, I think, is that Microsoft has finally turned on the trust spigot. We saw this first with the recent Recall update, and while most of what it finally discussed in great detail was already happening or planned, it was exactly the level of transparency and explanation I felt this feature deserved back in May. Microsoft had belatedly gotten the memo.
Today’s announcements don’t go into that level of implementation detail–most of which is over most people’s heads anyway, though it feels like someone must know what they’re doing–but they do amp up the language around a crucial component that was missing last May: why. Why should Microsoft’s customers use these features? Why should they trust them, or the company? And if they decide not to use these features, why should they trust that Microsoft won’t just silently enable them behind their backs?
This isn’t theoretical. We’re closing in on the one-year anniversary of when I first raised the alarm about Microsoft silently enabling OneDrive Folder Backup after users had repeatedly said no to this feature. It did that specifically to help its AI efforts. And what it’s doing now with AI is vaster and more critical to the company.
Today’s announcements were also a soft launch, if not a coming-out party, for the Microsoft AI organization that the software giant announced in March in what was perhaps the most surprising development of the year. That org is led by and staffed largely with outsiders–DeepMind and Inflection co-founder Mustafa Suleyman, Inflection co-founder and chief scientist Karén Simonyan, and most of the former Inflection team–and, by all accounts, it was hastily assembled to provide an alternative path forward should the volatile and chaotic OpenAI, on which Microsoft relies far too much, finally implode, a distinct possibility.
In sharp contrast to the chaos Microsoft has delivered starting with its Bing Chat announcements in February 2023–which, incidentally, feels like it occurred 10 years ago–Mr. Suleyman says that his new employer can deliver a “calmer, more helpful, and supportive era of technology,” one that will adapt to you over time. But it will only do so “with your permission,” he notes, acknowledging that “some people worry that AI will diminish what makes us unique as humans.”
Suleyman also explicitly states that AI is a “wave” of computing, following the web and mobile waves that Microsoft lost to competitors. And while this wave will take years to come to fruition, the new Copilot features Microsoft just announced are “the first careful steps in this direction.” It’s almost like there’s an adult in the room for the first time.
“Patience and care with our deployments are at the very foundation of our approach,” he says. “My commitment is to be accountable at every stage, work with you and listen to you. Respect and deep compassion for our users and for society is the core purpose behind everything we do. It comes first.”
In its description of all the new Copilot features, Microsoft notes that “safety and security are [its] top priority,” which is nonsense without supporting information. But this time, Microsoft provided it.
Looking just at Copilot Vision, the list of protections is impressive. Copilot Vision sessions are entirely opt-in and ephemeral. Nothing that Copilot Vision engages with is stored or used for training, and all data is permanently discarded when you end a session. It won’t work with all websites because Microsoft finally gets that a lot of private activity happens on the web; it’s “starting with a limited list of popular websites to help ensure it’s a safe experience for everyone.” Copilot Vision won’t work on paywalled and sensitive content. There’s no processing of website content, and no AI training. Copilot Vision simply reads and interprets the images and text it sees on the page in real time. And that’s just during the preview. If needed, Microsoft will institute further protections based on feedback.
There are more questions, of course. Including the obvious. Is it enough?
I can’t answer that. It will be for some and not for others. But acknowledging the problem is a good first step. Taking the concerns to heart and implementing positive change–who doesn’t love the term opt-in?–is even better. And Microsoft does appear to be heading in the right direction.
We’ll see. Trust is difficult to win. But it’s even more difficult to win back after you’ve betrayed it.