Microsoft AI Chief Succeeds Where Copilot Does Not

Copilot may be the least respected AI in Big Tech, but Microsoft AI chief Mustafa Suleyman continues to impress. He’s the Phil Spencer of AI, a plain-spoken and honest human being who succeeds at making sense of AI in ways that anyone can understand, with little of the robotic coldness we see in Sam Altman and other AI leaders. In a world shifting inexorably to AI capabilities and workloads, he may be Microsoft’s greatest single asset.

I’m serious. Suleyman is the antidote to a very real problem: Big Tech’s incessant over-marketing of AI, over-spending on AI infrastructure, and over-promising of capabilities that are in no way ready for prime time are undermining customer trust. Microsoft, sadly, is no stranger to trust problems. But it seems a bit tone-deaf to this reality, and its heavy-handed AI push is alienating customers and eroding trust further.

This is unfortunate because we’re inundated by new AI models and the useful new capabilities they bring on an almost weekly basis. And yet, there are still AI deniers and doubters out there, babbling incessantly about hallucinations, “slop,” and whatever else, driving a wave of misinformation that feeds on distrust and then needlessly amplifies it. It’s not clear which is less desirable, the noise from this crowd or the noise of Big Tech pushing AI so aggressively.

They’re both terrible. So last week, I wrote When AI Works to cut through all that noise. It’s a look at what AI is and is not capable of right now, highlighting about a dozen high-level areas in which improvements from AI are undeniable. I am not a gullible AI cheerleader, but I am also not a change-averse Luddite trying to prevent progress because I feel threatened. As I do in all things, I have a centrist view of AI, which is just technology. And while there are exaggerations and a kind of “fake it until you make it” mentality that pervades this industry, there are also many examples of where AI is already making our lives better by saving us time and/or money while opening up new, previously unimagined capabilities.

That is not how Microsoft or any other Big Tech company promotes AI, of course. And that is why Suleyman is so special. He clearly understands the Microsoft AI marketing playbook from front to back, but he just as clearly can’t bring himself to not be open and honest about what works and what doesn’t. He isn’t afraid to call BS on anything Microsoft or its competitors are doing. And because of his history, he’s not just an expert, he’s an insider. He is, in short, someone I feel we can trust.

The most recent example of Suleyman’s uniquely human approach to AI comes via an extensive interview with Bloomberg. (In addition to the article and the video within it, there is a longer version of the interview in podcast form on iHeart Podcasts, Apple Podcasts, Spotify, and elsewhere.) I strongly recommend that everyone reading this listen to that or read the printed interview if possible. But here are some key excerpts that I think drive home why this man is so important.

Agentic AI

In When AI Works, I specifically called out agentic AI as “what’s not there yet,” meaning that while the promise is big, the reality is not. And I was interested that the interviewer from Bloomberg jumped on this most heavily marketed of AI capabilities very early in the talk.

“We’re still experimenting [with AI agents and automation],” Suleyman said. “It can do it. It doesn’t always get it right. It’s in ‘dev mode,’ so not generally available just yet. When it does work, it is the most magical thing you’ve ever seen. It essentially types stuff into your browser, clicks on buttons, opens up new tabs. It can look at your history, [and] personalize the purchase or the response to you.”

When asked about the mistakes AI agents can make, Suleyman interestingly uses a tactic I had independently come up with on Windows Weekly two episodes back while we were debating AI and its bad reputation. He reminded the interviewer that AI is just technology. And technology isn’t “good” or “bad,” it’s just something that can be used to help people get something done.

“Well, it can buy the wrong thing, but you can intervene,” he answered. “And it will always ask you permission before it takes the next action, so it’s quite safe. It’s a funny thing, technology. It’s magical and amazing, but it’s always just got a little bit further to go. In this case [agentic AI], a while yet before it’s everyday.”

Humanist vs. Superhuman

Back in November, Suleyman penned a blog post explaining how he (and thus Microsoft AI and Microsoft more broadly) views the race to so-called “Artificial General Intelligence,” or AGI. This is the stated goal of OpenAI, which is obviously a crucial Microsoft partner. But Suleyman thinks that AGI should not be a goal at all, regardless of whether it’s achievable. So Microsoft AI is pursuing something better, something more practical, called Humanist Superintelligence (HSI).

Confusing matters, there’s also another term, Superintelligence, that’s used as an alternative to AGI. And in this case, the two terms seem interchangeable. Superintelligence is essentially the point at which some AI “can learn any new task and perform better than all humans combined, at all tasks,” Suleyman says. This is both a very high bar and a very risky one, and he says it’s not clear how we could “contain and align” an AI like this that is so much more powerful than humans. And so he is pushing HSI instead.

“Humanist superintelligence [is AI] that is always in our corner, on our team, aligned to human interests,” he explains. “Until we can prove that it will remain safe, we won’t continue to develop a system that has the potential to run away from us. Everybody should agree to that. Yet I think it’s a novel position in the industry at the moment.”

This, he says, aligns with Microsoft’s position in the industry, as it is trusted in the enterprise.

“Microsoft is a company that’s been around for 50 years,” he says. “It is very careful. It’s highly trusted: 90 percent of the S&P 500 use us to provide email, operating systems and everyday productivity. We’ve got that reputation because the company’s been careful. We’re going to continue to be careful, and setting out a vision of humanist superintelligence is part of that program.”

Partners and competitors

Throughout this interview, Suleyman is asked about specific competitors, including the leaders of a few companies, and how they compare with Copilot and Microsoft’s other AI offerings. This is an interesting area because Microsoft is obviously competing with several big players while also partnering with them, directly or in trying to push through various standards.

And there is no more complicated Microsoft relationship than the one it has with OpenAI. Consider the previous section, where Suleyman and Microsoft are essentially pushing a more moralistic AI. So what does that make OpenAI? Evil? Or, as the interviewer puts it, the Wild West? Neither is good: If OpenAI is the Wild West, it is, at best, chaos.

“Everybody has to decide what they stand for and how they operate,” he says. “I don’t want to judge how they’re operating right now.”

Diplomatic. But I, for one, would enjoy hearing that judgment. (There is more on Microsoft and OpenAI in the next section.)

“I don’t see any evidence of large-scale mass harm,” he says, I guess of AI broadly. “I don’t see any indication that these things are improving themselves, or operating autonomously. We all predict a time in the next five years, maybe 10 years, where these capabilities do start to emerge. Systems like this could set their own goals. They could improve their own code. They could act autonomously. Those are capabilities that I’ve clearly outlined as increasing the level of risk. We have to approach them with caution, with more transparency and audits, with more government engagement, and make proactive declarations about how close we are to those three capabilities. I think that’s obvious.”

Suleyman says that his peer group, which is basically the CEOs and other leaders at AI and Big Tech companies, is a small one: Everyone knows everyone, and many have worked at one time or another with most of the others. He had never heard the term broligarchy (neither had I), but he agrees that AI, like tech in general, is too male-centric. He does point out that ex-OpenAI CTO (and Thinking Machines founder) Mira Murati is one of the best people in his field. But his offhand remarks about a few other AI leaders are interesting.

  • Sam Altman (OpenAI co-founder and CEO): “Courageous” is the first thing that comes to his mind. But check out the qualifiers on this bit. “He may well turn out to be one of the great entrepreneurs of our generation … if he can pull it off, it will be pretty dramatic.”
  • Demis Hassabis (co-founder and CEO of Google DeepMind): “Great scientist, truly exceptional.”
  • Elon Musk (idiot): “Bulldozer.” “He probably has a different set of values.” Yes, if different means no values.

The Microsoft/OpenAI relationship

Suleyman revealed a detail about the Microsoft/OpenAI partnership that I’m not sure I had understood explicitly. As you may know, Microsoft was the initial major investor in OpenAI, and the partnership between the two companies has always been secretive. But the two companies have made two major (public) changes to their relationship over the past year. In January, Microsoft and OpenAI restructured their partnership, allowing OpenAI to seek other cloud infrastructure providers if Microsoft wasn’t interested. And in October, OpenAI restructured into a for-profit company with Microsoft owning 27 percent.

Those changes were in the works for many months. But this is also tied up in Suleyman’s arrival at Microsoft in early 2024 and what it is that he and Microsoft AI are trying to accomplish. As noted, some of this is new to me.

“Up until a few weeks ago, Microsoft was not allowed by contract to pursue artificial general intelligence [AGI] or superintelligence independently,” he explains. “The deal with OpenAI was that it would then go and build AGI when they signed the agreements back in 2019, and in return Microsoft would build the AI infrastructure — the chips and the data centers. Microsoft would get a license to the models that have been built. And we still have that license to everything that OpenAI builds, up until 2032.”

“But OpenAI decided that they wanted to take on more compute, and buy compute from other providers,” he continues. “They now have deals with SoftBank, and many others, to build more data centers than Microsoft wanted to build for them. In return, we have the right to go and develop our own AI. Obviously that was a big part of me joining the company 18 months ago. We are now hiring a superintelligence team, and pursuing our own AI development.”

He claims that Microsoft could still have done a lot without that change—“We have $280 billion of revenue,” he points out—but “now [Microsoft AI] can work on some techniques and methodologies that have the potential to exceed human performance at all tasks. So it is a shift for us.”

In short, Suleyman was hired specifically so Microsoft could develop what it’s now calling Superintelligent AI independently of OpenAI. When he was hired, Microsoft was contractually unable to do that. But with the changes to the partnership, it now can. And that sets up the inevitable split with OpenAI.

The AI circle jerk

I’m very critical of the massive sums of money that Big Tech is throwing at AI, the opaque partnerships that seem to now exist between almost every single company in this market, and the massive sums of non-existent cash that each has committed to the others. Suleyman seems a bit more sanguine on all that, but he says the clock is ticking.

“It’s something to watch,” he says, referring to how much of the economy is wrapped up in this small group of companies throwing money at each other. “I’m definitely watching it carefully and I think others are too. Getting the balance right is very important. We have to deliver in the next few years. Every team is building incredibly large, very powerful computers and we’re taking a huge bet that we’re going to be able to convert this into true intelligence. If we do, then I think the world is going to look very, very different. We will have abundant intelligence on tap.”

And he does claim that the costs of AI are going down, much more dramatically than I had understood.

“It costs 90 percent less to ask a question of one of the best AI models in the world than it did two years ago,” he says. “When the cost goes down, everybody gets access.”

Where AI will be most useful to humanity

In technology markets, coding help has emerged as the first major win for AI, and it’s clear that this is only improving with time, and that most new software code will soon be written by AI and reviewed by humans, rather than the reverse. It’s been a while since I’ve thought about or discussed this, but imagine a future in which Microsoft is able to automate the refactoring of Windows kernel source code to Rust, or whatever. And then apply that same approach across any and all software systems, from personal tech to smart home to industrial and everything else.

But what about the wider world? How and where will AI be the most useful for humanity? Here, Suleyman has a clear answer: Medical and health.

“This is probably the most exciting application of superintelligence,” he claims. “We now have systems that can diagnose any rare condition found in the [medical] literature, significantly better than human performance, more cheaply, with fewer tests and with higher accuracy. We are putting it through independent peer review at the moment and soon there’ll be clinical trials. So this is very, very, very exciting.”

“This is an area that’s very important to me,” he continues. “My mum was a nurse and I’m just a big believer that technology is here to serve us. It should make our lives better, make us more comfortable. One day, I think it is going to help us to live longer. It’s going to give us the option to work less if we choose to. It’s going to produce abundance. We have to make conscious decisions to use it for those applications first.”

The abundance comment naturally leads into a discussion about wealth distribution and what it is we will do, as people, when so much of the drudgery of work today is taken away by more efficient and less expensive AI systems. And he feels, as I do, that universal basic income is inevitable, though he doesn’t address the most obvious first step, one that the U.S., unlike many nations, has yet to take: universal health care.

“We have to decide as a society what our purpose is,” he says. “We have to be very thoughtful about the rate of introduction of new machines, because we have to make sure that displacement is counterbalanced with a mechanism to fund people and to support people through a massive transition.”

“I’ve long been on record saying that [AI will unlock the need for a universal basic income],” he adds. “That is inevitable and very desirable. We already live in a world of abundance, it’s just poorly distributed. Value isn’t just manifested in atoms [like] food, cars, [and] physical things. It’s manifested in digital goods [like] ideas, knowledge, and intelligence. That’s actually great news because that can proliferate; it can spread extremely quickly around the entire world. LLMs and chatbots have been the fastest spreading technology in history — basically 2 billion annual users in the space of three years. There’s going to be massive competitive forces to reduce the cost of experiencing an AI. The challenge we’re going to have to figure out is how we tax and redistribute, so that the transition is a healthy one.”

Later in the interview, he comes back to this topic.

“I really want to nail medical superintelligence,” he says. “I want to do more in energy efficiency and battery storage, developing new compounds for renewables. I think that AI will really transform the energy industry. I’m actually very proud of a lot of the use cases in Copilot. Many people are using it for companionship, therapy, making difficult life decisions. It’s given me high-quality access to information and emotional support, and is helping keep me organized.”

This obviously led to a question about what that even means.

“At the end of the day when I’m in the car driving home, I have a 10-minute conversation with Copilot about something that was tricky, or something I felt frustrated about,” he says. “Maybe emotional support’s a little strong, but it’s like having a chat with a friend, downloading what went well and what didn’t. Copilot now remembers most of what you say and it will personalize its answers to you, and refer to something that you said last week, for example, or a trend or pattern. That is super helpful. I feel refreshed after a conversation. It’s like a burden that I’ve released.”

Where Copilot stands compared to the competition

As noted up top, Copilot is widely ridiculed, used far less than ChatGPT, Gemini, and other AIs, and criticized for Microsoft’s heavy-handed push on customers. But Suleyman’s comparisons of Copilot and its competitors are interesting.

“[Gemini 3] is good,” he says when asked how it compares to ChatGPT, the market leader. “They’re [Gemini and ChatGPT] kind of different. [Gemini] has definitely got more niche skills that ChatGPT doesn’t have, and it’s very fast. But ChatGPT is very strong, so I wouldn’t go that far.”

But is it better than Copilot?

“[Gemini 3] can do things that Copilot can’t do, but Copilot also has features that it doesn’t have,” he answers. “Copilot is actually amazing for vision. It can see everything that you are seeing and talk to you in real time. You can share your screen with Copilot on mobile or desktop, talk about it and get feedback. We’re really trying to imagine the day-to-day experience of having this really intelligent assistant at your side, that can help unblock you whenever you get stuck … I still call my best friends every weekend and have a good old chat. If anything, it’s actually deepened some of my relationships with my friends. I come to those conversations feeling a little lighter.”

Politics and the importance of regulation

“I’m sort of a centrist these days,” he says. “I definitely started as a lefty. I worked for [former mayor of London] Ken Livingstone back in the day, and was frankly, very inspired by a lot of those people, even though they also made a lot of mistakes. But I’m proud to say that I’m on the center-left of the spectrum. I believe that government plays an important role in society.”

Which leads naturally to regulation.

“It’s a controversial thing to say in Silicon Valley, but I think regulation is necessary and it has made most technologies better,” he says, correctly. “People forget this. Cars only work because we have driver training, emissions regulations, streetlights and speed limits. That’s what regulation is when it works well. We just need more of that.”

Yes we do. Fortunately, Big Tech regulation is at a wonderful apex right now, and making positive changes for people while reining in the worst behaviors at these companies.

AI and job loss

Suleyman handles another of my pet-peeve AI topics with predictable aplomb. To him, fears of job losses tied to AI are overblown, knee-jerk reactions. The AI we’re getting will help us do the jobs we want to do and remove the jobs no one wants to do anyway. But we’re also ignoring some benefits no one is talking about, and I like that his example is something I can relate to.

“There are going to be AI reporters,” he says. “I run MSN at Microsoft; it’s one of the largest news sites on the planet. One of the things I’m very excited about is how AI reporters can reinvigorate local news. Imagine there are hundreds of thousands of AI reporters that can make phone calls to people who are at the scene, who can verify eyewitness footage, conduct interviews, stitch those together into little montages, and not just do it for big national stories, where the investment is justified, but do it at a very local level — to provide accurate and factually reliable information.”

I also like that the interview ends with a joke.

“You’ve got a little bit longer,” he answers when the interviewer asks him whether there will be AI interviewers. “Maybe six months.”

And then he laughs. “I’m kidding,” he says.

Because my wife spends so much time each week interviewing people, I have a random insight into where AI can make this process so much better for her and people like her. She used to take notes in real time, trying to capture as much of what the person said as possible, a nearly impossible task. Eventually, she recorded these meetings and then had to laboriously hand-transcribe the conversations. Later, she could push those recordings through a transcription service, at first with lots of errors. Now, the software she uses just does that automatically, and accurately. And so she can focus on writing the resulting article, all while asking AI to list key themes or points, summarize an argument, verify claims with links to research, or whatever. She still has a job, and she is using AI in ways that help her.

In other words, it’s only gotten better over time and it has saved her time and money. Interesting.
