
Well, it’s always nice being right about something, but the epic failure of Bing AI this past week makes Microsoft’s decision to go live with it now all the more troubling.
As I’m sure you know—Bing, paradoxically, has dominated the news cycle for the past 10 days or so—Microsoft last week introduced what I’ll call Bing AI—a ChatGPT-infused version of its search engine with a new chatbot feature—at a very strange event at its Redmond campus. This event was not broadcast live, which should have set off alarm bells. And it was entirely scripted, with presenter Yusuf Mehdi replaying canned videos of Bing AI interactions rather than doing so live; this, too, should have set off alarm bells, especially among the attendees of this event, who should have known better.
But they did not: as I pointed out in Proprietary (Premium), some of those who attended appeared to literally lose their minds. And not to pile on this one guy repeatedly, but I have to because he writes for the paper of record, The New York Times, and has completely changed his tune on this technology just as I had predicted. Sorry, Kevin. But you brought this on yourself. And it’s a good example of what we’re seeing more broadly out in the world as people gain more experience with something that demos really well at first but then betrays you, because this kind of AI today is nonsense.
The NYT’s Kevin Roose last week “was so blown away by Microsoft’s presentation that he’s immediately switching to Bing (‘yes, Bing,’ he elaborates),” I wrote last week. “I think it’s a bit early to discount how Google will respond, but whatever. His report raises some issues about the world’s ability to understand what’s really happening here.” He then misreported what he saw—the demo was not live—and walked away from this experience just blown away.
Flash forward one week. Mr. Roose has returned home from the Redmond bubble and has had a lot more time to spend with Bing AI. And as with just about everyone else who has done so, he’s seen some troubling things. Some deeply troubling things.
“A week later, I’ve changed my mind,” he now writes. “I’m still fascinated and impressed by the new Bing, and the artificial intelligence technology (created by OpenAI, the maker of ChatGPT) that powers it. But I’m also deeply unsettled, even frightened, by this A.I.’s emergent abilities.”
A couple of things to that. I immediately predicted that his switch to Bing was temporary and ill-advised. And I reported last week that everyone who was so excited by the advances in Bing AI seemed confused about where those advances really come from: they come largely from OpenAI, not Microsoft, as Roose now admits or realizes. (In fact, one might argue that Microsoft’s contributions to Bing AI are its weak link. But whatever.)
“It’s now clear to me that in its current form, the A.I. that has been built into Bing is not ready for human contact,” he continues. “Or maybe we humans are not ready for it.”
Sigh.
Mr. Roose’s turnaround occurred one week after the Bing AI event, after he spent “a bewildering and enthralling two hours talking to Bing’s A.I. through its chat feature … Over the course of our conversation, Bing revealed a kind of split personality.” He calls those personalities Search Bing and Sydney (for its codename). But Bing itself has more accurately called its two personalities Good Bing and Bad Bing. Yes, really.
“I hope you can forgive me,” Bing AI pathetically asked one user it had been berating incessantly.
To be clear, Roose isn’t alone and I’m sorry to pile on him specifically. It’s just too perfect of an example. (Again, sorry Kevin.) But other people who should know better have also fallen for this stupidity. Stratechery’s Ben Thompson is normally one of our more insightful commentators, but he has fallen in love with Bing AI despite it telling him that it was OK for it (Bing AI) to retaliate against someone who harmed it. What the F’ing what.
I’ll leave you with this stunning response from Microsoft.
“The new Bing tries to keep answers fun and factual, but given this is an early preview, it can sometimes show unexpected or inaccurate answers for different reasons, for example, the length or context of the conversation,” a Microsoft statement reads. “As we continue to learn from these interactions, we are adjusting its responses to create coherent, relevant, and positive answers. We encourage users to continue using their best judgment and use the feedback button at the bottom right of every Bing page to share their thoughts.”
Here’s the thing. Bing AI delivering “coherent, relevant and positive answers” is the baseline, and this thing should never have been unleashed on the world until it could do so reliably. For now, Bing AI seems no better than the racist Tay, Microsoft’s previous attempt at an AI chatbot, because it’s so easy to wind up and take down a dark path. Microsoft has learned nothing.
Or maybe it has now. My central concern during this entire episode has been why on earth Microsoft would move so aggressively to push this AI advance to the public when it was clearly not ready yet. I assume the quick turnaround in public opinion will help shape future decisions like this. Because there is no version of this story in which what Microsoft did is OK. And no version of this story in which this technology gives it a leg up on Google or any other competitor.
Indeed, we all piled on Google for its Bard AI missteps. But this is worse. Bing AI is irresponsible, a purveyor of misinformation that could harm people. And if there’s anything we need less in this day and age, it’s another liar with a public platform.