
There’s a scene in the original The Matrix in which Trinity, one of the movie’s protagonists, calls in to the operator and tells him she needs “a pilot program for a B-212 helicopter.” The operator does some typing, and within seconds he wires Trinity’s brain so that she knows how to fly that helicopter. And off we go.
Science fiction like The Matrix can be escapist fun, and this scene is a great example. But watching this in 1999, we saw it for what it was: a fictional, impossible scenario, part of the world-building of that film. Flash forward over 25 years, however, and The Matrix feels a bit different. Not like a documentary per se. But more like a premonition of where technology would take us.
No, we’re not going to be jacked into a computer system that will directly wire our brains anytime soon. Though, be honest: that feels more realistic and possible today than it did in 1999. But we’ve moved quickly from Internet search, which lets us find the answer to any question, to an AI-based reality in which we can now make almost anything to solve almost any problem.
As is always the case with technology, this capability was at first a theory, and then the result of research and experimentation, and then it moved into what I still think of (inaccurately, I’m sure) as the white lab-coated world of the tech elite before being fed to the unwashed masses. It started innocently enough with grammar checking, AI-based writing tools, and image generation. But it’s evolved quickly to include conversational iteration, the ability to keep working with AI to solve a problem without having to start over from scratch with each prompt.
But this is different. Something radical is happening.
Consider my go-to explanation of why generative AI can be so useful. I have this website, Thurrott.com, and we publish several articles every day, each of which needs a thumbnail or hero image of some kind. If we’re covering an industry news story, there’s a good chance that the source of that news provides images we might use for this purpose. But we used to subscribe to services that provided massive collections of stock photos we could use as well. Now, I can simply use whatever AI to generate an image, and not only is the quality incredible, with massive improvements over the past few years, but that image is unique to me. No one else will use it.
But what about the jobs, you ask? Well, I do have a few graphic artist friends, and I’m sure any of them would be happy to take me on as a client. But I can’t afford that, and they cannot generate images to my exact specification the way AI can, and certainly not at the speed AI can. They can’t sit there with me and iterate through a design until it’s exactly right; their time is too valuable. What I’m doing isn’t stealing a job; that job never existed.
For AI to be effective–meaning, for it to benefit mankind by taking away drudgery or busy work–it needs to save us time and save us money. Generative AI’s ability to create static images for the site does both. It happens quickly and can be easily adjusted if the initial effort isn’t perfect, and I don’t have to pay for those services anymore. Indeed, I can use AI to create images for free if I’d like.
There are other workflows in which I don’t use AI. I will never use AI for writing, and while being replaced was a natural fear for writers as AI exploded over the past few years, I feel like the balance has shifted and that we’re coming to understand how and where AI makes sense and where it does not. For example, I own this business, Thurrott LLC, but I hate it. I just want to write. What if AI could streamline the business-related activities I must undertake, saving me time, money, and drudgery, so that I can focus on what I actually care about, writing? That’s the dream.
But it’s also quickly becoming the reality. Using AI to create an image–or a video, a song, or a podcast version of some writing, or whatever–is one thing. But what about creating something more sophisticated? What if AI is the key to a Matrix-like future when we can ask it to just make something that occurs to us in the moment? Something … more impressive.
This ties into so many of my focus month topics this year that it’s almost comical. Consider a few examples.
This month, I’m looking at alternatives to Windows 11. Each has its pros and cons, as they all do, and some are better than others depending on your needs and expectations. In Switcher 2026: Coping With the Mac ⭐️, I mentioned that one of my hang-ups with the Mac is that I can’t find suitable replacements for Microsoft Paint and Notepad. Today, the “solution” is to spend $80 to $120 a year on Parallels Desktop so I can virtualize Windows 11 and run those apps alongside Mac apps, which takes up RAM and disk space. But in testing Chrome OS, I discovered that there was a very simple Gallery app that did much of what I use Paint for day-to-day. And that made me think: Could I just vibe-code apps for the Mac that do what Paint and Notepad do? If so, I could drop the virtualized Windows 11, save money, and get my daily work done efficiently on that platform.
I used to use various versions of Adobe Photoshop for more advanced editing needs, and these days I use Affinity. That app is free, which is incredible, so it’s already saving me money on one level. But Affinity, like Photoshop and other creative apps like this, is complex. There are workflows I don’t use enough to master, and sometimes I have to relearn skills when I try to do something more complex. But speaking of AI saving you time, money, and drudgery: Adobe just announced its Firefly AI Assistant, and it could disrupt Creative Cloud in the same way I recommended today in Ask Paul that Microsoft attempt with Copilot. Instead of having to master a complex tool, a process that takes time and is literally drudgery, you can use natural language to just tell Firefly what it is you want to do. That’s incredible.
I just spent almost two years trying to create a modern Notepad alternative using Microsoft developer frameworks. I did some impressive (to and for me) work along the way, but I never did get the multiple document/tab thing to work correctly, and so I turned to AI. I failed in the end with straight GitHub Copilot and then Anthropic Claude, but once I switched to Clairvoyance, it clicked. In less than a month, I had done it. And by “I,” I of course mean Clairvoyance did it, based on the work I had already done. The thing is, in talking to Brad about Clairvoyance before I had gotten access, he told me something interesting. He was using the tool, too, of course, but he wasn’t using it with a traditional developer editor or IDE. He was only using Clairvoyance. As a non-developer, Brad knew what he wanted to create, but he also doesn’t have the programming skills to do it himself. But Clairvoyance just handled that for him. They conversed, and they iterated, and then programs were made.
On last week’s Windows Weekly, Leo said something very similar. Leo does have the programming background and skills to write his own code. But as Anthropic Claude, in his case, had improved, he began leaning more and more on the AI to “just do it,” so to speak. And he quickly went from cute little command line apps to more complex workflows that bridge physical devices like the Sonos-compatible speakers in his home. He only interacts with the AI conversationally now. He can look at the underlying source code that’s created. But he doesn’t have to.
A year ago, maybe, but two years ago definitely, the conversation around AI would always go like this: Yes, this is incredible, and I would love to integrate whatever capability into my workflow, but the caveat is that this assumes we can trust the thing. Critics and deniers continue to cite “hallucinations”–we all love that word–as the blocker. But as is so often the case, the people who feel most negatively about something are the same ones who haven’t used that thing recently. And AI is moving quickly. This problem is being solved.
When OpenAI co-founder Andrej Karpathy coined the term “vibe coding” a bit over a year ago, everyone misunderstood what he meant. He was describing a process by which a professional developer would use AI conversationally to create something, but with the caveat that they would still need to review the code, fix the inevitable bugs, and understand what was happening. This wasn’t something normal, mainstream users could do to any level of success because they would quickly run into coding errors they would never understand.
But as Brad and Clairvoyance show us, that’s changed. And as with terms like “ironic,” which came to mean what so many people thought it always meant after a pop song got that hilariously wrong, vibe coding evolved to the point where normal, mainstream users can use it. And that capability, that power, is only going to get better. Not over a long time, but quickly. It’s happening as you read this.
Vibe coding is not the right term, but we’re stuck with it, to some degree, so perhaps we could evolve it into vibe creation or vibe making. If you can describe it, you can make it.
I was discussing this with my wife last night, and I used an example like a note-taking or to-do list app, where you try all the available choices and none are quite right. What if you could simply turn to an AI and solve this problem, either by blending features from two apps to create that perfect “Goldilocks” solution, or just creating a bare-bones app that only does what you need? And then maybe you would evolve that over time, iterating it with the AI, as you used it and figured out a few other things you’d like.
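To make that bare-bones app idea concrete, here’s a minimal sketch of what the core of such a vibe-coded to-do app might look like in Python. Everything here, the `todos.json` filename, the function names, the data shape, is an illustrative assumption of mine, not the design of any real app:

```python
import json
from pathlib import Path

TODO_FILE = Path("todos.json")  # hypothetical storage location

def load_todos(path: Path = TODO_FILE) -> list[dict]:
    """Read the task list from disk, returning an empty list on first run."""
    if path.exists():
        return json.loads(path.read_text())
    return []

def save_todos(todos: list[dict], path: Path = TODO_FILE) -> None:
    """Persist the task list as plain JSON so it stays human-readable."""
    path.write_text(json.dumps(todos, indent=2))

def add_todo(todos: list[dict], text: str) -> list[dict]:
    """Append a new, not-yet-done task."""
    todos.append({"text": text, "done": False})
    return todos

def complete_todo(todos: list[dict], index: int) -> list[dict]:
    """Mark the task at the given position as finished."""
    todos[index]["done"] = True
    return todos
```

A conversational AI would produce something like this in seconds, and iterating on it (“add due dates,” “sort completed items to the bottom”) is just another prompt.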
This is what’s happening right now, literally. And this morning, I was amused to see a story in my feed by writer and colleague Harry McCracken, describing how he had just created a vibe-coded word processing app for himself that does only exactly what he needs. Like me, Harry came to understand that Microsoft Word, while powerful, is full of thousands of features he’ll never use or need. And like me, he’s long wanted something simpler and less distracting. So he made it. He previously made a similarly personal note-taking app.
This is what Richard Campbell calls bespoke software. It’s artisanal and personal and specific to the needs of one person. There are ways to share these things with the world, of course, and one might find catalogs of starter projects to then iterate and personalize with an AI, making each one just right for its owner. This shift will change software forever. And it will change app stores, which explains why Apple is so strenuously focused on removing vibe-coding solutions from its App Store.
But the bigger societal impact of this shift is that thing that I used to always give Microsoft credit for, taking a technology away from those white lab-coated elites and giving it Prometheus-like to the unwashed masses. The unwashed masses don’t know how to code. And now they never have to learn, just like Firefly AI Assistant users will never need to learn the complexities of Photoshop or Premiere in the near future. Anyone, not just the experts, will be able to make, to create. It’s happening right now. It’s all around us.
Just this week, OpenAI announced a major update to Codex that brings its generative software development capabilities to productivity tasks, similar to how Anthropic evolved Claude Code into Claude Cowork. Perplexity announced Personal Computer for the Mac, which among other things lets you interact with local apps and files, remotely via a phone, similar to Claude Cowork and Dispatch. Anthropic announced Claude Opus 4.7, which creates “more tasteful and creative” content when completing professional tasks, and, separately, a Claude app update specifically designed for parallel agent usage that features a user experience I will refer to again quickly below. Google announced a new Android CLI that lets you describe an app using natural language and, separately, a way for Gemini to turn workflows into skills you can share with others. All this in addition to Adobe Firefly AI Assistant. And all in just one week. And I’m probably missing some announcements. Heck, Microsoft vice president Scott Hanselman has vibe coded about 300 apps and services so far, over 50 of which are Windows apps.
So the notion that we will soon, in Matrix-like fashion, be able to make anything we want to make is not far-fetched. It’s happening. In many cases, it has already happened.
Here’s a throwback for you.
When Microsoft jumpstarted this AI era by unleashing what we now call Copilot on the world, I would repeatedly reference an incredible moment at Build 2023 in which Steven Bathiche described how Microsoft would implement AI across its stack, primarily via productivity apps and services. There would be an evolution across three application structures, as he called them: AI beside, which is the Copilot model where AI features are added to existing, legacy apps; AI inside, in which AI features would more naturally be integrated into existing and new apps; and AI outside, where our technology infrastructure has been completely transformed and an AI orchestrator takes our intent and then uses whatever app, service, and AI features to just do that thing.
That’s what’s happening right now. AI outside. Or, as we should probably call it, AI everywhere.
This is why apps are becoming semantic, so that their individual features can be accessed without a human needing to run those apps, find the exact user interface, and master whatever skills. Those features will be controlled by AI, accessed behind the scenes. Those apps will, over time, disappear. And we will get things done in very different ways than we have been doing over the past two to four decades. Some are already doing this. That’s what Brad’s and Leo’s experiences are: They have conversations with AI, describe what they want, review the work as it happens, and then sign off when it’s done.
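As a rough illustration of what “semantic” means here: instead of a human driving the user interface, an app publishes a machine-readable description of a feature that an AI orchestrator can discover and invoke behind the scenes. This Python sketch is entirely my own assumption, the `resize_image` name, the schema layout, and the dispatcher are hypothetical, but the pattern mirrors how tool-calling generally works today:

```python
def resize_image(path: str, width: int, height: int) -> str:
    """The app's actual feature; here just a stub that reports what it would do."""
    return f"resized {path} to {width}x{height}"

# The machine-readable description an orchestrator reads instead of a UI.
RESIZE_TOOL = {
    "name": "resize_image",
    "description": "Resize an image file to the given dimensions.",
    "parameters": {
        "path": {"type": "string"},
        "width": {"type": "integer"},
        "height": {"type": "integer"},
    },
}

def dispatch(tool_call: dict) -> str:
    """What the orchestrator does: match the user's intent to a published
    tool and call it directly, with no human navigating menus."""
    if tool_call["name"] == RESIZE_TOOL["name"]:
        return resize_image(**tool_call["arguments"])
    raise ValueError(f"unknown tool: {tool_call['name']}")
```

The human never opens the image editor; the AI translates “make my hero image 800 by 450” into a call against that schema.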
That’s some people, now. But what does this look like when it’s everyone, all the time?
This is still unclear. But I mentioned the Claude app, which was recently updated to support multiple agents running in parallel. That sounds boring or technical, I guess. But have you seen this app update? In short, Claude used to look like every other chatbot, with a big text box for typing a prompt, some related filters and controls (Write, Learn, Code, etc.) and a collapsible sidebar where you could access your chat history. At a high level, it still looks like that. But now, that sidebar has toggles for switching between chatting, coding, and coworking. There are projects, and something called artifacts, because AI, for some reason, is all about new language. This thing is turning into … wait for it … Outlook.
Yes, Microsoft Outlook.
Which on one level could be described functionally, a place for you to manage email, contacts, and calendar. But really, for 20 or 30 years, Outlook has been where a billion-ish knowledge workers manage their work days, Monday through Friday. It’s been the center of those work days. It’s what Teams or Slack is, sort of, to a more recent generation. And it is, I think, where AI is going.
That is, instead of sitting down in front of Word to write, Excel to calculate and visualize, and Visual Studio to write code, one might instead sit down in front of whatever AI to do all those things and more. The interaction will be conversational, which can be typing or literally speaking. It will extend beyond a computer and be on the web and on our phones, so the AI can interact with us no matter where we are. We might set it off on some task on the PC, go out to dinner, and it will alert us via a phone-based notification for whatever reason. Satya Nadella once said that Copilot was like the new Start button and we laughed. I now think he meant that literally.
Here’s the thing. This is exciting. This is the literal answer to the problem I presented in From the Editor’s Desk: What We’ve Lost ⭐. It’s a reason to be optimistic for the future because we will save time, money, and drudgery. It’s how we can get caught up in the excitement of creating, or making, because those capabilities have expanded exponentially and the only limit, really, is our own imagination. This is the modern version of me seeing Commodore computers in a Sears in the late 1970s or early 1980s and instantly imagining what I could make with such a computer. What will we make now? What will you make? The capabilities are breathtaking.
And what will life be like when those creation capabilities extend to the physical world embodied by Raspberry Pi, 3D printers, and the whole maker economy? People are already creating precision replacement parts for antique cars on their own. We’re on the cusp of true disruption everywhere.
I don’t use AI all that much, still. I did use it for that coding project, and that went well. But there is so much more promise and opportunity out there. And the logical next steps would be to figure out how AI might save me time, money, and drudgery at work and in my personal life. It’s a wide open field, and it’s all there for the taking. I will try, taking some baby steps. Half the reason I discussed this with my wife is that I figured she, as a normal person, would immediately seize on some use cases that wouldn’t have occurred to me. So we’ll see. Because this is truly exciting. And I want it to happen.