
Happy Friday! We’re on the cusp of spring; well, you are. It’s always spring here in Mexico City. But let’s kick off the weekend a bit early with an Ask Paul for the ages, a massive missive of messaging. Or something.
“Despite hardware limits, Parallels supports running Windows on MacBook Neo”
Yes. But can it make phone calls?
michaelmdiv asks:
Given the low amount of RAM and the small screen, do you think the MacBook Neo will be a one and done? Maybe Apple just had some phone SoCs lying around they wanted to use up.
Back in November, when rumors of what became the MacBook Neo were first making the rounds, I defended this product because it could revitalize the Mac by being truly competitive with the sweet spot of the PC market. What Apple released was a little disappointing–to me–not because there isn’t enough RAM or whatever, but because you can’t get out of that hole. There’s only one MacBook Neo configuration upgrade, which gives you 512 GB of storage and Touch ID. But what Neo really needs is a 16 GB RAM option and keyboard backlighting as a standard feature across the board; that’s a $5 part, probably less for Apple, and not even offering it is a crime against humanity.
This morning, Apple CEO Tim Cook tweeted a heavily caveated claim that the MacBook Neo just had the best launch week of any Mac, at least with new Mac users. This shouldn’t surprise anyone, even those who are critical of this device. I’ve long wondered why the Mac has never cracked the 10 percent usage share or market share milestones despite the iPhone as a halo product and all the obvious benefits of the broader ecosystem there. But now we kind of know why: It was the price. The lowest-cost Mac for as long as I can remember has been a $999 MacBook Air–$899 in education–and I suspect that letting Walmart sell the old M1 MacBook Air for less than that was an experiment/proof point that lowering the price would really make a difference.
It’s baffling to me that I had to defend the MacBook Neo before it was released and then had to criticize it for its obvious limitations once it was announced, as it feels like I’m confronting the same people in both cases. But the irrational exuberance around this little laptop is interesting. And tied to my remarks about the missing configuration options above–and ignoring the RAM crisis briefly, because it will either be an anomaly or the new normal, whatever–there is a big hole, dollar-wise, between the most expensive MacBook Neo ($699) and the least expensive MacBook Air ($1099). And I feel like the MacBook Neo 2 will happen. And it will fill that hole, as Apple does everywhere else in its product lineup.
A MacBook Neo 2 should have RAM tiers (8, 16, and maybe 24 GB); that’s $100 right there (maybe more). It could have another storage tier, another $100. It could have a bigger screen option, as do all Macs and all iPads, another $100. And someone who purchased a Neo 2 with 16 GB of RAM, 512 GB or 1 TB of storage, and a (let’s call it) 15-inch display would thus pay $899 to $999, closing the gap.
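To make that math concrete, here is a minimal sketch of how those hypothetical tiers could stack up. Note that the $599 base price and the $100 upcharges are my assumptions, inferred from the current lineup, and not anything Apple has announced.

```python
# Hypothetical MacBook Neo 2 pricing sketch. Every number here is an
# assumption based on the tiers discussed above, not an announced price.
BASE_PRICE = 599  # assumed entry price, matching today's base Neo

UPGRADES = {
    "16 GB RAM": 100,        # assumed RAM-tier upcharge
    "512 GB storage": 100,   # assumed storage-tier upcharge
    "15-inch display": 100,  # assumed bigger-screen upcharge
}

def configure(*selected: str) -> int:
    """Total price for the base model plus the selected upgrades."""
    return BASE_PRICE + sum(UPGRADES[name] for name in selected)

print(configure("16 GB RAM", "512 GB storage", "15-inch display"))  # 899
```

Add a 1 TB storage tier at another $100 and you land on that $999 figure, right under the least expensive MacBook Air.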
Some have said to me that the “reason” Neo can’t go above 8 GB of RAM is the A18 Pro processor. Which is correct, technically, but misses the point. Apple chose to use that processor. The A19 Pro supports more RAM, and the iPad Air, iPad Pro, iPhone 17 Pro series, and so on all have more than 8 GB of RAM. The Neo could too. And I bet it will in the next release.
The trick for Apple is cannibalization, but this smoother transition between MacBook models–from Neo 2, as I see it, to MacBook Air and then to MacBook Pro–lets it hit every segment of the market. And these are customers who also have iPhones, and probably have an iPad, an Apple Watch, whatever AirPods, and whatever else, and they are thus probably paying for iCloud+ or a full Apple One subscription and are thus on the hook for a nice monthly outlay. This is just smart. It’s good for Apple. And it’s mostly good for customers, assuming that the 8 GB thing and the missing keyboard backlighting don’t ruin the experience, which they do, in my opinion. But again, this will change.
Brad pointed out this morning that the branding is odd here, and that’s a good point. The low-end iPhone is an “e,” the low-end Watch is an SE, and now the low-end MacBook is a Neo. For some reason. (And the low-end iPad is just an iPad. Ah, Apple. Never change.) But whatever. I think Apple’s cautious, slow move into a lower market segment is, if anything, overdue. It’s the only place it can see growth now that the rest of the market is saturated. And these low-end products will help it get into emerging markets. In all cases, the idea is to reel customers in and then hope that they buy something more expensive next time. Which many will, if the experience is solid.
Anyway. No, I don’t think this is one and done for Neo. I think this is the start of something good for Apple and for customers, that the next one will be even better and will justify the exuberance, and that maybe, just maybe, the Mac will go north of 10 percent usage share and/or market share. It’s overdue.
“Report: Amazon developing new AI-powered mobile phone”
The only thing we want less than this is another Facebook phone

madpapist asks:
Although technology plays a vital part in our lives, we don’t often consider how we manage the privacy, financial, and practical issues surrounding our eventual demise. Whether it’s ourselves, our spouse, our parents, or otherwise, it’s not uncommon that the deceased family member was the technically astute one and the survivor is what we often describe as a “normal.” If the deceased was also the one that handled the family finances, the issues just compound from there.
Yes. This is a major problem.
Online accounts for banking, insurance, retirement, cloud storage, and social media often come with 2FA (thank goodness), usually linked to a device/phone, etc., that may or may not be available after someone passes away. Even the website addresses for these endpoints have to be documented and readily available.
As the technically astute (most days) spouse, I’ve just started trying to document a checklist of web addresses, passwords, etc. It’s a huge effort.
Same. I wrote what is still an ongoing series about online services last year, plus a related and also ongoing series about security earlier this year, and I started an article on this topic that I’ve not yet published. It’s called “Succession,” and it’s a bit complicated because everyone has a different set of online accounts, and each account has its own methods of authentication and so on.
What I did for my wife and kids was describe the basics of getting into my online life. They know the PINs on each of my phones and PCs, which is kind of the first step. They know that I use Proton Pass for all my passwords and passkeys, which is important, and Proton Authenticator for 2FA. I’ve given them the password for Pass, because that’s necessary. And where it’s possible, I’ve made one or more of them an emergency contact so that they can just get in themselves as needed. But … it is complicated.
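For what it’s worth, here is a minimal sketch of what the core of such a document might look like. The structure and entries are purely illustrative; you would obviously adapt it to your own accounts and keep it somewhere both safe and findable.

```markdown
# Getting into my online life (illustrative sketch)

## 1. Devices
- Phone and PC PINs: (shared verbally, or stored with the will)

## 2. Passwords and passkeys
- Everything lives in Proton Pass
- Proton Pass password: (where to find it, not the password itself)

## 3. Two-factor authentication
- Codes come from Proton Authenticator, on my phone
- Printed recovery codes: (location)

## 4. Emergency access
- You are an emergency contact on: (list the services)

## 5. Key accounts
- Bank: (web address, username)
- Insurance and retirement: (web address, username)
- Email and cloud storage: (web address, username)
```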
Every situation is different, but I wanted to get your thoughts on this overall, and on whether this subject might be worth an article (or two).
One of the things I continue to struggle with as a writer is that I can almost freeze up on the important topics like this, worried that I will miss something or provide bad information. I also find it too easy to overwrite, and I return to a topic like this, write a bit more, and then just get overwhelmed again and give up for a bit. I do this with relatively unimportant topics, too–the recent phone photography post is an example, where I finally decided to just summarize it quickly instead of really building it out–but this one is a big deal. And I won’t give up on it. I’ll try to get at least that first take out into the world sooner rather than later.
“Amazon’s next-generation AI assistant Alexa+ arrives in the UK”
I am so sorry.
helix2301 asks:
What are your feelings on Arm? I know you think Mac on Arm is great; you have said many times that Apple makes the best Windows on Arm computers.
Well. That was true before Snapdragon X. But now a real Windows on Arm PC is a much better choice. Plus, you can’t dual-boot Windows via Boot Camp on Apple Silicon Macs, so you’re stuck with virtualization, which requires more RAM and storage.
Snapdragon X is a miracle. But Apple Silicon is still “better” overall as a computer chipset–meaning better performance across the board, efficiency/battery life, and reliability–than Snapdragon X, though the coming X2 laptops should close that gap further.
What do you think is the best-built Windows on Arm computer? If someone said to you that they wanted a Windows on Arm machine that was the equivalent of a MacBook Pro or Mac Studio, what would you recommend?
I have two favorites now, but they’re both going to be out-of-date as soon as the X2 laptops arrive, which will happen in the first half of 2026: The 16-inch HP OmniBook 5, which is a low-end laptop running the entry-level Snapdragon X, and the Microsoft Surface Laptop 15, which comes with higher-end Snapdragon X Elite options.
“Perplexity Health integrates with Apple Health, Fitbit, and other providers”
I can’t wait until they introduce Perplexity Retirement Planning, another service I would never trust this company with

christianwilson asks:
Fun “what if” question, I hope. What would be the dominant OS platform if Microsoft never made an OS and instead made their fame and fortune solely on their software?
I sometimes laugh out loud when it seems like someone is literally reading my mind, and this is one such time. I was just thinking about this. I’ve spent many, many hours reading old industry books and magazines and watching YouTube videos about companies like Commodore, Atari, Digital Research, IBM (in the PC era), and so on as a buildup to a coming monthly focus on retro computing and a continuation of my Tech Nostalgia series. And it is astonishing to me how many what-ifs there are over the first, let’s say, 20 years of the home computer/personal computing era. And this is clearly one of them. A big one.
I may turn that into its own series within a series, or maybe it just becomes a theme of some kind. I also wonder if there’s a book there where perhaps each chapter is basically about something that happened, but it could have gone in a completely different direction. I don’t know.
But regarding your question, the most obvious outcome is that IBM would have done what it originally planned and used CP/M for the first PC. MS-DOS is obviously a CP/M clone and wouldn’t exist if IBM had just signed on with Digital Research (makers of CP/M). Gary Kildall was a much better human being than Bill Gates, so that dynamic would have been interesting. And assuming that the PC still went on the same trajectory, it’s an open question whether the falling out that occurred between IBM and Microsoft would have also happened with Digital Research. I feel that’s unlikely.
The problem once you get past this initial event is that the story branches in so many places that it becomes impossible to predict. Consider, for example, that Windows was a joke until it wasn’t, but much of the industry, including Digital Research (with GEM) and IBM (with TopView and then Presentation Manager), was working on various windowing front-ends for the OSes of the day. And that the eventual success of Windows, which became technically viable with Windows 2.x/286 and 386 and then truly popular with Windows 3.x, impacted those other efforts dramatically. What would the PC have looked like if Windows never happened? It would have gotten one or more GUIs, for sure. But … which one(s)? And at what speed?
An obvious answer could be the Macintosh, of course, but Apple had a lot of challenges beyond Microsoft back then. Is it CP/M? Some other DOS variant? Amiga? One of the Unix platforms?
So many options–and it goes on and on. Again, I will almost certainly be writing about some of this.
I know it’s an impossible question to answer given how much was happening in the microcomputer space in the 80s and 90s, but when I think about the possibilities, I find it surprisingly hard to look at what was out there in those formative years and pick an obvious alternative path.
I should mention one more thing, and I feel like this is the center of the story of computing in the late 1970s and 1980s: When Chuck Peddle created the 6501, 6502, and related microprocessors at MOS Technology, after working on the 6800 at Motorola, he made something magical. These chips were simplified and dramatically less expensive versions of the Motorola 6800 processor, which was itself superior in key ways to the Intel 8080. The 6502 and its variants formed the basis of most 8-bit microcomputers (home computers) of that era, with the Z80 taking up the rest, and it also powered game consoles like the NES, the Atari Lynx, and many others. This thing had legs.
As with CP/M, which was predominant in more expensive (non-home) microcomputers in the pre-IBM PC era, it’s interesting to wonder what would have happened if 16- and then 32-bit successors to the 6502 could have continued that success. Instead of the Motorola 68000 and its 020, 030, and 040 successors, we might have had a world that was dominated by MOS 650xxx whatever chips. And that company, incidentally, was owned by … wait for it … Commodore.
It never ends. I can really obsess over this.
“Apple’s iOS 26.4 update to include new features”
None of which have the word ‘Siri’ in the title
wright_is asks:
With the insanity of the AI situation at the moment (no RAM for business or consumer customers, storage manufacturers selling their entire annual production to AI hyperscalers, etc.) and estimates of PC sales sinking by double-digit percentages (we are holding off on replacing all of our PCs that were due to be replaced this year), aren’t the AI companies squeezing their users out of the market for their products?
I can’t imagine that any of these companies foresaw the impact their rapacious hardware requirements would have on personal computing (and not just computers, but also phones, tablets, and all kinds of other devices). But I also don’t think they care. They’re building cloud-hosted AI infrastructure, so their offerings don’t require any particular hardware specifications on the client. If their customers pay them $20 to $200 a month for whatever AI chatbot and use it from a phone or PC that they don’t upgrade as often anymore, great. It doesn’t matter. (To them.)
Most people can’t afford an AI subscription, so they are using loss-leading “free” accounts. And the AI companies’ actions in building out their data centres and servers mean that their customers can’t build housing or commercial space in many areas, because all of the builders have been pulled off normal commercial projects to work on the data centres. Meanwhile, the prices of the devices needed to actually connect to their services are becoming unaffordable.
Yes, there is that.
I’ve started seeing people aping my “I will not pay for AI” bit, usually in the form of “AI is a feature, not a product,” and that’s obviously fair and fine, and perhaps correct. There are some exceptions to this, of course. ChatGPT has had a nice run with individuals paying to use it. Products like Claude Code/Cowork have made inroads here and there. And we’ll see if anyone ever actually figures out agents. But it’s fair to say that most people who “use” AI are not paying for it. And they can round-robin between different AIs when they hit whatever monthly limits.
The goal here, I suppose, is to make AI “sticky” enough that people will want to pay for it. And if we’re not upgrading our hardware as often, or not paying for Microsoft 365/Google whatever subscriptions, you could make a case for paying for AI instead of the productivity services while still using the two together; this would shift the monthly payment from Microsoft or Google to Anthropic, OpenAI, or whatever else. Maybe.
Another path to stickiness, perhaps, is this notion of memory, where an AI that knows you well is more valuable than one that does not. But as with all things AI, the workarounds happen at a stunning speed, and in this case, people have already come up with incredible Markdown-based and other plain-text means of transferring memory, essentially, from one AI to another. This reminds me of the Matrix movies, where someone will call in and request the knowledge they need to fly a helicopter or whatever, and it arrives pretty much instantly. We live in strange times.
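I haven’t settled on a format for this myself, but a minimal sketch of such a portable memory file might look something like this; the headings and entries are purely illustrative.

```markdown
# memory.md – portable AI context (illustrative sketch)

## About me
- Technology writer, based in Mexico City

## Preferences
- Direct, conversational tone; skip the hype
- Short answers unless I ask for depth

## Ongoing projects
- “Succession” article on digital estate planning
- Retro computing series (Commodore, Atari, Digital Research, IBM)
```

Paste that into a new chatbot as its first message or custom instructions, and it “remembers” you well enough. The memory, it turns out, is just text.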
We were paying 760€ for Dell Pro laptops in October last year; now the same laptop is over 1,000€. In the US, one of our users needs a new laptop, and there, Lenovo cancelled the order (without informing the supplier) because they couldn’t deliver, and the Dell Pro Plus model, which we were paying around 1,000€ for last year, now costs well over $2,000.
That’s incredible.
In some cases, the price differential isn’t too horrible if you assume someone will use the PC for X number of years and average the cost out over that time. The difference between a €760 PC and a €1,000 PC over five years, for example, comes out to roughly €12.67 vs. €16.67 each month. But once you get into 1.5x, 2x, or whatever territory, the math stops making sense.
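As a quick sanity check, a few lines of Python make that amortization math concrete; the five-year (60-month) lifespan is just an assumption.

```python
# Amortized monthly cost of a laptop over an assumed lifespan.
def monthly_cost(price_eur: float, lifespan_months: int = 60) -> float:
    """Spread the purchase price evenly over the device's lifespan."""
    return price_eur / lifespan_months

for price in (760, 1000, 2000):
    print(f"€{price}: €{monthly_cost(price):.2f} per month over 5 years")
# €760 → €12.67, €1000 → €16.67, €2000 → €33.33
```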
I mentioned above that I’ve been researching computing in the 1970s to 1990s, basically, and one of the things that comes up repeatedly, oddly enough, is these time frames when certain components were difficult or impossible to get–crises similar to what we’re seeing now. In those times, some companies thrived and others struggled or disappeared, but there’s also this through line of making it work with what you have.
This isn’t hardware-related, but I just watched a video about the making of VisiCalc with Dan Bricklin, Bob Frankston, and Mitch Kapor. They had come up in an era of time-sharing and severe compute restrictions, and so when Bricklin and Frankston created VisiCalc, they were hyper-focused on coding it to be as resource-friendly as possible. Kapor, who went on to create Lotus 1-2-3, is effusive about how elegant and minimalistic VisiCalc was. Lotus 1-2-3 was nothing like that: Where VisiCalc probably ran in 16 KB, his app required a 128 KB IBM PC.
Today, we live in a world where 16 GB of RAM is the only acceptable minimum, but Apple just released an 8 GB MacBook Neo. That’s fine for light use, of course, and for those who only need that bigger screen and keyboard sometimes. But I just saw a headline, something like “The MacBook Neo is the best thing that ever happened to the Mac,” and I knew the point without ever reading the article: Now that there is a mainstream 8 GB Mac in the market, software makers will have to start trying to accommodate that lower amount of RAM, and Apple will need to keep optimizing the OS to be more resource efficient. So we live in a world of plenty, in many ways, but things are more expensive. We all have to do more with less, or do more with the thing we already have for longer.
And on the server side, Lenovo is quoting 180-day delivery timescales, and Dell is making offers that are valid for just 24 hours (we need to go through an investment process, which takes a week at least); HP, at least, is still offering two weeks at the moment.
This sort of price gouging makes the Apple MacBook Neo seem attractive, especially if other budget Windows laptops are being similarly affected by the RAM and storage price increases.
Who in their right mind is going to buy new laptops or desktops in this environment, if their existing device doesn’t break? Will we see a consolidation of manufacturers, even more so than we currently have?
I feel like the PC (and probably the Mac) shifted into a longer lifecycle long ago, no doubt triggered by smartphones and, to a lesser degree, the iPad. So in that sense, this is just more of the same, or at least similar to a situation that already existed. We had the COVID-19 pandemic five years ago, and that reshuffled markets all over the place, as did the post-pandemic resettling. And since I pay attention to this stuff, I look at PC (and other device) sales year-to-year and look for trends. The big one for the PC, really, is where it plateaus. Barring major events, like a pandemic, the PC should see flat sales at best, and probably slight declines. But we saw a spike from the pandemic, a fall after it, and then a slow climb back. And now this.
So I don’t know. The PC has always had competition. But there is more now than ever, and that competition is better than ever. And I feel like simpler devices, whether it’s an iPad, an Android laptop, or maybe a Chromebook, win out in the end. But that’s long-term. In the short term, it’s all up in the air. The future is not the MacBook Neo, but it’s also not Windows.
And I suspect we will see a similar effect on the tablet and smartphone markets as well. Apple usually orders its memory well in advance, but it agreed to a 100 percent price increase from Samsung for RAM supplies for the iPhone 17 last month; you have to assume that other manufacturers are similarly affected. Where will this end?
I’m trying to rationalize a world in which this doesn’t end. But just as the 6502 (noted above) filled a low-cost need in its era, perhaps we will see new or existing players step in to fill the void caused by Big AI and Big Tech gobbling up all the RAM and other components. This type of thing can’t happen overnight, and I suppose there’s a version of this story where the AI bubble just bursts in the next year, that rapacious need for RAM disappears or lessens, and the market shifts back to something approaching normal again.
But yeah, the big players like Apple and Samsung, I guess, will benefit from this market reality because they can afford to acquire hardware components at volume where smaller players cannot. And we will see that same dynamic we’ve seen again and again where certain players thrive, others struggle, and some–maybe many–disappear.
Food for thought: This is the ideal time for Microsoft to kill Surface, a money-losing business that I do love but has never made sense financially.
“A 32-Year-Old Bug Walks Into A Telnet Server”
Nicely done

jrzoomer asks:
What are your thoughts on NVIDIA DLSS 5? And could you expand on your thoughts on DLSS in general, which started with upscaling and then moved on to frame generation and multi-frame generation, all of which were initially controversial as well?
Last night, my wife and I were watching a video that ended about 15 minutes before we’d normally go to bed. So I decided to show her the Nvidia launch video for DLSS 5 to see what she thought, with the understanding that a) she does not care about technology, and b) she had no idea that this thing was in any way controversial. I did tell her that there are various solutions like this all over the place, like Auto SR in Windows 11 on Arm, and that the ability to upscale/improve graphics like this is an incredible advance because it allows one to play games on devices that normally would not be capable of that, or to play games at increased fidelity and performance regardless.
She thought it was impressive and agreed that the graphics looked better with DLSS 5 enabled in each example. And then I told her that everyone seems to hate it. And she didn’t quite get it. To be honest, I don’t quite get it either. I suspect part of the problem is that it’s AI, and AI has a poor reputation in certain circles, including, oddly, gaming, where it is already used extensively and will of course be used even more going forward. But it’s more than that. There are obviously some examples of this going south. I enjoyed the line–I forget where I saw this, sorry–“Imagine playing Assassin’s Creed: Shadows … with no shadows,” for example.
If you’re familiar with Pixel phones, you may know that Google has a technology called Super Res Zoom that uses machine learning to fill in the gaps to create an improved image. And that in the latest generation phones, there is something called Pro Zoom (previously called Pro Res Zoom) that uses generative AI to create details in photos. Both are useful and can work well. But in Paul’s Pixel 10 Diaries: Camera Deep Dive ⭐, I show a photo I took of the back of the Siegessäule monument (Victory Column) in Berlin, and Pro Zoom created a face for the statue that shouldn’t be there because I was viewing it from behind. I feel like this is the type of thing DLSS 5 is doing sometimes. It’s not filling in details; it’s creating them. And sometimes it screws up.
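To be clear about that distinction, here is a toy illustration. Classic upscaling can only spread existing pixels around; anything beyond that has to be predicted, which is to say invented. (This is a deliberately naive sketch, not how DLSS works internally.)

```python
# Toy nearest-neighbor upscaler: more pixels, zero new information.
# ML upscalers must instead *predict* the missing detail, which is why
# they can look great and also why they sometimes hallucinate.
import numpy as np

def nearest_neighbor_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale by repeating each pixel; no detail is recovered or created."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

low_res = np.array([[0, 255], [255, 0]], dtype=np.uint8)  # a 2x2 "image"
high_res = nearest_neighbor_upscale(low_res, factor=4)    # now 8x8
print(high_res.shape)  # (8, 8): four times the resolution, same content
```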
The thing is, I still like it. I think the ability to improve a game you already own on hardware you have not upgraded is pretty impressive. I also understand why some players and game makers would be appalled by this. And I think Nvidia CEO Jensen Huang’s response to this criticism–“you’re all completely wrong”–is about as tone-deaf as it can be. It is perhaps evidence that this guy–who has thus far skated through the AI era with nothing but praise–is just as out to lunch as any other Big Tech executive when it comes to real human beings.
But overall, I have no issues here. It seems like a real advance.
“Gamers are right to be disgusted by NVIDIA’s DLSS 5”
Yeah, we can’t let a day go by without at least some hate
OldITPro2000 asks:
What are your thoughts on the most recent Microsoft AI reorg? I found it confusing, mainly because I apparently didn’t really understand how it was structured in the first place. I thought everything was under Mustafa Suleyman to begin with, but it appears that was not the case?
There are two sides to Microsoft’s AI efforts, which sort of mirror some of Microsoft’s other businesses, like Windows and Office, meaning a consumer side and a commercial (business) side. The consumer side has been run by Mustafa Suleyman (ex-DeepMind and Inflection AI), who was also in charge of Microsoft’s in-house LLM efforts, and that business is called Microsoft AI (MAI). The commercial side has been run by Jay Parikh (ex-Facebook), and that business is called Core AI – Platform and Tools.
(It’s more complicated than that, honestly. But this stuff changes all the time.)
Suleyman is an obvious choice to lead Microsoft’s in-house LLM efforts, so the recently announced changes make sense for him. That said, I initially read this as a demotion of sorts, given that he is no longer doing part of his original job. And so, yes, that was confusing to me as well. Since then, however, I’ve read a few interviews and quotes from him explaining the change, and I guess I believe that this was his plan and preferred outcome. He wants to create Superintelligence, a more human-centric alternative to AGI (Artificial General Intelligence), the stated goal of OpenAI and some others. And Microsoft wants to cut itself free of the OpenAI drama anchor as quickly as possible, for obvious reasons.
This change also sees Microsoft consolidating its Copilot businesses–also two sides, consumer and commercial–into a single group. That, too, makes sense. Copilot has failed to resonate with customers across the board, the confusion of having two Copilot clients in Windows is obvious, and this is perhaps how Copilot should have always been, meaning a single thing. We’ll see what happens there. I don’t know anything about Jacob Andreou beyond a single pertinent fact: He, like Suleyman and Parikh, came from outside Microsoft (Snap, in this case), and it is fascinating to me how many executives who report directly to Satya Nadella are similarly not from within Microsoft. This is a big cultural shift.
Do you give any weight (pun slightly intended) to Suleyman/Microsoft working on their own models? They always had them to some degree, but it appears they are doubling down on them now. I’m sure the recent contract drama with OpenAI and AWS has something to do with it. I feel a divorce is coming soon, and it’s going to be ugly.
Yes. I referenced this in my Microsoft Reorgs Its AI Businesses post, and given my comments above about researching the early personal computing market, it’s perhaps no surprise that I compared the OpenAI situation to Microsoft’s divorce from IBM in the early 1990s. That, like the OpenAI partnership, was great for both sides until it wasn’t. And like all divorces, it ended badly.
IBM is a curious company. It didn’t just disappear like some–Sun Microsystems, for example–that were once dominant; indeed, it just earned revenues of almost $20 billion in the most recent quarter. But IBM is also gone from our industry, so to speak; it’s a kind of infrastructure company I can’t really understand and don’t care about. I’m curious if this fate awaits Microsoft–or maybe OpenAI–now. We’ll see.
Microsoft definitely has the talent and the resources to build out its own family of in-house LLMs that will be competitive with whatever Google, Anthropic, OpenAI, and others make. It also has an interesting opportunity to continue partnering with others to offer its customers a choice of LLMs–which should be automated, frankly, so that the best LLM is always used for whatever task the customer is performing–across Microsoft 365 and whatever else. So its success going forward could be tied to either or to both. But I do agree with Nadella that Microsoft needs to make its own models. Microsoft is, above all else, a platform maker. And these models are, or will be, the foundation for its platforms.
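As a thought experiment, that kind of automated selection could start as something as simple as a routing layer; the model names and task categories below are invented for illustration.

```python
# Hypothetical sketch of automated LLM selection. All model names and
# task categories here are invented for illustration.
ROUTES = {
    "code": "partner-frontier-model",  # a partner LLM that is best at code
    "summarize": "in-house-small",     # a cheap in-house model for bulk work
    "reasoning": "in-house-frontier",  # the flagship for hard problems
}

def pick_model(task_type: str) -> str:
    """Route each task to the best (or cheapest adequate) model."""
    return ROUTES.get(task_type, "in-house-default")

print(pick_model("summarize"))  # -> in-house-small
```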
“Microsoft account sign-ins broken by latest Windows 11 update”
Great, now it’s going to start pushing local accounts again