Why Apple’s M1 SoC is so Fast

Great write-up on why Apple’s M1 is so fast and why Intel and AMD will have a difficult (albeit not impossible) time catching up.

“Here we get a big problem with the Intel and AMD business model. Their business models are based on selling general purpose CPUs, which people just slot onto a large PC motherboard. Thus computer makers can simply buy motherboards, memory, CPUs and graphics cards from different vendors and integrate them into one solution.”

“But we are quickly moving away from that world. In the new SoC world you don’t assemble physical components from different vendors. Instead you assemble IP (intellectual property) from different vendors. You buy the design for graphics cards, CPUs, modems, IO controllers and other things from different vendors and use that to design a SoC in-house. Then you get a foundry to manufacture this.”

(The entire article is worth the read)

Why is Apple’s M1 Chip So Fast? by Erik Engheim, Medium, Nov 2020

Comments (37)

  1. shark47

    It's interesting, but I think the implications he draws for the PC world are pretty simplistic. Clearly he understands the technical aspects, but he's a little naive about the industry itself. The PC industry will catch up or get pretty close; it's a question of when, not if. The only real question is how much market share they'll lose in the process.

    • wright_is

      In reply to shark47:

      The real question is, how will the M1 successor work with a Mac Pro, where you can plug in large amounts of memory, different disk controllers, different graphic cards and other specialised hardware for specific jobs?

      At the moment, everything is on that one M1 SoC, including memory. That makes the whole system very limited.

      I needed more memory on my ThinkPad, so I just plugged in more memory. You can't do that with a MacBook Pro. The same for storage.

      On my desktop, the 32GB RAM wasn't enough, so I boosted it to 64GB, when I could afford it. The same with the processor, if I need more power, I can unplug the current one and plug in a more powerful one.

      On a Mac Pro, if 16GB isn't enough, you can plug in an additional couple of hundred GB of RAM.

      With the M1 design, you would need to "throw away" 90% of the PC to replace it with a new SoC with more memory or more processing power.

      That is wasteful and expensive. For a sealed product with a limited lifespan (e.g. a smartphone, at the current time), it is a useful compromise. For a device with a long working life, that design provides initial performance but makes it uneconomical to keep the machine going by boosting memory or adding more storage.

      Don't get me wrong, I'm very impressed with what Apple has achieved with the M1, but it makes less sense in a MacBook than in a phone, and even less sense in a real desktop PC. I'll be looking to see how Apple overcomes the shortcomings of the M1 design when it comes to a "real" computer in the next couple of years. I think that is where the proof of the pudding will be.

      • illuminated

        In reply to wright_is:

        A chip with everything integrated is very Apple. I bet they are looking for ways to integrate the battery too. No more upgrades of separate components like RAM or disk. When you want something different, you just throw away the old computer and buy a new one.

      • angusmatheson

        In reply to wright_is:

        I totally agree, but I think you are in the minority. I bet not one person in a thousand has changed the RAM in a desktop, let alone a laptop. In my experience it is even worse: when a laptop gets slow, they just throw it away before doing any sort of troubleshooting. Regular people buy a computer and use it. When it gets slow, they throw it away (or, hopefully, sell it, give it away, or recycle it). Computers for normal people do not need to be upgradable. The trade-offs needed to make a device user-upgradable impose a cost in size and, in the case of the M1, in performance, for flexibility I am sure almost none of the users would ever take advantage of.

      • bkkcanuck

        In reply to wright_is:

        You talk about the shortcomings of the M1 design, but the design is exactly what it needs to be for the computers it was deployed in. All the computers it replaced are entry-level machines with a max of 16GB, so the design is optimized for those machines. It was not designed for the higher-end machines, which is why it was not deployed in them. Optimizing the SoC for the device it ships in is a strength, not a weakness, in my opinion. This machine is democratising in a way, since it brings far more power than the entry-level machines before it, and its target audience were not the ones who would have opted for more memory anyway. The only reason power users are even paying attention to the entry-level machines is that the other machines are yet to come. I expect 3 or 4 basic chip designs across the Mac line, each optimized for what the computer is designed for, and that takes time.


        As a user of a Mac Mini 2018 with 64GB, making full use of the 4 Thunderbolt ports, I will of course wait for a machine that is designed for my use case. I am not disappointed by the limitations of the M1, since it was not designed for my use cases. I have no problem with the memory not being upgradeable on any consumer-oriented machines (including all laptops), since I know what my use case is and buy for that purpose. If things change, I upgrade by selling my existing computer and buying a replacement (though I am unlikely to need to do so). There are existing Mac Pros out there that make use of large amounts of memory; Apple now has a team focused on delivering systems for those pros (a change from earlier), and I am sure they are well aware of the amount of memory used by those high-end machines.


        What I am surprised about is Logic Pro X memory usage (one of the large-memory-footprint use cases). I am absolutely flabbergasted that even with the 16GB limitation it is handling lots of channels and plugins (way more than the earlier model with the same limitation). [I know some pros that are running 192GB and 384GB for audio.]


        There are of course multiple professional uses for large amounts of memory. I am not one of those; the worry for me is that if I did buy such a machine, I would feel obligated to max it out :o

        • wright_is

          In reply to bkkcanuck:

          I agree with you, up to a point.

          The Intel Mac can be upgraded. Buy it with 8GB and later decide you need 64GB, and you just replace the RAM; you don't throw out a perfectly good PC just to get more memory. That is what I find disturbing about the M1 approach.

          We have gone from a society that valued sustainability, even if people didn't know what that meant (they built things to last and to be repaired), to a society that builds things to fall apart or fail after a short period and to be disposable instead of upgradeable.

          Don't get me wrong, I am very impressed with what Apple has managed to do to get this performance out of the new chips. But they have sacrificed sustainability to do it. I'd prefer it to be a few nanoseconds slower here and there in order to make it longer-lived and more versatile, so that we don't waste as many resources.

          It's the same thing that annoyed me with my old iMac. The logic board eventually crapped out (about 5 years after Apple abandoned it from an OS point of view; it was running Linux), but the monitor was still working fine. Because of the integrated design, I had to throw out a perfectly functioning monitor.

          This isn't just an Apple problem; it is a general problem with society. Smartphones and a lot of new laptops aren't any better, even those from other manufacturers. I try to source my products from manufacturers that allow the most flexibility and repairability, which is why Apple annoys me. With the M1, I'd love to get a new Apple Silicon Mac to play with, but the design is too wasteful.

          • bkkcanuck

            In reply to wright_is:


            The M1 Mac Mini is only the low-end Mac Mini; they are continuing to sell the higher-end Intel Mac Minis (space grey), which are upgradeable.


            New M1 Mac Minis (aluminum color) = entry level: no upgradeable memory, no 10Gb Ethernet, two Thunderbolt ports. This was likely released at this point as a replacement for the DTK.

            Intel Mac Minis (space gray) = 'pro' end (used in small render farms etc.): 10Gb Ethernet option, four Thunderbolt ports, and expandable memory.


            There are offsetting interests at play (energy required to keep old hardware running, failure rate of the whole vs. failure rate of components, etc.). Making boards with more solder points means a higher failure rate (and more raw materials, larger boards, etc.). Memory by itself typically has a lifetime warranty, because it does not generally fail. A lower failure rate means lower warranty costs for Apple and fewer machines ending up in the landfill early. There is a separate issue with batteries, and that may be an area where legislation is required, i.e. batteries have to be replaceable by the manufacturer long after the device is designated obsolete.


            A machine with 8GB or 16GB can be repurposed, handed off, or sold, so memory upgrades by selling and purchasing a new machine for these low-end models should not result in landfill unless the owners are stupid.


            Having the memory on-package on the SoC means memory access can be more performant (lower latency) and more efficient (this is the same trade-off made in the new game consoles to improve performance). Both the graphics cores and the CPU can access the same memory directly, which means less passing data around over the PCIe bus; I think the built-in SSD access is done this way as well. Tile-based rendering also has efficiency advantages that limit redundantly passing information around. Basically, these trade-offs are what produce some of that fantastic performance you are getting in a very power-efficient computer. The end result is a much better user experience.
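
            The saving being described can be sketched with a toy model (my own illustrative numbers and a hypothetical CPU-produces/GPU-consumes workload, not Apple's actual memory architecture):

```python
# Toy model: a CPU produces frames that a GPU then consumes.
# With a discrete GPU, each frame must be copied over PCIe into VRAM;
# with unified memory, CPU and GPU share one pool, so no bus copy is needed.

FRAME_BYTES = 8 * 1024 * 1024  # hypothetical 8 MB frame

def bus_traffic(n_frames, unified):
    """Bytes that cross a PCIe-style link to process n_frames."""
    if unified:
        return 0  # the GPU reads the buffer where the CPU wrote it
    return n_frames * FRAME_BYTES  # one copy per frame into VRAM

# At 60 frames per second, the discrete design moves ~480 MB/s of frame
# data across the bus that the unified design never has to move.
```

            The model ignores cache effects and real driver behavior; it only illustrates why sharing one pool removes a whole class of copies.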


            "The same thing that annoyed me with my old iMac. The logic board eventually crapped out (about 5 years after Apple abandoned it, from an OS point of view, it was running Linux), but the monitor was still working fine, but, because of the integrated design, I had to throw out a perfectly functioning monitor."


            They used to have Target Display Mode (I think that is what it was called), but at higher resolutions, and because of issues with custom implementations to get around Intel chip limitations, it was removed. There is hope among some that, because these workarounds will no longer be necessary, target mode will make its return.


            I am curious as to how Apple will handle larger-memory models, whether there will be a mix of on-package and upgradeable memory, and how they would blend the memory access together (trying to get the best of both worlds).

          • shameer_mulji

            In reply to wright_is:

            "I'd love to get a new Apple Silicon Mac to play with"


            Get the base M1 Mac Mini and upgrade it to 16GB RAM. With the performance of the M1 and Apple's track record of supporting devices, you should get 4 to 5 years of good life out of it, no problem. It's the cheapest and most effective way to get into the M1 Mac ecosystem.

        • DBSync

          In reply to bkkcanuck:

          The Thunderbolt ports support Thunderbolt hubs, so you can get more Thunderbolt ports if you need them.

          • bkkcanuck

            In reply to DBSync:

            Limited bandwidth. I have a high-speed Thunderbolt SSD, 3 x 4K monitors, a Thunderbolt hard drive enclosure, an eGPU enclosure (which could be repurposed for PCIe storage as well), and then there are the USB-C devices such as an external audio interface for dynamic mics and headphones, a camera, etc.

  2. shark47

    In reply to lvthunder:

    The funny thing is, Microsoft can probably design their own chip, similar to the way they designed their own PCs, but that would piss off Intel and AMD.

  3. trevorcurtis

    It's the 5nm process. Plain and simple. Apple adds a lot of extra stuff to the SoC, but Intel does that too at larger die sizes. Adding memory and specialized processors is not new. Apple is benefiting from the smaller TSMC manufacturing process of 5nm and the fact that they can pack all of it in with reduced need for cooling and lower power requirements.

    • bkkcanuck

      In reply to trevorcurtis:

      It is more than just the 5nm process. Going from 7nm to 5nm will improve performance or thermals by maybe 20% at most, for one or the other. The x86 instruction set (CISC) is translated into RISC-like micro-operations before processing; ARM is just RISC. This is needed because large CISC instructions vary in size and can take 1 or 2 clock cycles or many more, which makes it hard to optimize predictive branching and to run instructions in parallel pipelines (a long CISC instruction would block one pipeline for a long while, causing traffic jams). I suspect this hardware decoder also has a level of inefficiency built into it. Intel was able to hide their inefficiency because their fabs were superior, but now that they have fallen behind, this architectural legacy becomes more noticeable. Moving early to a 5nm process has a downside in that, early on, it is not really suited to larger chip designs, as the reject rate is too high (this is why Apple tended to use Intel chips a year or so after they first appeared: Apple relied on Iris graphics, and those chips had a larger die size, so they were only moved to a smaller process later in the cycle).
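
      The decode problem described above can be sketched with a toy instruction encoding (entirely made up for illustration; real x86 and ARM encodings are far more complex):

```python
# With fixed-width (RISC-style) instructions, every boundary is known up
# front, so several decoders can start on different instructions at once.
def boundaries_fixed(code, width=4):
    return list(range(0, len(code), width))

# With variable-length (CISC-style) instructions, the length of
# instruction N is only known after (partially) decoding it, so the
# boundaries must be discovered one after another.
def boundaries_variable(code):
    offsets, i = [], 0
    while i < len(code):
        offsets.append(i)
        i += 1 + (code[i] & 0x0F)  # toy rule: low nibble = extra bytes
    return offsets
```

      The point is only that the second loop is inherently sequential, which is the "traffic jam" alluded to above; wide parallel decoders are easier to build for fixed-width instruction sets.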


      Other companies could have moved to 5nm, but moving earlier is more expensive... and I believe Apple gets it early because they reduce the risk to TSMC by offsetting the retooling costs for the fabs pre-production.


        • shameer_mulji

          In reply to shark47:

          Only for low-power (10W to 28W) laptops. 10nm processors for 45W laptops and servers will be coming out in the first half of next year, while 10nm desktop processors are scheduled for the second half. So currently, nothing is shipping in high volume with respect to 10nm processors.

        • bkkcanuck

          In reply to shark47:

          "Isn’t Intel at 10nm?"


          Intel is not even meeting its targets for 10nm at this point (which is supposedly equivalent to TSMC's 7nm). Intel even downplayed the move from 14nm to 10nm (1st generation) as providing little real-world benefit over 14nm. This is likely due to a high defect rate making only small dies economical. It was bad enough that Intel began backporting some of their 10nm designs to the 14nm process (at considerable cost, enough that it impacts the financials). Intel then published their roadmap as being back on track (though behind by a year), and then promptly had to backtrack on that roadmap as it became clear that they were still having major issues. TSMC, however, is just knocking it out of the park right now with their targets (which are themselves aggressive). The problems are so severe at this point that there is chatter about Intel having to start fabbing their core chips at a 3rd party (Intel does use 3rd-party fabs for some older processes on support chips, but not for anything core to their business).
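
          The defect-rate argument can be made concrete with the classic Poisson yield model (a standard textbook approximation; the defect densities below are made-up illustrative numbers, not Intel's or TSMC's):

```python
import math

def poisson_yield(defect_density_per_cm2, die_area_cm2):
    # Poisson yield model: the fraction of dies with zero defects
    # is Y = exp(-D * A), so yield falls off fast as dies get bigger.
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Hypothetical process with 0.5 defects/cm^2:
# a 1 cm^2 die yields ~61%, while a 4 cm^2 die yields only ~14%,
# which is why an immature node tends to be used for small chips first.
```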

          • wright_is

            In reply to bkkcanuck:

            Exactly. And AMD has been at 7nm for a couple of generations of Ryzen and Epyc now, and is looking to move to 5nm.

            5nm certainly brings something to the mix, but it is too simplistic to say that it is the only reason why the M1 is so fast, as the OP suggests.

  4. wright_is

    In reply to lvthunder:

    But, why should I sell my old device at a huge loss for, what, $80 of extra RAM?

    Our consumption of resources is already out of control, without artificially obsoleting perfectly functional tech for a couple of dollars' worth of upgrade.

    Humanity seems to be regressing at the moment. We used to make things that lasted; you bought something when you needed it and, often, handed it on to your children. Now everything is designed to be binned or to fall apart after a couple of years.

    Obviously, there are exceptions, like rapidly expanding areas of technology where things change quickly. But PCs have slowed down to the level of white goods, so a general-purpose PC is good for 10 years these days. My 2010 Sony laptop is still more than fast enough for general usage. Obviously it isn't fast enough for complex photo editing or heavy-duty compiling, but for general use or a simple home web server it is still more than enough, or it was once I replaced the hard drive with an SSD.

    This isn't just an Apple problem and I'm not holding Apple solely responsible, but it is a key example of this wasteful attitude to natural resources.

    • longhorn

      In reply to wright_is:

      It does seem Apple has some kind of recycling program (and you get paid something for your non-supported device when you buy a new Apple device). I assume Apple doesn't ship these devices to a landfill in Africa, but actually recycles the material.


      I think computers, even desktop computers, have moved into the device category. The SoC is pretty environmentally friendly when you think about how little material is used to create the CPU, the GPU, the RAM, co-processors, etc.


      What if Apple can make a powerful ARM Mac Pro that only weighs 15% of what the former Intel Mac Pro does? That would be pretty good for the environment.


      The latest Intel Mac Pro is a beast of a machine, but also a machine that basically no one needs. It's better to offload such tasks to a "render farm"/cloud where a single computer isn't a bottleneck for the whole process.


  5. shark47

    In reply to lvthunder:

    I don't know. I can't think of what else could be holding them back. They definitely have the resources. Probably because they need them?

  6. shark47

    Imagine when the M2 comes out, if Apple starts selling a cheaper version of the Air with the M1 chip.

  7. waethorn

    In reply to lvthunder:

    So you're trying to say that Apple having their chip design in-house has nothing to do with the performance optimizations between their hardware and software? Wow.


    Microsoft has already worked with Snapdragon several times, and look at the performance they got from it. They also worked with Intel on integration in the Surface. Microsoft said "power management is a hard computer engineering problem" when the Surface Pro 4 (?) shipped.

    • shark47

      In reply to Waethorn:

      The Snapdragon is not a great example, given that Qualcomm took a mobile chip and tried to use it in a PC. They probably work better in Chromebooks. Not sure.


      In any case, Microsoft has used existing chips with a few minor tweaks here and there. I do wonder whether, if Microsoft had worked with AMD instead of Intel on the Surface line, things would be different.

  8. wright_is

    In reply to Jeffsters:

    My first sentence asked what comes next. Not sure how asking what the successor to the M1 will bring means I don't assume there will be a successor?

  9. txag

    In reply to lvthunder:

    Intel isn’t exactly burning up the world with their innovation right now. Maybe they will surprise me.

  10. txag

    It will be hard for Windows computers to match Apple, because nobody making Windows computers has chip design in house.

  11. waethorn

    In reply to lvthunder:

    Intel doesn't make the operating system.

  12. codymesh

    I disagree with this article. Maybe x86 and Intel can't do it, but Qualcomm absolutely can assemble IP from different vendors, and we should expect them to be able to deliver what Apple has. The real business-model problem is that Qualcomm is only interested in demonstrating performance leadership on phones, not PCs or anywhere else. Qualcomm basically killed smartwatches this way too.


    Apple's approach to ARM isn't special. It just highlights complacency from the competition, and it certainly should be an embarrassment that Apple beat all of them in just a decade.
