Adobe Illustrator and InDesign Go Native on Apple Silicon

Posted on June 8, 2021 by Paul Thurrott in Mac and macOS with 21 Comments

Adobe announced today that it has successfully ported Illustrator and InDesign to Apple Silicon for native compatibility with M1-based Macs.

“With so many designers around the world relying on Illustrator and InDesign every day to help them create and express themselves, we know speed and performance are key,” Adobe’s Jasmine Whitaker notes. “With the launch of Apple’s new line of Macs and Macbooks, running on the [Apple] Silicon M1 chip, we made it a top priority to optimize all Creative Cloud apps—Illustrator and InDesign included—to run seamlessly in this new environment.”

The release of Apple Silicon-native versions of Illustrator and InDesign follows Adobe’s March release of an M1-native version of Photoshop. In each case, the firm cites major performance benefits from the transition: Adobe claims that Illustrator users will see a 65 percent increase in performance on an M1 Mac versus the Intel version, while InDesign users will see a 59 percent improvement in overall performance on Apple Silicon.

Some particular actions are even faster, Adobe claims. Opening a graphics-heavy file in InDesign, for example, is now 185 percent faster, and scroll performance on a text-heavy document of 100 pages improved 78 percent. Some Illustrator actions see similar performance improvements.
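For readers translating these percentages into wall-clock time: a task that becomes “X percent faster” takes roughly old_time / (1 + X/100) as long. A quick sketch, using a hypothetical 60-second baseline (not a figure from Adobe):

```python
# Convert "X percent faster" claims into estimated task times.
def faster_time(old_seconds: float, percent_faster: float) -> float:
    """New time when a task runs `percent_faster`% faster than before."""
    return old_seconds / (1 + percent_faster / 100)

# Hypothetical 60-second baseline on an Intel Mac, scaled by Adobe's claims:
baseline = 60.0
print(round(faster_time(baseline, 65), 1))   # Illustrator overall: 36.4 s
print(round(faster_time(baseline, 185), 1))  # InDesign file open: 21.1 s
print(round(faster_time(baseline, 78), 1))   # InDesign scrolling: 33.7 s
```

So “185 percent faster” means the task takes roughly a third of the original time, not that it finishes before it starts.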

Adobe says the M1-native versions of Illustrator and InDesign will begin rolling out to customers today and will be available to all customers worldwide soon.





21 responses to “Adobe Illustrator and InDesign Go Native on Apple Silicon”

  1. thejoefin

    Seeing how fast big companies support M1 should be a case study in how to attract developers to a platform. Turns out when you build a great product, people want to support it.

    • bkkcanuck

      It is more about the number of Macs that will be running M1. Soon, every single Mac sold (except for maybe one more Mac Pro refresh) is going to be M1, and Rosetta will likely be removed in a few years, so all apps that want to continue to run will have to be ported over to Apple silicon. The average number of units sold varies, I think, between 15 and 20 million per year (maybe more now). That is a sufficient market to write applications for. On top of that, the unification of the development platform adds around 45 million iPad units per year.

      If you want to continue to sell software for Macs, you are going to prioritize porting your applications to be native (which Apple has done an excellent job of making as easy as possible). It can also make sense to unify iPad and macOS development (optimized for both UI front-ends), as that is where the market is moving, as seen by Apple now shipping new features across platforms in unison where it makes sense. I would also guess that, even with the good selection of built-in functionality, macOS users still spend more money on software than Windows users do [for home users; macOS still is not a big factor in business]. Simply put, Microsoft's lack of an all-in commitment to its platform makes it less attractive, and more companies will take a wait-and-see approach (chicken and egg).

    • wright_is

      More like, "it turns out, when you tell people that their gravy train is being de-railed and they have to jump on the new bandwagon to continue earning, people jump on the bandwagon."

      Mac developers don't really have a choice. And, luckily for them, they already went through a code rationalisation with the move from PowerPC to Intel a decade and a half ago.

      On Windows, Microsoft just doesn't have that much power, because business users won't jump to a new platform just because Microsoft tells them to. You aren't going to throw out 20,000-30,000 PCs and replace them with ARM just because Microsoft says so. You will look at a gradual replacement over the next 10 years, and you won't start moving until all the critical LoB software is available. Given that a lot of LoB software hasn't been supported for donkey's years and the developers have disappeared, that is unlikely to happen in a lot of cases.

      Also, whereas Mac developers had to refactor their software a decade or so ago, most Windows developers haven't had to refactor their code in over 30 years. That is a lot of hard work, especially as a lot of antique code is probably in assembler or C.

      That was one of the reasons MS Office on the Mac took years to get from PowerPC to Intel, being stuck in Rosetta hell "forever". Many of the functions in Excel had been written in PowerPC assembler and the whole Excel engine had to be re-written for Intel OS X. That is a major undertaking.

      I'm assuming that Microsoft (and others) didn't want to get caught with their trousers around their ankles a second time and took the opportunity to write good, managed code, which is one of the reasons why many developers have managed to move so quickly this time around.

  2. waethorn

    Nothing from Autodesk yet?

    • lvthunder

      The Mac is such a small portion of Autodesk's business that it's not surprising.

      • locust_infested_orchard_inc.

        Autodesk applications such as Flame, Flare, Lustre, and Smoke are not available for Windows, only for macOS and Linux.

        No doubt Autodesk will port these applications to the M1 chip once they realise Apple's silicon is running rings around Intel.

  3. midpacific

    The only way these gains are real is if the M1's TDP is correspondingly higher in these use cases than that of whatever Intel CPU it is compared to. M1/ARM is still more efficient in terms of throttling and standby modes, but it can't magically process faster at the same wattage.

    • midpacific

      Hmm... it seems like "tweaks" for certain workloads would be even more mature on Intel, so it really makes no sense that new M1 tweaks would be faster. I mean, developers have wrung everything they can out of the Intel architecture. I imagine the overall experience on the current M1 will be no better than Intel, and mostly worse.

      • wright_is

        No, Intel is more or less the other way round. Intel designs general-purpose processors for itself, and the operating systems have to be tweaked to work with them. The same has been true of all general-purpose CPUs: they are designed to be general purpose (therefore mass-produced and, relatively speaking, cheap), and the operating systems have to be tailored to their strengths and weaknesses.

        There have been some amendments to the Intel specifications over the years for multimedia extensions (MMX, SSE, AVX, etc.), but generally speaking, the x86/x64 CPUs remain general purpose.

        Also, part of the performance of the M1 comes from including the RAM in the same package as the processing units. That reduces latency, but you are then stuck with the specific memory size designed into the chip.

      • Oreo

        Why are you doubting what has been confirmed many times by independent benchmarks? Have a look at AnandTech’s benchmarks of the M1 and the closely related A14, which explain *why* Apple’s cores are faster than Intel’s. For example, they are much, much wider (the big cores on the M1/A14 are 8-wide whereas Intel’s are 5-wide) and have much larger caches (192 KB of L1 cache vs. 48 KB). Put another way, in these respects Apple’s cores resemble Power server chips more than Intel chips. Furthermore, Apple has built in hardware functions to accelerate e.g. certain JavaScript operations and low-level things like reference counting. So no, it isn’t magic, just good design and tight integration of hardware and software.

    • bettyblue

      I read a great article that interviewed an Apple engineer, and there were some great examples of how they cut down so many processes in the OS that just make things faster, because they could design the chip to work better with the OS. Just lots and lots of little tweaks that make it super efficient at handling code and memory.

      This is something, right now that only Apple can do. Apple could NOT do it with Intel and neither can Microsoft with Intel or AMD.

      What is scary, or should be for the PC makers and Microsoft, is that the M1 Macs that are available out in the wild right now are going to be the slowest-performing Apple silicon Macs ever. The chips have 8-core CPUs and 8-core GPUs. There are all kinds of rumors of an M1X or M2 with 12-16 CPU cores and even more GPU cores. Once Premiere is ported, it will be interesting to see where Apple is with the M series and how much a new 32-inch iMac with a 16-core CPU/32-core GPU M2 crushes a Windows PC with some Intel/AMD setup.

      • F4IL

        > What is scary, or should be for the PC makers and Microsoft, is that the M1 Macs that are available out in the wild right now are going to be the slowest-performing Apple silicon Macs ever.

        I believe the performance gap they have now (with the M1) is as big as it gets. Apple is many process nodes ahead of Intel and unfortunately can do very little to widen the gap. At some point process nodes hit a brick wall, and afaik there is currently no way to go below 1nm.

        • bluvg

          People also often don't understand that Intel's nm ≠ TSMC's nm. Transistor density would be a better measure. The same process isn't necessarily used throughout a chip, either.

          Reports suggest Alder Lake will be Intel's turning point, similar to how Conroe/Core washed away the Pentium stink back in the day. They are not standing still.

          • locust_infested_orchard_inc.

            You are correct in stating that most people are misinformed about the process node as an indicator of performance.

            Furthermore, most fabs are disingenuous when stating their current process node, as they measure it less conservatively than Intel does. Intel's 10 nm process node is therefore more in line with TSMC's 7 nm.

            Transistor density is indeed a better measure of chip-shrinkage advancement; a chart sourced from AnandTech depicts the transistor densities of the various process nodes amongst the leading fabs.
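            The density point can be made concrete with approximate peak logic densities as widely reported in press coverage (exact figures vary by source and by cell library, so treat these as rough illustrations, not spec-sheet values):

```python
# Approximate peak logic transistor densities (million transistors / mm^2),
# as commonly reported in press coverage. Real chips vary with cell
# libraries and SRAM/logic mix, so these are rough illustrations only.
densities = {
    "Intel 14nm": 37.5,
    "Intel 10nm": 100.8,
    "TSMC N7 (7nm)": 91.2,
    "TSMC N5 (5nm)": 171.3,
}

# The marketing "nm" numbers suggest TSMC 7nm should beat Intel 10nm,
# but density ranks them the other way around:
for node, mtr in sorted(densities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{node}: ~{mtr} MTr/mm^2")
```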


      • bluvg

        So now that they're on their own chips, they're engineering out portability?

        A design goal of NT was portability, but I've never seen any stats on the potential performance impact, how much that still factors in now versus in its early development, or how much the chip influenced the OS (and vice versa) in the Wintel world.

        In a recent interview with Jim Keller, he at one point criticized x86 as ancient (Intel itself tried to break free with Itanium, of course), but then later dismissed the difference between ISAs as relatively inconsequential, chip design holistically being far more important. Apple in that regard does have a definite advantage of a much smaller target than the x86/x64 market.

    • bluvg

      Different ISA and chip architecture, though. Many benchmarks do seem to show the M1 not only outperforming or comparing well against both Intel and AMD, but doing it at significantly lower power draw.

      • wright_is

        Yes, it is a trade-off. Because the memory is integrated into the same package as the processor, latency is much lower. This means that tasks that fit into main memory and onto the built-in storage have the potential to be much faster than equivalent operations on Intel and AMD systems, where memory and storage are accessed over a conventional bus.

        That also means that those M1 chips are restricted to the amount of memory they can use and any data on external storage will slow down tasks. If everything you do can be done within those restrictions, an integrated design, like the M1, is going to be much faster.

        If at some point in the future, your application needs more memory or you need large amounts of external storage, you are out of luck.

        For most users, that will be a net gain that existing Intel and AMD packages can't compete with. If, on the other hand, you are a power user with a Xeon-based Mac Pro, bucketloads of RAM (64GB, 256GB or more) that is actually used, and large RAID arrays of raw data, the M1 won't be such a boon.

        Both designs have their advantages and disadvantages. As I needed more RAM for my desktop PC, I just doubled it up to 64GB. That isn't an option for an M1 user. It looks like some ARM applications are also more memory efficient in macOS, compared to their Intel cousins, so that might help, say in Photoshop, but where the extra RAM is really needed, those applications will remain Intel, for now.

        That is why I am very interested to see what Apple has in store for Pro users. The current M1 design philosophy of RAM and storage integrated into the core silicon won't work, when you really need 256GB RAM for your model...
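        The working-set trade-off described above can be sketched with a toy latency model (all numbers are illustrative assumptions, not measurements of any real machine):

```python
# Toy model of average access time when part of a working set spills
# out of fast on-package RAM. All latency numbers are illustrative only.
RAM_NS = 100       # assumed on-package RAM access latency (ns)
SSD_NS = 100_000   # assumed external-storage access latency (ns)

def avg_access_ns(fraction_in_ram: float) -> float:
    """Average per-access latency for a given RAM hit fraction."""
    return fraction_in_ram * RAM_NS + (1 - fraction_in_ram) * SSD_NS

print(avg_access_ns(1.0))   # everything fits in RAM: 100.0 ns
print(avg_access_ns(0.99))  # just 1% spills to storage: 1099.0 ns
```

Even with these made-up numbers, the shape of the result holds: spilling a tiny fraction of the working set to slower storage dominates the average, which is why the fixed memory ceiling matters so much for power users.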

        • bkkcanuck

          I am guessing the first 'Mac Pro Mini' (I think the Mac Pro is still the future) will be based on the M1X/M2X chipset and will likely be able to support up to 256GB on package (not on die; the M1's RAM is on package). There are lots of questions in my own mind about how to go higher than that: is a mix of different memory latencies possible? DDR5 allows up to 64Gbit density and die stacking of up to 8, so there are already things that can be done. HBM2 is not required as far as I can tell, since graphics will continue to use Tile-Based Deferred Rendering (versus the Immediate Mode Rendering used by nVidia and AMD); TBDR is much more efficient than IMR (there are benefits to each).

          Here is my guess as to where Apple is going for the 'Mac Pro Mini' (I don't know if they have plans above 256GB), based purely on existing chip design and code names.

          Code Names are:

          Jade C-Chop (which is supposed to be M1X 10 Core CPU, 16 Core GPU [i.e. chop = half the GPU cores])

          Jade C-Die (M1X 10 Core CPU, 32 Core GPU) [max 64GB RAM]

          Jade 2C-Die ... my guess is it is a chiplet design where effectively 2C = 2 x M1X chiplets - each chiplet managing up to 64GB RAM.... Total 20 Core CPU, 64 Core GPU, 128GB RAM max

          Jade 4C-Die ... similar 4 Chiplets ... Total 40 Core CPU, 128 Core GPU, 256GB RAM max.

          This will of course need some changes to the operating system to handle an architecture of this kind. The question for the future is what happens after 256 GB: will Apple have devices that go higher, and will it be able to manage two pools of memory with different latencies?

          The benefit of having the volume to go in-house is that they can literally throw specialized silicon into the mix for less than the cost of a Xeon processor that can greatly improve performance for specific workflows.

          Extrapolating linearly from the M1's 8-core GPU, a 64-core GPU would be competitive with the top-of-the-line AMD card, and 128 cores would be higher [I believe AMD is working on chiplet designs for graphics, which we may see soon]. nVidia is working on integrating ARM cores and its GPUs into a common SoC for compute workloads. They all seem to be heading in similar directions, as far as I can tell (or guess).
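          The chiplet speculation above boils down to simple multiplication; a sketch, treating the rumored M1X figures as a hypothetical building block (none of these are confirmed Apple specs):

```python
# Sketch of the speculated chiplet scaling: each "Jade" die is treated
# as one M1X building block, and the 2C/4C parts simply multiply its
# resources. All figures are rumors/guesses, not Apple specifications.
BASE = {"cpu_cores": 10, "gpu_cores": 32, "max_ram_gb": 64}  # speculated M1X

def chiplet_config(n_chiplets: int) -> dict:
    """Scale the speculated single-die resources by the chiplet count."""
    return {k: v * n_chiplets for k, v in BASE.items()}

print(chiplet_config(2))  # "Jade 2C-Die": 20 CPU / 64 GPU cores, 128 GB
print(chiplet_config(4))  # "Jade 4C-Die": 40 CPU / 128 GPU cores, 256 GB
```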

        • Oreo

          You can connect more than 64 GB via HBM (i.e. on-package, similar to the way RAM is connected to the M1). nVidia is doing that for its compute cards right now, and those cards support up to 96 GB. Furthermore, Apple could simply use more traditional memory interfaces in some of its models.

          So I don’t think HBM2e memory configurations with up to 128 GB are out of the question, which seems plenty for everything up to and including an iMac Pro. However, if you need much more than that, I think you need a traditional memory interface. I reckon an ARM-based Mac Pro might support DDR4 or perhaps DDR5.

          • wright_is

            That's what I think as well.

            The problem is, when you get to large amounts of RAM, you end up with bigger "chips", or rather packages, with more to go wrong, so you will have more wasted packages, which will drive up the unit price disproportionately.

            That is why I am waiting to see how they handle it on professional models, now that the consumer SKUs have been dealt with.

    • lvthunder

      It's not magic, but the time it takes Photoshop to run certain tasks is noticeably faster. A Photoshop action I wrote takes half the time to run on the same image on my M1 16GB MacBook Air as on my i7 32GB Surface Book 3. I don't remember the exact numbers, but it was something like 90 seconds on the Surface Book 3 and 45 seconds on the Mac.