12th-Generation Intel Core Chipsets Come to Mobile PCs

Posted on January 4, 2022 by Paul Thurrott in Hardware, Mobile, Windows 11 with 42 Comments

Intel today announced 28 new 12th-generation Core mobile processors along with an additional 22 desktop processors. The new chips will appear in PCs throughout 2022.

“Intel’s new performance hybrid architecture is helping to accelerate the pace of innovation and the future of compute,” Intel executive vice president and general manager Gregory Bryant said. “And, with the introduction of 12th generation Intel Core mobile processors, we are unlocking new experiences and setting the standard of performance with the fastest processor for a laptop – ever.”

That fastest-ever mobile processor is the flagship 12th-generation Intel Core mobile CPU, the Intel Core i9-12900HK. Built on a 7-nm process, the Core i9-12900HK utilizes Intel’s new hybrid architecture, combining performance cores (P-cores) and efficiency cores (E-cores) with intelligent workload prioritization and distribution. It will be available at frequencies of up to 5 GHz with 14 cores (6 P-cores and 8 E-cores), and it offers a 28 percent performance improvement over its predecessor.

Intel also introduced various 65- and 35-watt 12th-generation Core desktop processors aimed at gaming, creation, and productivity that are available now. And it disclosed its plans for upcoming 12th-generation Core U- and P-series mobile processors that power new ultra-thin-and-light laptops.

 


Comments (42)


  1. VancouverNinja

OK. Intel answered Apple's silicon gambit faster than I had expected. If Intel continues these gains over the coming years, I can only see Apple's move to produce its own processors as a mistake.


    It will be very interesting to watch this play out.

    • rob_segal

      I wouldn't say these exceed Apple's chips in any way quite yet. When it comes to performance per watt, I expect Apple's latest M1 chips to be better than these. Apple using its own chips is not a mistake by any means.

    • lvthunder

      It's never a mistake to be able to control your own destiny. Even if Intel's chips are comparable Apple can still tailor their chips to the exact machine they are building. They do this with hardware acceleration for ProRes now.

      • VancouverNinja

Not sure I can agree with the "control your own destiny" reasoning. There is nothing that Apple PCs can do that any other Windows PC cannot do from a technical standpoint, at least nothing that manifests as a capability unavailable on Windows PCs. All Apple has done is pit itself against the best and brightest talent of the rest of the industry. One must suspend disbelief to buy into the concept that Apple will be the innovation leader moving forward with its own processors and that the remainder of the industry will not be able to exceed Apple. Highly unlikely.


I mentioned in a post many months back that this effort could put Apple at a serious disadvantage many years down the road; given how fast Intel has been able to reclaim the fastest-processor title, it looks to me like Apple may have taken too big a bite for its own good.


The main point I am making is that Apple could lose the performance reasons to buy its very expensive offerings in the not-too-distant future. Or we may find that we are nearing a technological limit where neither x86- nor ARM-based processors can deliver performance leaps over one another big enough for anyone to care. It will be interesting to see if the gamble works out for them.



        • lvthunder

What I'm saying is that if Apple wanted to make a computer just for video editors, they could add custom parts to accelerate video, like the ProRes hardware support in the new M1s. That's not present (to my knowledge) in any of these Intel chips. If they wanted a gaming computer, they could take out the ProRes hardware and add more 3D rendering. Stuff like that. I'm not saying Intel couldn't do things like that, but they don't seem to be doing it now; they seem to be going the generic route.

        • Saarek

@VancouverNinja You seem to be assuming, as you wrote, that Intel will suddenly start hitting every mark in its strategy without delay over a number of years. The last 10 years or so of Intel's history indicate that this is a very rocky place to land one's assumptions.


Apple's A-series CPUs have jumped in performance every single year since their initial release, starting with the A4. Yes, the gains have slowed over the last few years, but still, every single year since 2010 without fail Apple has managed to improve, manufacture, and release its designs.


          Apple does not have to be the best for everyone on the planet, they just have to offer a compelling product to those that have or are interested in the Mac platform.

        • Greg Green

You’re not familiar enough with Apple devices. Intel couldn’t deliver the performance Apple needed in the thermal envelopes Apple allowed, and it still can’t. Only ARM can deliver the low wattage, low heat, and high performance that Apple needs in its ultra-thin devices.


Apple’s improvements in its chips over the last five years have exceeded Intel’s improvements by a wide margin. And so far Alder Lake is more power hungry than Ryzen, and far more power hungry than the M1 chips. So Intel really hasn’t arrived at a solution yet.

    • pecosbob04

"The new chips will appear in PCs throughout 2022." "Throughout 2022" seems somewhat imprecise with respect to availability. The M2 chip may or may not be available at some point in 2022; whether that point falls before or after "throughout" is indeterminate. But more importantly, what evidence exists that this chip will be more performant than an M1 Max in real-world workflows? Other than a press release, of course.

    • Stabitha.Christie

I think the mistake here is seeing Apple's move as being motivated by raw processing power. I don't think that was the motivation. When Apple dropped the PowerPC, Steve said something to the effect of "we couldn't make the products we imagined with PowerPC chips." Within 18 months of that move, Apple released the MacBook Air, the thinnest notebook you could get at the time. It wasn't about processing power; it was about thermodynamics. Apple couldn't make the MBA with a PowerPC because those chips ran too warm. A similar motivation drove the A series: Apple wanted to add features to the iPhone that existing processors couldn't deliver. I suspect the move to Apple Silicon is similarly motivated. And while they are just getting started, look at the new iMac. It's 11.5 mm thick, about as thick as the Apple Watch. They just couldn't have done that with an Intel processor. So looking at this simply from a processing-power standpoint is the wrong lens. Yes, Apple will have to stay competitive on the processing side, but huge gains in processing power aren't why Apple is making the move.

      • Donte

I am a huge Apple fan, but you are giving them way too much credit. First and foremost, they are making their own chips now to MAXIMIZE profits. This is the Apple way.


They bought P.A. Semi and an ARM license years ago. They will use their financial might to get first options on new silicon from TSMC, since they do not make their own chips. The combination of all of this is to cut out everyone else and maximize profits.


Sure, they have great power efficiency; they should, since they came from the smartphone world into the computing world. Let's see how they do when they start gluing together four M1 Max chips to achieve better performance in a Mac Pro, so they can compete against AMD and Intel at the workstation level for something like Adobe Premiere rendering hours of 4K-8K video.


        I am not sure anyone cares how thin the new iMac is. It is running a smartphone CPU in the chin of the device, so it's got lots of room left and right to cool it.



        • Stabitha.Christie

          That is certainly one take.

        • red.radar

It's possible that both things could be true: they wanted to improve functionality and maximize profits.

        • Greg Green

Again, I don’t think so. The Intel self-throttling debacle was reason enough to dump Intel. They couldn’t do what they needed to do with Intel inside, so they dumped Intel.

          • james.h.robinson

            But Apple could have gone to AMD, Nvidia, or even Qualcomm when they "dumped" Intel. Instead, Apple opted to increase their economies of scale by having all their devices run on Apple Silicon. To me, the writing was on the wall the day Apple released the first iPad Pro, which theoretically had better performance than laptops at that time.

    • 2ilent8cho

Not really a mistake, is it? The M1 came out in 2020, it's 2022, and the M2 won't be far away. Intel is playing catch-up. Also, what generally happens to the performance of these Intel machines when you use them as laptops on battery? It nosedives massively. Apple's chips don't do this; they maintain the same performance on battery or plugged in.

    • Greg Green

Ridiculous. Apple solved the ultra-thin heat problem Intel couldn’t solve, and probably still can’t. Go back a few years to the notorious throttling Intel did on its own chips to keep them from overheating in Apple’s tight enclosures.


The real world is quite different from the world you’re living in.

  2. shark47

    If anything, Apple forced Intel to innovate. Hopefully these chips perform well in the real world.

    • bluvg

That's the pop-culture take, but it's rather inaccurate. Intel didn't just suddenly wake up when the M1 was released and throw Alder Lake together in a few months. These chips were taped out a long time ago.

      • lvthunder

And you don't think Intel knew what Apple was doing with the M1 before it was released? All the iOS chips before the M1 tipped Intel off.

        • Oreo

          Tipped off sounds like there was some Apple insider meeting an Intel person at a bar, and divulging Apple’s SoC plans after a few too many drinks. :p


Apple’s SoCs have been available for years and analyzed by experts, and they were on a predictable schedule with fairly predictable year-over-year performance increases. When the iPhone had comparable or better single-core performance than a top-of-the-line Intel notebook, Intel knew it had lost Apple as a customer.

        • bluvg

Of course, the number of people who do the low-level work on these things is shockingly small. Apple, AMD, and Intel (and Tesla) all benefited from having hired CPU legend Jim Keller (though obviously one can't simply take IP along), and he once commented that the number of people worldwide who work on a particular CPU component (I can't remember which offhand) was in the 20s, and they're hardly unaware of each other.


I'm just saying the pop-culture, anthropomorphic characterization of tech battles is rarely an accurate representation. For example, it's often forgotten that Intel is not at all wedded in perpetuity to x86; they've offered ARM CPUs themselves, and they tried, and failed miserably, with IA-64 (which had a huge and under-reported impact on them).

          • Oreo

Intel sold XScale (a business that made then-competitive ARM cores) more than 10 years ago and bet the farm on x86 after the demise of the Itanic (IA-64). They even wanted to build GPUs based on the x86 architecture, which morphed into Xeon Phi, which then died. AFAIK it only saw very limited adoption in some supercomputing projects.


Yes, Intel seems to be opening up, but that's because their back is to the wall. Their entering the discrete GPU space at this stage is, hmmm, interesting, seeing how the discrete GPU market is shrinking. Perhaps they are banking on GPU compute cards for servers in the future?

            • bluvg

The ARM and IA-64 examples were just for illustration. They have been open in the past and, as Gelsinger has indicated, they are quite open to different paths in the future, including outsourcing to TSMC. To paint them, as some do, as a hopelessly lost and incompetent relic is perhaps popular and makes for a good narrative, but it's just not an accurate depiction. They still have their own fabs, they have some EUV advantages, they're starting to get some state sponsorship (however you feel about that, their competitors have had it), they have a huge IP portfolio, etc.


              These things ebb and flow. Athlon/Opteron were on top for a while. Back in the early-/mid-2000s, IBM Power chips in Macs made things look bleak in Intel/x86 P4/NetBurst land. Then came Core/Conroe and the balance shifted again.


              Not really sure why they're re-entering the discrete GPU market, other than they must see a market opportunity, and they've been doing GPUs for a long time.

              • Oreo

Thanks for the context; both your comment streams put what you wrote into context, and it makes more sense now.


I'd just add that we ought also to take the market context into account. When the Opteron came out, its big selling point wasn't just performance: it also came with 64-bit support and machines that were comparably robust to the RISC servers of the day (Alpha, PA-RISC, PowerPC, etc.). So AMD came into an expanding market.


Here, it seems Intel is coming late to a contracting market: discrete GPUs are contracting, x86 servers are being replaced by ARM servers, and these ARM-based servers have clear advantages over Intel's x86 servers. Even looking at x86 server hardware itself, AMD's Epyc is still better, especially since efficiency matters in data centers.


The thing that lets Intel hang on is its grip on consumer and work PCs. But if Qualcomm finally gets around to building competitive chips that combine great battery life with good price and good performance, I think it is game over. (Yes, eventually we will get past Groundhog Day …)


I wouldn't want to count Intel out, but it seems to me that Intel will fundamentally have to reinvent itself (including, importantly, its business model) to survive. Just think of IBM: it went from a company that made everything from printers (laser printers, industrial dot-matrix printers, etc.) to PCs, workstations, servers, and mainframes to a company that relies much more on consulting. But that reinvention would also mean Intel would probably have to cede its aspirations of being the dominant leader in the CPU space. It could become a supplier of ARM and RISC-V cores, for example.

                • Donte

                  "x86 servers are being replaced by ARM servers, and these ARM-based servers have clear advantages compared to Intel's x86 servers. Even just looking at x86 server hardware itself, AMD's Epyc is still better, especially since efficiency matters in data centers."


I work in the "data center"; I manage lots and lots of servers, and we just upgraded multiple VMware clusters/hosts in 2021.


Not a single discussion point about ARM-based servers came up when talking to HP, Dell, and Lenovo about their server offerings. Yes, it was Xeon vs. Epyc for sure, mainly because Epyc options at the higher end of the 1U server segment were less expensive, and when you are buying dozens of these things it adds up.


We ended up buying some Epyc-based servers for DEV environments, because for an 8-node cluster we saved $40k over the Intel option. We ended up with Epyc 7302s (16-core), which are 7 nm/155 W and run a bit cooler (I think?), but I am not sure that really matters in our data center. If I had thousands of these things, then sure, the heat reduction and possibly power usage would add up.


Anyhow, I do NOT see any ARM-based server options right now from the big vendors. If they were popular, you would think the vendors would have brought them up.

            • Donte

              "seeing how the discrete GPU market is shrinking"


              Say what?

              • Oreo

                Regarding ARM servers, all big cloud companies with the possible exception of Apple have been deploying ARM-based servers. Amazon has developed its own ARM-based SoC, which performs quite well compared to its x86 competition, is much more power efficient and much cheaper, for example. That means all the big companies have been investing billions to make sure their software stacks run on ARM-based hardware.


                You are right that this has not yet trickled down to individual server deployments in the same way, but that is on the horizon, too.


                Regarding graphics cards, yes, totally. Discrete graphics cards have been relegated to gaming desktops and notebooks, very high-end notebooks, certain applications like CAD and 3D modeling and GPU compute. And with GPU compute, at least if you do this professionally, you will eventually need to upgrade to proper compute cards, which are 10x as expensive. These are all niche markets, and at least the consumer portion is IMHO shrinking, because a lot of gaming has moved to consoles.

                • james.h.robinson

                  Last time I checked, there was a GPU shortage, so I don't think there's much evidence of the discrete GPU market shrinking. Demand from crypto miners alone should be enough to keep the dGPU market going.


                  Not to mention the increased use of GPUs for machine learning, which will probably increase demand for them.


                  And finally, even though AWS and other cloud providers are using ARM-based processors for the cloud, x86 is still the majority and will probably continue to be if customers continue to want backward compatibility. And many of those customers will also want dGPUs to go along with that.

      • Oreo

Likewise, Apple’s SoCs have been around for a decade (the A4 was the first step toward completely custom silicon). Apple’s insistence on, e.g., better graphics has been known for a decade as well. And looking at smartphone and tablet SoCs, the increasing importance of accelerators (for ML, encoding and decoding, and image processing, for example) has been a completely obvious trend, too. So yes, Intel is way behind here, and Alder Lake is its first serious attempt to catch up.


        IMHO it is too little, too late, though.

  3. lvthunder

I wonder why Intel doesn't integrate the RAM as Apple did. That seems like a no-brainer to me, especially in ultrabooks where users can't add more RAM anyway.

    • bluvg

It definitely would make sense in most laptops; so few ever get upgrades during their life. AMD seems to have taken a (much smaller) step down this path with its giant L3 cache, and Intel is going the same way with Foveros. Not quite what Apple is doing, but it may achieve most of the benefits without the corresponding limitations.

    • CasualAdventurer

I think it will happen on the vendor side: HP or Dell will build a low-cost SoC appliance PC. It likely won't become widespread, however, because the mindset behind a PC is upgradability. RAM plugged into a slot is slower than RAM on the package, but RAM in a slot can be removed and upgraded. Apple wants its users to think of PCs like cell phones: you don't upgrade them, you replace them.

    • james.h.robinson

Some enterprises and other large organizations would probably take issue with the low repairability and upgradability of a computer with RAM integrated into the SoC.

  4. alamfour

    Hey Paul, you say that 12900HK is built on a 7nm process. It's not. The process is called Intel 7 but is actually 10nm.

    • bluvg

      The whole "nm" metric is pointless--it used to represent a real measurement, but for all the manufacturers, it's now an extrapolation. Transistor density is a better metric in this regard, and Intel realigned their process naming with the rest of the industry, i.e. "Intel 7" is comparable in density to TSMC's N7.

      • Oreo

        This comparison masks so many issues.


Apart from the fact that there are several versions and evolutions of N7, what distinguishes Intel’s 10 nm/Intel 7 process node from TSMC’s equivalent is that TSMC’s 7 nm nodes had good yields and have been profitable for years. That can’t be said for Intel’s 10 nm/Intel 7 node until *possibly* recently. (There were a few isolated Intel 10 nm products, but these were quite niche and probably existed more to test the 10 nm process node and/or to let Intel claim to analysts that it was shipping 10 nm products.)


Meanwhile, TSMC is expected to start mass production on its 3 nm process this year. So TSMC is several process nodes ahead, and those nodes have high enough yields to be profitable.

        • bluvg

          Intel's process issues have been well-reported. The transistor density metric hasn't. I'm not an Intel fanboy (last machine I built was 5950X), just saying the whole "nm" metric is no longer a physical representation, and has been frequently used for inaccurate comparisons.

          • Oreo

            Yeah, but I don't see how this is relevant to the discussion. At best we are discussing whether Intel is 1.5 generations behind, 2 generations behind or 2.5 generations behind. And I think it still matters that Intel's 7 process likely still has lower yields than TSMC's state-of-the-art processes.


The issue of defining process nodes by either the length scale of some feature or transistor density is old, going back to at least 2013 or so, when Samsung, TSMC, and Intel brought 14ish nm processes to market. Transistor density is not a simple criterion either, since you can optimize the same manufacturing process for density, low power consumption, or high performance. TSMC's N5 process has a 70% higher transistor density than Intel 7, and TSMC is expected to release its N3 process this year, packing 70% more transistors into the same area than N5.
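For what it's worth, those two 70% figures compound. A quick back-of-the-envelope sketch (treating Intel 7 as the 1.0 baseline and assuming the quoted percentages stack multiplicatively, which is a simplification):

```python
# Compound the quoted density gains relative to an Intel 7 baseline.
intel7 = 1.0            # Intel 7 density, taken as the reference
n5 = intel7 * 1.70      # TSMC N5: ~70% more transistors per area than Intel 7
n3 = n5 * 1.70          # TSMC N3: ~70% more than N5, per the claim above

print(f"N5 vs Intel 7: {n5:.2f}x")   # 1.70x
print(f"N3 vs Intel 7: {n3:.2f}x")   # 2.89x
```

So if both claims hold, N3 would be nearly 3x denser than Intel 7, which is why "one node behind" understates the gap in density terms.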

            • bluvg

The context was the comment suggesting a correction for Paul.


              Agreed that transistor density isn't necessarily simple (they don't even use the same process throughout in packages), but it's a much better single-number metric than "nm".

              • Oreo

                True dat.

Although I'd still add that a lot of the tech community sometimes forgets about financials. I remember the release of some of Intel's 10 nm parts (Ice Lake on 10 nm shipped in 2019), but Intel basically lost tons of money on them. Plus, I think they actually had to reduce frequencies, so performance was a wash. As far as I understand, the point was to be able to tell analysts that they were shipping 10 nm products (lying would carry legal repercussions) and to test the 10 nm process node.


The other important technological component is packaging. AMD's big innovation was chiplets, and now it is able to stack a memory module on top of its chiplets. Of course, these ideas aren't new; manufacturing them at scale with decent yields is the innovation. Intel has big plans here too, but the big question is when they will materialize in products.
