12th-Generation Intel Core Chipsets Come to Mobile PCs

Intel today announced 28 new 12th-generation Core mobile processors along with an additional 22 desktop processors. The new chips will appear in PCs throughout 2022.

“Intel’s new performance hybrid architecture is helping to accelerate the pace of innovation and the future of compute,” Intel executive vice president and general manager Gregory Bryant said. “And, with the introduction of 12th generation Intel Core mobile processors, we are unlocking new experiences and setting the standard of performance with the fastest processor for a laptop – ever.”


That fastest-ever mobile processor is the flagship 12th-generation Intel Core mobile CPU, the Intel Core i9-12900HK. Built on a 7-nm process, the Core i9-12900HK uses Intel’s new hybrid architecture, pairing performance cores (P-cores) with efficiency cores (E-cores) and intelligent workload prioritization and management. It will be available at frequencies of up to 5 GHz with 14 cores (6 P-cores and 8 E-cores), and Intel claims a 28 percent performance improvement over its predecessor.
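
For developers curious how the hybrid design looks from software, here is a minimal sketch (an editorial aside, not part of Intel's announcement) of how a program can ask which kind of core it is currently running on. It assumes an x86-64 system, a GCC or Clang toolchain providing <cpuid.h>, and a CPU that reports the hybrid flag; the core-type field lives in CPUID leaf 0x1A per Intel's Software Developer's Manual.

#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 7, sub-leaf 0: EDX bit 15 is the hybrid-architecture flag. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) || !(edx & (1u << 15))) {
        printf("CPU does not report the hybrid flag\n");
        return 0;
    }

    /* CPUID leaf 0x1A: EAX bits 31:24 give the type of the core this thread is
       currently scheduled on (0x20 = Core/P-core, 0x40 = Atom/E-core). Mapping
       the whole package would mean pinning the thread to each logical processor
       in turn and repeating this query. */
    if (__get_cpuid_count(0x1A, 0, &eax, &ebx, &ecx, &edx)) {
        unsigned int core_type = eax >> 24;
        if (core_type == 0x20)
            printf("This thread is on a performance core (P-core)\n");
        else if (core_type == 0x40)
            printf("This thread is on an efficiency core (E-core)\n");
        else
            printf("Unrecognized core type: 0x%02x\n", core_type);
    }
    return 0;
}

On a non-hybrid CPU the program simply reports that the hybrid flag is absent; the operating system scheduler uses this same topology information (plus Intel's Thread Director hints on 12th-generation parts) to decide where to place threads.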

Intel also introduced various 65- and 35-watt 12th-generation Core desktop processors aimed at gaming, creation, and productivity that are available now. And it disclosed its plans for upcoming 12th-generation Core U- and P-series mobile processors that will power new ultra-thin-and-light laptops.

 


Conversation (42 comments)

  • VancouverNinja

    Premium Member
    04 January, 2022 - 3:33 pm

    K. That was faster than I had expected Intel to exceed Apple's own-silicon gambit. If Intel continues these gains over the coming years, I can only see Apple's move to produce its own processors as a mistake.

    It will be very interesting to watch this play out.

    • rob_segal

      Premium Member
      04 January, 2022 - 3:49 pm

      I wouldn't say these exceed Apple's chips in any way quite yet. When it comes to performance per watt, I expect Apple's latest M1 chips to be better than these. Apple using its own chips is not a mistake by any means.

    • lvthunder

      Premium Member
      04 January, 2022 - 3:51 pm

      It's never a mistake to be able to control your own destiny. Even if Intel's chips are comparable, Apple can still tailor its chips to the exact machine it is building. They do this with hardware acceleration for ProRes now.

      • VancouverNinja

        Premium Member
        04 January, 2022 - 5:46 pm

        Not sure I can agree with the "control your own destiny" reason. There is nothing that Apple PCs can do that any other Windows PC cannot do from a technical standpoint, at least nothing that manifests as a capability unavailable on Windows PCs. All Apple has done is pit itself against the best and brightest talent of the rest of the industry. One must suspend disbelief to buy into the concept that Apple will be the innovation leader moving forward with its own processors and that the remainder of the industry will not be able to exceed Apple. Highly unlikely.

        I mentioned in a post many months back that this effort could put Apple at a serious disadvantage many years down the road; with how fast Intel looks able to claim the fastest processors again, it looks to me like Apple may have taken too big a bite for its own good.

        The main point I am making is that Apple could lose the performance reasons to buy its very expensive offerings in the not-too-distant future. Or we may find that we are nearing a technological limit where neither x86- nor ARM-based processors can deliver crazy performance leaps over one another for anyone to care. It will be interesting to see if the gamble works out for them.

        • lvthunder

          Premium Member
          04 January, 2022 - 7:01 pm

          What I'm saying is that if Apple wanted to make a computer just for video editors, it could add custom parts to accelerate video, like the ProRes hardware support in the new M1s. That's not present (to my knowledge) in any of these Intel chips. If Apple wanted a gaming computer, it could take out that ProRes hardware and add more 3D rendering. Stuff like that. I'm not saying Intel couldn't do things like that, but they don't seem to be doing it now. They seem to be going the generic route.

          • Donte

            04 January, 2022 - 7:45 pm

            You need to Google "Intel Quick Sync". It came out in 2011. It's why most Adobe Premiere pros stick with Intel.

        • Saarek

          05 January, 2022 - 5:39 am

          @VancouverNinja You seem to be assuming that Intel will suddenly start hitting every mark in its strategy, without delay, over a number of years. The last 10 years or so of Intel's history suggest that is a very rocky place to land one's assumptions.

          Apple's A-series CPUs have jumped up in performance every single year since their initial release with the A4. Yes, the gains have slowed over the last few years, but still, every single year since 2010 without fail Apple has managed to improve, manufacture, and release its designs.

          Apple does not have to be the best for everyone on the planet; it just has to offer a compelling product to those who have, or are interested in, the Mac platform.

        • Greg Green

          09 January, 2022 - 9:51 am

          You're not familiar enough with Apple devices. Intel couldn't get the performance Apple needed in the spaces Apple allowed, and it still can't. Only Arm can deliver the low wattage, low heat, and high performance that Apple needs in its ultra-thin devices.

          Apple's improvements in its chips over the last five years have exceeded Intel's improvements by a wide margin. And so far Alder Lake is more power hungry than Ryzen, and far more power hungry than the M1 chips. So Intel really hasn't arrived at a solution yet.

    • pecosbob04

      04 January, 2022 - 4:23 pm

      <p>"<span style="color: rgb(0, 0, 0);">The new chips will appear in PCs throughout 2022." throughout 2022 seems somewhat imprecise in respect to availability. The M2 chip may or may not be available at some point in 2022. Whether that point is before or after ‘throughout’ is indeterminant. But more </span>importantly what evidence exists that this chip will be more performant than an M1 MAX in real world workflows? Other than a press release of course.</p>

    • Stabitha.Christie

      04 January, 2022 - 6:43 pm

      I think the mistake here is seeing Apple's move as being motivated by raw processing power. I don't think that was the motivation. When Apple dropped the PowerPC, Steve said something to the effect of "we couldn't make the products we could imagine with PowerPC chips." Within 18 months of that move, Apple released the MacBook Air, which was the thinnest notebook you could get at the time. It wasn't about processing power, it was about thermodynamics; Apple couldn't make the MBA with a PowerPC because those chips simply ran too warm. It was a similar motivation to start making the A series: Apple wanted to add features to the iPhone that existing processors couldn't handle. I suspect the move to Apple Silicon is similarly motivated. And while they are just getting started, look at the new iMac. It's 11.5 mm thick, or about as thick as the Apple Watch. They just couldn't have done that with an Intel processor. So looking at this from a pure processing-power standpoint is the wrong lens. Yes, Apple will have to stay competitive on the processing side, but huge gains in processing power aren't why Apple is making the move.

      • Donte

        04 January, 2022 - 7:39 pm

        I am a huge Apple fan, but you are giving them way too much credit. First and foremost, they are making their own chips now to MAXIMIZE profits. That is the Apple way.

        They bought P.A. Semi and an Arm architecture license years ago. They will use their financial might to get first option on new silicon from TSMC, since they do not fab their own chips. The combination of all of this is to cut out anyone else and maximize profits.

        Sure, they have great power performance; they should, since they came from the smartphone world into the computing world. Let's see how they do when they start gluing together four M1 Maxes to achieve better performance in a Mac Pro, so they can compete against AMD and Intel at the workstation level for something like Adobe Premiere rendering hours of 4-8K video.

        I am not sure anyone cares how thin the new iMac is. It is running a smartphone CPU in the chin of the device, so it has lots of room left and right to cool it.

        • Stabitha.Christie

          04 January, 2022 - 8:13 pm

          That is certainly one take.

        • red.radar

          Premium Member
          05 January, 2022 - 12:12 pm

          It's possible that both things could be true. They wanted to improve functionality and maximize profits.

        • Greg Green

          09 January, 2022 - 9:38 am

          Again, I don't think so. The Intel self-throttling debacle was reason enough to dump Intel. They couldn't do what they needed to do with Intel inside, so they dumped Intel.

          • james.h.robinson

            09 January, 2022 - 8:15 pm

            But Apple could have gone to AMD, Nvidia, or even Qualcomm when they "dumped" Intel. Instead, Apple opted to increase their economies of scale by having all their devices run on Apple Silicon. To me, the writing was on the wall the day Apple released the first iPad Pro, which theoretically had better performance than laptops at that time.

    • 2ilent8cho

      05 January, 2022 - 4:28 am

      Not really a mistake, is it? The M1 came out in 2020, it's 2022, and the M2 won't be far away. Intel is playing catch-up. Also, what generally happens to the performance of these Intel machines when you use them as laptops on battery? It nosedives massively. Apple's chips don't do this; they maintain the same performance on battery or plugged in.

    • Greg Green

      09 January, 2022 - 9:36 am

      Ridiculous. Apple solved the ultra-thin heat problem Intel couldn't solve, and probably still can't solve. Go back a few years to the notorious throttling Intel did on its own chips to keep them from overheating in Apple's tight spaces.

      The real world is quite different from the world you're living in.

  • alamfour

    04 January, 2022 - 4:20 pm

    Hey Paul, you say the 12900HK is built on a 7nm process. It's not. The process is called Intel 7 but is actually 10nm.

    • bluvg

      04 January, 2022 - 6:23 pm

      The whole "nm" metric is pointless. It used to represent a real measurement, but for all the manufacturers it's now an extrapolation. Transistor density is a better metric in this regard, and Intel realigned its process naming with the rest of the industry, i.e. "Intel 7" is comparable in density to TSMC's N7.

      • Oreo

        04 January, 2022 - 7:09 pm

        This comparison masks so many issues.

        Apart from the fact that there are several versions and evolutions of N7, what distinguishes Intel's 10 nm/Intel 7 node from TSMC's equivalent is that TSMC's 7 nm nodes had good yields and have been profitable for years. That could not be said for Intel's 10 nm/Intel 7 node until possibly recently. (There were a few isolated Intel 10 nm products, but these were quite niche and probably served more to test the 10 nm node and/or let Intel tell analysts it was shipping products fabbed at 10 nm.)

        And clearly, TSMC is expected to start mass production of chips on its 3 nm process this year. So TSMC is several process nodes ahead, and those nodes have high enough yields to be profitable.

        • bluvg

          04 January, 2022 - 7:20 pm

          Intel's process issues have been well reported. The transistor density metric hasn't. I'm not an Intel fanboy (the last machine I built was a 5950X); I'm just saying the whole "nm" metric is no longer a physical measurement, and it has frequently been used for inaccurate comparisons.

          • Oreo

            04 January, 2022 - 8:50 pm

            Yeah, but I don't see how this is relevant to the discussion. At best we are debating whether Intel is 1.5 generations behind, 2 generations behind, or 2.5 generations behind. And I think it still matters that Intel 7 likely still has lower yields than TSMC's state-of-the-art processes.

            The issue of defining process nodes by either the length scale of some feature or by transistor density is old, going back at least to 2013 or so, when Samsung, TSMC, and Intel brought 14ish nm processes to market. Transistor density is not a simple criterion either, since you can optimize the same manufacturing process for density, low power consumption, or high performance. TSMC's N5 process has a 70 percent higher transistor density than Intel 7, and TSMC is expected to release its N3 process this year, which packs 70 percent more transistors into the same area than N5.
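
            A quick back-of-the-envelope, compounding those two 70 percent figures: 1.7 × 1.7 ≈ 2.9, so N3 would pack roughly 2.9 times as many transistors per unit area as Intel 7, i.e. about two full density generations of gap on that metric alone.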

            • bluvg

              04 January, 2022 - 10:16 pm

              The context was the comment suggesting a correction for Paul.

              Agreed that transistor density isn't necessarily simple (manufacturers don't even use the same process throughout a package), but it's a much better single-number metric than "nm".

              • Oreo

                05 January, 2022 - 1:08 am

                True dat.

                Although I'd still add that a lot of the tech community sometimes forgets about the financials. I remember the release of some of Intel's cores on 10 nm (Ice Lake on 10 nm was released in 2019), but Intel basically lost tons of money on it. Plus, I think they actually had to reduce frequency, so performance was a wash. As far as I understand, the point was to be able to tell analysts they were shipping 10 nm products (lying would carry legal repercussions) and to test the 10 nm process node.

                The other important technological component is packaging. AMD's big innovation was chiplets and, now, being able to stack a memory module on top of its chiplets. Of course, these ideas aren't new, but manufacturing them at scale with decent yields is the innovation here. Intel has big plans here too, but the big question is when those will materialize in products.

  • shark47

    04 January, 2022 - 5:29 pm

    If anything, Apple forced Intel to innovate. Hopefully these chips perform well in the real world.

    • bluvg

      04 January, 2022 - 6:28 pm

      That's the pop-culture take, but it's rather inaccurate. Intel didn't just suddenly wake up when the M1 was released and throw Alder Lake together in a few months. These chips were taped out a long time ago.

      • lvthunder

        Premium Member
        04 January, 2022 - 6:58 pm

        And you don't think Intel knew what Apple was doing with the M1 before it was released? All the iOS chips before the M1 tipped Intel off.

        • Oreo

          04 January, 2022 - 7:12 pm

          "Tipped off" sounds like some Apple insider met an Intel person at a bar and divulged Apple's SoC plans after a few too many drinks. :p

          Apple's SoCs have been available for years, analyzed by experts, and on a predictable schedule with fairly predictable year-over-year increases in performance. When the iPhone had comparable or better single-core performance than a top-of-the-line Intel notebook, Intel knew it had lost Apple as a customer.

          • Stabitha.Christie

            04 January, 2022 - 7:20 pm

            It was like when that Apple guy left his prototype iPhone in a bar, only this time someone left an M1 in the bowl of peanuts. Oops.

        • bluvg

          04 January, 2022 - 7:15 pm

          Of course, the number of people who do the low-level work on these things is shockingly small. Apple, AMD, and Intel (and Tesla) all benefited from having hired CPU legend Jim Keller (though obviously one can't simply take IP along), and he once commented that the number of people worldwide who work on a particular CPU component (I can't remember which offhand) was in the 20s, and they're hardly unaware of each other.

          I'm just saying the pop-culture, anthropomorphic characterization of tech battles is rarely an accurate representation. For example, it's often forgotten that Intel is not at all wedded in perpetuity to x86; it has offered Arm CPUs itself, and it tried, and failed miserably, with IA-64 (which had a huge and under-reported impact on the company).

          • Oreo

            04 January, 2022 - 8:34 pm

            Intel sold XScale (a maker of then-competitive Arm cores) more than 10 years ago and bet the farm on x86 after the demise of the Itanic (IA-64). They even wanted to build GPUs based on the x86 architecture, which morphed into Xeon Phi, which then died; AFAIK it only saw very limited adoption in a few supercomputing projects.

            Yes, Intel seems to be opening up, but that's because its back is to the wall. Entering the discrete GPU space at this stage is, hmmm, interesting, seeing how the discrete GPU market is shrinking. Perhaps they are banking on GPU compute cards for servers in the future?

            • bluvg

              04 January, 2022 - 10:12 pm

              The Arm and IA-64 examples were just for illustration. Intel has been open in the past, and as Gelsinger has indicated, it is quite open to different paths in the future, including outsourcing to TSMC. Painting them, as some do, as a hopelessly lost and incompetent relic is perhaps popular and makes for a good narrative, but it's just not an accurate depiction. They still have their own fabs, they have some EUV advantages, they're starting to get some state sponsorship (however you feel about that, their competitors have had it), they have a huge IP portfolio, etc., etc.

              These things ebb and flow. Athlon/Opteron were on top for a while. Back in the early-to-mid-2000s, IBM Power chips in Macs made things look bleak in Intel/x86 P4/NetBurst land. Then came Core/Conroe and the balance shifted again.

              I'm not really sure why they're re-entering the discrete GPU market, other than that they must see a market opportunity, and they've been doing GPUs for a long time.

              • Oreo

                05 January, 2022 - 1:01 am

                Thanks for the context; both of your comment streams put what you wrote into context, and it makes more sense now.

                I'd just add that we also ought to take the context into account. When the Opteron came out, its big selling point wasn't just performance; it also brought 64-bit and machines that were comparably robust to the RISC servers of the day (Alpha, PA-RISC, PowerPC, etc.). So Intel came into an expanding market.

                Here, it seems Intel is coming late to a contracting market: discrete GPUs are contracting, x86 servers are being replaced by Arm servers, and those Arm-based servers have clear advantages over Intel's x86 servers. Even just looking at x86 server hardware itself, AMD's Epyc is still better, especially since efficiency matters in data centers.

                The thing that lets Intel hang on is its grip on consumer and work PCs. But if Qualcomm finally gets around to building competitive chips that combine great battery life with good price and good performance, I think it is game over. (Yes, eventually we will get past Groundhog Day...)

                I wouldn't want to count Intel out, but it seems to me that Intel will fundamentally have to reinvent itself (including, importantly, its business model) to survive. Just think of IBM: it went from a company that made everything from printers (laser printers, industrial dot-matrix printers, etc.), PCs, workstations, servers, and mainframes to a company that relies much more on consulting. But that reinvention would also mean Intel would probably have to cede its aspirations of being the dominant leader in the CPU space. They could become suppliers of Arm and RISC-V cores, for example.

                • Donte

                  06 January, 2022 - 11:17 am

                  <p>"<span style="color: rgb(0, 0, 0);">x86 servers are being replaced by ARM servers, and these ARM-based servers have clear advantages compared to Intel’s x86 servers. Even just looking at x86 server hardware itself, AMD’s Epyc is still better, especially since efficiency matters in data centers."</span></p><p><br></p><p><span style="color: rgb(0, 0, 0);">I work in the "data center" I manage lots and lots of servers, we just upgraded multiple VMware clusters/hosts in 2021. </span></p><p><br></p><p>Not a single discussion point came up about ARM based servers when talking to HP, Dell and Lenovo about their server offerings. Yes it was Xeon vs Epyc for sure and mainly because Epyc options at the higher end of the 1U server segment was less expensive and when you are buying dozens of these things it add’s up. </p><p><br></p><p>We ended up buying some Epyc based servers for DEV environments, because for a 8 node cluster we saved 40k over the Intel option. Now we ended up with Epyc 7302’s (16 core) which are 7nm/155watt and run a bit cooler (I think?) but I am not sure if that really matters in our data center. If I had thousands of these things, then sure it the heat reduction and possibly power usage would add up.</p><p><br></p><p>Anyhow I do NOT see any ARM based server options right now from the big vendors. If they were popular you would think they would have brought it up.</p>

            • Donte

              06 January, 2022 - 11:02 am

              <p>"<span style="color: rgb(0, 0, 0);">seeing how the discrete GPU market is shrinking"</span></p><p><br></p><p><span style="color: rgb(0, 0, 0);">Say what?</span></p>

              • Oreo

                06 January, 2022 - 9:11 pm

                Regarding Arm servers: all the big cloud companies, with the possible exception of Apple, have been deploying Arm-based servers. Amazon has developed its own Arm-based SoC, for example, which performs quite well against its x86 competition, is much more power efficient, and is much cheaper. That means all the big companies have been investing billions to make sure their software stacks run on Arm-based hardware.

                You are right that this has not yet trickled down to individual server deployments in the same way, but that is on the horizon, too.

                Regarding graphics cards, yes, totally. Discrete graphics cards have been relegated to gaming desktops and notebooks, very high-end notebooks, certain applications like CAD and 3D modeling, and GPU compute. And with GPU compute, at least if you do it professionally, you will eventually need to upgrade to proper compute cards, which are 10x as expensive. These are all niche markets, and at least the consumer portion is IMHO shrinking, because a lot of gaming has moved to consoles.

                • james.h.robinson

                  09 January, 2022 - 8:01 pm

                  Last time I checked, there was a GPU shortage, so I don't think there's much evidence of the discrete GPU market shrinking. Demand from crypto miners alone should be enough to keep the dGPU market going.

                  Not to mention the increased use of GPUs for machine learning, which will probably increase demand for them.

                  And finally, even though AWS and other cloud providers are using Arm-based processors for the cloud, x86 is still the majority and will probably continue to be if customers continue to want backward compatibility. And many of those customers will also want dGPUs to go along with that.

      • Oreo

        04 January, 2022 - 7:01 pm

        Likewise, Apple's SoCs have been around for a decade (the A4 was the first step toward completely custom silicon). Apple's insistence on, e.g., better graphics has been known for a decade as well. And looking at smartphone and tablet SoCs, the increasing importance of accelerators (for ML, decoding and encoding tasks, and image processing, for example) has been a completely obvious trend, too. So yes, Intel is way behind here, and Alder Lake is its first serious attempt to catch up.

        IMHO it is too little, too late, though.

  • lvthunder

    Premium Member
    04 January, 2022 - 7:03 pm

    I wonder why Intel doesn't integrate the RAM as Apple did. That seems like a no-brainer to me, especially in ultrabooks where users can't add more RAM anyway.

    • bluvg

      04 January, 2022 - 7:38 pm

      It definitely would make sense in most laptops; so few ever get upgrades during their life. AMD seems to have taken a (much smaller) step down this path with its giant L3 cache, and Intel is going the same way with Foveros. It's not quite what Apple is doing, but it may achieve most of the benefits without the corresponding limitations on applicability.

    • CasualAdventurer

      05 January, 2022 - 10:38 am

      I think it will happen on the vendor side: HP or Dell will build a low-cost SoC appliance PC. It likely won't become widespread, however, because the mindset behind a PC is upgradability. RAM plugged into a slot is slower than RAM on the package, but RAM in a slot can be removed and upgraded. Apple wants its users to think of PCs like cell phones: you don't upgrade them, you replace them.

    • james.h.robinson

      09 January, 2022 - 8:12 pm

      Some enterprises and other large organizations would probably take issue with the low repairability and upgradability of a computer with RAM integrated into the SoC.
