Intel Alder Lake Core i9-12900K Overclocked to 5.2 GHz on P cores, uses 330 Watts

Impressive wattage numbers; it will definitely be king of power draw. Add cooling, a motherboard, a hefty PSU, etc., and this will be an expensive build.
Embra:

Impressive wattage numbers; it will definitely be king of power draw. Add cooling, a motherboard, a hefty PSU, etc., and this will be an expensive build.
I bet DDR5 gonna cost more than the CPU itself.
moab600:

I bet DDR5 gonna cost more than the CPU itself.
Only for 32 GB+ kits.
Embra:

Impressive wattage numbers; it will definitely be king of power draw. Add cooling, a motherboard, a hefty PSU, etc., and this will be an expensive build.
Oh noes, it uses more power when it's overclocked! Overclocking, blerghhh 😱 Now let's wait for some proper reviews 😉 EDIT: I mean, older systems have been known to suck more to start with.
moab600:

I bet DDR5 gonna cost more than the CPU itself.
Wondering this myself. Also, will first-gen DDR5 even be as good as top-end DDR4? In the past two transitions - from DDR2 to DDR3 and then DDR3 to DDR4 - the top-spec previous gen matched or beat the newer gen at launch.
What? Why and how? That's overclocked 24-core Threadripper territory, with 24 real cores, and this is an 8-core CPU (yeah, I don't count idle cores, but these should count, since their only goal is to do very little). My 9900K was at 190 W in Cinebench R23 at 5.0 GHz all-core at 1.28 V, and 217 W at 5.1 GHz at 1.36 V.
Actually, not too bad. Hilbert's review of the 11900K had the system at 410 W when running 5200 MHz all-core; take away 50-70 W for the rest of the system and efficiency losses, and the 12900K would use a little less power than the 11900K at all-core load. If the 12900K's all-core performance lands between a 5900X and a 5950X, as most of the leaks suggest, the 12900K is still 30-40% faster than an 11900K for the same amount of power. I do not think Intel has had a generational jump that big in the last 10 years. Everything still sounds too good to be true to me, with these leaks.
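To make that back-of-the-envelope comparison explicit, here is a minimal Python sketch of the arithmetic; every input is a leak or guess quoted in the post above, not a measured figure:

```python
# Rough perf-per-watt comparison using the leak/estimate figures from the post above.
# None of these are measured numbers; they are the guesses quoted in the thread.

system_power_11900k = 410   # W, whole-system draw at 5200 MHz all-core (Hilbert's 11900K review)
platform_overhead = 60      # W, middle of the 50-70 W guess for the rest of the system + PSU losses
cpu_power_11900k = system_power_11900k - platform_overhead   # ~350 W for the CPU alone

cpu_power_12900k = 330      # W, the overclocked figure from this leak
perf_gain = 1.35            # midpoint of the rumoured 30-40% uplift over the 11900K

perf_per_watt_ratio = perf_gain * cpu_power_11900k / cpu_power_12900k
print(f"Estimated 11900K package power: ~{cpu_power_11900k} W")
print(f"12900K perf/W vs 11900K: ~{perf_per_watt_ratio:.2f}x")  # ~1.43x under these assumptions
```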
I'm not very surprised. At the frequencies they are targeting, the power consumption on "7nm" (aka 10nm++++) is about what you would expect: slightly worse than 14nm. It is also a 16-core chip; the 8 E cores might be clocked lower, but they are still going to pull numbers in the ballpark of a lower-frequency Skylake core in AVX workloads. That, together with the big cores at high frequency, is going to pull some serious juice.
SamuelL421:

Wondering this myself. Also, will first-gen DDR5 even be as good as top-end DDR4? In the past two transitions - from DDR2 to DDR3 and then DDR3 to DDR4 - the top-spec previous gen matched or beat the newer gen at launch.
The latency will be worse, pretty much guaranteed. The kits being teased aren't anywhere close to the best DDR4 kits you can buy: rough latency on a run-of-the-mill CL19 4000 MT/s kit is going to be about 9.5 ns, while the fastest kit mentioned thus far, the CL36 DDR5-6600, equates to about 10.9 ns. Latency isn't everything, but you're definitely not going to see improvements in non-bandwidth-bound workloads for the most part, especially on the early platforms; Alder Lake is basically going to be forced to run the IMC at 1/2 rate with DDR5, and possibly 1/4 rate for extreme frequencies, both of which carry a severe latency hit. However, it will be a big win for integrated graphics: at 6600 MT/s in dual channel, that gives you about 105.6 GB/s, which is not far off something like a GTX 1650, especially since the latency is a lot lower than your typical GDDR.
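For anyone who wants to check those figures, here is a minimal Python sketch of the DRAM arithmetic behind them; the kit specs are just the examples from the post above, not an exhaustive survey:

```python
# Back-of-the-envelope DRAM math from the post above.
# First-word latency (ns) ~= CL / (data rate / 2) * 1000 = CL * 2000 / data_rate_in_MT_per_s
# Peak bandwidth (GB/s)    = data rate (MT/s) * 8 bytes per 64-bit transfer * channels / 1000

def cas_latency_ns(cl: int, mt_per_s: int) -> float:
    """CAS latency in nanoseconds (CL cycles at half the transfer rate)."""
    return cl * 2000 / mt_per_s

def peak_bandwidth_gbps(mt_per_s: int, channels: int = 2, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s for 64-bit-per-channel DDR."""
    return mt_per_s * bytes_per_transfer * channels / 1000

print(f"DDR4-4000 CL19: {cas_latency_ns(19, 4000):.1f} ns")             # ~9.5 ns
print(f"DDR5-6600 CL36: {cas_latency_ns(36, 6600):.1f} ns")             # ~10.9 ns
print(f"DDR5-6600 dual channel: {peak_bandwidth_gbps(6600):.1f} GB/s")  # ~105.6 GB/s
```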
user1:

However, it will be a big win for integrated graphics: at 6600 MT/s in dual channel, that gives you about 105.6 GB/s, which is not far off something like a GTX 1650, especially since the latency is a lot lower than your typical GDDR.
These are funny numbers to see now that Apple announced 400GB/s memory in M1 Max and their "integrated" graphics will compete with RTX3070/3080.
Glottiz:

These are funny numbers to see now that Apple announced 400GB/s memory in M1 Max and their "integrated" graphics will compete with RTX3070/3080.
I wouldn't consider the M1 Max comparable, since it's going to cost like $3k+ and use HBM memory. Like comparing a Lada to a Ferrari.
Wow! Let's just digest those numbers: 330 W just for the CPU, a piece of silicon maybe 4 cm x 4 cm (a guess). A whole 3090 uses just 20 W more, and this is supposed to be the pinnacle of design right now, as it's the most current CPU about to be launched. What kind of cooling is that going to take in an average system? It's not much change from the current Intel systems. Yes, it's great for those headline figures, but come on, 330 W for 8 cores! I think AMD has those stacked CPUs just sitting there waiting to spoil this party, at lower consumption and heat, but hey, let's see what happens on launch day. At least Intel is able to push AMD, which is great; otherwise they would stagnate.
user1:

I wouldn't consider the M1 Max comparable, since it's going to cost like $3k+ and use HBM memory. Like comparing a Lada to a Ferrari.
I don't think the majority here knows what a Lada is! 😀:D:D I'm from Croatia, so I know (my father had a Samara in 1989)......
Overclocking for practical purposes is dead and has been for the last few years. Both AMD and Intel have squeezed out every last bit of headroom from their chips as boost. Beyond that, power draw goes through the roof for very little performance gain. The days of Sandy Bridge are long over, folks; time to move on.
alanm:

Overclocking for practical purposes is dead and has been for the last few years. Both AMD and Intel have squeezed out every last bit of headroom from their chips as boost. Beyond that, power draw goes through the roof for very little performance gain. The days of Sandy Bridge are long over, folks; time to move on.
Actually, there are real gains on the AMD side without a wattage sacrifice. You can get an extra 10% performance out of any Zen 3 right now.
Glottiz:

These are funny numbers to see now that Apple announced 400GB/s memory in M1 Max and their "integrated" graphics will compete with RTX3070/3080.
Apple announced 400 GB/s for the 32-core GPU and 200 GB/s for the 16-core GPU, with no mention of the 24-core option. That makes me think that bandwidth is achieved with multiple controllers/channels depending on the GPU. I won't compare an ARM SoC with an x86 CPU in general; those are entirely different technologies.
kapu:

Actually, there are real gains on the AMD side without a wattage sacrifice. You can get an extra 10% performance out of any Zen 3 right now.
Well..... You can increase multi-core performance some by lifting clocks on the "worse" cores, and even then -30 on Curve Optimizer still does not lift the worst cores up to the best cores' stock levels. But headroom on the "best" cores is rather limited, which limits single-core boosts. My best 5900X core allows a Curve Optimizer setting of -8, with -10 being definitely unstable, and even -8 might need more testing. And then you theoretically have to test each core for stability and still don't know for sure; it's a gamble for anyone not just interested in gaming.
butjer1010:

I don't think the majority here knows what a Lada is! 😀:D:D I'm from Croatia, so I know (my father had a Samara in 1989)......
A Lada is a car that's made like a soda can: under 5 MPH the whole windshield pops out, and if you crash that car at 30 MPH you're dead. A car that handles like shit, too. I'm from America and quickly learned about Ladas after watching a few crash videos on LEAKEDREALITY!
I wonder if all the Alder Lake leaks were made with a beast cooler; guess we'll find out soon.
asturur:

Apple announced 400 GB/s for the 32-core GPU and 200 GB/s for the 16-core GPU, with no mention of the 24-core option. That makes me think that bandwidth is achieved with multiple controllers/channels depending on the GPU.
Apple is making different chips for the two different versions. The 200 GB/s version is a smaller chip and package in general (makes sense, since they are putting it in MacBook Airs) and only has 2 HBM stacks; there wouldn't be much point to 400 GB/s if the GPU is unable to make use of it. We might see other versions later on if they are binning chips. I could see a 48 GB version of the M1 Max with 1 HBM stack disabled/missing.
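Purely to illustrate the scaling argument in these last two posts (aggregate bandwidth tracking the number of memory stacks/channels), here is a tiny Python sketch; the 100 GB/s per-stack figure is only what the 2-stack / 200 GB/s speculation above would imply, not a confirmed spec:

```python
# Illustrative only: bandwidth scaling with the number of memory stacks/channels.
# The per-stack figure is inferred from the thread's speculation, not from a confirmed spec.

PER_STACK_GBPS = 100.0  # assumed per-stack bandwidth implied by "2 stacks -> 200 GB/s"

def total_bandwidth_gbps(stacks: int) -> float:
    """Aggregate bandwidth if each stack/channel contributes PER_STACK_GBPS."""
    return stacks * PER_STACK_GBPS

for stacks in (2, 3, 4):
    print(f"{stacks} stacks -> ~{total_bandwidth_gbps(stacks):.0f} GB/s")
# 2 -> ~200 GB/s and 4 -> ~400 GB/s match the two announced figures;
# a hypothetical 3-stack part would land around ~300 GB/s.
```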