Intel Core i9-13900K with and without power management settings

schmidtbag:

While @Glottiz made too broad of a statement, you're not doing any better by naysaying with cherry-picked results. Blender is highly optimized. The vast majority of software out there does not require the full set of instructions of a P-core. So, the point @Glottiz was trying to make is that in most cases, a pair of E-cores will do you more good than a P-core, let alone many. Considering a single P-core takes up almost the same amount of die space as 4x E-cores, the E-cores start to sound a lot more enticing for most applications. As I say over and over again: buy what you need. If you use Blender often, prioritizing E-cores is a stupid idea. If you're mostly just gaming or doing typical office work, trying to get only P-cores isn't doing you any favors.
Blender was the best-case scenario for E-cores; in other apps it's not nearly a 25% reduction in working time.
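To put the die-area point from the quoted post into rough numbers, here is a toy throughput-per-area comparison in Python; the ~4:1 area ratio comes from the post above, while the per-core E/P performance ratio is an assumed placeholder, not a measured figure.

# Toy throughput-per-area comparison for the die-area argument above.
# The 4:1 area ratio is taken from the post; the 0.55 per-core E-to-P
# throughput ratio is an assumption for illustration only.
P_CORE_AREA = 1.0          # normalized die area of one P-core
E_CORES_PER_P_AREA = 4     # E-cores fitting in roughly that same area
E_TO_P_THROUGHPUT = 0.55   # assumed per-core E-core throughput vs a P-core

p_per_area = 1.0 / P_CORE_AREA
e_per_area = (E_CORES_PER_P_AREA * E_TO_P_THROUGHPUT) / P_CORE_AREA
print(f"P-core:    {p_per_area:.2f}x throughput per unit area")
print(f"E-cluster: {e_per_area:.2f}x throughput per unit area")
# With these assumptions an E-core cluster gives ~2.2x the multithreaded
# throughput of one P-core in the same area, which only helps if the
# workload actually scales across that many cores.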
The power draw of a HEDT chip without the performance...
No thanks. The next CPU, if and when I bother, is gonna have to be 95 W TDP or I won't buy it; not interested in these power-hungry, heat-generating monsters.
BLEH!:

The power draw of a HEDT chip without the performance...
Exactly. Threadripper is going to have a laugh at this.
tsunami231:

No thanks. The next CPU, if and when I bother, is gonna have to be 95 W TDP or I won't buy it; not interested in these power-hungry, heat-generating monsters.
Luckily there is a feature in the BIOS, so you can set the "monster CPU" to use 95 W max, if you want. A feature even people who complain about power draw don't use 🙄 If you're playing games, TDP doesn't really matter. If you are playing Cinebench (ofc you are 😛), then a lower TDP is to be preferred 🙂
What feature is that? There's no point in playing with it currently, 'cause I don't have one of those CPUs and won't any time soon.
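For reference, the knob nizzen is describing is the package power limit (PL1/PL2) exposed in most boards' BIOS. Purely as an illustration of what such a cap does, here is a minimal Python sketch that applies the same kind of long-duration limit from Linux via the intel_rapl powercap interface; the intel-rapl:0 domain and constraint_0 index are platform-dependent assumptions, writing them requires root, and board firmware may override the value.

# Illustrative sketch only: cap the package long-duration power limit (PL1)
# through the Linux intel_rapl powercap sysfs interface. Paths and constraint
# indices vary by platform; run as root.
RAPL = "/sys/class/powercap/intel-rapl:0"

def set_package_watts(watts: int) -> None:
    with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
        f.write(str(watts * 1_000_000))   # the file takes microwatts

def read_package_watts() -> float:
    with open(f"{RAPL}/constraint_0_power_limit_uw") as f:
        return int(f.read()) / 1_000_000

if __name__ == "__main__":
    set_package_watts(95)                 # mimic a 95 W cap set in the BIOS
    print(f"PL1 now set to {read_package_watts():.0f} W")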
cucaulay malkin:

Blender was the best-case scenario for E-cores; in other apps it's not nearly a 25% reduction in working time.
What? How is a process that is optimized for complex instructions a best-case scenario for CPU cores that lack such instructions? It's the exact opposite: it's the worst-case scenario for them. The E-cores are built to handle simple tasks efficiently, so when you throw advanced rendering at them, of course they're going to fail miserably.
schmidtbag:

What? How is a process that is optimized for complex instructions a best-case scenario for CPU cores that lack such instructions? It's the exact opposite: it's the worst-case scenario for them. The E-cores are built to handle simple tasks efficiently, so when you throw advanced rendering at them, of course they're going to fail miserably.
Because it was the biggest performance impact in the whole test; other apps weren't even close to a 20% time reduction.
schmidtbag:

I'm not so sure the node is the problem, and I'm not quick to point fingers at the architecture either. The problem really comes down to them continuously trying to improve single-threaded performance, because that's the only way they can affordably defeat AMD. Clock speeds do not scale up proportionately with power consumption. Intel is already pushing these chips to their limits. Only within the past 3 or so years have operating systems and mainstream software started optimizing for more threads and for instructions more advanced than SSE2. Intel can't convince developers to develop intelligently, and not everything makes sense to multi-thread. Intel needs to show investors that they're making something better than last-gen, so they have to just keep pushing these chips to levels that are horribly inefficient. Since they also need to increase their core count to compete with AMD and don't use a more modular design like AMD, they can't sustain competitive performance or pricing using only P-cores, so that's where the E-cores come in. When you look at Intel's mid-range mobile chips, they're very competitive in terms of performance-per-watt. From what I recall, AMD is still better, but we're not talking huge margins anymore. So long as Intel keeps pushing for more and unnecessary instructions like AVX-512, the E-cores will need to stay. P-cores carry too much baggage to run efficiently or be produced affordably. If AMD wants to keep up with all of Intel's instructions, they too are probably going to have to go to E-cores at some point.
The largest part of the problem is that Intel has maxed out the potential single-thread performance for this uArch. Meteor Lake (ML) will be quite different and will not require nearly as much power to improve on Raptor Lake's (RL's) single-thread performance, both for node-shrink and uArch reasons. We are at the end of the road for monolithic CPUs from Intel. Even now RL technically counts as an SoC, and all future uArchs are definitely SoC-style, like AMD's. At that point Intel will have a far lower cost of manufacture with a far higher yield (like AMD), which applies directly to P-cores. I seriously doubt whether any ML part will have more than four E-cores (other than mobile), but as they will be on a discrete chiplet, idk.
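On the quoted point that clock speed does not scale proportionately with power: dynamic power goes roughly as C·V²·f, and the voltage needed rises with frequency, so the last few hundred MHz cost disproportionately more. A toy calculation with assumed voltage/frequency pairs, not measured Raptor Lake values:

# Illustration of why clocks don't scale linearly with power:
# dynamic power ~ C * V^2 * f, and higher clocks need higher voltage.
# The (GHz, V) pairs below are assumptions, not 13900K measurements.
points = [(4.0, 1.00), (5.0, 1.20), (5.8, 1.40)]
base_f, base_v = points[0]
for f, v in points:
    rel = (v / base_v) ** 2 * (f / base_f)
    print(f"{f:.1f} GHz @ {v:.2f} V -> ~{rel:.2f}x the power of {base_f:.1f} GHz")
# Roughly 45% more clock ends up costing close to 3x the dynamic power here.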
cucaulay malkin:

Because it was the biggest performance impact in the whole test; other apps weren't even close to a 20% time reduction.
You're talking synthetics.
Horus-Anhur:

Plenty of Intel fanboys that will buy it.
And there are plenty of AMD fanboys that have yet to post Zen 4 benchmarks yet continue to run their pie holes.
Not that surprising for such clocks and so many threads, but all of this CPU stuff seems excessive for all but the most niche cases. In 2017 an i7-8700K was not excessive for gaming; you'd get games like AC: Origins that could use all 12 threads in more demanding areas like a city. The competition is now pushing into HEDT territory for both the i7 and i9, and the prices and power usage are as expected for such CPUs.
nizzen:

Luckily there is a feature in the BIOS, so you can set the "monster CPU" to use 95 W max, if you want. A feature even people who complain about power draw don't use 🙄 If you're playing games, TDP doesn't really matter. If you are playing Cinebench (ofc you are 😛), then a lower TDP is to be preferred 🙂
It is kind of a useless feature to blanket-limit the CPU and let it do it automatically. You undervolt the crap out of it instead. My 5800X runs happily at 1.185 V (1.15 V after the vdroop) and boosts all-core to 4.6 GHz.
Venix:

It is kind of a useless feature to blanket-limit the CPU and let it do it automatically. You undervolt the crap out of it instead. My 5800X runs happily at 1.185 V (1.15 V after the vdroop) and boosts all-core to 4.6 GHz.
It is not at all useless. I power-limit my 5600G media center for silent running and temperature-limit my 5800X for peace of mind. It is still possible to undervolt when running limiters. With 27 degrees ambient my 5800X all-cores at 4550 MHz, and at 23 degrees ambient it runs at 4650 MHz on a Noctua 15 air cooler. The temp limit still allows for a 4850 MHz boost in games.
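One way to see what a power or temperature limit actually does is to log clocks and temperatures while a workload runs and compare settings. A rough Python sketch using psutil; the temperature readout is Linux-only, and the "k10temp"/"coretemp" sensor names are platform-dependent guesses.

# Rough monitoring loop: print average CPU clock and a package temperature
# once per second while a benchmark runs, to compare limiter settings.
import time
import psutil

def sample(seconds: int = 30) -> None:
    for _ in range(seconds):
        freq = psutil.cpu_freq()                      # average across cores, MHz
        temps = psutil.sensors_temperatures()         # Linux only
        readings = temps.get("k10temp") or temps.get("coretemp") or []
        temp = readings[0].current if readings else float("nan")
        print(f"{freq.current:6.0f} MHz   {temp:5.1f} C")
        time.sleep(1)

if __name__ == "__main__":
    sample()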
Horus-Anhur:

Plenty of Intel fanboys that will buy it.
Why are you talking about fanboys? Isn't t
Venix:

It is kind of a useless feature to blanket-limit the CPU and let it do it automatically. You undervolt the crap out of it instead. My 5800X runs happily at 1.185 V (1.15 V after the vdroop) and boosts all-core to 4.6 GHz.
How high is your all-core with 1.15 V in? Boost doesn't mean anything if the boost happens in idle mode and only lasts 2 ms at a time.
Horus-Anhur:

Plenty of Intel fanboys that will buy it.
Isn't this going to be the best CPU for mixed gaming + productivity, for now at least? The cost is high and the power is high; still, the performance is going to be there. I'd personally stay away from this, but I'm not everyone.
cucaulay malkin:

Isn't this going to be the best CPU for mixed gaming + productivity, for now at least? The cost is high and the power is high; still, the performance is going to be there. I'd personally stay away from this, but I'm not everyone.
That would be the 13700K. Considering similar prices to Alder Lake, the 700K SKU should be the better option. Remember, the MSRP of the 12900K was 600 US$, but the MSRP of the 12700K was 420 US$. And Intel already confirmed they are going to increase prices in Q4 2022. So we are probably expected, once again, to pay around 170 US$ more to get 8 extra E-cores. That is a terrible deal to me. Like you and I have shown here, few applications benefit from these E-cores, mostly renderers. And for those that do, it's not that much of an improvement. If someone is that desperate to have a fast CPU for rendering, they are likely to be considering HEDT instead. And for gaming, 8 P-cores are more than enough, and at that, the 13700K has just as many as the 13900K. And it has an unlocked multiplier.
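Putting rough numbers on that value argument, using the rounded Alder Lake MSRPs cited in the post above as a stand-in for Raptor Lake pricing; the multithreaded-uplift figure is only an assumed placeholder, not a benchmark result.

# Back-of-the-envelope value check with the 12th-gen MSRPs cited above as a
# stand-in for 13th-gen pricing. The assumed MT uplift is a placeholder.
price_900k, price_700k = 600, 420        # US$, MSRPs cited in the post
extra_e_cores = 8

premium = price_900k - price_700k        # close to the ~170 US$ figure above
print(f"Premium: ~${premium} (~{premium / price_700k:.0%} more), "
      f"about ${premium / extra_e_cores:.0f} per extra E-core")

assumed_mt_uplift = 0.15                 # assumed 13900K-vs-13700K MT gain
breakeven = premium / price_700k         # uplift needed for equal perf per $
print(f"Needs ~{breakeven:.0%} more MT performance to match perf/$; "
      f"assumed uplift here is only {assumed_mt_uplift:.0%}")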
Horus-Anhur:

That would be the 13700K. Considering similar prices to Alder Lake, the 700K SKU should be the better option. Remember, the MSRP of the 12900K was 600 US$, but the MSRP of the 12700K was 420 US$. And Intel already confirmed they are going to increase prices in Q4 2022. So we are probably expected, once again, to pay around 170 US$ more to get 8 extra E-cores. That is a terrible deal to me. Like you and I have shown here, few applications benefit from these E-cores, mostly renderers. And for those that do, it's not that much of an improvement. If someone is that desperate to have a fast CPU for rendering, they are likely to be considering HEDT instead. And for gaming, 8 P-cores are more than enough, and at that, the 13700K has just as many as the 13900K. And it has an unlocked multiplier.
What about the 13900K having better-binned P-cores and IMC than the 13700K, on top of the extra 8 E-cores?
Krizby:

What about the 13900K having better-binned P-cores and IMC than the 13700K, on top of the extra 8 E-cores?
For a difference of 170 US$, or more, I wouldn't consider it worthwhile. The 13700K would still be the best CPU all around: better value, better power efficiency, and almost the same performance. The 13900K is only for someone who wants the best at all costs. But I would not recommend it at all.
Krizby:

What about the 13900K having better-binned P-cores and IMC than the 13700K, on top of the extra 8 E-cores?
All marketing to make hardware-crazed forum posters spend extra money on a CPU most of them will never take full advantage of. With the 12700K you got 99% of the 12900K's performance in gaming and day-to-day consumer programs, but at only 60% of the cost. It will be similar with the 13700K vs the 13900K. Unless you are someone who renders 3D scenes 24/7, the xx700K-class CPU will always be the smarter buy.