Intel Core i9-12900KS just 9% faster in Cinebench R23 Multi-Core compared to 5950X

nizzen:

What you are saying is everyone MUST play cinebench because of many cores? Like the Ryzen 1700X and 1800X were very good for rendering per core and very efficient, but they were VERY slow in games. So everything depends on what you are using the CPU for. Is it efficient in gaming or in rendering? Ryzen CPUs are very good, there is no doubt. Too bad very few ACTUALLY do rendering with these CPUs.... they just play cinebench....
They were not slow in games for the actual price point they had. They did exactly what they were designed for. And people expected first-gen Ryzen to defeat Intel out of the gate, and those people were stupid. And your breakdown and understanding of what Aura89 said is baffling. You are stuck in left field while we are all in center field.
Agonist:

They were not slow in games for the actual price point they had. They did exactly what they were designed for. And people expected first-gen Ryzen to defeat Intel out of the gate, and those people were stupid. And your breakdown and understanding of what Aura89 said is baffling. You are stuck in left field while we are all in center field.
50% slower in minimum fps compared to the 8700K in CPU-bound games. That is BAD and actually very slow for its price point. PS: I had a 1700X/1800X/8700K and 9900K at the same time. Gaming performance was sad compared to the 8700K with fast DDR4. But if you compare the 1800X with fast memory and the 8700K with slow memory, the difference was smaller. Too bad for the Ryzen 1 series that 3200 MHz was pretty much the max. The 8700K was running with 4700 MHz memory 😀
nizzen:

What you are saying is everyone MUST play cinebench because of many cores?
I'm saying that if you're not using the full CPU for your tasks, you're not getting the entire picture. Find me a game that will use Intel's or AMD's maximum number of cores and threads, then get back to me. Until then, claiming that a CPU is not inefficient because "no one uses the CPU at its max" is a bogus claim. And last I checked this is a hardware forum, not a pure gaming forum, so why you keep saying "play cinebench" is beyond me. The main purpose of the 12900K, KS, 5900X, 5950X, and countless other CPUs with many cores is not gaming; that doesn't mean you can't buy one for gaming, but your assertion that people "play cinebench" is just pure nonsense made to disregard the performance and efficiency benefits of CPUs that provide more than what gaming needs. If you're not that guy, i.e. someone who uses their PC for professional work and not just gaming, stop pretending to be that guy.
nizzen:

50% slower in minimum fps compared to the 8700K in CPU-bound games.
lol... the random percentages and numbers you keep bringing up are amazing. Please show your work. It seems to me you just want to make claim after claim after claim without anything to actually back you up, hoping that everyone just sees what you write and doesn't question it.
nizzen:

Almost every new game is CPU-bound if you have a good enough GPU. You are talking about slow GPUs that are GPU-bound in every game. That's the difference. Not everyone is playing games at 4K 60 Hz with v-sync on 😉
Ugh.... literally any piece of hardware can be the bottleneck if you cripple something else. Get real here. Most modern CPUs can yield framerates that even 240 Hz displays can't keep up with, and in most cases you don't really gain anything when going higher than 144 Hz. You can see a difference, but very few people would actually play better as a result of that difference. So ultimately, no, modern CPUs are not practical bottlenecks in gaming. An i3-7100 can play CS:GO at over 250 FPS at 1080p ultra settings using a 3070, and performance steadily declines as you worsen the GPU, implying even that mediocre CPU is still not maxed out. Remember, that's a 5-year-old locked CPU with specs that were mediocre even for its time. Games have not evolved that much in CPU requirements. As stated before, your 12900K uses less power because it isn't working that hard, so you've kind of proven my point that it's unnecessary for gaming purposes. So, if it's overkill for gaming and too power hungry for productivity, then what makes it a good choice? A 12600K is all anyone really needs to play just about any modern game.
NewTRUMP Order:

So the 12900KS model comes 200 MHz faster out of the box. Does this mean it overclocks 200 MHz over a 12900K's max overclock? The 12900KS costs 26% more than the 12900K. I mean, save your money and just overclock your 12900K by 200 MHz and you now have a 12900KS.
A 12900KS seems to be averaging 5.4 GHz in R23 with a 360 AIO. Only golden 12900Ks will do that, and a 12900KS is a binned 12900K WITHOUT AVX-512 possible. It's not a "new" chip, but so far the bins look like +100 to +200 MHz, or roughly 100 mV lower voltage at a given clock, compared to before. So no, you can't just overclock a 12900K and get a KS; not all Ks are even good samples. There were weak 9900Ks that wouldn't even do 5 GHz on all cores, so the same argument applied there too. If you're rich, go ahead and buy a 12900KS no matter what you have; money isn't so critical to rich people, so enjoy messing with it for the fun and e-peen. If you're a normal person and don't have a Z690 yet, it makes decent sense to pay the premium for a 12900KS (versus a regular 12900K, especially since good samples are getting binned now) plus a motherboard and upgrade at once: you'll get a good chip that clocks well or can run at lower frequencies at absurdly low vcore (5 GHz all-core at 1.0 V may be possible in R23). If you're a normal person and already have a 12900K, wait for a 13900K; you get the same good KS bins on the P-cores, double the E-cores, and possibly a better IMC.
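For what it's worth, the value math behind the quoted "26% more" complaint is easy to reproduce; a minimal Python sketch, where the list prices (roughly $589 for the 12900K and $739 for the 12900KS) are assumptions rather than figures from this thread, and the 200 MHz delta is the number quoted in the post above:
[code]
# Rough value math for 12900K vs. 12900KS.
# Prices are assumed approximate launch tray prices; the 200 MHz delta is
# the figure quoted in the post above, not a measured value.
k_price, ks_price = 589.0, 739.0   # USD (assumption)
clock_delta_mhz = 200              # out-of-the-box clock difference (from the post)

price_premium_pct = (ks_price - k_price) / k_price * 100
dollars_per_100mhz = (ks_price - k_price) / (clock_delta_mhz / 100)

print(f"Price premium: {price_premium_pct:.0f}%")                # ~25%
print(f"Premium per extra 100 MHz: ${dollars_per_100mhz:.0f}")   # ~$75
[/code]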
@nizzen I get that you get a lot of hardware to play with, tweak, and push to the limits, which is awesome! But the way you are presenting things rubs people the wrong way. You call Ryzen 1 bad at gaming; yes, the 8th and 9th gen Intel parts could squeeze out more fps, but that does not mean Ryzen 1 cannot game.... What you describe as "every game being CPU bound" translates to Rainbow Six running at 600 instead of 200 fps. Sure, that does mean lower latency, but we are talking values of 2-3 milliseconds (see the frame-time math below), and unless you are a 1337 god-like MLG player you cannot take advantage of that either. Ultimately, your mocking of people who play cinebench all day is not wrong; the number it produces has no actual use other than some vague comparison. On the other hand, playing at 400 fps instead of 200 is just as useless a result, other than the satisfaction that your tweaking gave you 20 extra fps. The practicality of measuring fps is LITERALLY the same as cinebench!
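The "2-3 milliseconds" figure above is just frame-time arithmetic (1000 ms divided by the frame rate); a quick illustrative sketch:
[code]
# Frame time in milliseconds for a few frame rates (illustrative only).
for fps in (200, 400, 600):
    print(f"{fps} fps -> {1000 / fps:.2f} ms per frame")
# 200 fps = 5.00 ms and 600 fps = 1.67 ms, so the gap is roughly 3.3 ms per frame.
[/code]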
You'd probably drop down to 35 fps with a 2600X in Watch Dogs Legion in some places. And 250 fps in CS:GO? More like 50 fps on the Danger Zone Sirocco map. The worst cases count much more than useless fair-weather benchmarks. Zen 1/+ was just bad for gaming vs. Skylake, and even Zen 2 still required well-optimized RAM/IF for some more thread-heavy games. Really can't say nizzen would be wrong...
nizzen:

Almost every new game is CPU-bound if you have a good enough GPU. You are talking about slow GPUs that are GPU-bound in every game. That's the difference. Not everyone is playing games at 4K 60 Hz with v-sync on 😉 Maybe everyone with Ryzen 1, 2 and 3 series 😛
No. Apex Legends at 1440p, v-sync off: 200-300 fps, 6900 XT at 100% load, Ryzen 1700 at 3700 MHz at 40-50% load. Cyberpunk 2077: 70-110 fps, 6900 XT at 100% load, Ryzen 1700 at 3700 MHz at 40-50% load. The more modern the game, the better it spreads CPU load across more cores, so it is the opposite: old games need faster CPU boost clocks because they do not spread the work out as well as new games do. Look at DOOM Eternal, maybe the best engine ever made (until maybe Unreal 5 takes over), but with DOOM just use anything from a 5600 or 12400 and up and it would be difficult to spot any difference in a blind test. I tend to just let older games hit 1440p 165 fps (monitor max) and enjoy the 700 rpm GPU fan silence instead. I am sure the lower CPU overhead on an AMD GPU helps me a lot, though.
aufkrawall2:

You'd probably drop down to 35 fps with a 2600X in Watch Dogs Legion in some places. And 250 fps in CS:GO? More like 50 fps on the Danger Zone Sirocco map. The worst cases count much more than useless fair-weather benchmarks. Zen 1/+ was just bad for gaming vs. Skylake, and even Zen 2 still required well-optimized RAM/IF for some more thread-heavy games. Really can't say nizzen would be wrong...
You guys' memory of Ryzen 1st gen is weirdly misremembered, like... intentionally misremembered. But hey, prove me wrong. Or just stop making crazy assertions, one or the other. P.S. I have a Ryzen 1600 on a B350 motherboard with dual-channel DDR4-2933 RAM and an RTX 3070, running on a 1440p 144 Hz monitor, and I can tell you right here and now, factually, you are not going to be able to "prove me wrong", because nothing you've said holds water. And before the "Well, that's 1440p!" um... that's not how that works. If the numbers are already massively over what you are claiming at 1440p, it would just get worse for you at lower resolutions. And if you want to claim the other way, 4K, well, you're digging yourself even more of a hole, since the CPU barely matters at that resolution.
Or you are just too lazy to click the 1st YT video when searching for "watch dogs legion ryzen 2600", it runs like crap: [youtube=yO_CUiIFaY8] With RT on, it gets even worse. 8700K with high RAM clock kills it, no matter how much you don't want it to be true.
nizzen:

You wouldn't have bought it anyway, even if it was a 100% uplift 😉 You love 3-series Ryzen too much 😀
I wouldn't say I love it. First time building with an AMD CPU and it was decent, but nothing super special. Holding out to see how that 5800X3D is and whether they are actually going to produce them in enough quantity to get one. The new Intel series is pretty legit though. I'd consider upgrading to it, but I think I'm going to wring some more life out of this X570 board. Probably go 5900X or 5800X(3D). Happy that Intel got over that 14 nm hump though. Lots of good options on both teams right now.
nizzen:

FPS per watt is VERY good too with fast memory in games. No one cares about Cinebench power draw any more. We have Threadripper for that game 😛
So True...:p
Bench1.png
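For anyone who wants to run the fps-per-watt comparison themselves, the metric is just average fps divided by average package power; a minimal Python sketch with placeholder numbers (not values from this thread or the attached screenshot):
[code]
# fps-per-watt sketch. Numbers are placeholders, not measurements from
# this thread or the attached benchmark screenshot.
runs = {
    "CPU A": {"avg_fps": 180.0, "avg_package_watts": 90.0},
    "CPU B": {"avg_fps": 195.0, "avg_package_watts": 140.0},
}

for name, run in runs.items():
    print(f"{name}: {run['avg_fps'] / run['avg_package_watts']:.2f} fps/W")
[/code]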
nizzen:

50% slower in minimum fps compared to the 8700K in CPU-bound games. That is BAD and actually very slow for its price point. PS: I had a 1700X/1800X/8700K and 9900K at the same time. Gaming performance was sad compared to the 8700K with fast DDR4. But if you compare the 1800X with fast memory and the 8700K with slow memory, the difference was smaller. Too bad for the Ryzen 1 series that 3200 MHz was pretty much the max. The 8700K was running with 4700 MHz memory 😀
Yup, built a PC for my friend back in 2017 with a Ryzen 1700 + 32 GB RAM + 1080 Ti (quad-rank only runs 2666 MT/s max) and it was slower than my 8700K + 32 GB RAM + 1080 Ti (quad-rank at 3600 MT/s, easy) by 30-40% in PUBG, even at 1440p.
aufkrawall2:

Or you are just too lazy to click the 1st YT video when searching for "watch dogs legion ryzen 2600", it runs like crap: [youtube=yO_CUiIFaY8] With RT on, it gets even worse. 8700K with high RAM clock kills it, no matter how much you don't want it to be true.
Did you really just pair a 2060 with a 2600 in a YouTube review and then try to claim it gets worse with RT? Sorry bud, as I said, you're fighting a losing battle. I already know what a 1600 with an RTX 3070 can do in Watch Dogs Legion at 1440p, and it doesn't come close to your assertion that it's a first-gen Ryzen problem. Show me a review that isn't really reviewing the RTX 2060 while you claim it's the 2600 being reviewed. Because yes, an RTX 2060, especially with ray tracing, is not that great of a card for Watch Dogs Legion. Otherwise, let's just pair an Intel 12900KS with a GT 710 in Watch Dogs Legion and claim the 12900KS is the worst CPU for the game ever, if that's the way we're going to go about it.
We really don't need to continue talking when you can't make sense of 80% GPU utilization.
aufkrawall2:

We really don't need to continue talking when you can't make sense of 80% GPU utilization.
Bottleneck somewhere...? In the context of a PC, a bottleneck refers to a component that limits the potential of other hardware due to differences in the maximum capabilities of the two components. A bottleneck isn't necessarily caused by the quality or age of components, but rather their performance. :D
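A simple way to picture this: the delivered frame rate is roughly capped by whichever of the CPU or GPU can prepare frames more slowly, which is why GPU utilization well under 100% usually points at a CPU (or engine) limit. A toy Python sketch, not a model of any particular game; the per-component "capacity" numbers are made up for illustration:
[code]
# Toy bottleneck model: frame rate is limited by the slower of the two stages.
def delivered_fps(cpu_fps_capacity: float, gpu_fps_capacity: float) -> float:
    return min(cpu_fps_capacity, gpu_fps_capacity)

def approx_gpu_utilization(cpu_fps_capacity: float, gpu_fps_capacity: float) -> float:
    # If the CPU can only feed N frames/s and the GPU could render M > N,
    # the GPU idles part of the time, so utilization lands near N / M.
    return min(1.0, cpu_fps_capacity / gpu_fps_capacity)

print(delivered_fps(120, 200))           # 120 fps -> CPU-bound
print(approx_gpu_utilization(120, 200))  # ~0.6, i.e. the GPU is nowhere near maxed
[/code]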
aufkrawall2:

We really don't need to continue talking when you can't make sense of 80% GPU utilization.
The difference between what you are saying/relying on and what I am saying/experiencing is that you're relying on some YouTube video, whereas I have the actual hardware I'm talking about. So, here we go. Bear in mind: I do not have a 2600, I have a 1600, and I am not overclocking it in any way. I do not know if the 2600 he has is running 3200 MHz memory (though I suspect it is); I am not. I did every test with exactly the same settings as he did, but I only ran the no-DLSS/ray-tracing and DLSS/ray-tracing options.

No DLSS/ray tracing, from YouTube with the 2600 OC'd to 4.0 GHz and a 2060: https://i.imgur.com/CDIwJRO.jpg
No DLSS/ray tracing, from my PC with a 1600, no OC, RTX 3070 and 2933 RAM: https://i.imgur.com/8iwdpe2.jpg

DLSS/ray tracing enabled at exactly the same settings as the video:
DLSS/ray tracing, from YouTube with the 2600 OC'd to 4.0 GHz and a 2060: https://i.imgur.com/cJnFK7S.jpg
DLSS/ray tracing, from my PC with a 1600, no OC, RTX 3070 and 2933 RAM: https://i.imgur.com/9oaUeep.jpg

Now, especially there, not a huge difference in max or average, but a big difference in minimum, from a lower-end processor that isn't OC'd, and look at that, the GPU isn't at 100% load. Just so we are clear, I'm not saying Ryzen 1000/2000 was the best for gaming; I never said that. Nor did I say the 8700K was worse off. I very clearly stated that it's not as bad as you guys seem to want to remember it being. I see post after post about the 8700K having performance issues as well, with pictures showing 100% CPU utilization at 1440p too, so Watch Dogs Legion is not really a great example. Heck, in every picture I provided from my computer except the last, GPU usage wasn't 100%, and yet I tested every single DLSS setting and saw ZERO difference in FPS, ray tracing on or not, so I'm pretty certain the CPU is bottlenecking it. And it's still getting better performance than the YouTube video you provided, and better performance than your original statement.

And then you threw in CS:GO and 50 FPS on Danger Zone Sirocco. These are on the RTX 3070 and 1600, same as before: https://i.imgur.com/ypQJYme.jpg https://i.imgur.com/sa69Q6N.jpg https://i.imgur.com/GIgSGor.jpg Max CS:GO settings, 1440p. It never once got anywhere close to 50 fps. Sometimes, rarely, I saw it hit somewhere in the 90s; I walked all around, heck, I was even running the server.

Again, making this super clear: at no point am I saying the 1000/2000 series were amazing gaming CPUs, or that they were better than the competitors. My 5900X with my 3080 and 3200 MHz RAM gets nearly double this performance in CS:GO. How much of that is the increase in RAM speed, RTX 3070 vs. 3080, or 1600 vs. 5900X, who knows lol. All I'm saying is it's not as bad as you and nizzen seem to remember it being, especially compared to the competition of its time.

Just for fun, I ran the RTX 3080 and 5900X in Watch Dogs Legion as well, and the results are... interesting. This is compared to the very first two pictures, with no DLSS/ray tracing, all settings the same as the YouTuber's: https://i.imgur.com/8VPy6Xn.png For one, the max of 203 FPS is bogus; that spike... who knows what happened there. But also, 99% GPU load even though it's a much faster CPU. Only a 10 FPS increase in minimum, but a good increase in average, and how much of that is the CPU vs. the GPU, etc.
The more I look around and see others' results, the more I realize: Watch Dogs Legion is probably not a good example to ever give for performance differences between CPUs and GPUs; it's not reliable. Heck, even the VRAM usage is different in all of them, even though the settings are the same: 5.41 on the YouTube 2600/2060, 4.56 on the 1600/3070, and 5.72 on the 5900X/3080... ????
There are dozens of good games available, so why are you guys focusing so much on Watch Dogs Legion? I don't understand. It has been brought up many times, not just in this thread.
I suspect the average KS buyer only needs half a percent uplift to justify his decision
Legion was such a disappointment to me on PC that I threw it on my PS5, and there it has stayed. Problem solved.