Intel's 13th Generation Raptor Lake ES CPU Is Benchmarked

schmidtbag:

Well, not totally useless, but virtually useless. That's one of the reasons I'm one of the people griping about power consumption: the average user doesn't need more cores and more GHz, so I'd rather see the improvements go toward efficiency. This is also why ARM is creeping its way into everyday computers; even with the performance loss of translation layers, it's still fast enough for what most people need.

Many of us enthusiasts were around in the 90s and 00s, when the CPU was everything. It didn't matter what your workload was: a faster CPU meant better performance. A lot of us have clung to that mentality, despite the fact that 6c/12t is likely all we'll ever need for the average desktop PC. A lot of us here aren't developers and don't realize that more cores don't necessarily improve performance for desktop applications, because they have to be deliberately designed to be multi-threaded. Many applications actually perform worse when you multi-thread them (Amdahl's Law being one example of the limits), and applications that benefit too obviously from parallelization are better off being handled by a GPU.

As for gamers, I see 20 threads being the highest we'll ever need to go, unless games see some revolutionary change that requires more CPU power. I don't see games themselves ever needing more than 16 threads, but the extra threads are useful for those who like to multitask or do things like recording/streaming. Of course, there will occasionally be an exceptional game that needs far more resources, but we're talking something pretty extensive and niche.

For anyone who needs more than 20 threads, you're straddling the line between desktop and server. If, for example, you need more than 20 threads or your CPU is a money-maker, chances are you're going to need more than 32 threads, or could benefit from 64 threads, and so on. In other words, our workloads are split between those with a finite requirement of threads, where you might as well get the cheapest thing that will do the job, and those where you should just buy the best thing you can afford. There's no longer a middle ground. That's why I find CPUs like the 12400 so appealing: it's so cheap for what you're getting, yet it does pretty much everything the average desktop PC user would ever need without being a bottleneck.
Very well spoken. I went for 8/16 mainly because of frametime consistency in games, and because it was too good of a deal to pass up, but otherwise a modern 6/12 can easily run everything an average desktop user wants. Same for 16 GB of RAM. As far as I'm concerned, once you have a good 6/12, the rest of the money is better invested in more SSD storage. To put it simply: buy what you're going to benefit from. Don't waste your budget on "futureproofing" with i9s or 32 GB of RAM if that comes at the cost of other components like storage space or the GPU. As for the 13th gen, it'd be sweet to get an 8P+0E SKU, the way the 12400F is 6+0, for under $300.
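Since Amdahl's Law comes up above, here is a minimal, illustrative Python sketch of it; the 80% parallel fraction and the core counts are made-up example numbers, not measurements of any real workload.

```python
# Illustrative only: Amdahl's Law says speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction of the work and n is the core count.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

if __name__ == "__main__":
    # Hypothetical desktop workload that is 80% parallelizable.
    for cores in (2, 4, 6, 8, 16, 32):
        print(f"{cores:>2} cores -> {amdahl_speedup(0.80, cores):.2f}x speedup")
    # The speedup climbs quickly at first (1.67x at 2 cores, 3.00x at 6 cores)
    # but flattens toward the 5x ceiling set by the 20% serial portion,
    # which is why extra cores stop mattering for such an application.
```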
TLD LARS:

You have said multiple times that memory speed is what matters most in games.
Yes, and that's part of CPU performance. Even you know that.
Imglidinhere:

No, it isn't. They clocked both parts at the same 3.8 GHz base clock speed, no boost. It's maybe 5% better overall, and the only multithreaded wins come from the raw gains of adding more E-cores. This is not a good sign for Intel if this is the best they can do. Zen 4 is going to mop the floor with this thing.
You are both right: this is impressive for an ES. But Imglidinhere, you shouldn't fault Intel for adding more E-cores (it's a response to the market), and 5% in an ES with immature firmware is impressive to me, as Alder Lake only had "leaks" of the more favorable kind, only to be slapped down by (a lack of) driver support. However, you are right about Zen 4 mopping the floor with this; Meteor Lake is the real competition for AMD. What Intel is doing is using Alder Lake and Raptor Lake as bridges to a fully "chiplet"-based CPU, Intel-style, of course, on their own processes (which lag behind AMD's, though Intel will get the node-drop advantage over AMD with Meteor Lake).
nizzen:

I will test both, so I guess we will wait and see if Zen 4 is going to mop the floor with Raptor Lake. In gaming performance it will depend on Zen 4's DDR5 and how high it's possible to run it. It will also depend on Zen 4 3D. Right now Alder Lake can run DDR5 at 7000 MT/s and above; Raptor Lake will run a bit faster. I think it's going to be a close race. Let's hope AMD makes RAM overclocking fun again. Read: high frequency overclocks and LOW latency 🙂
Dude, I'm so with you. I'm also going to run both (but will wait for Meteor Lake for chipset/socket stability... not joking). Of course, I'm going to be all excited over "ooh, new shiny" when AM5 comes out. I know Nizzen will have that grain of salt too, so other people don't get all lathered up and call people fanboys for a company when they're really fanboys for the technology. The thing I'm most excited about with AM5 is the lower power consumption; this way I won't feel like I'm driving a dirty diesel big rig across an Alpine meadow when I OC.
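On the frequency-versus-latency point above, here is a quick back-of-the-envelope Python sketch of the usual conversion from data rate and CAS latency to first-word latency in nanoseconds; the kits listed are illustrative examples, not benchmark results.

```python
# Rough first-word latency estimate: CAS latency (in clock cycles) divided by the
# memory clock (half the MT/s data rate), expressed in nanoseconds.
def cas_latency_ns(transfer_rate_mts: int, cas_cycles: int) -> float:
    memory_clock_mhz = transfer_rate_mts / 2        # DDR: two transfers per clock
    return cas_cycles / memory_clock_mhz * 1000.0   # cycles / MHz -> ns

if __name__ == "__main__":
    # Example kits (illustrative numbers only):
    kits = [("DDR5-7000 CL30", 7000, 30),
            ("DDR5-6000 CL36", 6000, 36),
            ("DDR4-3600 CL14", 3600, 14)]
    for name, rate, cl in kits:
        print(f"{name}: ~{cas_latency_ns(rate, cl):.1f} ns")
    # DDR5-7000 CL30 ~8.6 ns, DDR5-6000 CL36 ~12.0 ns, DDR4-3600 CL14 ~7.8 ns:
    # a high data rate only lowers access latency if the CAS timing scales with it.
```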
Slower than the 12900K in gaming? Well, that's lame. On the plus side, once it's released, the 12900K will be going for $400 USD.
I'm getting worried.
Wait for something official from Intel; like H* says, this doesn't make much sense.
When is Intel planning to launch Raptor Lake?
ulyxon:

When is Intel planning to launch Raptor Lake?
If they have all their ducks lined up, signs point to Christmas. If they don't, they'll do a big CES presentation in January for a spring launch.
I'm running an 11900KF at 5.4 GHz all-core with HT disabled. It's just a gaming PC, and it's putting out higher scores than a 12900K. I only got the 11900 because Amazon had an MSI Z590 Carbon EK combo for $300 and the CPU was $360+. I already had everything else and sold my i9 9900K. The 11900KF at 4.8 GHz matched the performance of my 9900K @ 5.2 GHz; overall it's 14-19% faster than the 9900K at 5.2 GHz.
Typhon Six Six Six:

I'm running an 11900KF at 5.4 GHz all-core with HT disabled. It's just a gaming PC, and it's putting out higher scores than a 12900K. I only got the 11900 because Amazon had an MSI Z590 Carbon EK combo for $300 and the CPU was $360+. I already had everything else and sold my i9 9900K. The 11900KF at 4.8 GHz matched the performance of my 9900K @ 5.2 GHz; overall it's 14-19% faster than the 9900K at 5.2 GHz.
This is a Guru move.
Typhon Six Six Six:

I'm running an 11900KF at 5.4 GHz all-core with HT disabled. It's just a gaming PC, and it's putting out higher scores than a 12900K. I only got the 11900 because Amazon had an MSI Z590 Carbon EK combo for $300 and the CPU was $360+. I already had everything else and sold my i9 9900K. The 11900KF at 4.8 GHz matched the performance of my 9900K @ 5.2 GHz; overall it's 14-19% faster than the 9900K at 5.2 GHz.
Why are you running with HT disabled? It used to be worthless or even harmful to performance in games, but these days you tend to get much better performance with HT enabled. Battlefield, for instance, gets roughly 15% higher fps when CPU-bottlenecked with HT enabled.
Typhon Six Six Six:

I'm running an 11900KF at 5.4 GHz all-core with HT disabled. It's just a gaming PC, and it's putting out higher scores than a 12900K. I only got the 11900 because Amazon had an MSI Z590 Carbon EK combo for $300 and the CPU was $360+. I already had everything else and sold my i9 9900K. The 11900KF at 4.8 GHz matched the performance of my 9900K @ 5.2 GHz; overall it's 14-19% faster than the 9900K at 5.2 GHz.
I'd love to compare games with you, with my 12900K at 5400 MHz all-core and tweaked 7000 CL30 memory. Maybe you are comparing against a 12900K with slow XMP memory?
Dragam1337:

Why are you running with HT disabled? It used to be worthless or even harmful to performance in games, but these days you tend to get much better performance with HT enabled. Battlefield, for instance, gets roughly 15% higher fps when CPU-bottlenecked with HT enabled.
I have four BIOS profiles I work with. If you leave an 11900 stock, it will not run well. The first profile is 5.4 GHz with HT off; it gives me about a 5-10% increase in games over running it at 5.1 GHz all-core with HT on.
nizzen:

I'd love to compare games with you, with my 12900K at 5400 MHz all-core and tweaked 7000 CL30 memory. Maybe you are comparing against a 12900K with slow XMP memory?
I'm talking about stock; your CPU's single-core is better than mine. My memory is 3600 14-14-14-34. The CPU is unmodded.