The 13th Generation Raptor Lake ES CPU from Intel is Benchmarked

Another 11900K. Still a fast CPU.

Well, if the rumors of 6 GHz are true, then it will end up faster than Alder in everything. If I'm honest, I was expecting a bit more.
bobnewels:

Another 11900K. Still a fast CPU.
Nothing wrong with 11th gen if you're on water LOL!!
Typhon Six Six Six:

Nothing wrong with 11th gen if you're on water LOL!!
I get the joke, but I did say it was fast. Reality is that it is still slower than the 10900K. Benches showing the 13900K slower than the 12900K look familiar, but you will have people defending Intel on slower, hotter CPUs all the time.

I hope for Intel's sake that this is not final. Sure, they save power (TDP), but other than that, I don't see much sense in buying that ES-state Raptor Lake CPU over Alder Lake.

The shifting colors of those graphs are disorienting... In any case, if the TDP (and in turn, the actual power consumption) is really that much lower, then I think that decrease in performance is OK, especially if this isn't going to be a top-tier chip.
fantaskarsef:

I hope for Intel's sake that this is not final. Sure, they save power (TDP), but other than that, I don't see much sense in buying that ES-state Raptor Lake CPU over Alder Lake.
Funny, I had the exact opposite reaction, but then I've dogged AL from the launch. I'm not all that interested until Meteor Lake (which I'm actually excited about).
tunejunky:

Funny, I had the exact opposite reaction, but then I've dogged AL from the launch. I'm not all that interested until Meteor Lake (which I'm actually excited about).
Thing is, thinking about the topic for a bit, saving on power might be the thing in the coming months / few years, since GPUs hog all that power and produce all that heat. But then one's back at the start anyway, considering what the rig is supposed to do in the first place: maximum power, or energy efficiency.

Conversely, the 5800X3D is faster than a 12900K in certain circumstances as well. So I wonder who will win the war of "circumstance"?
fantaskarsef:

Thing is, thinking about the topic for a bit, saving on power might be the thing in the coming months / few years, since GPUs hog all that power and produce all that heat. But then one's back at the start anyway, considering what the rig is supposed to do in the first place: maximum power, or energy efficiency.
There's another way to interpret what you've said there: In most cases, GPUs are the bottleneck. So, the CPU is almost never going to get maxed out anyway, especially if you're into 4K gaming rather than super high refresh rates. So, you could argue "I'd like to have the extra performance in the event I ever need it" but one could also argue "I'd rather just pay less for a chip that will do everything I need just as well". I've never once bought a flagship CPU and when it comes to gaming, there has never been a time when my CPU was inadequate for the generation of games it was intended for. At least, if the game was well-optimized. When it comes to gaming, CPUs are kinda boring these days. I'd rather spend my money on a better GPU.
schmidtbag:

There's another way to interpret what you've said there: In most cases, GPUs are the bottleneck. So, the CPU is almost never going to get maxed out anyway, especially if you're into 4K gaming rather than super high refresh rates. So, you could argue "I'd like to have the extra performance in the event I ever need it" but one could also argue "I'd rather just pay less for a chip that will do everything I need just as well". I've never once bought a flagship CPU and when it comes to gaming, there has never been a time when my CPU was inadequate for the generation of games it was intended for. At least, if the game was well-optimized. When it comes to gaming, CPUs are kinda boring these days. I'd rather spend my money on a better GPU.
I never did buy a flagship CPU as such either, but that does not really translate to what I was trying to figure out here. But I might be wrong with my assumptions anyway: what's Alder and what's Raptor Lake? Which one's tick, which one's tock?
fantaskarsef:

I never did buy a flagship CPU as such either, but that does not really translate to what I was trying to figure out here. But I might be wrong with my assumptions anyway: what's Alder and what's Raptor Lake? Which one's tick, which one's tock?
Intel stopped the tick-tocking when 10nm failed to deliver good chips, and I do not think they ever picked it up again.
TLD LARS:

Intel stopped the tick-tocking when 10nm failed to deliver good chips, and I do not think they ever picked it up again.
Their new CEO announced they are back on the tick-tock strategy, so this is the tock.

For an ES, this is pretty good!
Ivrogne:

For an ES, this is pretty good!
No, it isn't. They clocked both parts at the same 3.8 GHz base clock speed, no boost. It's maybe 5% better overall, and the only multithreaded wins are due to raw gains from adding more E-cores. This is not a good sign for Intel if this is the best they can do. Zen 4 is going to mop the floor with this thing.
Imglidinhere:

No, it isn't. They clocked both parts at the same 3.8 GHz base clock speed, no boost. It's maybe 5% better overall, and the only multithreaded wins are due to raw gains from adding more E-cores. This is not a good sign for Intel if this is the best they can do. Zen 4 is going to mop the floor with this thing.
I will test both, so I guess we will wait and see if Zen 4 is going to mop the floor with Raptor Lake. In gaming performance it will depend on Zen 4's DDR5 and how high it's possible to run it. It will also depend on Zen 4 3D. Right now Alder Lake can run 7000 MHz+ DDR5; Raptor Lake will run a bit faster. I think it's going to be a close race. Let's hope AMD is making RAM overclocking fun again. Read: high-frequency overclocks and LOW! latency 🙂
tty8k:

Going on like this, the CPU charade should be totally useless in 10 years. They need to come up with some dramatic changes, or just make it part of a chipset on the mobo. Otherwise, they bring almost no benefit to gaming and other creative tasks where the GPU has a lot more to do. And the same goes for memory: frack-all difference at 4K gaming or in real-life apps (except maybe for archiving).
Well, not totally useless, but virtually useless. That's one of the reasons why I'm one of the people griping about power consumption: the average user doesn't need more cores and more GHz, so I'd like the improvements to go toward efficiency. This is also why ARM is creeping its way into everyday computers, since even with the performance loss of translation layers, it's still fast enough for what most people need.

Many of us enthusiasts have been around since the 90s and 00s, when the CPU was everything. It didn't matter what your workload was: a faster CPU meant better performance. A lot of us have clung to that mentality, despite the fact that 6c/12t is likely all we'll ever need for the average desktop PC. A lot of us here aren't developers and don't realize that more cores don't necessarily improve performance for desktop applications, because they have to be deliberately designed to be multi-threaded. Many applications actually perform worse when you multi-thread them (and Amdahl's Law caps the benefit even when they don't), while applications with an obvious, large benefit from parallelization are better off being handled by a GPU.

As for gamers, I see 20 threads being the highest we'll ever need to go, unless games see some revolutionary change that requires more CPU power. I don't see games themselves ever needing more than 16 threads, but the extra threads are useful for those who like to multitask or do stuff like recording/streaming. Of course, there will occasionally be an exceptional game that needs far more resources, but we're talking something pretty extensive and niche. For anyone who needs more than 20 threads, you're straddling the line between desktop and server. If, for example, you need more than 20 threads or your CPU is a money-maker, chances are you're going to need more than 32 threads, or could benefit from 64 threads, and so on.
In other words, our workloads are split between a finite requirement of threads, where you might as well get the cheapest thing that will get the job done, or you should just buy the best thing you can afford. There's no longer a middle ground. That's why I find CPUs like the 12400 so appealing: it's so cheap for what you're getting, yet it does pretty much everything the average desktop PC user would ever need without being a bottleneck.
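The Amdahl's Law point above is easy to put numbers on. A minimal sketch (the 90% parallel fraction is an illustrative assumption, not a measurement of any real game or application):

```python
# Amdahl's Law: with n threads and a fraction p of the runtime that can be
# parallelized, the best possible speedup is 1 / ((1 - p) + p / n).

def amdahl_speedup(p: float, n: int) -> float:
    """Upper bound on speedup from n threads when fraction p is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a generously parallel workload (p = 0.90) flattens out fast;
# the hard ceiling is 1 / (1 - p) = 10x, no matter how many cores you add.
for n in (2, 4, 8, 16, 32, 64):
    print(f"{n:>2} threads -> {amdahl_speedup(0.90, n):.2f}x")
```

Going from 32 to 64 threads buys less than 1x extra here, which is the diminishing-returns argument in the post.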
tty8k:

@nizzen Run me a 4K test with stock CPU and mem, and then with both OC'd (regular stuff, no LN2 etc). Pick any game you want :p
Why test the GPU when we are talking about CPUs? Limited guru 🙄
nizzen:

Right now Alder Lake can run 7000 MHz+ DDR5
Do not expect 7000 speeds. The top 60 on https://hwbot.org/ has only a couple of people at 7000 or above; that should be telling you that 7000 is not normal. Do not tell people that they can just order a kit tomorrow and enter the top-60 hall of fame after 1 hour of overclocking. Memory speed varies a lot from board to board and CPU to CPU. There is some silicon lottery going on with the memory controller on Alder Lake; a 12600 or 12700 could easily have a faster memory controller than a 12900K. Motherboards also have different designs: some are specialized for 2 sticks only, and others are designed to have less of a penalty with 4 sticks. Buildzoid, with a 12900K, an Asus Maximus Z690 Apex, and 2 sticks of Hynix, was stuck at 6100 speeds because he was not lucky with his 12900K and the Z690 Apex. Users with 4 sticks would be stuck at speeds like 4500-6000 depending on quality.
nizzen:

Why test the GPU when we are talking about CPUs? Limited guru 🙄
You have said multiple times that memory speed is what matters most in games.