Core i9 13900K DDR5 7200 MHz (+memory scaling) review


cucaulay malkin:

Without it we have people claiming that ram speed doesn't matter. I don't blame them though, they're being misled by reviews.
There is a difference to be measured, sure, but whether it matters or not is subjective, a matter of opinion.
You're free to be in as much denial as you want, but I've seen RAM speed matter in CPU-limited scenes since the 2500K days. I played the whole of Far Cry 3 on DDR3-1066 with no problem on a 2500K at 4.7 GHz, and then there was this Vaas fight scene near the end of the game where it kept dropping into the 40s. I thought my card was broken and tried a hundred times with the same result (definition of insanity, right?) until someone on ocn.net told me to get a 2133 kit, saying they had seen the same thing in Tomb Raider. Got a 2133 kit and boom, back to 60 fps in the same location. Same thing in FC4 outposts/fortresses. If RAM speed/latency didn't matter, the 5800X3D wouldn't beat the 5800X; it wins thanks to faster data swapping through on-CPU memory with high bandwidth and low latency, and the same thing happens with DDR, just to a lesser degree.
If you only have high-end hardware you don't notice it even when it happens; could you tell 270 fps from 220 fps?
A 13600K will still produce a lot more fps than a 6900 XT needs at 4K, even with DDR4. There's a benefit to running DDR4 too: you can use Gear 1 on 12th/13th gen.
Carfax:

Most games are GPU limited, yes, but many of the newer ones are becoming more and more CPU limited thanks to things like ray tracing, which loads the CPU with BVH calculations, as well as higher levels of simulation. You see this in games like Cyberpunk 2077, Spider-Man, etc., where memory bandwidth can boost performance quite a bit when the game isn't limited by the GPU.
How do you get Cyberpunk 2077 CPU limited, 1080p medium or low settings? I went from a Ryzen 1700 (3700 MHz) with 3200 memory to a 5800X (4900 MHz single-core boost) with 3800 memory and got 5 FPS (6%) more at 1440p with a 6900 XT. I admit that a 6900 XT is not that special anymore, but I doubled the CPU performance and got very little extra game performance from it. Spider-Man is so broken that 800x600 at ultra settings gives fewer FPS than Doom Eternal at 4K ultra nightmare settings, both with a 4090.
It must have been GPU limited in that CP2077 run; it is a very GPU-heavy game after all. Maybe you tested the benchmark or a GPU-heavy location. It happens. A modern 8-core/16-thread CPU with half-decent memory should still produce very good fps; 8 cores is the standard these days, even for value systems.
0.1% & 1% lows are noticeably higher with faster RAM. Average FPS is higher when running on low settings. The gap would also be greater had an RTX 4090 been used instead.
schmidtbag:

I'm a bit surprised how little of a difference there was. Granted, the latency was adjusted, and Intel traditionally hasn't had as drastic of a performance impact when upgrading RAM, but I still would have expected more.
I'm not. Intel CPUs have always behaved like this regarding memory speeds, but AMD is different and much more sensitive to memory speeds and timings.
H83:

I'm not. Intel CPUs have always behaved like this regarding memory speeds, but AMD is different and much more sensitive to memory speeds and timings.
I'm aware Intel doesn't usually benefit as much from different RAM speeds, but they have traditionally seen some improvement. Granted, many of these tests aren't memory intensive and, as pointed out by others, they may be GPU bottlenecked, so perhaps that's really the only reason for the underwhelming results.
I'm just laughing at the insane number of clock cycles they are getting away with these days... CL34-42-42-84... They basically just increase the RAM frequency because that is what uninformed users look at and think "faster megahertz = better", but in order to run faster they have to increase the number of clock ticks it takes to do everything (the wait time), so overall the frequency increase is cancelled out. E.g. if operation A can be done in 20 ticks at 1000 MHz, then you can run the same memory at 2000 MHz but have to wait about 40 ticks instead - it still takes the same or a similar amount of time. I used to have high-end DDR3 RAM from Corsair which ran at CL7-7-7-8, FFS.
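For anyone who wants to check that arithmetic, here is a minimal Python sketch of the usual first-word-latency rule of thumb (CAS latency in ns = CL cycles divided by the memory clock, which is half the MT/s rate); treating the old Corsair kit as DDR3-1600 is an assumption:

```python
# First-word (CAS) latency in ns: CL cycles divided by the memory clock.
# The memory clock in MHz is half the transfer rate in MT/s, since DDR moves data on both edges.
def cas_latency_ns(cl: int, transfer_rate_mts: int) -> float:
    memory_clock_mhz = transfer_rate_mts / 2
    return cl / memory_clock_mhz * 1000  # cycles / MHz gives microseconds, so scale to ns

# geogan's hypothetical: 20 ticks at a 1000 MHz clock vs 40 ticks at a 2000 MHz clock.
print(cas_latency_ns(20, 2000), cas_latency_ns(40, 4000))  # 20.0 ns, 20.0 ns
# Assumed DDR3-1600 CL7 vs the reviewed DDR5-7200 CL34 kit.
print(cas_latency_ns(7, 1600), cas_latency_ns(34, 7200))   # 8.75 ns, ~9.44 ns
```

By this measure first-word latency has stayed roughly flat across generations while bandwidth has climbed, which is the trade-off the replies below argue about.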
Valken:

I predict that even with the cheapest DDR5, X3D will rule the top benchmarks... just like its 5800X3D brother!
It should have been the XBOX ONE X CPU.
user1:

My only question is: what exactly is "a lot"? +5%? +10%?
Depends on what you are comparing my testing to. Gains over what most reviewers use in testing (like 5600-6000 MT/s kits) can be around 20%. A high-end 8000 DDR5 kit running at XMP will also be quite a bit faster than the 7200 Hilbert tested, 10% or so. But most gains are locked behind manual tuning; add another 10% from tuning primary, secondary and tertiary timings.
TL;DR => not worth it, as usual. I'm amazed at how many people want to find a difference at any cost. Reviews need to be general: for the normal guy who installs a game and plays it, how much is it worth spending extra cash on extra-fast RAM? Nothing. As for the kind of people who are:
- facing the game camera at the CPU-worst location
- executing specific heavy loads for which they need a metric on RAM
- running a NES game on an N64 NES emulator, itself running inside Yuzu
those people know exactly what memory does for them and can move forward on shopping by themselves. I don't think there is value in finding a perf increase at any cost for memory, just to prove a point that we are good at benchmarking, with the risk of misleading buyers.
Agent-A01:

Depends on what you are comparing my testing to. Gains over what most reviewers use in testing (like 5600-6000 MT/s kits) can be around 20%. A high-end 8000 DDR5 kit running at XMP will also be quite a bit faster than the 7200 Hilbert tested, 10% or so. But most gains are locked behind manual tuning; add another 10% from tuning primary, secondary and tertiary timings.
Hilbert's 7200 CL34-42-42-84 is already pretty damn good. If I choose 8000 XMP I get one memory kit to choose from in my country, a very good indication of how elite 8000 still is, and the cost is 3 times as high as a 6000 kit, or the same price as a new 13900K CPU. A theoretical 10% raw memory speed increase from 7200 to 8000 is also not going to translate into anything near 10% more FPS in games or CPU render speed. Maybe it would theoretically be possible to get up to 10% more performance if the user has a 1080p 300+ Hz monitor and a full water loop with a 13700K/13900K CPU and a 4090, but for the other 99% of users it is simply not worth spending 3 times as much money on a memory kit.
TLD LARS:

Hilbert's 7200 CL34-42-42-84 is already pretty damn good. If I choose 8000 XMP I get one memory kit to choose from in my country, a very good indication of how elite 8000 still is, and the cost is 3 times as high as a 6000 kit, or the same price as a new 13900K CPU. A theoretical 10% raw memory speed increase from 7200 to 8000 is also not going to translate into anything near 10% more FPS in games or CPU render speed. Maybe it would theoretically be possible to get up to 10% more performance if the user has a 1080p 300+ Hz monitor and a full water loop with a 13700K/13900K CPU and a 4090, but for the other 99% of users it is simply not worth spending 3 times as much money on a memory kit.
7200 > 8000 is more than just 10% 'raw memory speed' (assuming you mean bandwidth?). HH's AIDA64 memory results are also much slower than tuned setups. For example, comparing 7200 XMP against my daily 7800 tuned setup:

Read: 108.4 GB/s > 126.47 GB/s = 17% increase
Write: 94.04 GB/s > 124.54 GB/s = 32.4% increase
Copy: 97.68 GB/s > 122.4 GB/s = 25.3% increase
Latency: 62.8 ns > 51 ns = 23.1% higher on XMP

Just a 600 MT/s increase shows huge gains over a 7200 MT/s XMP setup, which is already way faster than what most reviewers test with (usually 6000 kits). On the extreme side, people using something like 8800 tuned setups see bandwidth over 140 GB/s read/write and latency in the mid to high 40 ns range.

It's important to note that not all games are going to benefit from faster memory. Slow-paced single-player games with high-fidelity graphics are typically GPU bound: if your GPU usage is pegged at 99%, no CPU OC or memory OC is going to bring more FPS. But when your GPU usage is not maxed out, you have a bottleneck elsewhere in the system, such as CPU IPC or memory bandwidth/latency. Some memory-bound games are bound by latency rather than bandwidth, or vice versa, and in those games tuned setups can show big gains.

And as for value, of course 99% of users are not going to be using 8000 kits, nor should they. In most cases their GPU is going to be the limiting factor; a 3060 is not going to need extremely fast memory to hit 99% GPU usage in most games. But for those using a 4090 + 13900K, faster memory will be much more beneficial.
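As a quick sanity check on those percentages, here is a small sketch that recomputes the deltas from the AIDA64 numbers quoted above, all relative to the 7200 XMP figures:

```python
# Recompute the deltas between the 7200 XMP and tuned 7800 AIDA64 figures quoted above.
pairs = {
    "read  (GB/s)": (108.4, 126.47),
    "write (GB/s)": (94.04, 124.54),
    "copy  (GB/s)": (97.68, 122.40),
    "latency (ns)": (62.8, 51.0),   # lower is better
}
for name, (xmp, tuned) in pairs.items():
    change = (tuned - xmp) / xmp * 100
    print(f"{name}: {xmp} -> {tuned} ({change:+.1f}% vs XMP)")
# Bandwidth comes out to roughly +17%, +32% and +25%; latency drops about 18.8%,
# which is the same gap expressed as "XMP is 23.1% higher" in the post above.
```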
geogan:

I'm just laughing at the insane number of clock cycles they are getting away with these days... CL34-42-42-84... They basically just increase the RAM frequency because that is what uninformed users look at and think "faster megahertz = better", but in order to run faster they have to increase the number of clock ticks it takes to do everything (the wait time), so overall the frequency increase is cancelled out. E.g. if operation A can be done in 20 ticks at 1000 MHz, then you can run the same memory at 2000 MHz but have to wait about 40 ticks instead - it still takes the same or a similar amount of time. I used to have high-end DDR3 RAM from Corsair which ran at CL7-7-7-8, FFS.
Then create a memory standard that does a better job.
geogan:

I'm just laughing at the insane number of clock cycles they are getting away with these days... CL34-42-42-84... They basically just increase the RAM frequency because that is what uninformed users look at and think "faster megahertz = better", but in order to run faster they have to increase the number of clock ticks it takes to do everything (the wait time), so overall the frequency increase is cancelled out. E.g. if operation A can be done in 20 ticks at 1000 MHz, then you can run the same memory at 2000 MHz but have to wait about 40 ticks instead - it still takes the same or a similar amount of time. I used to have high-end DDR3 RAM from Corsair which ran at CL7-7-7-8, FFS.
It's not that simple. Latency often means when something starts, not how long each individual step takes.
Amazing that there are people really comparing this against DDR3. This kit has about 3.2x the bandwidth of 2400 C11 DDR3 (108.5 GB/s vs 34.3 GB/s) for 38% added latency (62 ns vs 45 ns). All it takes to figure that out is being able to read the numbers in the AIDA memory section, so why are people so lazy and ignorant? I was a critic of DDR5, and still am for people who upgrade budget systems, but at this point, when a 6000 C32 Hynix 32 GB kit can be found for 179 EUR, I'd buy it myself for a new build.
Even comparing price against price, 6000 C32 vs 4400 C19 32 GB (~180 EUR each) is about 30% higher bandwidth for 25% higher latency. I'd prefer the DDR4, but you have to remember that's really the ceiling for value-oriented DDR4, while DDR5 is only picking up pace; we'll be seeing 7000 C32 kits in this price range in a year's time. By the time DDR5 is as mature as DDR4 is now, it'll smoke it. Compare this kit (7200 C34) against 3600 C17 DDR4 and you get 2.14x the bandwidth for 1.48x the latency.
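For anyone redoing these kit-vs-kit comparisons from the spec sheet alone, a rough sketch; note this only covers the transfer-rate ratio and first-word CAS latency, not the measured AIDA64 bandwidth and loaded latency the posts above are quoting:

```python
# Rough spec-sheet comparison of two kits given as (transfer rate in MT/s, CAS latency in cycles).
# Peak bandwidth scales with the transfer rate; first-word latency is CL over the memory clock.
def compare(kit_a, kit_b):
    (rate_a, cl_a), (rate_b, cl_b) = kit_a, kit_b
    rate_ratio = rate_b / rate_a
    lat_a_ns = cl_a / (rate_a / 2) * 1000
    lat_b_ns = cl_b / (rate_b / 2) * 1000
    return rate_ratio, lat_a_ns, lat_b_ns

print(compare((4400, 19), (6000, 32)))  # ~1.36x rate; 8.6 ns vs 10.7 ns CAS (~24% higher)
print(compare((3600, 17), (7200, 34)))  # 2.0x rate; 9.4 ns vs 9.4 ns CAS (measured latency differs)
```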
I've been waiting for a DDR5 clock speed/latency comparison. My system is essentially bottlenecked with 128 GB of DDR5, where I have to make one of the following two choices with my RAM kits (G.Skill Trident Z5 4x32 GB DDR5 6000 MHz CL32 sticks):
- 128 GB DDR5 @ 4800 MHz CL32 (or the CL40 default for stability - not sure what my builder set it to right now)
- 64 GB DDR5 at full speed (6000 MHz CL32)
Which would you choose for now for performance? I do tend to run CFD applications that require over 64 GB of RAM, but I can use another workstation for the time being until a future BIOS update resolves the DDR5 issues. The priority is essentially 4K120/1080p500 gaming, multitasking (heavy Chrome tab use) and trading charts/applications, high-bandwidth music playback (32-bit/384 kHz), and music production and editing.
For reference, the rest of my build is an i9-13900KS, 4090 Suprim X Liquid, Asus ROG Maximus Extreme Z790 motherboard (a future BIOS update is what would fix my dilemma, assuming 64 GB DDR5 sticks don't come out by then), an EVGA 1600W SuperNova P2 80+ Platinum, 2x 4TB WD SN850X NVMe SSDs, a Corsair H170i Elite 420mm CPU cooler w/LCD (I wanted the Ryujin II), a Cooler Master HAF 700 Evo case, Windows 11 Pro, and an Asus ROG PG42UQ OLED 4K 138 Hz 16:9 display.
The absolute difference in RAM latency is 10.7 ns (6000/CL32) vs 13.3 ns (4800/CL32) vs 16.7 ns (4800/CL40) if my calculations are right. That's a 25-56% increase in latency with the slower 4800 MHz default four-DIMM speeds over the full-speed option with half the RAM. Any input is appreciated, thanks!
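Those latency figures check out against the same first-word arithmetic used earlier (CL divided by the memory clock, i.e. half the transfer rate); a short verification sketch:

```python
# Verify the three first-word latency figures quoted above (CL / memory clock, clock = MT/s / 2).
configs = {"6000 CL32": (6000, 32), "4800 CL32": (4800, 32), "4800 CL40": (4800, 40)}
for name, (rate, cl) in configs.items():
    print(name, round(cl / (rate / 2) * 1000, 1), "ns")
# -> 10.7 ns, 13.3 ns and 16.7 ns, and 13.3/10.7 and 16.7/10.7 are the ~25% and ~56%
#    increases mentioned in the post. Loaded latency and bandwidth will differ in practice.
```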