Core i9 13900K DDR5 7200 MHz (+memory scaling) review
fredgml7
cucaulay malkin
You're free to be in as much denial as you want, but I've seen RAM speed matter in CPU-limited scenes since the 2500K days.
I played the whole of Far Cry 3 on DDR3-1066 with no problem on a 2500K at 4.7 GHz, and then there was this Vaas fight scene near the end of the game where it kept dropping into the 40s. I thought my card was broken, tried a hundred times with the same result (definition of insanity, right?), until someone on ocn.net told me to get a 2133 kit, saying they'd seen the same thing in Tomb Raider. Got a 2133 kit and boom, back to 60 fps in the same location. Same thing in FC4 outposts/fortresses.
If RAM speed/latency didn't matter, the 5800X3D wouldn't beat the 5800X. That win comes entirely from faster data access thanks to the extra stacked cache with high bandwidth and low latency; faster system RAM does the same thing, just to a lesser degree.
cucaulay malkin
If you only have high-end hardware you don't notice it even when it happens; could you tell 270 fps from 220 fps?
cucaulay malkin
A 13600K will still produce a lot more fps than a 6900 XT needs at 4K, even with DDR4. There's a benefit to running DDR4 too: you can use Gear 1 on 12th/13th gen.
TLD LARS
cucaulay malkin
Must have been GPU-limited in that CP2077 run; it's a very GPU-heavy game, after all. Maybe you tested the built-in benchmark or a GPU-heavy location.
It happens. A modern 8-core/16-thread CPU with half-decent memory should still produce very good fps; 8 cores is the standard these days, even for value systems.
DirtyDee
0.1% and 1% lows are noticeably higher with faster RAM, and average FPS is higher when running at low settings. The gap would also have been greater had an RTX 4090 been used instead.
H83
schmidtbag
geogan
I'm just laughing at the insane timings they are getting away with these days... CL34-42-42-84... They basically just increase the RAM frequency because that's what uninformed users look at, thinking "faster megahertz = better", but to run faster they have to increase the number of clock ticks every operation takes (the wait time), so for latency the frequency increase is largely cancelled out. E.g. if operation A takes 20 ticks at 1000 MHz, then the same memory run at 2000 MHz has to wait about 40 ticks instead, so it still takes the same or similar amount of time. I used to have high-end DDR3 RAM from Corsair which ran at CL7-7-7-8, FFS.
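The timings-vs-frequency trade-off above is easy to check: first-word latency is CL divided by the I/O clock, and the I/O clock is half the transfer rate (DDR = double data rate). A minimal sketch, assuming DDR3-1600 for the Corsair CL7 kit mentioned (the exact speed isn't stated in the comment):

```python
# First-word CAS latency in nanoseconds.
# The I/O clock is half the MT/s transfer rate, so:
#   latency_ns = CL / (MT/s / 2) * 1000 = 2000 * CL / MT/s
def cas_latency_ns(transfer_rate_mt_s: float, cl: int) -> float:
    return 2000.0 * cl / transfer_rate_mt_s

# Example kits (DDR3-1600 is an assumed speed for the CL7 kit):
for name, mt_s, cl in [("DDR3-1600 CL7", 1600, 7),
                       ("DDR5-7200 CL34", 7200, 34)]:
    print(f"{name}: {cas_latency_ns(mt_s, cl):.2f} ns")
# → DDR3-1600 CL7: 8.75 ns
# → DDR5-7200 CL34: 9.44 ns
```

So in absolute nanoseconds the old low-CL DDR3 and new high-CL DDR5 land in the same ballpark, which is the commenter's point; the difference is that the DDR5 kit moves several times more data per second at that latency.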
Typhon Six Six Six
Agent-A01
asturur
TL;DR => not worth it, as usual.
I'm amazed at how many people want to find a difference at any cost.
Reviews need to be general. For the normal guy who installs a game and plays it, how much is it worth spending extra cash on extra-fast RAM? Nothing.
As for those kinds of people:
- facing the game camera at the CPU-worst location
- executing specific heavy loads for which they need a RAM metric
- running a NES game on an N64 NES emulator inside Yuzu
those people know exactly what memory does for them and can move forward with shopping by themselves.
I don't think there is value in hunting down a performance increase at any cost just to prove we are good at benchmarking, at the risk of misleading buyers.
TLD LARS
Agent-A01
Aura89
nosirrahx
cucaulay malkin
Amazing that there are people really comparing this against DDR3.
This kit has 3.2x the bandwidth of a 2400 C11 DDR3 kit (108.5 GB/s vs 34.3 GB/s) for 38% added latency (62 ns vs 45 ns).
All it takes to figure that out is being able to read the numbers in the AIDA memory section. Why are people so lazy and ignorant?
I was a critic of DDR5, and still am for people upgrading budget systems, but at this point, when a 6000 C32 Hynix 32 GB kit can be found for 179 EUR, I'd buy it myself for a new build.
cucaulay malkin
Even comparing price vs. price, 6000 C32 vs 4400 C19 32 GB (~180 EUR):
30% higher bandwidth for 25% higher latency. I'd prefer the DDR4, but you have to remember that's really the ceiling for value-oriented DDR4, while DDR5 is only picking up pace; we'll be seeing 7000 C32 kits in this price range in a year's time. By the time DDR5 is as mature as DDR4 is now, it'll smoke it.
Compare this kit (7200 C34) against 3600 C17 DDR4 and you get 2.14x the bandwidth for 1.48x the latency.
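The ratios quoted above are straightforward to sanity-check from the AIDA figures given in this thread. A quick sketch using the DDR5-7200 C34 vs DDR3-2400 C11 numbers:

```python
# Ratio check from the AIDA64-style numbers quoted in the thread:
#   DDR5-7200 C34: 108.5 GB/s read, 62 ns latency
#   DDR3-2400 C11:  34.3 GB/s read, 45 ns latency
bw_ratio = 108.5 / 34.3   # bandwidth multiple
lat_ratio = 62.0 / 45.0   # latency multiple
print(f"{bw_ratio:.2f}x the bandwidth for {lat_ratio:.2f}x the latency")
# → 3.16x the bandwidth for 1.38x the latency
```

That works out to roughly 3.2x the bandwidth for 38% added latency, matching the comparison above.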
Hyperflux
I've been waiting for a DDR5 clock speed/latency comparison. My system is essentially bottlenecked at 128 GB of DDR5, where I have to make one of the following two choices with my RAM kits (G.Skill Trident Z5 4x32 GB DDR5-6000 CL32 sticks):
-128 GB DDR5 @ 4800 MHz CL32 (or the CL40 default for stability; not sure what my builder set it to right now)
-64 GB DDR5 at full speed (6000 MHz CL32)
Which would you choose for performance right now? I do tend to run CFD applications that require over 64 GB of RAM, but I can use another workstation for the time being until a future BIOS update resolves the DDR5 issues. The priority is essentially 4K120/1080p500 gaming, multitasking (heavy Chrome tab use) and trading charts/applications, high-bandwidth music playback (32-bit/384 kHz), and music production and editing.
For reference, the rest of my build is an i9-13900KS, 4090 Suprim X Liquid, Asus ROG Maximus Extreme Z790 motherboard (a future BIOS update is what would fix my dilemma, assuming 64 GB DDR5 sticks don't come out by then), EVGA 1600W SuperNova P2 80+ Platinum, 2x 4TB WD SN850X NVMe SSDs, Corsair H170i Elite 420mm CPU cooler w/LCD (I wanted the Ryujin II), Cooler Master HAF 700 Evo case, Windows 11 Pro, Asus ROG PG42UQ OLED 4K 138 Hz 16:9 display.
The absolute difference in RAM latency is 10.7 ns (6000 MHz/CL32) vs 13.3 ns (4800 MHz/CL32) vs 16.7 ns (4800 MHz/CL40), if my calculations are right. That's a 25-56% increase in latency with the slower 4800 MHz default four-DIMM speeds over the full-speed option with half the RAM.
Any input is appreciated, thanks!
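The latency figures in the post above can be reproduced from CL and transfer rate alone. A quick check of the three configurations mentioned:

```python
# First-word CAS latency in ns: latency_ns = CL / (MT/s / 2) * 1000
configs = [("6000 CL32", 6000, 32),
           ("4800 CL32", 4800, 32),
           ("4800 CL40", 4800, 40)]

results = {}
base_ns = 2000 * 32 / 6000  # fastest config as the baseline
for name, mt_s, cl in configs:
    ns = 2000 * cl / mt_s
    results[name] = ns
    print(f"{name}: {ns:.1f} ns (+{(ns / base_ns - 1) * 100:.0f}% vs 6000 CL32)")
# → 6000 CL32: 10.7 ns (+0% vs 6000 CL32)
# → 4800 CL32: 13.3 ns (+25% vs 6000 CL32)
# → 4800 CL40: 16.7 ns (+56% vs 6000 CL32)
```

So the 10.7/13.3/16.7 ns figures and the 25-56% spread both check out. Note this is raw CAS latency only; real-world memory latency as measured by AIDA also depends on the memory controller, gear mode, and secondary timings.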