AMD Ryzen 7 5800X3D - 1080p and 720p gaming gets tested

In my opinion, 720p testing should not be a thing, as I said in the past when Intel CPUs were ahead... What's next, 480p to find meaningfully different numbers?
nizzen:

DDR4 capable of 4266C14 in Gear 1 is more expensive than DDR5 😛 Even then, an IMC that good is hard to find. I'm using some cheap green Dell Hynix DDR5; 7000C30 is easy on air. My G.Skill 6400C32 is twice the price and is just a bit better. Maybe 7200 is my IMC wall for 100% stability.
The price difference between the G.Skill 6400 CL32 DDR5 and a DDR4 G.Skill 4000 CL18 kit is something like 3x. In other words, it is possible to buy the DDR4 4000 32 GB kit, a cheap AM4 motherboard and a 5800X for the same money as the G.Skill 6400 CL32 DDR5 kit alone. Get a DDR4 4000 kit, downclock it to the maximum AMD 1:1 speed, and put the 400€ saved in your pocket. By the way, what did the cheap Dell memory cost? You have mentioned multiple times that it was cheap, but I have never seen the price or part number.
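For anyone weighing these kits on paper, here is a minimal sketch of the first-word (CAS) latency arithmetic that usually frames this DDR4-vs-DDR5 argument; the speeds and CAS numbers are taken from the posts above, while the function and kit names are just for illustration.

```python
# First-word (CAS) latency in nanoseconds. DDR transfers twice per clock,
# so one CL cycle takes 2000 / (transfer rate in MT/s) nanoseconds.
def cas_latency_ns(speed_mts: int, cl: int) -> float:
    return cl * 2000.0 / speed_mts

# Kit speeds and CAS numbers taken from the posts above; prices are not modeled.
kits = {
    "DDR4-4266 CL14 (Gear 1)": (4266, 14),
    "DDR4-4000 CL18": (4000, 18),
    "DDR5-6400 CL32": (6400, 32),
    "DDR5-7000 CL30": (7000, 30),
}

for name, (speed, cl) in kits.items():
    print(f"{name}: {cas_latency_ns(speed, cl):.2f} ns first-word latency")
```

Note that this is only the CAS component; the loaded latencies quoted elsewhere in the thread (roughly 35-52 ns) also include memory controller and fabric overhead, which is where Gear 1 / 1:1 operation comes into play.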
Sheesh... A lot of back and forth in this thread. I don't really care who is the fastest; I'm more impressed with the performance lift that comes from the L3 cache alone. It means that going forward, we could see big gains from a big cache for certain workloads, versus boosting power/thermals to improve IPC for the same result.
rl66:

But a complete AMD system is less expensive.
That doesn't really matter. The CPUs being compared aren't really comparable anyway: the 12900KF costs 200 bucks more than what the 5800X3D is expected to cost. Since this is a comparison that totally ignores price, I said the memory shouldn't be comparable either; it should be the fastest model reasonably available. Of course, the total memory size should be the same so that the games don't behave differently.
Venix:

In my opinion, 720p testing should not be a thing, as I said in the past when Intel CPUs were ahead... What's next, 480p to find meaningfully different numbers?
The thing is that people test at such low resolutions to test the CPUs, then crank up the details, where any significant difference diminishes.
nizzen:

Search "agesa whea" problem 😛 I'm on my #22 bios on X570 dark Hero... Stable now, but had a "few" hickups with 5900x and 5950x... 11900k was 100% stable from first bios.
I do know of the problem, but I'll go out on a limb here and say that it was exacerbated a lot by low-IQ Reddit users. Five of my friends have 5000 series CPUs and do not have that problem; granted, they just update their BIOS, click XMP and game. I won't defend AMD for not being able to solve or know about this problem with their products, but I'm old enough to know that every time companies reinvent the wheel there are going to be teething problems. You either haven't bought as much hardware as I have, or you are being deliberately ignorant. Your 11900K is the same f'ing ring bus CPU that Intel has been refining for over a decade, and which allowed them to f'ing rip customers off. I also want to add that from the original 2600K, all Intel did was improve the clocks, add USB ports and remove or add pins to the sockets so you'd rebuy the same CPU every two years. I wouldn't expect you to have any problems with an Intel CPU from the 2600K to the 11 series, as it's all the same with more cores.
~AngusHades~:

Sounds like a You problem.
Yeah, right? I've been using an R5 2600 like forever... no hardware-related issues, or even any major software issues.
Venix:

In my opinion, 720p testing should not be a thing, as I said in the past when Intel CPUs were ahead... What's next, 480p to find meaningfully different numbers?
Completely agree with this. A lot of game testing in general revolves around resolutions no one is playing at, given the price of the hardware being tested, and games that have 1% lows over 120 FPS even on midrange hardware. "This $4000 system plays game X at 547 FPS at 720p." <- wow, such critical information. If a CPU/GPU is top of the line, 1440p and 4K are really all that matters. Games that have 1% lows above 120 FPS on 3-year-old midrange hardware aren't even worth testing.
I think my 3733 MHz CL15 is plenty fast, with a tight 35-36 ns latency, for years to come 😛
mattm4:

Sheesh... A lot of back and forth in this thread.
But, but that's why I love this place!...(now where's my coffee?)
Venix:

In my opinion, 720p testing should not be a thing, as I said in the past when Intel CPUs were ahead... What's next, 480p to find meaningfully different numbers?
We need to test at lower resolutions to see the real difference between CPUs; otherwise, tests are almost useless. If we test CPUs at 4K, for example, performance differences are going to be minimal, giving the (false) impression that all the parts are equally fast. Anyway, the results seem very good. I wonder if it's possible to test the CPU with 2 and 4 cores disabled, to see how much the extra cache really matters, and also to see how more cores impact performance. I need Guru3D's review!
H83:

We need to test at lower resolutions to see the real difference between CPUs; otherwise, tests are almost useless.
Absolutely. This is why we talk about some games being CPU-bound, and it's where frame rate consistency is best measured. A lot of folks talk about 1% lows and even 0.01% lows, but frame-to-frame is the best analysis of gameplay, as it shows (especially on Win 11) scheduling latencies. I couldn't care less about hypothetical max frames at 4K, as only 4-6 GPUs can even reach 120 Hz+ at 4K across a broad variety of games. What I'm getting at is that the people who do not see 120+ FPS at 4K as a rhetorical exercise are few and far between, numbering in the thousands, while everyone else numbers from the tens of thousands to the millions. Sub-4K is where the demons of CPUs live. AND Win 11 is still a major factor causing FPS drops (versus Win 10 on the same equipment), large latency spikes between frames, and other related issues at 1440p too, not just 1080p.
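Since 1% lows and frame-to-frame deltas come up a lot here, a rough sketch of how those numbers are typically computed from a frame-time log; the frame_times_ms values are made up for illustration, and a real capture would come from a tool like PresentMon or CapFrameX.

```python
# Rough sketch of the frame-time metrics mentioned above: average FPS,
# 1% low FPS, and the largest frame-to-frame jump (a stutter indicator).
# frame_times_ms is made-up placeholder data.
frame_times_ms = [6.9, 7.1, 7.0, 7.3, 6.8, 14.2, 7.0, 7.2, 6.9, 7.1]

def fps_from_ms(ms: float) -> float:
    return 1000.0 / ms

avg_fps = fps_from_ms(sum(frame_times_ms) / len(frame_times_ms))

# 1% low: FPS equivalent of the slowest 1% of frames (at least one frame here).
slowest = sorted(frame_times_ms, reverse=True)
worst_slice = slowest[: max(1, len(slowest) // 100)]
low_1pct_fps = fps_from_ms(sum(worst_slice) / len(worst_slice))

# Frame-to-frame delta: the biggest jump between consecutive frames.
max_delta_ms = max(abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:]))

print(f"avg {avg_fps:.0f} fps | 1% low {low_1pct_fps:.0f} fps | "
      f"worst frame-to-frame jump {max_delta_ms:.1f} ms")
```

The single 14.2 ms frame in the made-up data barely moves the average but dominates the 1% low and the frame-to-frame delta, which is exactly why those metrics catch stutter that average FPS hides.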
mohiuddin:

Wow, awesome performance. But they should have let Alder Lake stretch its performance to its full potential using DDR5 RAM with tight timings.
But then it's not an apples to apples comparison. Why not just swap out the GPU for a faster one? Why not overclock the AL? It's a CPU benchmark - you want to keep things as similar as possible except the CPU itself. That being said, knowing temperatures would be good, because a good CPU performance test is one where neither CPU is thermally throttled.
barrybondz:

As someone who owns a 12900K and has run tests on The Witcher 3, Borderlands 3 and Shadow of the Tomb Raider, these results are way off from my numbers. I literally just posted a screenshot of settings-for-settings 720p results and obliterated the 5800X3D results. If these numbers are accurate, I am not impressed at all.
Are you using the same RAM, GPU, and clock settings? Because those will make a difference.
Venix:

In my opinion, 720p testing should not be a thing, as I said in the past when Intel CPUs were ahead... What's next, 480p to find meaningfully different numbers?
The point of 720p is to make the CPU the bottleneck instead of GPU. Otherwise yeah, it's stupid, because once you get past 144FPS, there's really not much of a noteworthy benefit of going faster. Only a small handful of people can actually take advantage of framerates higher than that, regardless of whether they can see the difference.
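To put that bottleneck point in numbers, a toy model where each frame costs the larger of the CPU and GPU frame times; the per-frame costs and CPU names below are made-up values purely to illustrate why a gap shows up at 720p and vanishes at 4K.

```python
# Toy bottleneck model: each frame costs max(CPU time, GPU time), so the
# reported FPS is set by whichever side is slower. GPU cost grows with
# resolution; CPU cost mostly does not. All numbers are illustrative only.
cpu_frame_ms = {"CPU A": 4.0, "CPU B": 5.0}              # hypothetical CPU cost per frame
gpu_frame_ms = {"720p": 2.0, "1080p": 4.5, "4K": 12.0}   # hypothetical GPU cost per frame

for res, gpu_ms in gpu_frame_ms.items():
    for cpu, cpu_ms in cpu_frame_ms.items():
        fps = 1000.0 / max(cpu_ms, gpu_ms)
        print(f"{res:>5} | {cpu}: {fps:6.1f} fps")
```

In this sketch the two hypothetical CPUs are 25% apart at 720p, about 11% apart at 1080p, and identical at 4K, which is exactly the convergence being argued about above.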
schmidtbag:

But then it's not an apples to apples comparison. Why not just swap out the GPU for a faster one? Why not overclock the AL? It's a CPU benchmark - you want to keep things as similar as possible except the CPU itself. That being said, knowing temperatures would be good, because a good CPU performance test is one where neither CPU is thermally throttled. Are you using the same RAM, GPU, and clock settings? Because those will make a difference. The point of 720p is to make the CPU the bottleneck instead of GPU. Otherwise yeah, it's stupid, because once you get past 144FPS, there's really not much of a noteworthy benefit of going faster. Only a small handful of people can actually take advantage of higher framerates, regardless of whether they can see the difference.
When you test a CPU, take away other bottlenecks like the GPU and memory. Memory is a huge bottleneck in many games at 100+ FPS, so why not take away most of that bottleneck too?
nizzen:

So you are the AMD fanboy here? 🙄 I'm just a performance fanboy. I don't care about color or name. Neither should you 🙂
Based on what you say, I notice at least a preference, and that's OK; you don't need to be ashamed of it. You seem to care too much about your choices being seen as the best ones, because you "clearly know better", and that's kinda different from just having an opinion. In my case, though it's not decisive, I have a little preference for Intel stuff. In my house we have 5 computers and only my main rig is AMD, but it could be all AMD (based on cost/benefit, opportunity, etc.).
Games have never been a good test for a CPU. However, since the 5800X3D is targeting gamers, how else should we compare them? At 1440p and above, all of them (from the 5700X up to the 12900KF) are going to be GPU-limited and will produce around 5% average FPS variance.
nizzen:

When you test a CPU, take away other bottlenecks like the GPU and memory. Memory is a huge bottleneck in many games at 100+ FPS, so why not take away most of that bottleneck too?
Not wrong at all, but I understand the case for using the SAME RAM kit. In an ideal world, we would use the SAME fastest RAM kit to test both CPUs in Gear 1 or with 1:1 IF, but not all IMCs are created equal.
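To make the Gear 1 / 1:1 IF point concrete, a small sketch of the clock relationship involved; the 1900 MHz ceiling below is purely an assumed example for illustration, since the real wall varies chip to chip.

```python
# In 1:1 ("Gear 1") operation the memory controller clock (and, on AM4,
# the Infinity Fabric clock) runs at the memory clock, which is half the
# DDR transfer rate. The stable ceiling varies per CPU sample; the
# 1900 MHz limit here is only an assumed example, not a spec.
ASSUMED_FABRIC_LIMIT_MHZ = 1900

for mts in (3600, 3866, 4000, 4266):
    memclk_mhz = mts / 2
    verdict = "1:1 OK" if memclk_mhz <= ASSUMED_FABRIC_LIMIT_MHZ else "likely 2:1 or a golden IMC"
    print(f"DDR4-{mts}: memory clock {memclk_mhz:.0f} MHz -> {verdict}")
```

That half-rate requirement is why kits like DDR4-4266 CL14 only pay off on the rare IMCs that can actually hold the matching controller/fabric clock.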
nizzen:

When you test a CPU, take away other bottlenecks like the GPU and memory. Memory is a huge bottleneck in many games at 100+ FPS, so why not take away most of that bottleneck too?
That's totally fine, so long as BOTH CPUs are getting the memory upgrade.
schmidtbag:

That's totally fine, so long as BOTH CPUs are getting the memory upgrade.
Yep, that is why I compare a max-tuned 5950X vs a max-tuned 12900K at home. The 5950X maxes out at 3866C14 tuned to 52 ns, and the 12900K has 7200C30 at 48 ns. Latency will be a bit lower with "Ghost Spectre" Win 10 vs the standard Win 11 I use.
nizzen:

Yep, that is why I compare a max-tuned 5950X vs a max-tuned 12900K at home. The 5950X maxes out at 3866C14 tuned to 52 ns, and the 12900K has 7200C30 at 48 ns. Latency will be a bit lower with "Ghost Spectre" Win 10 vs the standard Win 11 I use.
While there's nothing at all wrong with seeing how far you can push a product, it's a rather meaningless benchmark because there are so many variables involved that can dramatically change your results. There's no longer an apples-to-apples comparison once you have a wide variety of things that are finely tuned and perhaps aren't of the same silicon quality. With the tuning you got on your hardware, there are people who will get much better and much worse results with either of the CPUs you have, so there's no takeaway from such a test other than "hmm, interesting". Your results don't prove which product is better; they just prove which of your particular samples is better.

The kind of overclocking Hilbert does is more meaningful - he OCs the hardware to levels that anyone can achieve, thereby showing how much headroom the products reliably have, which is useful to know. So if he can achieve a 20% overclock with minimal tweaking, that shows a product has a lot of potential. If he gets a 5% overclock after an hour of tweaking, that shows the product is already near its limits. Benchmarks are meant to help people know whether the product works as advertised and how well it compares to the competition under normal conditions. Once you start changing variables to suit one product over another, it's not a benchmark anymore; it's simply a test.