Review: PCIe Resizable BAR Performance - Radeon RX 6000 and GeForce RTX 3000 series benchmarks

Published by Hilbert Hagedoorn

https://forums.guru3d.com/data/avatars/m/278/278016.jpg
cucaulay malkin:

I don't know if using Intel or AMD makes any difference, but will your face be red when it turns out it doesn't.
https://forums.guru3d.com/data/avatars/m/54/54823.jpg
Framebuffers weren't large enough to justify SAM before now, plus AMD is looking to get any edge over Nvidia it can. Memory bandwidth is also a factor, now that we are smashing through 1 TB/sec.
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
@Hilbert Hagedoorn: Did you by chance record power-draw data during the tests too? It would be interesting to see the correlation between the change in fps and the change in power draw, or at least GPU utilization. Does that big fps difference in AC: Valhalla result in increased power draw/GPU utilization, or is it practically "free" performance? One extra thing to consider: since AC: Valhalla shows such a big difference, would it be possible to take just one card from each side, AMD/nVidia, and test it on PCIe 3.0 with SAM off/on? It could be interesting to see whether PCIe 3.0 with SAM on delivers better performance than PCIe 4.0 with SAM off.
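On the NVIDIA side, something along these lines could log power and utilization next to a benchmark run, so the fps delta could be correlated with power afterwards. This is only a rough sketch assuming the pynvml bindings and an NVIDIA card (AMD would need a different telemetry tool), and the one-second sampling for five minutes is an arbitrary choice:

# Rough sketch: sample GPU power draw and utilization while a benchmark runs,
# so SAM/ReBAR on vs. off fps deltas can be correlated with power afterwards.
# Assumes an NVIDIA GPU and the pynvml bindings; interval and duration are arbitrary.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

samples = []
for _ in range(300):  # roughly five minutes at one sample per second
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0    # NVML reports milliwatts
    gpu_util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu  # percent
    samples.append((power_w, gpu_util))
    time.sleep(1.0)

pynvml.nvmlShutdown()

avg_power = sum(p for p, _ in samples) / len(samples)
avg_util = sum(u for _, u in samples) / len(samples)
print(f"average power: {avg_power:.1f} W, average GPU utilization: {avg_util:.0f} %")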
https://forums.guru3d.com/data/avatars/m/282/282392.jpg
I'd be interested to see any change in power draw too, good point @Fox2232.
https://forums.guru3d.com/data/avatars/m/209/209146.jpg
Valhalla doesn't really seem to cause the GPU to draw that much power, though it'd be interesting to see a comparison, since there could still be a slight increase in power draw; ReBAR looks to be a bit heavier on the GPU, mostly in the few games seeing higher-than-average gains, and whatever is going on with Valhalla, since it's ~20% or so while the other big gainers are at up to 10%, hah. Forza 4, Borderlands 3, and I think Red Dead Redemption 2, in addition to Valhalla. (I guess the whitelist NVIDIA has is a handy way to quickly see which titles scale really well, although then there's Watch Dogs: Legion and its stuttering as a bit of an exception.)
https://forums.guru3d.com/data/avatars/m/224/224952.jpg
cucaulay malkin:

I don't know if using Intel or AMD makes any difference, but will your face be red when it turns out it doesn't.
This is an interesting article comparing the 10900K to the 5950X in Cyberpunk, and single-rank vs dual-rank RAM: https://www.igorslab.de/en/up-to-17-more-fps-at-1080p-ultra-cyberpunk-2077-in-big-ram-test-round-trip-rate-and-ranks/2/ Intel responds better to higher-speed RAM 'in this game'. It would be interesting to see the results with other games and with/without ReBAR. It's possible there is nothing more to give, so it won't make a difference, but what if ...
https://forums.guru3d.com/data/avatars/m/282/282473.jpg
Mufflore:

This is an interesting article comparing the 10900K to the 5950X in Cyberpunk, and single-rank vs dual-rank RAM. Intel responds better to higher-speed RAM 'in this game'. It would be interesting to see the results with other games and with/without ReBAR. It's possible there is nothing more to give, so it won't make a difference, but what if ...
Intel always responds a little better to frequency and AMD to timings; it has nothing to do with what he said, though.
https://forums.guru3d.com/data/avatars/m/224/224952.jpg
cucaulay malkin:

Intel always responds a little better to frequency and AMD to timings; it has nothing to do with what he said, though.
Yeah, his comment was not well put; it deserved a rebuttal.
https://forums.guru3d.com/data/avatars/m/198/198862.jpg
cucaulay malkin:

Intel always responds a little better to frequency and AMD to timings; it has nothing to do with what he said, though.
It was the other way around a few generations ago.
data/avatar/default/avatar12.webp
Thanks for the interesting article. For the pages where the NVIDIA and AMD cards have separate graphs, it would be nice if they at least used the same scale, so it would be possible to visually compare the performance of these cards.
https://forums.guru3d.com/data/avatars/m/282/282473.jpg
Resizable BAR doesn't make any more difference than, say, HAGS, bar one game where it's 10-15% in the best case, so the best case of the best case is 15%. It looks like much ado about nothing: you're getting 1 fps when you're below that critical 60 fps barrier, in a couple of whitelisted games where it doesn't reduce performance. Seven games with the 6800 XT at 1440p: 2.9% difference on average, and without Valhalla it's 1.8% on average. So much hype for nothing. And you can't use HAGS with SAM enabled, I hear, so you're trading a 1% performance gain for 2-3%; now that is a breakthrough.
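For what it's worth, the implied Valhalla number can be backed out from just those two averages; a quick sketch using only the figures quoted above (seven titles, 2.9% mean with Valhalla, 1.8% without):

# Back out the single outlier's gain from the two averages quoted above.
n = 7                 # games tested with the 6800 XT at 1440p
mean_with = 2.9       # average gain in percent, Valhalla included
mean_without = 1.8    # average gain in percent, Valhalla excluded

valhalla_gain = n * mean_with - (n - 1) * mean_without
print(f"implied AC: Valhalla gain: {valhalla_gain:.1f} %")  # about 9.5 %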
https://forums.guru3d.com/data/avatars/m/258/258688.jpg
ReBAR has never been about "just the PCIe bus", and it has never been a simple matter of setting the BIOS up for ReBAR at all. Had it been that simple, we'd have been doing it for years. What it does is allow the CPU to access a lot more than 256 MB at a time of the onboard GPU RAM, across the PCIe bus (PCIe 4.0, the faster the better). Not only do the drivers have to be made ReBAR-aware, they also have to use ReBAR as it is intended. That takes a lot more than simply rejigging the system BIOS settings: the GPU hardware, the GPU BIOS, and the drivers all have to be made to use ReBAR, or you get little to nothing from it. It took nVidia ~18 months after AMD to ship its first PCIe 4.0 GPUs, and unlike AMD's SAM, ReBAR was never intended by nVidia as a feature of the RTX 3000 series.

Additionally, the purpose of a memory cache is to put the most-used data where the CPU can get to it the quickest, which is why AMD built what it calls the Infinity Cache into its RX 6000 hardware; nVidia has no equivalent at the moment. I hope that nVidia will copy AMD in this regard and that the two companies can agree on a D3D and/or Vulkan API standard, because in the future things could really pop out of this concept. The people who once said that AMD was simply shooting the breeze because "anybody" could do ReBAR by just changing the BIOS settings have been proven wrong, again, but there's one more element that will come into play: game engines themselves will need to be written to address more than 256 MB of data at a time from the GPU across the PCIe 4.0 bus (or faster), or from the GPU to the CPU, and then we could very well see much more substantial improvements in performance, imo.
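As an aside, whether a card actually exposes a BAR covering the whole VRAM (rather than the classic 256 MB aperture) can be checked from the operating system. Below is a minimal sketch assuming a Linux box; the PCI address 0000:01:00.0 is only a placeholder for whatever lspci reports for the GPU:

# Minimal sketch: print the size of each standard PCI BAR the GPU exposes.
# With Resizable BAR active, one memory BAR should cover (nearly) the whole VRAM
# instead of the classic 256 MB window. Assumes Linux sysfs; the address is a placeholder.
GPU_PCI_ADDR = "0000:01:00.0"  # adjust to your card, see lspci

with open(f"/sys/bus/pci/devices/{GPU_PCI_ADDR}/resource") as f:
    for bar_index, line in enumerate(f):
        if bar_index > 5:          # only the first six entries are the standard BARs
            break
        start, end, _flags = (int(field, 16) for field in line.split())
        if end > start:            # unused BARs read back as all zeros
            size_mb = (end - start + 1) / (1024 * 1024)
            print(f"BAR {bar_index}: {size_mb:.0f} MB")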
https://forums.guru3d.com/data/avatars/m/259/259654.jpg
Basically this, plus things like HAGS, means the GPU/CPU subsystem gets rid of a lot of bottlenecks, and both can access memory space better, including NVMe I/O etc. They seem insignificant now, but they will certainly matter by the middle of this console generation. I cannot see, for example, how the Microsoft Velocity Architecture could even work properly without HAGS + ReBAR.
https://forums.guru3d.com/data/avatars/m/166/166706.jpg
cucaulay malkin:

What a strange case the 6800 is. I mean overall, not just in SAM performance: 18% slower than the 6800 XT in Legion, 4% in CP2077.
I think Guru3D got a bad silicon-lottery sample; mine was faster out of the box and overclocked much higher.
https://forums.guru3d.com/data/avatars/m/282/282473.jpg
Raider0001:

I think Guru3D got a bad silicon-lottery sample; mine was faster out of the box and overclocked much higher.
This doesn't explain the difference.
https://forums.guru3d.com/data/avatars/m/256/256969.jpg
Thanks for this extended test review. It makes me consider reinstalling Assassin's Creed: Valhalla. Not a huge performance impact, it seems, but in the era of ray tracing every little bit of performance counts; glad we got a potential source of bottleneck out of the way.
data/avatar/default/avatar04.webp
Cheers for actually doing a comparable benchmark at 4K. It was annoying seeing benchmarks only at 1080p.
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
cucaulay malkin:

This doesn't explain the difference.
It does not. But in theory it may. Different workloads cause different power draw. In a workload where the 6800 is close to the XT, overall power draw may be low, while in a workload where there is a big difference, the 6800 may hit the TDP wall due to the lower power efficiency of the GPU itself. But unless it's tested for this specific condition, one can't determine if that is the actual cause.
https://forums.guru3d.com/data/avatars/m/282/282473.jpg
Fox2232:

It does not. But in theory it may. Different workloads cause different power draw. In a workload where the 6800 is close to the XT, overall power draw may be low, while in a workload where there is a big difference, the 6800 may hit the TDP wall due to the lower power efficiency of the GPU itself. But unless it's tested for this specific condition, one can't determine if that is the actual cause.
Or the other way round: the 6800 XT may hit the power limit in CP2077 faster than in other games, while the 6800, being more power efficient, closes the gap.
https://forums.guru3d.com/data/avatars/m/277/277212.jpg
I am an editor at a technology-oriented website with around 15M members, and I would have rejected this article for a very simple and basic reason: it repeatedly uses an acronym that is not in common usage without defining it. Curiously, several other uncommon acronyms were defined.