AMD Radeon RX 6900 XT review


Final words and conclusion

Final words

2020 has proven to be a year full of challenges. COVID-19 has stopped the world from turning at its normal pace, but there is light at the end of the tunnel. This was the final reference AMD review for the year 2020, and what an exciting Q4 it has been. Conclusively, RDNA2 is the real deal, granted with mixed results. Overall the picture painted here is positive; it is a bit of a beast, eh? But the higher up you go in resolution, the faster it runs out of stamina.

Performance

Conclusively, only three primary elements count: game price, game performance, and game rendering quality. Of course, the Radeon RX 6900 XT is a product that ticks the compulsory boxes. First off, we do feel the 6800 and 6800 XT offer more value for money, as does the RTX 3080 on NVIDIA's end; these should sit above the 6900 XT on your purchasing list. That said, this card can run shaded (rasterized) games at 4K pretty darn well, and it will serve remarkably well at WQHD even with brutally GPU-bound games. At Full HD you'll quite often be bottlenecked and CPU-limited, but here the 6900 XT has the upper hand, as AMD's L3 cache brings additional performance especially in that area. That advantage shrinks as you move up in resolution, to the point where it vanishes at Ultra HD (when compared to team green).

Competition-wise, you're looking at RTX 3080 up to RTX 3090 levels of performance for the 6900 XT overall. Performance-wise, we can safely state that this is a true Ultra HD capable product with current games. Raytracing, however, is a different story. AMD grossly underestimated the importance of a DLSS-like solution, even if only for competitive reasons. Dedicated Tensor cores in hardware on the RTX series will always work out better for NVIDIA, no matter what angle you look at it from. AMD does have DirectML support pending, but that still needs to run over the compute engine and cannot be offloaded to some sort of Tensor core, so we predict that the performance hit for AMD would be significantly bigger. This remains a hypothesis until we actually see it working, in a version that will also need to offer DLSS 2.0 comparable quality to be even remotely called valid. So no matter how we look at it, it'll cost AMD more performance, whereas NVIDIA can offload its ML supersampling to the Tensor cores; that's the sole reason why they implemented these specific DL/AI cores in the first place.
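To make that argument a bit more tangible, here is a back-of-the-envelope sketch. Every number in it is our own illustrative assumption, nothing is measured or vendor-confirmed: an upscaling pass on the shared shader/compute engine both runs slower and steals time from rendering, while dedicated matrix units execute the same pass quickly on the side.

```python
# Back-of-the-envelope sketch of the argument above. All numbers are
# assumptions for illustration only, not measurements.

render_ms = 12.0         # assumed render time per frame at the lower resolution
upscale_dedicated = 1.0  # assumed pass cost on dedicated matrix hardware
upscale_shared = 3.0     # assumed pass cost on the shared compute engine

fps_dedicated = 1000 / (render_ms + upscale_dedicated)
fps_shared = 1000 / (render_ms + upscale_shared)
print(f"dedicated units: {fps_dedicated:.0f} fps vs "
      f"shared compute: {fps_shared:.0f} fps")
```

Even in this crude model the gap is visible; the shared-engine pass eats into the same frame-time budget the game itself needs.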

Smart Access Memory

Smart Access Memory is a feature to keep an eye on, as the results have shown that IF it kicks in, it can boost your framerates, sometimes even significantly. Out of the five games tested, it kicked in for only three, one extremely well: Assassin's Creed: Valhalla absolutely loves this feature. As explained, it does come with compromises. SAM requires that CSM support is turned off in the BIOS in order to enable Above 4G Decoding, which in turn allows Resizable BAR support (SAM) to be enabled. The problem here is that if your Windows installation is configured as non-UEFI, Windows will be unable to boot from your currently installed SSD/HDD, and many PCs are still configured like that. The only solution then is to disable CSM and reinstall Windows 10 to get this feature set supported, or perform some really advanced trickery. For our graphics card testing based on X570 / Ryzen 9 5950X, we now have this feature enabled as standard, and we expect NVIDIA to follow soon as well. Hey, basically it is a feature that can offer free extra performance.
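If you have ticked those BIOS boxes and want to verify that Resizable BAR is actually live, a quick check is possible. The snippet below is a minimal sketch of ours for Linux (not an AMD utility): it scans lspci's verbose output for the Resizable BAR capability that shows up once the feature is active. AMD's Radeon Software also reports whether SAM is enabled on Windows.

```python
# Minimal sketch (our example, not an AMD tool): check on Linux whether
# the GPU exposes an active Resizable BAR capability, which is what SAM
# relies on. Run with root privileges, otherwise lspci may hide the
# extended capability list.
import subprocess

out = subprocess.run(["lspci", "-vvv"], capture_output=True, text=True).stdout
for line in out.splitlines():
    # An active Resizable BAR shows up as a "Physical Resizable BAR"
    # capability, with the current BAR size listed on the next line.
    if "Resizable BAR" in line or "BAR 0: current size" in line:
        print(line.strip())
```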



Raytracing and compute

Yes, I have mentioned enough of this already, I guess. Looking at raytracing, we have to admit that AMD performs reasonably at best, sometimes close to and in line with NVIDIA, but often at RTX 2080 Ti / RTX 3070 performance levels. We had hoped for a bit more. So hybrid-raytracing-wise you'll be playing your games at WQHD, as anything beyond it runs out of stamina; here is where ML supersampling could have benefited AMD. What does help is the Infinity Cache (L3), so overall AMD offers a fun first experience in hybrid raytracing, but we had hoped for a bit more bite. If we look at full path raytracing, then AMD lags behind significantly, as the competition is showing numbers nearly twice as fast. Generic compute shows a good boost in performance but remains average overall, and OpenGL performance is lagging behind. AMD shines in D3D12 and Vulkan, less so in professional workloads. It's a card intended for gaming, so we're okay with this.

Cooling & acoustics

In very stressed conditions, we measured close to 37 dBA, though it took a while for the card to get there (it warms up slowly); that is considered a quiet acoustic level, depending of course on the airflow inside your chassis. AMD gave the temperature limiter a bit more leeway it seems; our card ran in the 81 °C range at most under hefty load conditions. As FLIR imaging shows, the card's top and bottom sides show only minor heat bleeding. Overall, we're very comfortable with what we observe. The tradeoff that AMD made here is that the product remains acoustically quite silent.

Energy

In the previous paragraph, I already touched on this: heat output and energy consumption are always closely related, as for (graphics) processors consumption and heat can be perceived as a 1:1 state; 100 Watts in (consumption) equals roughly 100 Watts of heat as output. This is the basis of TDP. AMD lists the flagship product at 250 to 300 Watts, which is okay for a graphics card at this performance level in the year 2020. We measured numbers close to the advertised values for the XT; in fact, it was spot on at 300 Watts typical. Every card peaks a bit; that value was measured at 322 Watts.
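To put those wattages into everyday terms, here is a quick bit of arithmetic. Only the 300 W typical and 322 W peak figures come from our measurements; the gaming hours and electricity tariff below are assumptions of ours, purely for illustration.

```python
# Quick arithmetic on the measured board power figures. The gaming
# hours and the electricity tariff are illustrative assumptions of
# ours, not measurements.
TYPICAL_W = 300       # measured typical board power (this review)
PEAK_W = 322          # measured peak board power (this review)
HOURS_PER_DAY = 4     # assumption: daily gaming time
PRICE_PER_KWH = 0.25  # assumption: electricity tariff, EUR per kWh

kwh_per_year = TYPICAL_W / 1000 * HOURS_PER_DAY * 365
print(f"{TYPICAL_W} W typical ({PEAK_W} W peak) -> ~{kwh_per_year:.0f} kWh "
      f"per year, roughly {kwh_per_year * PRICE_PER_KWH:.0f} EUR in energy")
# Per the 1:1 rule above: while gaming, those ~300 W of consumption also
# end up as ~300 W of heat that your chassis airflow has to remove.
```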

Coil whine

The reference cards do exhibit some coil squeal. Is it annoying? It's at a level where you can hear it. In a closed chassis, that noise would fade into the background, but with an open chassis you can clearly hear the coil whine/squeal. All graphics cards produce this in some form, and especially at high framerates it can be noticed.

Pricing

Limited availability and nauseating price levels for graphics cards these days are becoming bothersome. Even for us true hardcore PC gamers, it's getting more and more difficult to explain why people should put down this much money to be able to play computer games on a PC. The spread is $999 for the 6900 XT, $649 for the 6800 XT, and $579 for the 6800. Sure, you can game at Ultra HD and get 16 GB of GDDR6 memory, but with the new consoles from Microsoft and Sony in mind, sitting at the 500 USD marker for their GPU, CPU, storage, and, hey, the console as a package, we are getting more and more uncomfortable about the price levels of graphics cards. We do expect AIB cards to be more expensive, as that is a trend as of late. We'll have to wait and see how that pans out, though, as everything is dependent on the actual volume availability of these cards. But yeah, 999 USD.

Tweaking

AMD's 6800/6900 XT series are designed to run high frequencies on the GPU. Our previous reviews have shown that cards reach 2500 MHz fairly easily; properly cooled ones with good ASIC quality can even reach 2700~2750 MHz. For the RX 6900 XT's memory, we'd expect you to reach a ~17.2 Gbps effective data rate. Of course, increase the power limiter to the max, so your GPU gets (even) more energy budget. Why this huge differential, you might wonder? Well, you can clock the card even at 3 GHz, but when the power limiter kicks in, it'll always bring that frequency back down to match your power budget. Results will vary per board, brand, GPU, and cooling (GDDR6/GPU/VRM). Frequency matters LESS these days, as your power limiter will be the decisive and dominant factor, lowering the clock frequency to meet its assigned power budget. That said, we reached a majestic 2650 MHz; however, the second the power limiter kicks in (and that'll be fast), the card clocks down again. All that tweaking and extra energy consumption will bring you a maximum of 2~3% extra performance. We expect better results with AIB cards that hold an increased power limiter threshold.
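That interplay between requested clock and power budget can be illustrated with a toy model. To be clear, this is our own simplified sketch, not AMD's actual power-management algorithm: it assumes dynamic power scales roughly with frequency times voltage squared, with a constant picked loosely so the numbers land near this review's observations.

```python
# Toy illustration (our simplification, not AMD's real algorithm) of why
# the power limiter, rather than the requested clock, decides your
# sustained frequency. Assumed model: dynamic power P ~ C * f * V^2,
# with C picked loosely to land near this review's observations
# (~300 W around 2.6-2.7 GHz).

C = 0.085  # W per (MHz * V^2); illustrative constant, not a measured value

def power_draw(freq_mhz: float, volts: float) -> float:
    """Crude dynamic power estimate in Watts."""
    return C * freq_mhz * volts ** 2

def sustained_clock(requested_mhz: float, volts: float, budget_w: float) -> float:
    """Clock the limiter settles on once the power budget is enforced."""
    if power_draw(requested_mhz, volts) <= budget_w:
        return requested_mhz  # the budget never kicks in
    return budget_w / (C * volts ** 2)  # limiter scales frequency down

# Request 3 GHz at an assumed 1.15 V on a 300 W budget: the limiter
# pulls the card back to roughly the clocks we saw in practice.
print(f"{sustained_clock(3000, 1.15, 300):.0f} MHz")  # ~2669 MHz
```

Raising the power limiter slider effectively raises budget_w in this model, which is why it, not the frequency slider, is the dominant tweaking knob.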

Conclusion

AMD has been able to deliver a GPU that offers super-strong shading (rasterizer) performance, but it lacks a bit in raytracing. If AMD had extra hardware onboard similar to NVIDIA's Tensor cores, that could have helped its hybrid raytracing support quite a bit. For now, in this series, raytracing is a gimmick you can fool and fiddle with, playable up to 2560x1440 at best, much like NVIDIA's first-gen RTX was. The thing is, though, not everybody is convinced that raytracing is the way to go. Some have a hard time seeing the difference, as shading has been refined for some 20 years and has become really good at simulating and creating 3D scenes. Times are changing, though, and slowly I am getting more impressed with hybrid raytraced games that use full scene reflections. Drive around in a car in London in Watch Dogs: Legion, and you'll immediately understand why I am saying that. Using raytracing for shadows, in my belief, however, is just stupid, a waste of much-needed compute performance; let the rasterizer engine do what it's good at. So that puts me in a tough position: I do feel RT reflection-based gaming is the future, and it will impress, and here AMD performs somewhat average. But if you don't give a rat's toosh about it, then this card is extremely good with rasterized/shaded games. Especially in Full HD and (Wide) Quad HD the card really benefits from the new L3 cache; in Ultra HD, however, the card gets more bandwidth-limited and the RTX 3090 takes the upper hand. The difference, however, is 500 bucks, so the 6900 XT is a third cheaper than NVIDIA's best. What doesn't help me in rationalizing this conclusion, though, is that NVIDIA then again offers 24 GB of super-fast GDDR6X graphics memory, whereas AMD applies 16 GB of GDDR6. Granted, 16 GB is more than sufficient for the next year or two.

The Radeon RX 6900 XT delivers, that's a sure thing. Shaded games show excellent framerates on your monitor all the way up to Ultra HD. I have, however, stated this several times now: we do feel that overall, graphics cards are too expensive. The cheapest 6800 costs $579; for less money, you can purchase the new premium Xbox. This is a good product with a proper amount of graphics memory. With future games in mind, that will turn out to your advantage, as you can quite easily play Ultra HD games without running into VRAM limitations anytime soon. Smart Access Memory helps out as well, but as stated, that feature requires CSM to be disabled in your BIOS. It'll need some time to become more widely supported before it can be considered a default option, but if your system is compatible, it surely won't hurt to try it out, as in some games it can offer an increase in framerates. So yeah, $579 for the 6800, $649 for the 6800 XT, and a whopping $999 for the card as tested today. Expect AIB (board partner) products to be priced slightly higher, with more premium features to be found in cooling and RGB.

The product performs truly great overall, at RTX 3080/3090 performance levels, but drops a little in stamina at Ultra HD. If you are less interested in what raytracing and DLSS bring to the table, then the 6800/6900 XT with its 16 GB of graphics memory is a recommended choice over the $699 GeForce RTX 3080 and that expensive $1499 RTX 3090 (not that you can purchase any at the moment). But if that is reversed for you and you want maximum hybrid raytracing performance, then AMD's miss is the lack of some sort of hardware Tensor cores and a DLSS equivalent. You can tell I am struggling with what to recommend, as this is a Pandora's box situation, really. If you think flagship raytracing is the bull's balls, we'll steer you to the RTX 3090. For premium rasterizer performance, however, you can save a third of the money with the 6900 XT. That's a solid choice in my book, and a top pick for those that match my recommendations.


- Hilbert, LOAD"*",8,1.

  

(Award: Guru3D Top Pick)
