Final words and conclusion
We can certainly appreciate what MSI is bringing to the table with the new SUPRIM X model GeForce RTX 3080. From the ground up, the series has been injected with the best components, more copper in the PCB traces, and improved cooling. The aesthetic overhaul people will either love or dislike; personally, I find the TRIO a better-looking card, but that is just that, a personal observation. Now, you'd think that with the extra power available, the beefed-up VRM, and the increased boost clock frequency, this card would be much faster. Nope. And we have stated this many times already: boost frequency matters less these days, as the power limiter dumbs down performance the second your maximum wattage has been reached. Compared to the TRIO, this card was even a little slower here and there. However, that card was tested before the CTD issues mess; to bypass that problem, cards have since become roughly 1% slower, and that is exactly the margin this card falls in, which explains the differential. Even if this GPU could manage 2100 MHz at defaults, you'd still have to deal with an energy budget in the 370 Watt range; the card will throttle down once it hits that limiter. That's the big common denominator and limiting factor for premium graphics cards these days, and as a side effect, board partners have a harder time differentiating themselves.
Our performance paragraph is a generic one used in all RTX 3080 reviews, as performance is more or less the same for all cards and brands. Gaming it can do well, with exceptional values. Yes, at Full HD you'll quite often be bottlenecked and CPU limited. But even there, in some games with proper programming and the right API (DX12/async compute), the sheer increase in performance is staggering. The good old rasterizer engine first, as hey, it is still the leading factor. Purely speaking from a shading/rasterizing point of view, you're looking at relative performance increases of 125% to 160% over the similarly priced GeForce RTX 2080 (SUPER), so that is a tremendous step. The sheer number of shader processors is staggering. The new FP32/INT32 combo clusters remain a compromise that works exceptionally well in most use cases, but not all of them. But even then, there are so many shader cores that not once was the tested graphics card slower than an RTX 2080 Ti; in fact (and I do mean in GPU-bound situations), the RTX 3080 stays ahead of the RTX 2080 by a relative margin of at least 125%, but more often 150% and even 160%. Performance-wise, we can finally say: hey, this is a true Ultra HD capable graphics card (aside from Flight Simulator 2020, haha, that title needs D3D12/async and some DLSS!). The good news is that any game that uses traditional rendering will run excellently at 3840x2160. Games that can ray trace and manage DLSS also become playable in UHD. A good example was Battlefield V with raytracing and DLSS enabled, in Ultra HD now running in that 75 FPS bracket. Well, you've seen the numbers in the review; I'll mute now. As to DXR raytracing and tensor performance: the RTX 30 series received new tensor and RT cores, so don't let the RT and tensor core count confuse you. They sit close inside that rendering engine, became more efficient, and that shows.
If we look at an RTX 2080 in Port Royal, we hit almost 30 FPS. The RTX 3080 nearly doubles that at 53 FPS. Tensor cores are harder to measure, but overall, from what we have seen, it's all in good balance. Overall, though, the GeForce RTX 3080 starts to make sense at a Quad HD resolution (2560x1440), but again, I deem this to be an Ultra HD targeted product. In contrast, for 2560x1440 I'd see the GeForce RTX 3070 playing a more important role in terms of sense and value for money, and at Full HD the inevitable GeForce RTX 3060, whenever that may be released. Games like Red Dead Redemption will make you aim, shoot, and smile at 70 FPS in UHD resolution with the very best graphics settings. As always, comparing apples and oranges, the performance results vary here and there, as each architecture offers advantages and disadvantages in certain game render workloads. So, for the content creators among us: have you seen the Blender and V-Ray NEXT results? No? Go to page 30 of this review, and your eyes will pop out; the sheer compute performance has nearly doubled, a big step in the right direction. We need to stop for a second and talk VRAM, aka framebuffer memory. The GeForce RTX 3080 was fitted with new GDDR6X memory; it clocks in at 19 Gbps, and that is a freakfest of memory bandwidth, which the graphics card really likes. You'll get 10GB of it. I can also tell you that there are plans for a 20GB version. We think the 20GB was initially to be released as the default, but for reasons none other than the bill of materials used, it became 10GB. In the year 2020, that is a very decent amount of graphics memory. However, signals are that the 20GB version may become available later for those who want to run Flight Simulator 2020; haha, that was a pun, sorry. We feel 10GB is fine right now, but with DirectX 12 Ultimate, added scene complexity, and raytracing becoming the new norm, I do not know if that will still be enough two years from now.
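As a quick back-of-the-envelope check of why that 19 Gbps figure translates into so much bandwidth, here's a minimal sketch. Note that the 320-bit memory bus width is the published spec for the 10GB RTX 3080, not a number stated in this review:

```python
# Rough GDDR6X bandwidth estimate for the 10GB RTX 3080.
# Assumption: 320-bit memory interface (published spec, not from this review).
data_rate_gbps = 19      # effective per-pin data rate, Gbps
bus_width_bits = 320     # memory interface width

# Total bandwidth = per-pin rate x number of pins, converted from bits to bytes
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(bandwidth_gb_s)    # 760.0 GB/s
```

That 760 GB/s is why the GPU "really likes" this memory; the RTX 2080's GDDR6 managed well under 500 GB/s.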
Cooling & noise levels
Depending on the airflow level inside your chassis, expect the card to hit the 80 degrees C range temperature-wise under hefty load conditions in SILENT BIOS mode. Personally, I am not comfortable with 80 degrees C all the time; we feel MSI has overshot the commonly accepted value of roughly 75 degrees C. That said, 80 degrees C is not an issue for this card whatsoever, and the tradeoff advantage is that, all of a sudden, this massive-TDP beast produces a shy 32 dBA of noise; that's... nothing. That is amazing, really.
If you are not comfortable with that 80 degrees C value, you can select the GAMING BIOS mode. Here, noise levels will hit roughly 37~38 dBA, which can be heard but is still acceptable, and your GPU will now hit roughly 65 degrees C maximum. It's a choice you get to make. The default configuration for these cards is the silent mode, and make no mistake, the performance is 100% the same; the only difference between the two BIOSes is the fan curve, i.e., how fan RPM scales against temperature.
The power draw under intensive gaming for the GeForce RTX 3080 remains significant; we measured it to be close to the 380 Watt range. This is the tradeoff for a bit more bite in performance for this particular graphics card model. Now, that is a peak value under stressed conditions; gaming-wise, that number will fluctuate a bit. Are we happy with that amount of energy consumption in the year 2020? No, not at all. Will you, as an end consumer, care about it? We doubt that as well. Keep in mind that you'll need a power supply with three 8-pin PCIe graphics power headers. We advise a 750 Watt model, as the rest of the system needs some juice and you will want some reserve. You can increase the graphics card's power consumption further still when you open up that power slider; yes, that can mean 460 Watts of power consumption just for this graphics card when you flick open all registers manually.
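To see where that 750 Watt recommendation comes from, here's a minimal headroom sketch. The GPU figure is the peak we measured; the CPU and rest-of-system numbers are illustrative assumptions, not measurements from this review:

```python
# Rough PSU headroom check behind the 750 W recommendation.
gpu_peak_w = 380         # measured peak for this card (from the review)
cpu_w = 150              # assumed high-end gaming CPU under load (illustrative)
rest_of_system_w = 75    # assumed board, RAM, drives, fans (illustrative)

total_w = gpu_peak_w + cpu_w + rest_of_system_w
psu_w = 750
reserve_pct = round((1 - total_w / psu_w) * 100)
print(total_w, reserve_pct)  # 605 W drawn, ~19% reserve
```

With the power slider fully opened, that reserve shrinks further, which is exactly why skimping on the PSU is a bad idea here.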
This GeForce RTX 3080 hardly exhibited coil squeak, much less than the founder card we tested. Is it disturbing? Well, no; it's at a level where you can hear it softly if you put your ear next to the card. With an open chassis you can hear a bit of coil whine/squeak, but in a closed chassis that noise fades away into the background.
NVIDIA is pricing the GeForce RTX 3080 at $699. The AIB products are deemed, and damned, to be called the more premium products, and as I already told you, that's no longer the case, as NVIDIA's founder cards compete directly with the AIB products. In a perfect scenario, I would like to see the AIB product priced below the founder edition. That's not the case; this card will be more expensive than that founder edition card. The price is currently listed at 879 EUR incl. VAT (in the Netherlands). This will vary per country and, of course, with availability.
The card actually tweaks extremely well for an RTX 3080. We've been able to push the power limiter up by another 16%, then added 125 MHz on the GPU clock, resulting in observed boost frequencies towards 2100 MHz (this depends on and varies per game title/application). The memory was binned as well; never before have we been able to breach 20 Gbps stable on GDDR6X, yet here we reached a lovely 20.5 Gbps. All in all, that brings us an 8% performance premium over the reference model.
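The tweak headroom above can be sketched in numbers. The stock baseline boost clock here is an assumption for illustration, and the 320-bit bus width is the RTX 3080's published spec rather than something stated in this review:

```python
# Sketch of the manual tweak described above.
baseline_boost_mhz = 1905   # assumed typical stock boost (illustrative)
core_offset_mhz = 125       # offset we applied
mem_rate_gbps = 20.5        # effective memory rate we achieved
bus_width_bits = 320        # RTX 3080 bus width (published spec)

tweaked_boost = baseline_boost_mhz + core_offset_mhz   # towards ~2030 MHz
tweaked_bw = mem_rate_gbps * bus_width_bits / 8        # 820.0 GB/s
print(tweaked_boost, tweaked_bw)
```

That memory overclock alone lifts bandwidth from 760 GB/s to 820 GB/s, which is where a good chunk of the 8% gain comes from, since the core is still leashed by the power limiter.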
MSI has a fantastic card at hand here with the SUPRIM X. Despite the ominous naming, it's a product that oozes quality at the hardware level; the hardware really is that good. Still, I am struggling with it, as I do not see a substantial relevance for this product. If I am allowed to make a personal note here: I think the TRIO is perhaps the better-looking product? It performs the same within a 1% differential and is likely cheaper (if you can spot one in the stores). So all the extras, like a new look, aluminum backplate, newly designed cooler, and dual BIOS, are they worth a price premium? We doubt that a little, though it is over-engineering at its best. I mean, if the card had been substantially faster than the TRIO X, then yeah, probably. But that's not the case. In the end, though, it's a fantastic product, ranking high up there in features and specifications. However, NVIDIA's Project Greenlight dictates that all cards sit more or less in that same performance bracket, and that results in a meager 4% of additional performance over the FE edition for this all-amped and beefed-up product. Make no mistake, it's lovely and fantastic, but is it worth the highest price premium? We doubt that.
- GeForce RTX 3090 SUPRIM X 24G: 1749.99 USD
- GeForce RTX 3080 SUPRIM X 10G: 899.99 USD
MSI's challenge is the up-to-80-degrees-C temperature that they allow in silent mode. That one will likely not sit well with consumers, who feel graphics cards should run below 75 degrees C. And we know: it's subjective, as at 80 degrees C that card will be totally fine. It's just that end-users do not like that value whatsoever, and perhaps for good reason as well. Your alternative is the GAMING mode, which offers a 65 degrees C experience, but there the fans can be heard somewhat, which is not expected from a premium MSI product. I honestly think this is a dilemma, as you have the best of both worlds with the TRIO X, really. If we filter out the challenges (power consumption and pricing), then we can only acknowledge that the sheer performance this card series brings to the table is nothing short of impressive. The new generational architecture tweaks for raytracing and tensor operations are also significant. Coming from the RTX 2080, the RTX 3080 exhibited a roughly 85% performance increase, and that is going to bring hybrid raytracing towards higher resolutions. DXR will remain massively demanding, of course, but when you can play Battlefield V in Ultra HD with raytracing and DLSS enabled at over 70 FPS, hey, I'm cool with that. This card, in its default configuration, sits roughly 4% above founder edition performance. Of course, pricing will be everything, as the AIB/AIC partners need to compete with an excellent founder edition product.
MSI did a marvelous job here, but in the end, the choice rests with the end user, along with availability and pricing. If pricing remains under control at e-tailers, this card is going to be a hit. However, even if the TRIO X were 25 bucks cheaper, I know what I would choose (though that is a very personal choice). But hey, good god, what a card.
- Hilbert, LOAD"*",8,1.