The Division 2: PC graphics performance benchmark review

Bloody hell, the 970 has really fallen below... That card was equivalent to the 290/390/480/580 in performance and now it's looking very weak... I'm wondering how the 4 GB versions of the 470/480/570/580 compare, if it's a VRAM issue (the Fury typically also falls behind in this regard, but it's not too bad in this game at least), but the 970's performance is just woeful in comparison. Is the 970 suffering from the Kepler effect? Maybe the 980 too?
Undying:

So that means less texture pop-in and stutter. That translates into a better experience even if you have a slower GPU like the RX 580 compared to the 1660.
Yeah, that too. Stuttering and dips in an online game, especially a competitive one, can be quite a problem. Hitching during critical moments is a real hassle, so no wonder it's so common to dial settings down to near or below min-spec, among the other advantages that can provide. VRAM requirements changed pretty quickly too: after hovering around 3-4 GB, games moved to 5 or nearly 6 GB and then up to 8 GB or even higher, and it's not just cache either but actual data being stored. Newer games push even above 8 GB, particularly when combined with high-res textures or texture-pack add-ons, plus higher display resolutions such as ultrawide 3440x1440 becoming more supported and popular, or 3840x2160 and beyond, though that pretty much requires a high-end GPU to drive. (Not helped by the shrinking number of SLI and Crossfire titles and the state of support there.)
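
Purely as an illustration of the resolution part of that, here is a rough estimate of what a single 32-bit render target costs at the resolutions mentioned. Real engines keep many such buffers plus all the textures, so actual VRAM use is far higher; the 1080p row is just a baseline for comparison.

```python
# Rough size of one RGBA8 (4 bytes/pixel) render target at common resolutions.
# Illustrative only: games allocate many buffers plus textures on top of this.
BYTES_PER_PIXEL = 4

for name, (w, h) in {
    "1920x1080 (baseline)": (1920, 1080),
    "3440x1440 ultrawide":  (3440, 1440),
    "3840x2160 (4K)":       (3840, 2160),
}.items():
    mib = w * h * BYTES_PER_PIXEL / 2**20
    print(f"{name}: {mib:.1f} MiB per render target")
```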
Only Intruder:

Bloody hell, the 970 has really fallen below... That card was equivalent to the 290/390/480/580 in performance and now it's looking very weak... I'm wondering how the 4 GB versions of the 470/480/570/580 compare, if it's a VRAM issue (the Fury typically also falls behind in this regard, but it's not too bad in this game at least), but the 970's performance is just woeful in comparison. Is the 970 suffering from the Kepler effect? Maybe the 980 too?
The 980 Ti is also slower than the RX 590 in this game. Kepler is getting old.
If the 970's VRAM runs out, the easy fix is most probably to lower shadow or texture resolution. On an i7-4770K @ 4.2 GHz with 16 GB of 2400 MHz CL10 DDR3 and a GTX 1080 Ti OC, I noticed a huge fps gain after upgrading to the latest driver and switching to the DX12 renderer: 84 fps at ultra settings in 1440p, versus the 72 fps I got before in DX11. I've noticed DX11 often pushes my CPU to 100% while the GPU sits at 85-95% and the fps drops, whereas DX12 keeps the CPU at 80% max with the GPU at 100% non-stop, and my fps is much higher and more stable, especially in busy parts of the benchmark when a lot happens on screen. PS: after picking ultra details you can still push shadows and reflections one notch higher; shadows on max in particular look noticeably better than the default ultra preset.
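
For reference, the 72 to 84 fps jump reported above translates into frame times like this; plain arithmetic on the numbers in the post, nothing game-specific assumed.

```python
# Convert the reported DX11 vs DX12 averages into per-frame times and a relative gain.
dx11_fps, dx12_fps = 72.0, 84.0

dx11_ms = 1000.0 / dx11_fps                  # ~13.9 ms per frame
dx12_ms = 1000.0 / dx12_fps                  # ~11.9 ms per frame
gain_pct = (dx12_fps / dx11_fps - 1) * 100   # ~16.7% higher average framerate

print(f"DX11 {dx11_ms:.1f} ms/frame vs DX12 {dx12_ms:.1f} ms/frame ({gain_pct:.1f}% faster)")
```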
kilyan:

The DX12 issues I encountered in the beta are still present: occasional micro-freezes here and there, and the lighting going crazy. At least they fixed the random crashes.
Well, the same problems you describe show up in DX11 as well, and as for the random crashes being fixed: I had three and my mate two in the last hour, during a single mission.
What's the point in eating up all the VRAM? Any visual differences? Any differences in frametimes? Or just filling up empty space?
Undying:

The 1660 is slower than the 590 and the 1660 Ti slower than the 1070. Great showing for the new GTX Turing cards. Overall the game seems to run well on decent hardware.
That about sums it up. I thought the 1660 Ti would handily beat the 590 in this title.
r3nt5ch3r:

What's the point in eating up all the VRAM? Any visual differences? Any differences in frametimes? Or just filling up empty space?
Just fill up all the video card's things... because that's what memory is for, amiright? /s
I find the Radeon VII VRAM usage weirdly high. Maybe some of it is just buffered/cached but if it isn't, I imagine that's hindering its performance.
schmidtbag:

I find the Radeon VII VRAM usage weirdly high. Maybe some of it is just buffered/cached but if it isn't, I imagine that's hindering its performance.
HH mentioned that performance is unaffected by the high VRAM usage and gameplay is smooth. I would argue it's even smoother than on other cards, not slower.
Undying:

The 980 Ti is also slower than the RX 590 in this game. Kepler is getting old.
The 980 Ti is slower than the 590 in a few games; TPU's average put the gap at only 5% at the 590's launch. More importantly, it's exactly 12% slower than the 1070 here, which is identical to the TPU average when the 1070 launched. I think the 970 is definitely just a VRAM issue. Probably pegging that 0.5 GB partition to death.
schmidtbag:

I find the Radeon VII VRAM usage weirdly high. Maybe some of it is just buffered/cached but if it isn't, I imagine that's hindering its performance.
It's definitely just cache; more and more games are doing this, and it's good practice. Unused VRAM/RAM is wasted RAM. Manage the cache properly and it should theoretically lead to smoother performance with basically no downside.
Undying:

HH mentioned that performance is unaffected by the high VRAM usage and gameplay is smooth. I would argue it's even smoother than on other cards, not slower.
Unless he somehow found a way to reduce VRAM usage for that specific GPU, there's no way to prove whether the amount of VRAM used has an impact on that GPU's framerate. It's also worth pointing out that there's a big difference between higher average framerates and a smoother experience. I could totally see how the high VRAM usage (without maxing out the GPU) would improve smoothness, but, depending on how that memory is used, it could lower the overall framerate due to the increased bandwidth demand.
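
That average-vs-smoothness distinction is easy to show with made-up frame times: two runs with the same average fps can feel completely different once you look at the worst frames. The values below are purely illustrative, not from this game.

```python
# Same average frame time, very different "smoothness". Frame times in ms, made up.
def summarize(frametimes_ms):
    avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))
    worst_ms = sorted(frametimes_ms)[-max(1, len(frametimes_ms) // 100)]  # ~1% worst frame
    return avg_fps, 1000.0 / worst_ms

smooth = [16.7] * 100                  # steady ~60 fps
hitchy = [14.0] * 95 + [68.0] * 5      # mostly faster frames, but with visible spikes

for name, run in (("smooth run", smooth), ("hitchy run", hitchy)):
    avg, one_pct_low = summarize(run)
    print(f"{name}: avg {avg:.0f} fps, ~1% low {one_pct_low:.0f} fps")
```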
It would be interesting to see the minimum fps for each GPU in this game.
The only way to know if VRAM is an issue is to test two of the same card with different VRAM amounts, say the 4 GB vs 8 GB versions of the RX 580. Comparing different cards with less or more VRAM is useless.
schmidtbag:

Unless he somehow found a way to reduce VRAM usage for that specific GPU, there's no way to prove whether the amount of VRAM used has an impact on that GPU's framerate. It's also worth pointing out that there's a big difference between higher average framerates and a smoother experience. I could totally see how the high VRAM usage (without maxing out the GPU) would improve smoothness, but, depending on how that memory is used, it could lower the overall framerate due to the increased bandwidth demand.
I don't see how the bandwidth would be increased. In fact it should be the opposite: a larger amount of the in-use textures can always stay stored and never be unloaded, whereas if the cache were limited to a lower value it would be more aggressive about clearing and reloading.
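
A toy model of that trade-off, assuming a simple least-recently-used texture cache; the budget, texture size and access pattern are all invented, and real drivers and engines manage residency far more cleverly.

```python
# Toy LRU texture cache: the same access pattern causes far more streaming
# reloads when the cache budget is small. Numbers are purely illustrative.
from collections import OrderedDict

def count_reloads(accesses, budget_mb, texture_mb=64):
    cache, reloads = OrderedDict(), 0
    for tex in accesses:
        if tex in cache:
            cache.move_to_end(tex)        # recently used: keep it resident
        else:
            reloads += 1                  # miss: stream the texture in (again)
            cache[tex] = texture_mb
            while sum(cache.values()) > budget_mb:
                cache.popitem(last=False) # evict the least recently used texture
    return reloads

# Looping through the same 60 textures ten times, e.g. circling one city block.
pattern = list(range(60)) * 10
print("8 GB budget:", count_reloads(pattern, budget_mb=8000))  # 60 reloads, then all hits
print("2 GB budget:", count_reloads(pattern, budget_mb=2000))  # constant evict/reload churn
```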
Embra:

Define "destroyed"? 2-4 fps at 1440p & 4k?
Exactly! Lol! I guess some folks haven't moved to high resolution gaming just yet...;)
Denial:

I don't see how the bandwidth would be increased. In fact it should be the opposite: a larger amount of the in-use textures can always stay stored and never be unloaded, whereas if the cache were limited to a lower value it would be more aggressive about clearing and reloading.
Again, it depends on how the memory is used. If it's strictly just cache/buffer then yes, it will have little to no impact on bandwidth. But I remember back when GPUs first started implementing things like texture compression in VRAM, where you got pretty much the same level of detail but could save a LOT of memory and, in turn, reduce bandwidth. This also helped improve performance. So, depending on how this GPU is handling its data, if for example texture compression is very "loose", that could saturate more bandwidth and therefore reduce the framerate.
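
To put a number on the compression point: block-compressed texture formats such as BC1/DXT1 (4 bits per pixel) and BC3/DXT5 (8 bits per pixel) versus uncompressed RGBA8 (32 bits per pixel). These are general format sizes, not figures from this game.

```python
# Footprint of one 4096x4096 texture in common formats.
# RGBA8 = 32 bits/pixel, BC3/DXT5 = 8 bits/pixel, BC1/DXT1 = 4 bits/pixel.
# A full mip chain adds roughly another third on top of each figure.
W = H = 4096
formats_bpp = {"RGBA8 (uncompressed)": 32, "BC3/DXT5": 8, "BC1/DXT1": 4}

for name, bpp in formats_bpp.items():
    mib = W * H * bpp / 8 / 2**20
    print(f"{name}: {mib:.0f} MiB")
```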
airbud7:

What part of... "This is an AMD sponsored title so there are of course optimizations in play" ...do you not understand?
So your idea is that AMD optimized the game for the RX 590 and the Nvidia 1070, but not the 1660 Ti?...;)
Interesting that my stock Radeon VII (well, it is slightly undervolted), paired with a 7820X, which is worse for gaming than the 9900K, averaged 93 fps at 1440p with ultra settings in the benchmark. That is a fairly large difference from what is in the charts.
schmidtbag:

Again, it depends on how the memory is used. If it's strictly just cache/buffer then yes, it will have little to no impact on bandwidth. But I remember back when GPUs first started implementing things like texture compression in VRAM, where you got pretty much the same level of detail but could save a LOT of memory and, in turn, reduce bandwidth. This also helped improve performance. So, depending on how this GPU is handling its data, if for example texture compression is very "loose", that could saturate more bandwidth and therefore reduce the framerate.
It's obviously caching it; it's not like the 8 GB/6 GB cards are filled or showing any out-of-VRAM issues in the game. They probably just have it set to fill a percentage of total VRAM, or to hard-cap the cache at a specific amount for each card/VRAM size. Also, if you're talking about delta compression, that was added in Kepler (announced with Maxwell) and GCN 2 and whatnot; it doesn't actually reduce the total VRAM amount, only the bandwidth transferred over the bus.
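
If the streaming cache really is sized that way, the policy could look something like the sketch below. The 75% fraction and the 12 GiB cap are made-up numbers, not anything confirmed about the game, but they reproduce the pattern of a 16 GB card showing much higher usage than 6-8 GB cards without those ever running out.

```python
# Hypothetical cache sizing: a fraction of total VRAM, clamped to a hard cap.
# The fraction and cap are invented for illustration only.
def texture_cache_budget_gib(total_vram_gib, fraction=0.75, cap_gib=12.0):
    return min(total_vram_gib * fraction, cap_gib)

for vram in (4, 6, 8, 11, 16):
    print(f"{vram} GiB card -> {texture_cache_budget_gib(vram):.2f} GiB streaming cache")
```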
Denial:

It's obviously caching it; it's not like the 8 GB/6 GB cards are filled or showing any out-of-VRAM issues in the game. They probably just have it set to fill a percentage of total VRAM, or to hard-cap the cache at a specific amount for each card/VRAM size.
I wouldn't say it's obvious, just very likely. When you've got 16GB of HBM2, the drivers could intentionally be pretty lax about compression.