Shadow of the Tomb Raider: RTX and DLSS Update

https://forums.guru3d.com/data/avatars/m/16/16662.jpg
Administrator
Food for thought - I think you should realize that the game developers set a target and then start to design what is possible, and in the last stage of development make choices to reach that target. Pretty much what I am saying is that at a certain point in development the teams say, okay, what are our goals? I think the software houses will say 60 fps with the best graphics card at Ultra HD is the target, and then they develop in such a manner that it is achievable: what can we get out of the game engine at maximum while maintaining 60 fps at UHD? I mean, if the software houses had taken the medium quality settings and named them 'ultra mode', you'd have your 100+ FPS at Ultra HD. Would you be happy then? I think the push in graphics quality will always be that 60 fps for devs, and if you want more, that's where a notch lower in image quality would serve you.
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
I have to say I'm impressed that this seems to be the first game where DX12 really shows performance advantages across the board. Well done.
https://forums.guru3d.com/data/avatars/m/79/79740.jpg
fantaskarsef:

I have to say I'm impressed that this seems to be the first game where DX12 really shows performance advantages across the board. Well done.
Only with Turing and AMD.
https://forums.guru3d.com/data/avatars/m/90/90026.jpg
alanm:

Only with Turing and AMD.
Which only shows how fake/crippled DX12 was on Pascal cards. Something like the 970 and its '4 GB' of RAM.
data/avatar/default/avatar32.webp
Hilbert Hagedoorn:

Food for thought - I think you should realize that the game developers set a target and then start to design what is possible, and in the last stage of development make choices to reach that target. Pretty much what I am saying is that at a certain point in development the teams say, okay, what are our goals? I think the software houses will say 60 fps with the best graphics card at Ultra HD is the target, and then they develop in such a manner that it is achievable: what can we get out of the game engine at maximum while maintaining 60 fps at UHD? I mean, if the software houses had taken the medium quality settings and named them 'ultra mode', you'd have your 100+ FPS at Ultra HD. Would you be happy then? I think the push in graphics quality will always be that 60 fps for devs, and if you want more, that's where a notch lower in image quality would serve you.
Excellent point HH! Thank you.
data/avatar/default/avatar02.webp
Dynarush_333:

If I'm being honest, that is disappointing for me. Two years for a new generation, for 70 fps at 4K. I'd expect more for that kind of money. Personally, I think 100 fps is the new 60 fps. I'd take fluidity over 4K, something the consoles should have done as well. 70 fps at any resolution isn't impressive at the high end, and it would be no good on a 144 Hz display. Still, if you use a controller it will feel smoother! I find a mouse shows low frame rates up even more. G-Sync smooths frame-rate changes, but it can't create an illusion of a high refresh rate 🙁 I'd be curious what score you'd get if you ran it at 1440p!
I think Hilbert pretty much summed it up, but I just wanted to add one other point. My 1080 Ti wasn't getting anywhere near this fps at the settings I'm using; it's a big improvement. This is just a very demanding and beautiful-looking game.
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
alanm:

Only with Turing and AMD.
Yes that is true, but at least at some point Nvidia users don't have to avoid DX12 anymore, and I think that's a plus.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
GREGIX:

Which only shows how fake/crippled DX12 was on Pascal cards. Something like the 970 and its '4 GB' of RAM.
It's actually the opposite: it shows that Turing has idle execution resources in DX11. Turing lacks the ability to issue a second instruction from a thread in a single clock cycle; it requires two cycles to execute an instruction, but it can issue instructions every cycle, including independently to both the FP and INT pipes. DX12 allows developers to more optimally schedule these instructions and fill the pipeline. They're basically trading efficiency for flexibility.
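To picture what issuing independently to the FP and INT pipes can buy, here is a toy throughput model in Python. It is a deliberate simplification of my own, not real hardware behaviour or anything from the article, and the FP/INT mix is just an illustrative assumption:

```python
# Toy issue model, not real hardware: compares how many issue slots a mixed
# FP/INT instruction stream needs when integer work shares the main pipe
# versus when it can be co-issued to a separate INT pipe.

def slots_shared_pipe(fp_ops: int, int_ops: int) -> int:
    # Everything funnels through one pipe, so the stream is fully serialized.
    return fp_ops + int_ops

def slots_split_pipes(fp_ops: int, int_ops: int) -> int:
    # Ideal case: every INT op is independent and overlaps with an FP op,
    # so the longer of the two streams sets the pace.
    return max(fp_ops, int_ops)

# Illustrative mix: roughly a third as many INT ops as FP ops in shader code.
fp_ops, int_ops = 100, 35
print(slots_shared_pipe(fp_ops, int_ops))   # 135 slots
print(slots_split_pipes(fp_ops, int_ops))   # 100 slots -> ~26% fewer in the ideal case
```

The gap between the two cases only shows up if enough independent work is actually exposed to the scheduler, which is the "fill the pipeline" part of the argument above.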
https://forums.guru3d.com/data/avatars/m/55/55855.jpg
Yeah, DX12 does give better performance; at least it still runs fine for me in DX11.
https://forums.guru3d.com/data/avatars/m/255/255510.jpg
goodly guide. 🙂
https://forums.guru3d.com/data/avatars/m/245/245459.jpg
alanm:

So with Turing, Nvidia finally gets proper DX12 performance (well, at least in this title), something AMD had done well with earlier.
With Pascal too it's showing more consistent frame times and slightly higher fps, but Hilbert only tested the GTX 1070 at 1440p at between 40 and 50 fps, which are not the right conditions to highlight a big DX12 advantage. I'd expect Pascal to show bigger performance differences at higher fps levels, at lower resolutions or settings, just as is seen with Turing at higher fps. Turing also shows only minimal performance differences in DX12 in the 50 fps zone where it was tested at 4K in this article, an increase from 56 to 59 fps. Based on that pattern, I'd expect Pascal to get a decent bump from DX12 at settings that allow for 80+ frames per second.
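As a quick sanity check on that reasoning, converting the fps figures quoted above into frame times shows why the same absolute saving looks much larger at higher frame rates. The 100 fps baseline below is a hypothetical assumption, not a measured result:

```python
# Frame-time arithmetic behind the 56 -> 59 fps DX12 gain quoted above.
dx11_ms = 1000 / 56           # ~17.9 ms per frame under DX11
dx12_ms = 1000 / 59           # ~16.9 ms per frame under DX12
saved_ms = dx11_ms - dx12_ms  # ~0.9 ms shaved off each frame (~5% faster)

# Hypothetical: if roughly the same ~0.9 ms were saved at a 100 fps baseline,
# the relative gain would be about twice as large.
baseline_ms = 1000 / 100
print(round(1000 / (baseline_ms - saved_ms)))  # ~110 fps, i.e. roughly a 10% uplift
```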
https://forums.guru3d.com/data/avatars/m/90/90026.jpg
karma777police:

I tested the game under Windows 7 x64, and the 1080 Ti performs the same as on Windows 10 under DX12, whereas DX11 under Windows 10 is broken in performance terms; I've noticed that for some time now.
And this is why I stick to Win7, despite a recent motherboard/CPU change from Z97 + Broadwell @ 4.1 GHz to a Z370 Taichi and an 8086 @ 5 GHz, which gave me a boost of around 10+%, and overclocking the memory to 4200 CL17 another 30% in some titles (especially in low/minimum FPS). The fun thing is I did not have to reinstall my OS, just refresh the chipset/USB drivers. Ah, I had to redo the MS activation too.
data/avatar/default/avatar38.webp
Denial:

It's actually the opposite: it shows that Turing has idle execution resources in DX11. Turing lacks the ability to issue a second instruction from a thread in a single clock cycle; it requires two cycles to execute an instruction, but it can issue instructions every cycle, including independently to both the FP and INT pipes. DX12 allows developers to more optimally schedule these instructions and fill the pipeline. They're basically trading efficiency for flexibility.
Which is why they ought to separate the development of consumer and pro cards. Everything about Turing was developed for the pro market; it doesn't favor DX11 gaming (hence the smallest performance increase in ages from a new generation), which nearly all games still use. And I find the whole DX12 situation quite troublesome, because, as you say, it allows the DEVELOPERS to more optimally schedule these instructions and fill the pipeline, not Nvidia - but do the developers have the knowledge of how to do so, and more importantly, are the developers willing to put the effort into specifically optimizing their games for one architecture? I think not, unless they are sponsored by Nvidia...
data/avatar/default/avatar40.webp
It's nice to see Vega 56 match the 1070 Ti's performance in NV-sponsored titles and beat the 1080 in AMD-sponsored ones.
Netherwind:

I read this game scales super well in SLI and with those 1080Tis you should be able to play at 4K no problem. I'm currently gaming at 3440x1440 and it works better than expected. 4K is off the table though for me.
So get a $2400-2500 GPU pair to get a stable 60 fps at 4K. No thank you.
Dragam1337:

Which is why they ought to separate the development of consumer and pro cards. Everything about Turing was developed for the pro market; it doesn't favor DX11 gaming (hence the smallest performance increase in ages from a new generation), which nearly all games still use. And I find the whole DX12 situation quite troublesome, because, as you say, it allows the DEVELOPERS to more optimally schedule these instructions and fill the pipeline, not Nvidia - but do the developers have the knowledge of how to do so, and more importantly, are the developers willing to put the effort into specifically optimizing their games for one architecture? I think not, unless they are sponsored by Nvidia...
Check gamegpu for the SotTR benchmarks, where you can see the DX11 and DX12 results. In DX12 both AMD and NV get a huge boost compared to DX11... This is how a DX12 implementation should be done.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
Dragam1337:

Which is why they ought to separate the development of consumer and pro cards. Everything about Turing was developed for the pro market; it doesn't favor DX11 gaming (hence the smallest performance increase in ages from a new generation), which nearly all games still use. And I find the whole DX12 situation quite troublesome, because, as you say, it allows the DEVELOPERS to more optimally schedule these instructions and fill the pipeline, not Nvidia - but do the developers have the knowledge of how to do so, and more importantly, are the developers willing to put the effort into specifically optimizing their games for one architecture? I think not, unless they are sponsored by Nvidia...
Well, on DX12 most games are inherently optimized for AMD, since most of them are being directly ported from AMD hardware on consoles - and this benchmark shows that AMD gets a pretty large benefit from DX12. So clearly they are optimizing for multiple architectures, which is something you kind of have to do in DX12 regardless of whether it's Turing, Pascal or AMD. The difference is that with Pascal the best a dev can do is extract parity with Nvidia's DX11 performance; with Turing the flexibility allows them to theoretically go past what Nvidia could ever whip up in the driver, with the downside being that DX11 performance isn't as optimal given the hardware.

On the flip side, there is obviously a cost trade-off in both GPU size and power consumption, as dedicated hardware scheduling is not only back but the number of dispatch units is now doubled for every CUDA core. This is one of the reasons why the 2080 Ti has only 20% more CUDA cores but is 65% larger in GPU size.

So I guess the question is: does the added flexibility in scheduling offset the increased cost/complexity of the GPU versus a GPU that's just Pascal scaled up? No idea. How large would a scaled-up Pascal be? No idea. How much of Turing's cost (2080 Ti) is attributed to die size versus Nvidia's desire for larger margins? No idea. I slightly agree that I'd like to see the architecture split and continue with a gaming variant lacking the flexibility, with an unknown decreased cost, but I also admit that I have no idea whether Nvidia weighed the cost of doing that and decided for some weird reason that it wasn't worth it. Perhaps Pascal just doesn't scale up that well. Perhaps developers were indicating an increased desire for mixed FP/INT workloads in games and Nvidia foresees this architecture being more future-proof. Perhaps Nvidia fears Intel's arrival into the GPU space and decided that value-add differentiation with machine learning/RTX was the best way to combat that? Too many unknowns to know why they chose to go this route.

I personally don't mind the route, technologically speaking; the only issue is the cost for me. I'm not spending $1200 for 25-30% performance improvements. I'm not spending $1200 for a gamble on whether their value-add features will be in the games I want to play. I personally thought this entire launch was an embarrassment. But the marketing/pricing aspect of Nvidia is a separate thing from the engineering - I still think the stuff they are doing under the hood is really neat. Time will tell if the neat tech pays off or falls by the wayside in the presence of more traditional architectures without all the fancy-pants AI stuff and crazy scheduling.
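For a rough sense of scale on that cores-versus-die-area point, here is a quick ratio check using commonly quoted specs rather than figures from this article: GP102 in the GTX 1080 Ti with 3584 enabled CUDA cores on roughly 471 mm², and TU102 in the RTX 2080 Ti with 4352 enabled CUDA cores on roughly 754 mm²:

```python
# Ratio check on the CUDA-core-count vs die-size trade-off described above,
# using commonly quoted specs for GP102 (1080 Ti) and TU102 (2080 Ti).
pascal_cores, pascal_mm2 = 3584, 471
turing_cores, turing_mm2 = 4352, 754

core_growth = turing_cores / pascal_cores - 1   # ~0.21 -> ~21% more CUDA cores
die_growth = turing_mm2 / pascal_mm2 - 1        # ~0.60 -> ~60% more die area

print(f"cores: +{core_growth:.0%}, die area: +{die_growth:.0%}")
```

That lands in the same ballpark as the figures in the post, and since the extra area also covers the RT and tensor hardware rather than just the doubled dispatch, it is an upper bound on what scheduling flexibility alone costs.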
data/avatar/default/avatar35.webp
BReal85:

It's nice to see Vega 56 match the 1070 Ti's performance in NV-sponsored titles and beat the 1080 in AMD-sponsored ones. So get a $2400-2500 GPU pair to get a stable 60 fps at 4K. No thank you. Check gamegpu for the SotTR benchmarks, where you can see the DX11 and DX12 results. In DX12 both AMD and NV get a huge boost compared to DX11... This is how a DX12 implementation should be done.
You speak as if I don't have the game or any of the mentioned GPUs - clearly you haven't been following the Shadow of the Tomb Raider thread. In situations where I am not CPU limited, the fps is roughly the same for me in DX11 / DX12... but DX12 has much more uneven frametimes. DX11: https://i.imgur.com/LwH7EpZ.jpg DX12: https://i.imgur.com/PNoz8De.jpg No doubt that DX12 is a lot better for Turing than DX11, but who cares about that overpriced POS... and obviously AMD favors DX12, as always.
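For anyone wanting to put a number on "uneven frametimes" rather than eyeballing an overlay, a minimal sketch follows. The sample values are made up for illustration and are not taken from those screenshots:

```python
# Minimal sketch for quantifying frame-time evenness from a capture.
# The frame times below are invented for illustration only.
import statistics

frametimes_ms = [16.6, 16.8, 17.1, 16.5, 24.9, 16.7, 16.9, 23.8, 16.6, 17.0]

avg_fps = 1000 / statistics.mean(frametimes_ms)
worst_ms = max(frametimes_ms)                 # stand-in for a 1%-low style metric
jitter_ms = statistics.pstdev(frametimes_ms)  # spread: higher = more uneven pacing

print(f"avg: {avg_fps:.1f} fps, worst frame: {worst_ms:.1f} ms, jitter: {jitter_ms:.2f} ms")
```

Two runs with the same average fps can still feel very different if the jitter or worst-frame numbers diverge, which is exactly the DX11-versus-DX12 comparison being made here.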
data/avatar/default/avatar38.webp
Denial:

Well, on DX12 most games are inherently optimized for AMD, since most of them are being directly ported from AMD hardware on consoles - and this benchmark shows that AMD gets a pretty large benefit from DX12. So clearly they are optimizing for multiple architectures, which is something you kind of have to do in DX12 regardless of whether it's Turing, Pascal or AMD. The difference is that with Pascal the best a dev can do is extract parity with Nvidia's DX11 performance; with Turing the flexibility allows them to theoretically go past what Nvidia could ever whip up in the driver, with the downside being that DX11 performance isn't as optimal given the hardware.

On the flip side, there is obviously a cost trade-off in both GPU size and power consumption, as dedicated hardware scheduling is not only back but the number of dispatch units is now doubled for every CUDA core. This is one of the reasons why the 2080 Ti has only 20% more CUDA cores but is 65% larger in GPU size.

So I guess the question is: does the added flexibility in scheduling offset the increased cost/complexity of the GPU versus a GPU that's just Pascal scaled up? No idea. How large would a scaled-up Pascal be? No idea. How much of Turing's cost (2080 Ti) is attributed to die size versus Nvidia's desire for larger margins? No idea. I slightly agree that I'd like to see the architecture split and continue with a gaming variant lacking the flexibility, with an unknown decreased cost, but I also admit that I have no idea whether Nvidia weighed the cost of doing that and decided for some weird reason that it wasn't worth it. Perhaps Pascal just doesn't scale up that well. Perhaps developers were indicating an increased desire for mixed FP/INT workloads in games and Nvidia foresees this architecture being more future-proof. Perhaps Nvidia fears Intel's arrival into the GPU space and decided that value-add differentiation with machine learning/RTX was the best way to combat that? Too many unknowns to know why they chose to go this route.

I personally don't mind the route, technologically speaking; the only issue is the cost for me. I'm not spending $1200 for 25-30% performance improvements. I'm not spending $1200 for a gamble on whether their value-add features will be in the games I want to play. I personally thought this entire launch was an embarrassment. But the marketing/pricing aspect of Nvidia is a separate thing from the engineering - I still think the stuff they are doing under the hood is really neat. Time will tell if the neat tech pays off or falls by the wayside in the presence of more traditional architectures without all the fancy-pants AI stuff and crazy scheduling.
Legit points, but personally I think they decided to just cater to the pro market and develop for its needs, and then shovel whatever they made to the gaming market, saving on development costs, seeing as there is no competition in the gaming market at the moment. I think all this RTX BS they came up with was just a way to justify all the hardware that would otherwise have been dead weight on the GPU for consumers. If it was really something they had been working on for many years, it would have been ready at launch - I think they came up with it when they decided to just shovel the same Turing GPUs to the pro and consumer market, hence why it seems so rushed, despite them having had literally 2.5 years since the launch of Pascal. And yeah, it should come as no surprise that I think a scaled-up Pascal, with a bigger bus width, would have fared much better.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
Dragam1337:

Legit points, but personally I think they decided to just cater to the pro market and develop for its needs, and then shovel whatever they made to the gaming market, saving on development costs, seeing as there is no competition in the gaming market at the moment. I think all this RTX BS they came up with was just a way to justify all the hardware that would otherwise have been dead weight on the GPU for consumers. If it was really something they had been working on for many years, it would have been ready at launch - I think they came up with it when they decided to just shovel the same Turing GPUs to the pro and consumer market, hence why it seems so rushed, despite them having had literally 2.5 years since the launch of Pascal.
The glass half empty side of me agrees with you. It definitely does look that way.
https://forums.guru3d.com/data/avatars/m/270/270718.jpg
That RX 580 is looking really solid here! Nice!