Nvidia DLSS 3.5 in Gaming: Implications and AI-driven Future of Graphics Rendering
GlassGR
Don't expect new chips, more transistors, or more RAM.
There will be a "flagship" with a trillion transistors and all the RAM, which most of us won't be able to buy, and all the affordable configurations will be AI model upgrades.
Sylwester Zarębski
"AI" is a buzzword, even bigger than 3D before.
geogan
I watched that entire 1-hour-long video from DF - very interesting.
Yes, the main point was that DLSS 3.5 generated (path-traced and ray-reconstruction-denoised) frames are now *better* than what could be generated natively.
Their view is that old-style rendering techniques (rasterization) are inherently "fake" at every step of the pipeline, so old-style rendering produces the actual "fake" frames, while AI-based rendering is more real and closer to ground truth than the old fake rendering pipelines.
i.e. we have now passed the point where the old ways of doing it are worse than the new way - there are mostly no downsides left.
And I would 100% agree with them.
schmidtbag
Something about this doesn't really make sense to me:
First and foremost, how do you produce a better-than-native image when upscaling? Sure, some details might look better, but the whole point of AI upscaling is that it makes a best effort to fill in missing data. It can (and does) do an incredible job, but across frames, I don't get how an AI could do better than native.
But let's say for a moment you're enabling DLSS 3.5 without upscaling, so the input resolution is the same as the output: while I get how an AI could help with things like denoising or smoothing edges, I don't understand how it could otherwise make a more realistic image than with it off. The AI has to be trained on what "more realistic" is supposed to look like, so how is it supposed to do that in a fictional universe? What trained it to know what a better render is supposed to look like, if all it can be trained on is rasterized renders? That's like defining a word by using the word in its own definition, or like trying to describe the color red to a blind person. So how does it make sense to generate a more realistic image when the AI doesn't have real-world images?
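For what it's worth, the general principle behind temporal upscalers beating a single native frame can be sketched in a few lines: each low-resolution frame is rendered with a different sub-pixel jitter, so over several frames the accumulated samples cover the output grid more densely than one sample per pixel ever could. The toy numpy sketch below illustrates only that principle - the scene function, resolutions, and frame count are invented for illustration, and this is not Nvidia's actual algorithm.
[code=python]
# Toy 1D sketch of temporal accumulation with sub-pixel jitter.
# NOT Nvidia's algorithm - just the general sampling principle.
import numpy as np

def scene(x):
    # Hypothetical "ground truth" signal standing in for scene detail;
    # frequency is above the output Nyquist so single samples alias.
    return np.sin(1200 * x) * np.exp(-x)

hi_res, lo_res, frames = 256, 128, 32  # illustrative numbers

# Reference: the true pixel-averaged signal at output resolution.
sub = (np.arange(64) + 0.5) / 64
truth = scene((np.arange(hi_res)[:, None] + sub) / hi_res).mean(axis=1)

# "Native": one sample at each output-pixel centre.
native = scene((np.arange(hi_res) + 0.5) / hi_res)

# Temporal accumulation: each low-res frame uses a different sub-pixel
# jitter; samples are splatted into the high-res grid and averaged.
acc = np.zeros(hi_res)
hits = np.zeros(hi_res)
rng = np.random.default_rng(0)
for _ in range(frames):
    jitter = rng.random()                       # per-frame jitter offset
    xs = (np.arange(lo_res) + jitter) / lo_res  # jittered sample positions
    idx = np.minimum((xs * hi_res).astype(int), hi_res - 1)
    np.add.at(acc, idx, scene(xs))
    np.add.at(hits, idx, 1.0)
accum = np.where(hits > 0, acc / np.maximum(hits, 1), native)

print("native mean abs error     :", np.abs(native - truth).mean())
print("accumulated mean abs error:", np.abs(accum - truth).mean())
[/code]
Running this, the accumulated reconstruction lands noticeably closer to the pixel-averaged ground truth than the single-sample "native" frame, simply because it has seen more (differently placed) samples per pixel over time.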
Denial
Why does it only have to be trained on raster images? Why couldn't it be trained on datasets that contain path-traced images at extremely high sample counts? (FWIW, the lead dev of DLSS said it's trained on ray-traced datasets.)
In that video it was implied that they are comparing a rasterized native image with an AI-reconstructed RT image. The point is that AI reconstruction makes more physically accurate techniques feasible, and the result thereby exceeds the visual level of a natively rasterized image.
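As a rough illustration of that training setup, here is a hypothetical PyTorch sketch: the target plays the role of a path-traced reference rendered at a very high sample count, and the input plays the role of the noisy low-sample frame a denoiser would see at runtime. The architecture, shapes, and random stand-in data are all invented - this is not DLSS's real network or pipeline.
[code=python]
# Hypothetical supervised-denoiser training sketch (not DLSS itself).
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy convolutional denoiser: noisy radiance in, clean estimate out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Stand-in data: 'reference' plays the role of a high-sample-count
    # (say 4096 spp) path-traced frame; 'noisy' the 1-spp runtime frame.
    reference = torch.rand(8, 3, 64, 64)
    noisy = reference + 0.3 * torch.randn_like(reference)

    loss = nn.functional.l1_loss(model(noisy), reference)
    opt.zero_grad()
    loss.backward()
    opt.step()
[/code]
The key point is simply that the "ground truth" here is a converged path-traced render, not a rasterized frame, which answers the "what is it trained on?" question above.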
mbk1969
https://forums.guru3d.com/threads/info-zone-gengines-ray-tracing-dlss-dlaa-dldsr-tsr-fsr-xess-and-mods-etc.439761/page-112#post-6169139
They are not comparing a native-resolution RT image with an upscaled RT image, but native resolution with the old rasterizing tricks (the situation before DLSS 3.5) with upscaling plus ray reconstruction (DLSS 3.5).
mattm4
All of these AI (software) enhancements are nice, but it's a slippery slope, as Nvidia has proved with the 40 series over the 30 series: no generational improvement for some cards, relying instead on AI (software) plus upscaling to generate more frames. Which equals more $$$$, since they can save money by skimping on the hardware side. Hopefully next generation will strike more of a happy balance between the two, instead of going straight to software for performance improvements.
umeng2002
It depends a lot on the game content. I recently played the Dead Space remake at 1440p. Even with the LOD mods for DLSS, DLSS Quality wasn't as good as native TAA. But for the most part, I find that DLSS Quality is more than acceptable - great, even, for clearing up temporal artifacts.
At 1440p in Cyberpunk, DLSS Balanced is no bueno.
H83
Fair enough. You clearly know much more about this than I do, so I'll take your word for it, but I continue to believe that they are exaggerating their claims.
Anyway, for those interested, TPU already has an article about this: https://www.techpowerup.com/review/nvidia-dlss-35-ray-reconstruction/
moo100times
Gamers Nexus's latest video was quite informative for me on the effectiveness of the new approach. Lots of clips and comparisons rather than stills, which served as a better reference point.
[youtube=zZVv6WoUl4Y]
As for the technical side, there are definitely interesting ideas here. Some of it is definitely marketing speak about how wonderful this improvement is, given that it is still a developing and evolving technique, but it seems to deliver decent visual-fidelity improvements over what was previously available.
I would like to see their implementation compared against the UE5 implementation for denoising, as Epic report that UE5's algorithm is comparable to offline ray-traced rendering, which is an impressive claim.
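Some context on why "comparable to offline rendering" is such a strong claim: path tracing is Monte Carlo integration, so per-pixel error only falls off as 1/sqrt(samples), which is why offline references burn thousands of samples per pixel while real-time budgets are one or two. A toy numpy sketch of that scaling (the integrand and sample counts are purely illustrative):
[code=python]
# Toy demonstration of Monte Carlo 1/sqrt(N) convergence - the reason
# real-time (1-2 spp) path tracing needs a denoiser at all.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in "pixel": estimating the mean of a uniform random variable,
# whose true value is 0.5.
for spp in (1, 16, 256, 4096):
    # 10k independent pixels, each averaging `spp` random samples.
    estimates = rng.random((10_000, spp)).mean(axis=1)
    print(f"{spp:5d} spp -> RMS error {estimates.std():.4f}")
# Error shrinks only ~4x per 16x more samples: real-time budgets are
# hopelessly noisy without aggressive denoising.
[/code]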