NVIDIA DLSS 2.1 Technology Will Feature Virtual Reality Support

Comments posted for NVIDIA DLSS 2.1 Technology Will Feature Virtual Reality Support on our message forum:
The first game to get it is a year old. Let's wait until 2021 to really see how this gets adopted in new games... I don't doubt that CP2077 has a lot of tech toys and gadgets to play with, but other games? Not so sure. Same story as with Turing's release and the adoption rate of RTX and DLSS (1.0 and later) in games.
In VR, it's a fight for every single pixel; there is no room for reducing IQ. If we had access to 4096x4096 screens as fast as the ones in the Index, one could get away with faking it. But in VR, pixels cover such a big viewing angle that it's like playing on a 1280x1024 32'' screen. And again we get into absurd scenarios: buy an expensive headset, buy an expensive PC, but cheap out on the GPU, and then use DLSS x9 to get playable fps at whatever IQ cost it brings. VR today uses 2880x1600 in the case of the Index, and even the 3070, at that good price, can do above 60fps in the most demanding titles. And there is already ASW for VR (which is "fake" too => a transformation of an already full-resolution frame). So imagine DLSS taking your previous frame and moving parts around the way it believes the current frame will look, then VR taking that frame and moving/warping it again. The more the merrier, right? I hope AMD improves CAS instead.
Nvidia is desperate for marketing. They're trying to copy the "HDMI 2.1 is better than HDMI 2.0" play. 😛 DLSS 2.1 with new GPUs, with more features than DLSS 2.0. 😀
I don't get it... Is DLSS software, but hardware-dependent? You write that DLSS 2 works with RTX 20xx/30xx, but at the same time DLSS 2 in Wolfenstein is only for the 3090 card?
Maybe this will all change in the next few days when Facebook announces its next-gen headsets. Maybe they will have some form of next-generation foveated rendering or something else. Anyone who has seen the videos of the research Facebook's VR division has done over the last few years will know the massive amount of work and money they have put in, the number of failed prototypes and research projects, and that none of this work has seen the light of day in a consumer headset yet. I have a feeling we will have another 2080 Ti situation with current headsets (and all the software tricks currently in use becoming obsolete) when, or if, they come out. What we need is intelligent foveated rendering with some form of eye tracking, as well as the whole multiple-focus depth-of-field stuff, and this all needs to work with Nvidia's GPUs to maximise efficiency and not waste time rendering edge-of-eye areas at the same resolution as where the eye is currently looking - which should be done at full, supersampled, real resolution with no DLSS-type trickery in that area.
I have a Pimax 8KX, which already runs at high resolution in native mode with a 2080 Ti. Not sure what DLSS will do in my case?
Sounds interesting, will follow reviews.
Seems to me it should be much more valuable for VR than on a monitor.
I would be a bit concerned about any added latency. The more post-processing you do, the more you're delaying each frame from rendering, and that contributes toward nausea. Perhaps it isn't significant enough.
schmidtbag:

I would be a bit concerned about any added latency. The more post-processing you do, the more you're delaying each frame from rendering, and that contributes toward nausea. Perhaps it isn't significant enough.
Why would it add latency at all? In every case that I know of DLSS increases framerate, which lowers the latency of the frame.
Denial:

Why would it add latency at all? In every case that I know of DLSS increases framerate, which lowers the latency of the frame.
A higher frame rate means lower frame time, but it says nothing about when the frame is rendered on your display. Whether you're at 60FPS or 600, it doesn't matter if the frame you're seeing doesn't line up with your input commands. Think of it like this: Imagine having a high-speed camera attached to a high refresh rate display. Thanks to all the image processing, what you see on the display is delayed by a few milliseconds. So even though you might be looking at an image at 300FPS, you're still distinctly looking at something in the past. When it comes to GPUs, that delay is even greater, because they're reconstructing the entire image themselves, as opposed to just simply capturing one. That's why some people hate v-sync: on a 60Hz display, the frame rate you see is the same but v-sync adds a lot to latency, so people who are sensitive to that will feel a constant delay. Having said all that, all post-processing requires a frame to be rendered, and then more clock cycles are used to continue modifying the image (hence the name). So, for every additional layer of PP you do, the more you're delaying the image from being rendered on the display. Yes, the frame rate is going up and that's good, but when it comes to VR, getting the lowest latency possible is critical.
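To make the throughput-versus-latency distinction in that post concrete, here is a minimal sketch in Python; the pipeline stages and their durations are invented purely for illustration and are not tied to any particular headset, camera, or GPU:

```python
# Toy model of the camera/display analogy above: throughput (fps) and
# end-to-end latency are separate quantities. All stage durations here are
# made-up illustrative numbers, not measurements of any real device.

def pipeline_stats(render_ms, post_ms, scanout_ms):
    # Assume the stages are pipelined: a new frame starts every `render_ms`,
    # so throughput depends only on the render stage, while each individual
    # frame still has to pass through post-processing and scanout before it
    # reaches the eye.
    fps = 1000.0 / render_ms                        # how smooth it looks
    latency_ms = render_ms + post_ms + scanout_ms   # how old the visible frame is
    return fps, latency_ms

# Same ~300 fps "smoothness", very different delays:
print(pipeline_stats(render_ms=3.3, post_ms=0.0, scanout_ms=3.0))   # (~303 fps, 6.3 ms)
print(pipeline_stats(render_ms=3.3, post_ms=10.0, scanout_ms=3.0))  # (~303 fps, 16.3 ms)
```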
schmidtbag:

A higher frame rate means lower frame time, but it says nothing about when the frame is rendered on your display. Whether you're at 60FPS or 600, it doesn't matter if the frame you're seeing doesn't line up with your input commands. Think of it like this: Imagine having a high-speed camera attached to a high refresh rate display. Thanks to all the image processing, what you see on the display is delayed by a few milliseconds. So even though you might be looking at an image at 300FPS, you're still distinctly looking at something in the past. When it comes to GPUs, that delay is even greater, because they're reconstructing the entire image themselves, as opposed to just simply capturing one. That's why some people hate v-sync: even on a 60Hz display, the frame rate you see is the same but v-sync adds a lot to latency, so people who are sensitive to that will feel a constant delay. Having said all that, all post-processing requires a frame to be rendered, and then more clock cycles are used to continue modifying the image (hence the name). So, for every additional layer of PP you do, the more you're delaying the image from being rendered on the display. Yes, the frame rate is going up and that's good, but when it comes to VR, getting the lowest latency possible is critical.
But the post processing is built into the framerate? Say you're targeting 60fps - 16.7ms.. it's not 16.7ms + post processing. It's just 16.7ms and part of that 16.7ms includes the post processing. So if DLSS allows them to do 144fps (7ms or 6.9 or whatever it is) where they normally couldn't achieve that, it's just decreasing the total latency.
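As a quick back-of-the-envelope check of that point (plain Python, using only the hypothetical 60 fps and 144 fps figures from the exchange above):

```python
# Frame time is the inverse of framerate; any post-processing the GPU does is
# spent inside that budget, not added on top of it.
def frame_time_ms(fps):
    return 1000.0 / fps

print(round(frame_time_ms(60), 1))                       # 16.7 ms per frame
print(round(frame_time_ms(144), 1))                      # ~6.9 ms per frame
print(round(frame_time_ms(60) - frame_time_ms(144), 1))  # ~9.7 ms less render latency
```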
If they want to try this in an already established game, they should get Frontier to add it to Elite Dangerous. It'd be a good place to test it at least; same with No Man's Sky.
Denial:

But the post processing is built into the framerate? Say you're targeting 60fps - 16.7ms.. it's not 16.7ms + post processing. It's just 16.7ms and part of that 16.7ms includes the post processing. So if DLSS allows them to do 144fps (7ms or 6.9 or whatever it is) where they normally couldn't achieve that, it's just decreasing the total latency.
Again... it doesn't matter what the frame rate is, what matters is when you see the frame. Rendering more frames per second has nothing at all to do with when the fully processed frame reaches your eyes.
schmidtbag:

Again... it doesn't matter what the frame rate is, what matters is when you see the frame. Rendering more frames per second has nothing at all to do with when the fully processed frame reaches your eyes.
It absolutely does lol.. If I can only render 1 frame per hour - how long is it going to take to get the fully processed frame to reach my eye? More than an hour right? So why would that change when the frame is being done in 1/60th of a second? Latency is composed of the following: https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/news/reflex-low-latency-platform/nvidia-reflex-end-to-end-systeme-latency-pipline.png In our example the render latency is 16.7ms at 60fps. If you decrease the latency to 7ms (144 fps) and the rest of the chain stays identical - then you effectively lowered the time it takes for the final frame to reach your eye by 9.7ms.
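Summing that chain up as a rough sketch (the stage names loosely follow the Nvidia Reflex diagram linked above; the millisecond values for the non-render stages are placeholders, the same illustrative figures used later in this thread, not measurements):

```python
# End-to-end latency as a sum of pipeline stages. Only the render stage
# changes when DLSS raises the framerate; the rest of the chain is identical.
chain = {"mouse": 2.0, "cpu": 4.0, "render": 16.7, "composite": 1.0, "scanout": 10.0}
baseline = sum(chain.values())     # render stage at 60 fps

chain["render"] = 7.0              # render stage at ~144 fps
faster = sum(chain.values())

print(round(baseline, 1), round(faster, 1), round(baseline - faster, 1))  # 33.7 24.0 9.7
```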
Denial:

If I can only render 1 frame per hour - how long is it going to take to get the fully processed frame to reach my eye? More than an hour right? So why would that change when the frame is being done in 1/60th of a second?
That's not what I meant... My point is latency is not entirely dependent on frame rate. Like I said with my camera example, it doesn't matter how fast the camera is because processing the image, sending the signal to the display, and then having the display render that signal takes time. You can have a perfectly smooth experience that is delayed by several seconds. Ever been on a phone call with someone where there was a significant delay? It's not like you or the other end is thinking any slower than you normally would. Completing something faster doesn't mean you deliver the results faster. That's why I brought up the v-sync example, because even though the perceived frame rate is the same, the synced frames are literally older.
Denial:

In our example the render latency is 16.7ms at 60fps. If you decrease the latency to 7ms (144 fps) and the rest of the chain stays identical - then you effectively lowered the time it takes for the final frame to reach your eye by 9.7ms.
I understand that, but I'm not necessarily referring to the rest of the chain. That chain is also more complicated than that once DLSS is involved, because there's a lot more back and forth talking for the AI and the tensor cores. That is where the added latency comes in. So, even if you were to play the same game without DLSS but lowered the detail level enough where you get the same frame rate, the latency should be better, because the tensor cores are not involved. The thing about DLSS is, in theory, it can be done in-parallel. The next frame could be rendered while the tensor cores are doing their work. So, the total amount of time the frame is being rendered is longer, but you're able to render more FPS. This I could be wrong about, though. Clearly, the tensor cores do add a significant delay, because on Nvidia's own website, they mention that DLSS is not enabled if a high enough frame rate can be achieved without it:
To put it a bit more technically, DLSS requires a fixed amount of GPU time per frame to run the deep neural network. Thus, games that run at lower frame rates (proportionally less fixed workload) or higher resolutions (greater pixel shading savings), benefit more from DLSS. For games running at high frame rates or low resolutions, DLSS may not boost performance. When your GPU’s frame rendering time is shorter than what it takes to execute the DLSS model, we don’t enable DLSS. We only enable DLSS for cases where you will receive a performance gain.
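One way to see the "lower frame rates benefit more" point from that quote is to treat the DLSS pass as a roughly fixed per-frame cost and compare it with the shading time it saves. The sketch below does exactly that with arbitrary placeholder numbers; it only illustrates the proportionality and is not anything Nvidia has published:

```python
# The DLSS model costs roughly fixed GPU time per frame, while the shading
# savings scale with how heavy the frame is, so slow frames gain more.
def dlss_speedup(native_ms, dlss_model_ms=3.0, internal_scale=0.5):
    lowres_ms = native_ms * internal_scale   # assume shading cost scales with resolution
    dlss_ms = lowres_ms + dlss_model_ms      # lower-res render + fixed DLSS cost
    return native_ms / dlss_ms               # > 1.0 means DLSS is a net win

for native_ms in (33.3, 16.7, 8.3, 4.0):     # roughly 30, 60, 120, 250 fps natively
    print(native_ms, round(dlss_speedup(native_ms), 2))
# Heavy frames show a clear gain; very fast frames come out slower than native,
# which is the case where the driver simply leaves DLSS disabled.
```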
schmidtbag:

That's not what I meant... My point is latency is not entirely dependent on frame rate. Like I said with my camera example, it doesn't matter how fast the camera is because processing the image, sending the signal to the display, and then having the display render that signal takes time. You can have a perfectly smooth experience that is delayed by several seconds. Ever been on a phone call with someone where there was a significant delay? It's not like you or the other end is thinking any slower than you normally would. Completing something faster doesn't mean you deliver the results faster. That's why I brought up the v-sync example, because even though the perceived frame rate is the same, the synced frames are literally older. I understand that, but I'm not necessarily referring to the rest of the chain. That chain is also more complicated than that once DLSS is involved, because there's a lot more back and forth talking for the AI and the tensor cores. That is where the added latency comes in. So, even if you were to play the same game without DLSS but lowered the detail level enough where you get the same frame rate, the latency should be better, because the tensor cores are not involved. The thing about DLSS is, in theory, it can be done in-parallel. The next frame could be rendered while the tensor cores are doing their work. So, the total amount of time the frame is being rendered is longer, but you're able to render more FPS. This I could be wrong about, though. Clearly, the tensor cores do add a significant delay, because on Nvidia's own website, they mention that DLSS is not enabled if a high enough frame rate can be achieved without it:
You keep comparing this to some other scenario where latency is increasing somehow, but you're not explaining why it's increasing. Yeah I've been on a phone call where there was delay - but why was there delay? And even if there is a delay - if I was able to cut 9.7ms off the processing on the phone - wouldn't there be less delay overall? That's what going from 60-144fps is doing, that's what DLSS is doing (if it's increasing it from 60-144, obviously it isn't this much on average).
schmidtbag:

Completing something faster doesn't mean you deliver the results faster.
It 100% does because it's the only thing that's changing. The rest of the pipeline is identical, the rendering latency is decreasing, thus the entire chain is getting quicker.
schmidtbag:

So, even if you were to play the same game without DLSS but lowered the detail level enough where you get the same frame rate, the latency should be better, because the tensor cores are not involved.
I fundamentally disagree with this and I think this is where our difference of opinion lies. If the frame-rate is identical than the latency is identical (given that the mouse/cpu/screen/etc is all identical). The DLSS is only adding latency to the portion of the pipeline that falls under "render latency" - which is equal to your framerate. If the render latency is 16.7ms, you're getting 60fps. If turning DLSS on increases the framerate to 144fps then it's also decreasing the latency to 7ms. Those two properties are linked - there is no other area where latency is increasing to turn DLSS on.
schmidtbag:

Clearly, the tensor cores do add a significant delay
DLSS doesn't add a significant delay, it adds a fixed delay - when that happens, sometimes it's quicker to simply render the frame at the native resolution faster than that fixed delay. For example, say we have a frame that takes 20ms to render at 4K but only 10ms at 1080P, and DLSS takes 6.7ms to upscale it to 4K. Now you have a 4K DLSS image at 16.7ms compared to a 4K regular image at 20ms. Naturally your framerate increases with DLSS because you can do that more times per second. But in another example, let's say you have a frame that takes 6ms to render at 4K but 2ms at 1080P. Now you have a 4K DLSS image at 8.7ms (2ms + our 6.7ms "DLSS delay") vs a native 4K image at 6ms. It's better to turn DLSS off in that case.
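Plugging the hypothetical numbers from that example into a quick check (these are the figures from the post above, not benchmarks):

```python
# Denial's two scenarios: DLSS wins when the native frame is slow, and loses
# when the native frame is already faster than the fixed DLSS cost.
def dlss_frame_ms(lowres_ms, dlss_cost_ms=6.7):
    return lowres_ms + dlss_cost_ms

print(round(dlss_frame_ms(10.0), 1), "vs 20.0 ms native")  # 16.7 vs 20.0 -> DLSS is faster
print(round(dlss_frame_ms(2.0), 1), "vs 6.0 ms native")    # 8.7 vs 6.0   -> native is faster
```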
Who the hell cares? Nice to see on paper, along with its effect on the old 200 series cards... but unfortunately, the fact that crypto-mining is back eliminates any availability of 3-series cards. They (the 3 series) aren't even released, yet somehow folks in China are able to purchase literal CASE loads. So I'm gonna say it will be a repeat of last time, and the 3080 will be well above $2000 when it is eventually available... fu88ing 2020...
Denial:

You keep comparing this to some other scenario where latency is increasing somehow, but you're not explaining why it's increasing.
Well, I figured for someone like yourself, the reason would be obvious: because communication with the tensor cores adds to the render time. Think of it like AMD's CCX design: the communication between chiplets adds latency, but each individual core has good IPC, so in certain workloads you can get better performance than Intel despite the added latency.
Denial:

Yeah I've been on a phone call where there was delay - but why was there delay? And even if there is a delay - if I was able to cut 9.7ms off the processing on the phone - wouldn't there be less delay overall? That's what going from 60-144fps is doing, that's what DLSS is doing (if it's increasing it from 60-144, obviously it isn't this much on average).
The delay is there for various reasons, but the point I'm making here is that neither you nor the other end of the line is "operating slower". The level of performance does not affect when the signal is received. If we're talking about going from 60FPS without DLSS to 144FPS with it on, then yes, the performance gain of DLSS is significant enough that, despite the added latency of the tensor cores, there is likely an overall latency improvement. But that's also an unrealistically huge jump in performance. In real-world scenarios, it'd be more like going from 60FPS to 85FPS. I don't know how much latency the tensor cores add, so it's hard to know if the reduced frame time will make up for it. If you were to lower the detail level instead of using DLSS, you're bound to get better latency.
Denial:

It 100% does because it's the only thing that's changing. The rest of the pipeline is identical, the rendering latency is decreasing, thus the entire chain is getting quicker.
No, it isn't. Tensor cores add to the pipeline.
Denial:

The DLSS is only adding latency to the portion of the pipeline that falls under "render latency" - which is equal to your framerate. If the render latency is 16.7ms, you're getting 60fps. If turning DLSS on increases the framerate to 144fps then it's also decreasing the latency to 7ms. Those two properties are linked - there is no other area where latency is increasing to turn DLSS on.
I don't disagree that DLSS adding more FPS will decrease latency per-frame. I'm saying that compared to a scene with the same frame rate and no post-processing, the frames rendered by DLSS will reach the user later. So DLSS is overall a good thing, because yes, if the frame rate drops enough, it can offer better latency and frame rates without sacrificing detail. But my argument is that sacrificing detail ought to yield better latency, which is why Nvidia turns DLSS off when the frame rate is too high (because otherwise the tensor cores slow things down).
Denial:

DLSS doesn't add a significant delay, it adds a fixed delay - when that happens, sometimes it's quicker to simply render the frame at the native resolution faster than that fixed delay.
A fixed delay is still a delay, and it can be significant when we're talking about reducing motion sickness.
Denial:

For example, ...
I agree with all of that.
schmidtbag:

Well, I figured for someone like yourself, the reason would be obvious: because communication with the tensor cores adds to the render time.
No it doesn't. That's my point. If the framerate with DLSS on is higher, then communication with the tensor cores is never adding to the render time versus the render time of a native 4K image; it's always decreasing it. You can't have a higher framerate with a higher render latency. That's an oxymoron. You can have a higher framerate with a higher total latency, because either the screen got slower, or you have a terrible mouse, or your OS is lagging. But the "Render Latency", i.e. the amount of time it takes the GPU to process a frame from start to finish, is linked to the framerate. If the framerate goes up, it's because the GPU is processing frames in less time. DLSS has a fixed "upres" time, probably associated with the render resolution.

Let's go back to the slide: https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/news/reflex-low-latency-platform/nvidia-reflex-end-to-end-systeme-latency-pipline.png

Say we have a GPU capable of doing 4K@60. It would look like: 2ms mouse latency + 4ms for CPU (OS/Game) + 16.7ms for Render Latency (native 4K image) (Render Queue + GPU) + 1ms for composite + 10ms for Scanout/Display. Total of 33.7ms from the time you move your mouse to the time the frame hits your eye.

Now let's turn DLSS on, and say communication with the tensor cores adds 4ms of delay and we're now getting 90fps @ 4K. It would look like: 2ms mouse latency + 4ms for CPU (OS/Game) + 7.11ms for Render Latency (native 1080p image) + 4ms (DLSS/Tensor upres to 4K) = 11.11ms total (90fps) (Render Queue + GPU) + 1ms for composite + 10ms for Scanout/Display. Total of 28.11ms from the time you move your mouse to the time the frame hits your eye.

The only thing that's changing is the latency in "Render Queue + GPU", which is where the framerate comes from. So no matter what, if the framerate is increasing, that latency is decreasing: (1000/60) vs (1000/90). Anyway, I feel like we're either talking past each other or have reached an impasse, so I'll leave it at this.
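For completeness, the two chains in that post add up as stated (same illustrative stage times, including the assumed 4ms tensor-core step; nothing here is a measured number):

```python
# End-to-end latency for both scenarios described above, using the
# illustrative stage times from the post (not measurements).
common = 2.0 + 4.0 + 1.0 + 10.0        # mouse + CPU + composite + scanout

native_4k = common + 16.7              # native 4K render at 60 fps
dlss_4k = common + 7.11 + 4.0          # 1080p render + assumed 4 ms tensor upres (~90 fps)

print(round(native_4k, 2))             # 33.7 ms from mouse to eye
print(round(dlss_4k, 2))               # 28.11 ms from mouse to eye
print(round(native_4k - dlss_4k, 2))   # 5.59 ms less, even with the extra DLSS step
```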