NVIDIA DLSS 2.1 Technology Will Feature Virtual Reality Support
fantaskarsef
The first game to get it is a year old. Let's wait until 2021 to really see how this gets adopted in new games... I don't doubt that CP2077 has a lot of tech toys and gadgets to play with, but other games? Not so sure. Same as with Turing's release and the adoption rate of both RTX and DLSS 1.0 (and later) in games.
Fox2232
In VR, it is a fight for every single pixel. There is no room to reduce IQ.
If we had 4096x4096 panels as fast as the ones in the Index, one could get away with faking it.
But in VR, each pixel covers such a large viewing angle that it is like playing on a 32" 1280x1024 screen.
And again, we get into absurd scenarios: buy an expensive headset, buy an expensive PC, but cheap out on the GPU, and therefore use DLSS x9 to get playable fps at whatever IQ cost that brings.
VR today uses 2880x1600 in the case of the Index, and even the 3070, at that good price, can do above 60fps in the most demanding titles.
And there is already ASW for VR (which is itself a "fake" => a warp/transformation of an already full-resolution frame).
So imagine DLSS taking your previous frame and moving parts around the way it believes the image will look in the current frame, then the VR runtime taking that frame and moving/warping it again.
The more the merrier, right?
I hope AMD improves CAS instead.
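To put that pixels-per-degree point in rough numbers, here is a minimal sketch; the headset FOV, panel widths and viewing distances below are assumed ballpark values for illustration, not specs:

```python
import math

def flat_panel_ppd(h_pixels, panel_width_in, view_dist_in):
    """Horizontal pixels-per-degree of a flat panel at a given viewing distance."""
    h_fov = math.degrees(2 * math.atan((panel_width_in / 2) / view_dist_in))
    return h_pixels / h_fov

def hmd_ppd(h_pixels_per_eye, h_fov_deg):
    """Very rough average pixels-per-degree of an HMD (ignores lens distortion)."""
    return h_pixels_per_eye / h_fov_deg

# Valve Index: 1440x1600 per eye (2880x1600 combined), ~108 deg horizontal FOV (assumed)
print(f"Valve Index      ~{hmd_ppd(1440, 108):4.1f} PPD")

# 32-inch 5:4 1280x1024 panel is ~25 in wide; viewed at arm's length (~16 in, assumed)
print(f"32in 1280x1024   ~{flat_panel_ppd(1280, 25.0, 16):4.1f} PPD")

# Typical desktop 27-inch 1440p (~23.5 in wide) at ~24 in, for comparison
print(f"27in 2560x1440   ~{flat_panel_ppd(2560, 23.5, 24):4.1f} PPD")
```

With those assumed numbers the Index works out to roughly 13 PPD versus roughly 49 PPD for a desktop 1440p monitor, which is the "fight for every single pixel" point in numbers.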
itpro
Nvidia is desperate for marketing. They're trying to copy the "HDMI 2.1 is better than HDMI 2.0" play. 😛 DLSS 2.1 with new GPUs, with more features than DLSS 2.0. 😀
mbm
I don't get it..
Is DLSS software, but hardware dependent?
You write that DLSS 2 works with RTX 20xx/30xx, but at the same time DLSS 2 in Wolfenstein is only for the 3090 card?
geogan
Maybe this will all change in the next few days when Facebook announce their next-gen headsets.
Maybe it will have some form of next-generation foveated rendering or something else - anyone who has seen the videos of the research Facebook's VR division has done over the last few years trying to achieve this will know the massive amount of work and money they have put in, the number of failed prototypes and studies, and that none of this work has seen the light of day in a consumer headset yet.
I have a feeling we will have another 2080 Ti situation with current headsets (and all the software tricks currently in use becoming obsolete) when, or if, they come out.
What we need is intelligent foveated rendering with some form of eye tracking, as well as the whole multiple-focus depth-of-field stuff, and it all needs to work with Nvidia's GPUs to maximise efficiency and not waste time rendering edge-of-eye areas at the same resolution as the spot the eye is currently looking at - which should be rendered at full supersampled real resolution with no DLSS-type trickery in that area.
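As a purely illustrative sketch (the tiers, thresholds and scale factors are made up, and this is not any vendor's actual API), eye-tracked foveated rendering essentially comes down to picking a resolution or shading-rate scale per screen region based on its angular distance from the tracked gaze point:

```python
def foveation_scale(angle_from_gaze_deg):
    """Resolution scale for a screen tile, by angular distance from the gaze point.
    Thresholds and scales are made-up illustrative values."""
    if angle_from_gaze_deg <= 10:    # fovea: full (or supersampled) resolution
        return 1.0
    elif angle_from_gaze_deg <= 25:  # near periphery: half resolution
        return 0.5
    else:                            # far periphery: quarter resolution
        return 0.25

# Example: how a few tiles at increasing eccentricity would be shaded
for angle in (0, 8, 15, 30, 50):
    print(f"{angle:>2} deg from gaze -> render at {foveation_scale(angle):.2f}x resolution")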
Brogs
I have a Pimax 8KX, which already runs at high resolution in native mode with a 2080 Ti. Not sure what DLSS will do in my case?
XenthorX
Sounds interesting, will follow reviews.
Lex Luthor
Seems to me it should be much more valuable for VR than on a monitor.
schmidtbag
I would be a bit concerned about any added latency. The more post-processing you do, the more you're delaying each frame from rendering, and that contributes toward nausea. Perhaps it isn't significant enough.
Denial
schmidtbag
Denial
Bobdole776
If they want to try this in an already established game, they should get Frontier to add it to Elite Dangerous.
It'd be a good place to test it at least; same with No Man's Sky.
schmidtbag
Denial
https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/news/reflex-low-latency-platform/nvidia-reflex-end-to-end-systeme-latency-pipline.png
In our example the render latency is 16.7ms at 60fps. If you decrease that render latency to 7ms (144 fps) and the rest of the chain stays identical, then you have effectively lowered the time it takes for the final frame to reach your eye by 9.7ms.
It absolutely does lol..
If I can only render 1 frame per hour - how long is it going to take for the fully processed frame to reach my eye? More than an hour, right? So why would that change when the frame is done in 1/60th of a second?
Latency is composed of the following: peripheral (mouse) latency + CPU (OS/game) + render queue + GPU render time + composite + scanout/display.
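A small sketch of the arithmetic behind that 9.7ms figure, assuming (as in the example) that the render latency is simply one frame time, 1000/fps, and nothing else in the chain changes:

```python
def render_latency_ms(fps):
    """Render latency (render queue + GPU) is one frame time at the given framerate."""
    return 1000.0 / fps

saving = render_latency_ms(60) - render_latency_ms(144)
print(f"60 fps:  {render_latency_ms(60):.1f} ms per frame")   # ~16.7 ms
print(f"144 fps: {render_latency_ms(144):.1f} ms per frame")  # ~6.9 ms (~7 ms)
print(f"Saving:  {saving:.1f} ms off the end-to-end chain")   # ~9.7 ms
```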
schmidtbag
Denial
phawkins633
Who the hell cares? Nice to see on paper, along with its effect on the old 2000-series cards... but unfortunately, the fact that crypto-mining is back eliminates any availability of 30-series cards. They (the 30 series) aren't even released, yet somehow folks in China are able to purchase literal CASE loads. So I'm gonna say it will be a repeat of last time, and the 3080 will be well above $2000 when it is eventually available... fu88ing 2020.....
schmidtbag
Denial
https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/news/reflex-low-latency-platform/nvidia-reflex-end-to-end-systeme-latency-pipline.png
Let's say we have a GPU capable of doing 4K@60; it would look like: mouse latency 2ms + 4ms for CPU (OS/Game) + 16.7ms for render latency (native 4K image, Render Queue + GPU) + 1ms for composite + 10ms for scanout/display. Total of 33.7ms from the time you move your mouse to the time the frame hits your eye.
Now let's say we turn DLSS on, and let's say communication with the tensor cores adds 4ms of delay, but we're now getting 90fps @ 4K. It would look like: mouse latency 2ms + 4ms for CPU (OS/Game) + 7.11ms for render latency (native 1080p image) + 4ms for the DLSS/tensor upres to 4K (11.11ms total, i.e. 90fps, for Render Queue + GPU) + 1ms for composite + 10ms for scanout/display. Total of 28.11ms from the time you move your mouse to the time the frame hits your eye.
The only thing that's changing is the latency in the "Render Queue + GPU" stage, which is where the framerate comes from. So no matter what, if the framerate is increasing, that latency is decreasing (1000/60 vs 1000/90).
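A quick sketch that tallies that chain; the 2/4/1/10ms stage figures and the 4ms tensor-core overhead are the assumed example numbers from this post, not measurements:

```python
def end_to_end_ms(mouse, cpu, render, composite, scanout, dlss_overhead=0.0):
    """Sum the latency chain; render + dlss_overhead is the 'Render Queue + GPU' stage."""
    return mouse + cpu + render + dlss_overhead + composite + scanout

# Native 4K@60: one 16.7 ms frame time in the render stage
native_4k = end_to_end_ms(mouse=2, cpu=4, render=1000/60, composite=1, scanout=10)

# DLSS 4K@90: 1080p render (~7.11 ms) plus an assumed 4 ms tensor-core upres = 11.11 ms total
dlss_4k = end_to_end_ms(mouse=2, cpu=4, render=1000/90 - 4, composite=1, scanout=10,
                        dlss_overhead=4)

print(f"Native 4K@60: {native_4k:.1f} ms end to end")  # ~33.7 ms
print(f"DLSS   4K@90: {dlss_4k:.1f} ms end to end")     # ~28.1 ms
```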
Anyway, I feel like we're either talking past each other or have reached an impasse, so I'll leave it at this.
No it doesn't. That's my point. If the framerate with DLSS on is higher, then communication with the tensor cores is never adding to the render time versus the render time of a 4K native image; it's always decreasing it versus that native image. You can't have a higher framerate with a higher render latency - that's a contradiction. You can have a higher framerate with a higher total latency, because either the screen got slower, or you have a terrible mouse, or your OS is lagging. But the "Render Latency", i.e. the amount of time the GPU takes to process the frame from start to finish, is tied to the framerate. If the framerate goes up, it's because the GPU is processing frames in less time.
DLSS has a fixed "upres" time, probably associated with the render resolution.
Let's go back to the slide: