AMD files possible patent for NVIDIA DLSS alternative called Gaming Super Resolution

https://forums.guru3d.com/data/avatars/m/268/268248.jpg
Mineria:

Nothing wrong with it if it is of good quality, I had membranes worn out on plenty, only a few in between that lasted for years.
Microsoft Wired Keyboard 600... I had it 10 years, worked like a champ, survived a coffee spill too. I bought the same one, I just love it! Still works, just a lot of the keys finally faded, so I changed it :P
https://forums.guru3d.com/data/avatars/m/256/256969.jpg
slicer:

The latest MLID video shed some light on FSR.
Sorry, stopped reading at 'MLID'
https://forums.guru3d.com/data/avatars/m/56/56004.jpg
Coming close to the end of May, I'm getting kinda anxious to see what FSR brings to the table. Honestly, I don't expect the framerate uplift to be that ridiculous 200% being bandied about, but I'd be satisfied with an uplift of even 40% or higher with image quality close to native (I don't expect it to match NVIDIA's DLSS 2.0 either; after all, NVIDIA has a head start, and it's proprietary, guaranteed to work well within its walled ecosystem, much like Apple's iOS). As long as I can get a decent performance uplift with as little loss of PQ as possible (especially with RT enabled), I'd be one happy gamer. And IF the tech works on GTX 1000 series cards, all the better!
https://forums.guru3d.com/data/avatars/m/259/259654.jpg
All of this will basically be a tradeoff between the amount of processing power the shaders will need to use to upscale, versus what they would have needed to render natively. For even remotely comparable results to DLSS, it will need to use some sort of ML.
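The tradeoff above boils down to a simple break-even condition; a minimal sketch, with all timings entirely made up for illustration:

```python
# Hypothetical frame-time arithmetic (numbers are invented): shader-based
# upscaling only pays off when rendering at the lower resolution plus the
# upscale pass together cost less than rendering natively.

def upscaling_wins(t_low_res_ms, t_upscale_ms, t_native_ms):
    """True when low-res render + upscale beats native-resolution rendering."""
    return t_low_res_ms + t_upscale_ms < t_native_ms

# e.g. 8 ms render at a lower resolution + 3 ms shader upscale
# versus a hypothetical 16 ms native render:
print(upscaling_wins(8.0, 3.0, 16.0))   # True: ~5 ms saved per frame
# If the upscale pass itself gets too expensive, the win disappears:
print(upscaling_wins(8.0, 9.0, 16.0))   # False
```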
https://forums.guru3d.com/data/avatars/m/273/273323.jpg
If I had to guess, my bet would be that AMD's solution works akin to DLSS 1.9 (as it was called by Digital Foundry) in Control -- at the time, that version ran on the standard GPU cores (shader cores, IIRC? I'll have to rewatch their video) and did not run on the Tensor cores like the current version. A solution like that could theoretically be generalized, I would expect. Nvidia's requires dedicated hardware if I understand right (Tensor cores) and uses their AI training model. From what I gather there are big differences between 2.1 (current) and 1.9 (the generic one in Control), but I'm just speculating. To be honest, I really have no idea how AMD's will work -- will it require temporal anti-aliasing and motion vectors like DLSS does? Is it just a really good upscaling algorithm with temporal upsampling like Unreal's? Really curious to see how it ends up working, since Nvidia's solution requires the AI model.
data/avatar/default/avatar16.webp
BlindBison:

If I had to guess, my bet would be that AMD's solution works akin to DLSS 1.9 (as it was called by Digital Foundry) in Control -- at the time, that version ran on the standard GPU cores (shader cores, IIRC? I'll have to rewatch their video) and did not run on the Tensor cores like the current version. A solution like that could theoretically be generalized, I would expect. Nvidia's requires dedicated hardware if I understand right (Tensor cores) and uses their AI training model. From what I gather there are big differences between 2.1 (current) and 1.9 (the generic one in Control), but I'm just speculating. To be honest, I really have no idea how AMD's will work -- will it require temporal anti-aliasing and motion vectors like DLSS does? Is it just a really good upscaling algorithm with temporal upsampling like Unreal's? Really curious to see how it ends up working, since Nvidia's solution requires the AI model.
DLSS (whichever version) as released by NVIDIA always worked on the Tensor cores. If Control used a CUDA core based approach, then it was something they created themselves.

This patent from AMD is, much like almost all current patents, written by patent lawyers and as such almost impossible to trace back to the actual technical implementation of AMD's technology. This is on purpose: not to hide the way it works, but to cover as much IP ground as possible, in order to ensure that you can attack others trying to solve the same problem your patented solution solves. The broader the patent, the easier it becomes to hit others with infringement claims when they enter the same space.

The advantage that DLSS gives NVIDIA is that, because they use specialized hardware to execute it, they don't have to reserve capacity in the general-purpose parts of the GPU, which means those parts remain available for the rest of the rendering pipeline. With AMD's generic solution, they will have to tap into these general-purpose parts and thus take performance away from the rest of the rendering pipeline. How much that will be, and what the impact on the total rendering pipeline is, is something we will only see when they release their solution.
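The dedicated-hardware point can be sketched with a toy model; the timings and the assumption of perfect overlap are purely hypothetical:

```python
# Toy model (invented timings): an upscale pass on dedicated hardware
# (e.g. Tensor cores) can overlap shader work on separate units, while a
# shader-based pass competes for the same ALUs and serializes with it.

def frame_time_ms(t_render, t_upscale, dedicated_hw):
    if dedicated_hw:
        # Ideal pipelining: the longer of the two stages bounds frame time.
        return max(t_render, t_upscale)
    # Shader-based: the upscale pass simply adds to the frame time.
    return t_render + t_upscale

print(frame_time_ms(10.0, 2.0, dedicated_hw=True))   # 10.0
print(frame_time_ms(10.0, 2.0, dedicated_hw=False))  # 12.0
```

In practice the overlap is never perfect, but the model shows why a generic shader-based solution has to claw its gains from the same budget the renderer is already spending.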
https://forums.guru3d.com/data/avatars/m/273/273323.jpg
Crazy Joe:

DLSS (whichever version) as released by NVIDIA always worked on the Tensor cores. If Control used a CUDA core based approach, then it was something they created themselves.

This patent from AMD is, much like almost all current patents, written by patent lawyers and as such almost impossible to trace back to the actual technical implementation of AMD's technology. This is on purpose: not to hide the way it works, but to cover as much IP ground as possible, in order to ensure that you can attack others trying to solve the same problem your patented solution solves. The broader the patent, the easier it becomes to hit others with infringement claims when they enter the same space.

The advantage that DLSS gives NVIDIA is that, because they use specialized hardware to execute it, they don't have to reserve capacity in the general-purpose parts of the GPU, which means those parts remain available for the rest of the rendering pipeline. With AMD's generic solution, they will have to tap into these general-purpose parts and thus take performance away from the rest of the rendering pipeline. How much that will be, and what the impact on the total rendering pipeline is, is something we will only see when they release their solution.
In Digital Foundry's Control video where Alex dubs it "1.9", he specifically states that that particular version was "hand tuned" and ran on the shader cores, unlike all the other versions. It was a one-off from what I gather; all the other versions did use the Tensor cores, as you say.
https://forums.guru3d.com/data/avatars/m/282/282473.jpg
XenthorX:

Sorry, stopped reading at 'MLID'
Oh wait, wait, I'm getting some new info from my sources [SPOILER]https://media1.fdncms.com/riverfronttimes/imager/u/slideshow/2600028/butt_head.jpg[/SPOILER]
BlindBison:

If I had to guess, my bet would be that AMD's solution works akin to DLSS 1.9 (as it was called by Digital Foundry) in Control -- at the time, that version ran on the standard GPU cores (shader cores, IIRC? I'll have to rewatch their video) and did not run on the Tensor cores like the current version. A solution like that could theoretically be generalized, I would expect. Nvidia's requires dedicated hardware if I understand right (Tensor cores) and uses their AI training model. From what I gather there are big differences between 2.1 (current) and 1.9 (the generic one in Control), but I'm just speculating. To be honest, I really have no idea how AMD's will work -- will it require temporal anti-aliasing and motion vectors like DLSS does? Is it just a really good upscaling algorithm with temporal upsampling like Unreal's? Really curious to see how it ends up working, since Nvidia's solution requires the AI model.
Even if it's half as good as DLSS 2.0, not releasing it is far worse.