AMD FidelityFX Super Resolution 2.0 - Deathloop preview

https://forums.guru3d.com/data/avatars/m/80/80129.jpg
I've been watching comparison videos and reading articles (including this one); it seems like this is a huge leap over FSR1 but still falls a little short of DLSS in motion. That being said, I don't know how Nvidia can justify dedicating die space to tensors when the quality of DLSS is only 5-10% better than this and basically no other feature uses them. Either Nvidia needs to put more value-add into tensors or they need to go a different route with DLSS. Perhaps we will get a DLSS 3 with larger upgrades, or some other features that utilize tensors with next gen, but at this point I'd say AMD has parity here.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Perhaps not as good as DLSS, but its ease of implementation and wide compatibility make up for the loss in quality. Much like with DLSS, if you've got your face right up to the screen, not moving, and constantly switching between full detail and upscaling, you will notice a difference. But if you actually play the game, such subtle details will go totally unnoticed. In a lot of games, FSR2 or DLSS2 are perfectly usable; in other games, they're unacceptable. It all depends on what you need. The same can be said of all settings that lower game detail, whether it's shadows, reflections, texture details (only when VRAM is maxed out), anti-aliasing, resolution, etc. For some games, lowering such detail settings makes an insignificant visual difference but a major performance improvement. For some of those settings, the loss in detail is unacceptable, while in others it goes unnoticed. As far as I'm concerned, FSR and DLSS aren't supposed to be on by default. They're just another way of lowering visual fidelity for more performance, just like all other graphics settings. It's weird to me how much people make a fuss about this as if they're supposed to use it, or as if it's supposed to have a 1:1 level of detail.
https://forums.guru3d.com/data/avatars/m/248/248291.jpg
The HU review showed that DLSS 2.x has a bit more ghosting than FSR 2.0, but FSR 2.0 has a bit worse reconstruction in movement, so it's a bit of a trade-off.
https://forums.guru3d.com/data/avatars/m/260/260048.jpg
Keeping in mind that this is open source, fully software-based, works with practically any GPU, and doesn't require dedicated die space, it's a pure win. Well done, AMD.
https://forums.guru3d.com/data/avatars/m/186/186805.jpg
cryohellinc:

Keeping in mind that this is open source, fully software-based, works with practically any GPU, and doesn't require dedicated die space, it's a pure win. Well done, AMD.
This is what I have been saying for years. Nvidia could have easily done this themselves with their insane R&D budgets and huge workforce. But no, they create needless custom hardware and then charge the customer for the privilege of using it, trying to create an even larger monopoly on the market. Much like they did with GSYNC and their custom hardware inside each monitor when they could have just supported Adaptive Sync, or when they bought PhysX off Ageia and locked it to their hardware only. Then community members were able to enable PhysX on AMD/ATi cards with modded drivers, proving again that Nvidia lied when they said it was only possible on their hardware. Now PhysX is used inside tons of game engines. Nvidia makes exceptional GPUs, but their business practices are just ludicrous. AMD are not innocent either; they have done a lot of shady crap too, it just seems to be to a lesser extent, and their immense support of open source software benefits the industry as a whole. EDIT: ***grabs popcorn***.....
https://forums.guru3d.com/data/avatars/m/235/235224.jpg
Denial:

I've been watching comparison videos and reading articles (including this one); it seems like this is a huge leap over FSR1 but still falls a little short of DLSS in motion. That being said, I don't know how Nvidia can justify dedicating die space to tensors when the quality of DLSS is only 5-10% better than this and basically no other feature uses them. Either Nvidia needs to put more value-add into tensors or they need to go a different route with DLSS. Perhaps we will get a DLSS 3 with larger upgrades, or some other features that utilize tensors with next gen, but at this point I'd say AMD has parity here.
[youtube=yl1jkmF7Xug] May end up seeing this down the line. FSR 2.0's quality mode definitely delivers here, much better than 1.0. Good job AMD.
https://forums.guru3d.com/data/avatars/m/251/251189.jpg
It's ok for 4K, but with a 1440p screen I still find it too rough vs. DLSS. And that is with DLSS Balanced (+ReShade CAS) vs. FSR 2.0 Quality. FSR 2.0 Balanced already looks very bad with a 1440p target resolution. Looks like DL isn't all about marketing BS.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
CPC_RedDawn:

This is what I have been saying for years. Nvidia could have easily done this themselves with their insane R&D budgets and huge workforce. But no, they create needless custom hardware and then charge the customer for the privilege of using it, trying to create an even larger monopoly on the market. Much like they did with GSYNC and their custom hardware inside each monitor when they could have just supported Adaptive Sync, or when they bought PhysX off Ageia and locked it to their hardware only. Then community members were able to enable PhysX on AMD/ATi cards with modded drivers, proving again that Nvidia lied when they said it was only possible on their hardware. Now PhysX is used inside tons of game engines. Nvidia makes exceptional GPUs, but their business practices are just ludicrous. AMD are not innocent either; they have done a lot of shady crap too, it just seems to be to a lesser extent, and their immense support of open source software benefits the industry as a whole. EDIT: ***grabs popcorn***.....
Idk lol, there's some rewriting of history here. GSync was out nearly a year before FreeSync and years before Adaptive Sync. It provided more features than FreeSync, even two years after FreeSync released, and it also provided a more standard platform. You knew what you were getting when buying a GSync monitor; FreeSync feature support was all over the place back then. PhysX, AFAIK, was never enabled on AMD/ATi cards. You may have been thinking of when Nvidia locked PhysX on its own cards when an AMD card was in the system - that was bypassed, but I don't think anyone ever got PhysX to actually run on the AMD card itself. I could be wrong. Either way, PhysX has been completely rewritten like four times, so it now running on the CPU in various games is kind of meaningless. Tensors are a product of HPC shoved down into consumer GPUs. Most of that R&D was paid for by Nvidia HPC. Either way, I don't think there's a reality where Nvidia releases a non-tensor card that's cheaper. We would have just gotten the same cards, with a DLSS that doesn't use tensors, for the same price. Also, while tensors aren't used for much now (DLSS, denoising in RT, microphone cleanup stuff, some picture enhancements, etc.), there's definitely a massive advantage to having them for potential future applications. Ignoring obvious improvements to current features (who's to say DLSS 3.0 won't massively increase performance?), DirectML is starting to take shape and it can leverage those cores. I think having the hardware creates a potential that AMD doesn't have. Whether or not we'll see something that utilizes that potential is a different story... but Nvidia is king of value-add, so I'm sure it's coming. Also, Nvidia does have a similar solution, NIS, and it's also open source, but it's not a temporal solution - it may become one in response to this, though. Dunno.
data/avatar/default/avatar32.webp
Has nobody noticed yet that FSR2 Quality (all FSR2 modes, actually) either kills particle effects or they have been disabled? FSR2 Quality shows no sparks, faeries, or added flames at all at 4K. It's impressive for static geometry, I'll give it that.
https://forums.guru3d.com/data/avatars/m/108/108389.jpg
schmidtbag:

As far as I'm concerned, FSR and DLSS aren't supposed to be on all the time. They're just another way of lowering visual fidelity for more performance, just like all other graphics settings. It's weird to me how much people make a fuss about this as if they're supposed to use it, or as if it's supposed to have a 1:1 level of detail.
DLSS and FSR 2.0 allow for playable 4K gaming on mainstream GPUs, and I can assure you that 4K DLSS/FSR 2.0 looks way better than 1440p native 🙂 Sure, if you plan to stick to 1080p forever, then yeah, DLSS/FSR 2.0 are not for you.
https://forums.guru3d.com/data/avatars/m/63/63215.jpg
CPC_RedDawn:

This is what I have been saying for years. Nvidia could have easily done this themselves with their insane R&D budgets and huge workforce. But no, they create needless custom hardware and then charge the customer for the privilege of using it, trying to create an even larger monopoly on the market. Much like they did with GSYNC and their custom hardware inside each monitor when they could have just supported Adaptive Sync, or when they bought PhysX off Ageia and locked it to their hardware only. Then community members were able to enable PhysX on AMD/ATi cards with modded drivers, proving again that Nvidia lied when they said it was only possible on their hardware. Now PhysX is used inside tons of game engines.
Why does this PhysX BS still exist? ATi chose not to work with Nvidia on PhysX, which is why Nvidia locked them out. ATi chose to partner with Havok for their physics implementation, which bore no fruit. It's a fact that ATi themselves were against PhysX and hardware-accelerated physics in the first place, whereas Nvidia saw the potential and made it mainstream. Mainstream enough that even consoles used PhysX. Your version of PhysX history is just wrong. What we really see with Nvidia is that they set the standard and ATi/AMD are always playing catch-up. DX12 is a testament to what happens when you let AMD push their agenda. You get BS like async compute, which AMD was more than willing to peddle as the next big thing, when in reality it was a last-minute change in the DX12 spec that Nvidia wasn't prepared for. The fact we now have ray tracing, even in a hybrid form, is down to Nvidia. Now AMD has RT. DLSS was also Nvidia; now AMD has FSR. If it wasn't for Nvidia, those features wouldn't even exist.
data/avatar/default/avatar13.webp
Denial:

Idk lol, there's some rewriting of history here. GSync was out nearly a year before FreeSync and years before Adaptive Sync. It provided more features than FreeSync, even two years after FreeSync released, and it also provided a more standard platform. You knew what you were getting when buying a GSync monitor; FreeSync feature support was all over the place back then. PhysX, AFAIK, was never enabled on AMD/ATi cards. You may have been thinking of when Nvidia locked PhysX on its own cards when an AMD card was in the system - that was bypassed, but I don't think anyone ever got PhysX to actually run on the AMD card itself. I could be wrong. Either way, PhysX has been completely rewritten like four times, so it now running on the CPU in various games is kind of meaningless. Tensors are a product of HPC shoved down into consumer GPUs. Most of that R&D was paid for by Nvidia HPC. Either way, I don't think there's a reality where Nvidia releases a non-tensor card that's cheaper. We would have just gotten the same cards, with a DLSS that doesn't use tensors, for the same price. Also, while tensors aren't used for much now (DLSS, denoising in RT, microphone cleanup stuff, some picture enhancements, etc.), there's definitely a massive advantage to having them for potential future applications. Ignoring obvious improvements to current features (who's to say DLSS 3.0 won't massively increase performance?), DirectML is starting to take shape and it can leverage those cores. I think having the hardware creates a potential that AMD doesn't have. Whether or not we'll see something that utilizes that potential is a different story... but Nvidia is king of value-add, so I'm sure it's coming. Also, Nvidia does have a similar solution, NIS, and it's also open source, but it's not a temporal solution - it may become one in response to this, though. Dunno.
Do you ever wonder why AMD never took the path of NVIDIA? Do you really think that AMD is so stupid? I remind you that AMD works hand in hand with Microsoft on the consoles; Nvidia has done so only once, if I'm not mistaken, and AMD is the only one that fully supports open source, with FreeSync, FSR and so on. Explain to me why its so-called gaming range of cards does not have the deep learning that Nvidia has with DLSS... AMD simply wanted a more friendly approach.
https://forums.guru3d.com/data/avatars/m/63/63215.jpg
=GGC=Phantomblu:

Do you ever wonder why AMD never took the path of NVIDIA? Do you really think that AMD is so stupid? I remind you that AMD works hand in hand with Microsoft on the consoles; Nvidia has done so only once, if I'm not mistaken, and AMD is the only one that fully supports open source, with FreeSync, FSR and so on. Explain to me why its so-called gaming range of cards does not have the deep learning that Nvidia has with DLSS... AMD simply wanted a more friendly approach.
It's nothing to do with intelligence, it's called lack of resources, otherwise known as lack of money.
data/avatar/default/avatar33.webp
Stormyandcold:

Why does this PhysX BS still exist? ATi chose not to work with Nvidia on PhysX, which is why Nvidia locked them out. ATi chose to partner with Havok for their physics implementation, which bore no fruit. It's a fact that ATi themselves were against PhysX and hardware-accelerated physics in the first place, whereas Nvidia saw the potential and made it mainstream. Mainstream enough that even consoles used PhysX. Your version of PhysX history is just wrong. What we really see with Nvidia is that they set the standard and ATi/AMD are always playing catch-up. DX12 is a testament to what happens when you let AMD push their agenda. You get BS like async compute, which AMD was more than willing to peddle as the next big thing, when in reality it was a last-minute change in the DX12 spec that Nvidia wasn't prepared for. The fact we now have ray tracing, even in a hybrid form, is down to Nvidia. Now AMD has RT. DLSS was also Nvidia; now AMD has FSR. If it wasn't for Nvidia, those features wouldn't even exist.
You believe this too? If you go to Khronos, you will see that the developers work together on implementing the standards, so the knowledge is more or less the same... AMD's OpenCL support is stuck at 2.0 while Nvidia's drivers are at 3.0, and it's not because AMD is chasing it... they probably believe that certain other areas still have more room to explore...
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
=GGC=Phantomblu:

Do you ever wonder why AMD never took the path of NVIDIA?
Because it didn't have the money to build a multi-billion-dollar AI software ecosystem. Pretty much all the technology for RT/DLSS is just a consumer variant of HPC/professional technology. If you go and look at any of the white papers for RT, DLSS (a convolutional autoencoder), etc., it's pretty much all developed while looking for solutions to datacenter problems or, in RT's case, self-driving cars.
=GGC=Phantomblu:

Do you really think that AMD is so stupid?
I don't recall stating AMD is stupid. In fact, I praised them in my first post in this thread.
=GGC=Phantomblu:

I remind you that AMD works hand in hand with Microsoft on the consoles
I'm not sure what this has to do with anything. Microsoft isn't really known for being a company that's innovative. Most of these features are developed by AMD/Nvidia and then those companies work with Microsoft on implementing broader support through APIs.
=GGC=Phantomblu:

Nvidia has done so only once, if I'm not mistaken, and AMD is the only one that fully supports open source, with FreeSync, FSR and so on
Okay?
=GGC=Phantomblu:

Explain to me why its so-called gaming range of cards does not have the deep learning that Nvidia has with DLSS... AMD simply wanted a more friendly approach.
AMD does have acceleration for tensor operations on its consumer cards... it just doesn't have discrete hardware for it, and for all we know that changes next gen - CDNA 1 and now CDNA 2 have Matrix Cores. It's just not known if they will hit consumer variants. They've also been working with Microsoft on developing AI upscalers. Their efforts just haven't borne any real products yet.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Krizby:

DLSS and FSR 2.0 allow for playable 4K gaming on mainstream GPUs, and I can assure you that 4K DLSS/FSR 2.0 looks way better than 1440p native 🙂 Sure, if you plan to stick to 1080p forever, then yeah, DLSS/FSR 2.0 are not for you.
You kinda missed my point... DLSS and FSR are just one of many methods of sacrificing some detail in order to attain higher refresh rates or higher resolutions (or both), which is fine. The same goes for any other graphics setting you decrease. Some settings have more of a performance impact than others; some settings have a greater visual impact than others. It all depends on the game and personal preferences. In most cases, I would argue DLSS and FSR yield a much greater performance gain than fidelity loss.
https://forums.guru3d.com/data/avatars/m/174/174772.jpg
schmidtbag:

You kinda missed my point... DLSS and FSR are just one of many methods of sacrificing some detail in order to attain higher refresh rates or higher resolutions (or both), which is fine. The same goes for any other graphics setting you decrease. Some settings have more of a performance impact than others; some settings have a greater visual impact than others. It all depends on the game and personal preferences. In most cases, I would argue DLSS and FSR yield a much greater performance gain than fidelity loss.
You could look at it differently and say that DLSS is a method to get more detail at playable framerates; running those heavily ray-traced games is not a light task. Just saying, the increase is in some cases bigger than the decrease that DLSS/FSR brings along.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
Undying:

Less ghosting with FSR 2.0 https://www.linkpicture.com/q/Untitled_18.png
What version of DLSS is this game using now? I know when it shipped most people complained about the ghosting and manually updating the DLL fixed it, but did they ever officially update it?
https://forums.guru3d.com/data/avatars/m/198/198862.jpg
Denial:

What version of DLSS is this game using now? I know when it shipped most people complained about the ghosting and manually updating the DLL fixed it, but did they ever officially update it?
@Krizby mentioned it's using version 2.3.0. It's not the one the game shipped with but an updated one, so they probably did update it officially. Didn't check myself; I only used DLSS Swapper for Dying Light 2.