AMD FidelityFX Super Resolution in Ultra Quality shows visible quality loss

Still, DLSS is a deep learning technique that uses artificial intelligence to improve rendered frames, while FidelityFX Super Resolution looks to be merely an algorithmic spatial scaler. As such, FSR does not use any machine learning or inference, and while it is an interesting tool, I think we can already safely state that it is in no way comparable to an AI-powered image scaling system.
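To illustrate what "algorithmic spatial scaler" means in practice, here is a minimal Python/Pillow sketch that resamples a single low-resolution frame with a fixed filter and then sharpens it. This is not AMD's actual FSR algorithm (which had not been published at the time), and the file names and the 1.3x factor are made-up placeholders; the point is only that a spatial scaler involves no training data, no inference, and no temporal history.
[code]
# Hypothetical illustration of a purely spatial upscaler: resample one frame
# with a fixed mathematical filter, then sharpen. No neural network, no
# training, no data from previous frames. NOT AMD's actual FSR code.
from PIL import Image, ImageFilter

def spatial_upscale(path_in, path_out, scale=1.3):
    low = Image.open(path_in)
    target = (int(low.width * scale), int(low.height * scale))
    up = low.resize(target, resample=Image.LANCZOS)   # fixed Lanczos filter
    # Unsharp mask to restore some perceived edge contrast lost in resampling.
    up = up.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=2))
    up.save(path_out)

# Placeholder file names for a single captured frame.
spatial_upscale("frame_lowres.png", "frame_upscaled.png", scale=1.3)
[/code]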
Well written. DLSS depends on training [youtube=QJHUbtR0yI8] but a spatial upscaler will always lose detail, without any possibility to reconstruct it.
On the other hand, DLSS and FFX are both there to compensate for the real problem: the GPU doesn't handle RT the way those companies promised consumers. Because even with the latest DLSS, an experienced eye can see that the speed comes at a dramatic cost.
Personally, I think that if you're going to take 100+ consecutive images from a 100 FPS game and look at them individually for 'razor sharpness' then you're a bloody idiot.
Of course it does - without AI HW to "predict" the missing pixels this is nothing but a gimmick by AMD - a fake alternative to DLSS... it's not the first time for AMD... kind of like how they branded a standard PCIE spec feature (resizable bar) as "smart access" (lol, always makes me laugh)... but you know what? The fanboys gobble it up and it's working, so why not?
ViperAnaf:

...without AI HW to "predict" ...
/sigh, you don't need AI hardware to predict anything. You people really need to get that in your heads.
ViperAnaf:

Of course it does - without AI HW to "predict" the missing pixels this is nothing but a gimmick by AMD - a fake alternative to DLSS... it's not the first time for AMD... kind of like how they branded a standard PCIE spec feature (resizable bar) as "smart access" (lol, always makes me laugh)... but you know what? The fanboys gobble it up and it's working, so why not?
Well, apart from what you're saying being nonsense, are you then OK with Nvidia branding industry standards under their own names? 🙂
I'm sorry, but: "...a Reddit user grabbed a video where the FSR performance advantages were shown at 4K resolution, capturing a few frames of the technology comparing 'Ultra Quality' (the one with the least loss of graphic quality) in the form of a BMP file so that it does not lose quality..."

We are still talking about taking a snapshot from a compressed video, no matter what image format you use. It's never going to match what's actually on screen, especially as YT videos are actually compressed twice: once during recording and again during YT processing, so a YT video is never as good as the original.

Besides, upscaling has worse quality than native, so why is this news? DLSS is no different, even if the latest iterations are very good. The question is whether the resulting image is good enough for one to make that performance-quality tradeoff. I won't even look at any "video grabs" but will judge for myself once the tech is released. Trying to assess image quality with lossy sources is about as pointless as it gets.

"...The user enlarged the image 4x in order to see the visual differences between the native quality and FSR in Ultra..."

If you need to zoom in on an image to see a quality loss, I'd say the upscaling works very well. AMD would do well, though, to release a few uncompressed screenshots from actual gameplay, and from a game that is not readily blurry.
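If anyone does want a number instead of zoomed-in video grabs, the honest way is to compare two lossless captures of the exact same frame at the same output resolution, for example with PSNR. A minimal sketch, with made-up file names:
[code]
# Rough sketch: PSNR between a native-resolution capture and an upscaled one.
# Both files must be lossless captures of the same frame at the same output
# resolution, otherwise the number is as meaningless as a YouTube grab.
import numpy as np
from PIL import Image

def psnr(path_a, path_b):
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(255.0 ** 2 / mse)

# Hypothetical file names.
print(f"PSNR: {psnr('native_4k.png', 'fsr_ultra_4k.png'):.2f} dB")
[/code]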
I guess I should hold on to my old 1080 Ti then 😛
Moderator
Nvidia has been into deep learning for so long, it would be hard for AMD to catch up. I guess this is the next best thing as an alternative. And while the capture is from a YouTube video, which has tons of compression, I honestly do not think it will ever be as good as Nvidia's method right now.
Not that impressive, but I think judging this kind of stuff with still images is a little misleading. No one is going to catch minor details while playing a high-FPS game. To check if FidelityFX is doing any good, we should have 3 videos of the same game scene: one with everything maxed at 4K, one with the same settings and FidelityFX ON, and one with no FidelityFX but lowered settings to achieve the same FPS achieved with FidelityFX. This way you could see if FidelityFX is actually NOTICEABLY better than just lowering some settings here and there.
WhiteLightning:

Nvidia has been into deep learning for so long, it would be hard for AMD to catch up. I guess this is the next best thing as an alternative. And while the capture is from a YouTube video, which has tons of compression, I honestly do not think it will ever be as good as Nvidia's method right now.
It won't. I did try DLSS in Cyberpunk, I think, and found that the best quality and second best were acceptable; the rest introduced quite a bit of artifacting, with Performance being the worst of course. So I am hopeful this doesn't mess up too badly and ends up somewhere close.
WhiteLightning:

Nvidia has been into deep learning for so long, it would be hard for AMD to catch up. I guess this is the next best thing as an alternative. And while the capture is from a YouTube video, which has tons of compression, I honestly do not think it will ever be as good as Nvidia's method right now.
Perhaps not, that remains to be seen. However, I want to make the point that enlarging an image from a compressed source to show a quality "loss" is hardly an accurate measure. If AMD can produce solid image quality with their solution, mission accomplished I'd say, be it equal or inferior to DLSS.
beedoo:

Personally, I think that if you're going to take 100+ consecutive images from a 100 FPS game and look at them individually for 'razor sharpness' then you're a bloody idiot.
And taking those images from a video of a game is even bloodier.
ViperAnaf:

kind of like how they branded a standard PCIE spec feature (resizable bar) as "smart access"
And as soon as they did, NVIDIA started adding support for it too. Makes you think donnit...
WhiteLightning:

Nvidia has been into deep learning for so long, it would be hard for AMD to catch up.
I think you're being wholly irresponsible by saying this.
Both are like a fancy pink ribbon on a wooden leg...
AsiJu:

And as soon as they did, NVIDIA started adding support for it too. Makes you think donnit...
Even more so given that the feature was launched and documented years before they got interested in it... "If you do it, it might be interesting for me too... hurry up, I have to buy a new leather vest"
beedoo:

/sigh, you don't need AI hardware to predict anything. You people really need to get that in your heads.
lol, yes you do.
Moderator
The better the AI hardware is at predicting things, the more data it can process and the faster the results will be.
Astyanax:

lol, yes you do.
All you've done is shown you don't have a clue either...
WhiteLightning:

The better the AI hardware is at predicting things, the more data it can process and the faster the results will be.
That's the only thing you've said that marginally makes sense. Better hardware will make it faster - that's about it. Outside of this, everything you need to do to replicate DLSS can be done in software.
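To make that last point concrete: the core operation of an upscaling network like DLSS is the convolution, which is plain multiply-accumulate arithmetic. Here is a minimal CPU-only sketch with NumPy (the frame and kernel are made up); dedicated tensor hardware runs the same math, just orders of magnitude faster, which is what makes it viable in real time.
[code]
# Minimal sketch: a 2D convolution -- the building block of image-upscaling
# neural networks -- running on a plain CPU with NumPy. No AI hardware needed,
# it is only (much) slower. The frame and kernel here are made up.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

frame = np.random.rand(1080, 1920)          # stand-in for one channel of a frame
edge_kernel = np.array([[ 0, -1,  0],
                        [-1,  4, -1],
                        [ 0, -1,  0]], dtype=float)
print(conv2d(frame, edge_kernel).shape)     # (1078, 1918)
[/code]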
You don't need RT cores to do RT either, see? A GTX 1080 Ti runs it too https://i.imgur.com/enGoy0t.jpeg
beedoo:

Better hardware will make it faster - that's about it.
and no hardware will make it...... ?
beedoo:

everything you need to do to replicate DLSS can be done in software.
To an identical final result and performance? Interesting, because I've never seen any publication comparing hardware and software DLSS, but if you claim the results are the same... I mean, you've got to have data confirming it, you're just not willing to share it.