AMD Teases FidelityFX Super Resolution 3.0 at GDC 2023: What You Need to Know
reslore
So they're making fake interpolated frames like Nvidia. Probably more blur on top of the upscaling blur. I am not interested in fake frames that give motion sickness and strange artifacts.
Kaarme
mikeysg
I've not used FSR in any of the games I play on my main rig, but on my 2nd rig, with the RX 6900 XT hooked up to a 4K TV, FSR is inevitably needed. I played DSR with FSR set to 'Quality' and it does help, bringing the framerate up to a playable 55fps and higher (I'll have to check whether I had RTAO enabled). For a game like DSR, even mid-40fps is playable, and since I don't pixel peep like Steve of DF, it looks pretty dang good!
So, understandably, I'm curious as to what FSR3 brings to the table in terms of PQ and framerate, and I hope it runs well on the RX 6000/5000 series of AMD cards at the very least.
Neo Cyrus
wavetrex
nVidia: You need to buy our fancy new ADA 4000 series GPUs to get fake frames 😎
AMD: We have fake frames too, but our fake frames work on Ryzen 3 2200G 😀
pegasus1
Option 1 - Buy a top-end card and play 4K max eyecandy without this technology and without compromise.
or
Option 2 - Buy a mid-range card and play 4K max eyecandy with this technology and with some compromise.
Or sit in your mum's basement eating crisps, telling multi-billion-dollar tech companies they are doing it all wrong.
#justsaying
GoldenTiger
Nvidia DLSS3 frame generation works really well, by the user reports I've read, so long as you start with 60 or so fps. AMD, as usual, copies innovation and is way late with much less adoption.
They said H1 2023 when they launched their 7900 XT/7900 XTX, but here we are in late March and it's "too early to show"? Bleh.
DLSS3 is already confirmed for a ton of titles, with more to be announced. Is this going to be the same as with DLSS 2, where FSR adoption lags far behind Nvidia?
cucaulay malkin
pegasus1
I've never used any of this tech, to be honest. The most graphically demanding game I currently play is Metro EE, and at 4K with full RT and max settings it really flies along. Even CP2077 does 60fps according to benchmarks, so until something more demanding comes along I'll not be using it.
Kaarme
schmidtbag
I think frame generation is fine in some cases; it just depends on the game. The same goes for supersampling, playing a game at 60FPS rather than 240FPS, playing at 1080p with AA vs 4K, and so on. When you have to make sacrifices, there is no one-size-fits-all solution for a better experience. Obviously, if we all had our way, we'd be playing everything at native 4K+ with AA and 300FPS of genuine frames.
I predict something like stackable GPU cores. The high frequencies we see today would have to be lowered for each additional layer, but being able to effectively double the compute power per square millimeter is a big deal. I presume this would be easier to implement than a chiplet design, and it might in fact work even better, since data has less distance to travel. It would be more expensive to manufacture, but once facilities are equipped to do this regularly, costs would likely go down.
Otherwise, I predict things will turn out like the old days, where people simply have to learn to stop being so lazy about coding. While I have griped many times here in the past about optimizing software to have a smaller disk and memory footprint, devs have also been super lazy about writing code that uses fewer cycles. Sometimes performance losses come from things as simple as using a char variable/field to store an integer, or as complex as failing to take advantage of hardware instructions. Usually, though, it's just about making code simpler, like doing 1+1+1+1=4 rather than just 1*4=4 (a toy illustration in code after this post). Obviously that's a pretty stupid example, but it demonstrates that there are multiple ways to skin a cat, and many of them are a lot worse than others. There have been times I've reduced code to 1/4 of its original size while retaining the exact same functionality. You may ask "how do you know this is a prevalent issue?" and the answer speaks for itself: if software has a lot of bugs/glitches, hacky hotfixes, memory leaks, or warnings while compiling, those developers could not possibly have spent enough time writing more efficient code.
I guess it is worth pointing out that you can have rock solid software that is very inefficient, but it's not possible to have unstable software that is very efficient. There's a lot of unstable software out there.
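To make the 1+1+1+1 point concrete, here's a deliberately silly C sketch (my own toy example, not from any real codebase) of doing the same work in cheaper and dearer ways:

```c
#include <stdio.h>
#include <stdlib.h>

/* "1+1+1+1" style: repeated addition in a loop. */
int scale_naive(int value, int factor) {
    int result = 0;
    for (int i = 0; i < factor; i++)
        result += value;        /* factor additions */
    return result;
}

/* "1*4" style: the same result in a single operation. */
int scale_simple(int value, int factor) {
    return value * factor;
}

/* Storing numbers as text forces a parse on every use. */
int add_as_text(const char *a, const char *b) {
    return atoi(a) + atoi(b);   /* two string parses per addition */
}

int main(void) {
    printf("%d %d\n", scale_naive(1, 4), scale_simple(1, 4)); /* 4 4 */
    printf("%d\n", add_as_text("2", "3"));                    /* 5   */
    return 0;
}
```

(To be fair, a modern compiler will usually fold the naive loop into a multiply at -O2; the pattern matters most in the cases that don't optimize away, like the text-parsing one.)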
CPC_RedDawn
TimmyP
Extrapolation. If it were interpolation, frame rates would double in every case, it would look like ****, and it would have been incorporated a long time ago.
wavetrex
Just so everyone here knows, this has existed for a while: https://nmkd.itch.io/flowframes
And in the most recent versions it's really, really good; the glitches are minimal and usually only occur when fixed text or logos are displayed over the video.
AMD might use a similar algorithm, but with the game's motion vectors instead of estimating motion vectors by comparing several video frames, and it doesn't need to worry about sharp text glitching, since the UI is composited after the interpolation.
Ah yeah, and it works with Vulkan, so it's GPU agnostic.
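For anyone wondering what interpolating with motion vectors actually looks like mechanically, here's a bare-bones C sketch (my own toy, not AMD's or Flowframes' actual algorithm): it synthesizes a midpoint frame by warping the previous frame halfway along each pixel's motion vector.

```c
/* One RGBA8 pixel. */
typedef struct { unsigned char r, g, b, a; } Pixel;

/* Per-pixel motion, in pixels, since the previous frame. */
typedef struct { float dx, dy; } Motion;

/*
 * Naive midpoint interpolation: fetch each output pixel from the
 * previous frame, half a motion vector back. Real interpolators also
 * blend in the next frame, handle occlusion/disocclusion, and fall
 * back where the vectors are unreliable -- all omitted here.
 */
void interpolate_midpoint(const Pixel *prev, const Motion *mv,
                          Pixel *out, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int i = y * w + x;
            int sx = x - (int)(mv[i].dx * 0.5f);
            int sy = y - (int)(mv[i].dy * 0.5f);
            /* Clamp to the frame so we never read out of bounds. */
            if (sx < 0) sx = 0; else if (sx >= w) sx = w - 1;
            if (sy < 0) sy = 0; else if (sy >= h) sy = h - 1;
            out[i] = prev[sy * w + sx];
        }
    }
}
```

The warp itself is trivial; the hard part (and where the text/logo glitches come from) is deciding what to do where motion vectors disagree, i.e. around occlusions and overlays that don't move with the scene.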
fellix
wavetrex
FSR 2.x already has access to the rendering pipeline, and most likely 3.x will as well.
I for one can't wait to try it on my "outdated" 6800 XT and see what this frame-generation fuss is all about.
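For context, "access to the rendering pipeline" concretely means the game hands FSR2 its internal buffers every frame. Roughly, using the public FidelityFX FSR2 C API (field names from the GPUOpen SDK; take this as a sketch of the integration, not drop-in code):

```c
#include "ffx_fsr2.h"  /* FidelityFX FSR2 SDK */

/* Per-frame upscale dispatch; the resource handles come from the
 * engine's renderer, already wrapped as FfxResource. */
void upscale_frame(FfxFsr2Context *ctx, FfxCommandList cmdList,
                   FfxResource color, FfxResource depth,
                   FfxResource motionVectors, FfxResource output,
                   float jitterX, float jitterY,
                   uint32_t renderW, uint32_t renderH, float dtMs)
{
    FfxFsr2DispatchDescription d = {0};
    d.commandList       = cmdList;
    d.color             = color;          /* low-res scene color      */
    d.depth             = depth;          /* scene depth buffer       */
    d.motionVectors     = motionVectors;  /* per-pixel motion         */
    d.output            = output;         /* full-res upscaled image  */
    d.jitterOffset.x    = jitterX;        /* this frame's camera jitter */
    d.jitterOffset.y    = jitterY;
    d.renderSize.width  = renderW;
    d.renderSize.height = renderH;
    d.frameTimeDelta    = dtMs;           /* milliseconds             */
    d.cameraNear        = 0.1f;           /* example camera values    */
    d.cameraFar         = 1000.0f;
    d.cameraFovAngleVertical = 1.0f;      /* radians                  */
    ffxFsr2ContextDispatch(ctx, &d);
}
```

Since FSR2 already receives depth and motion vectors like this, FSR3's frame generation presumably has the same engine data to work with, which is exactly wavetrex's point about not having to estimate motion from finished video frames.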
Undying
H83
It seems AMD and Intel have no option but to copy Nvidia's features, even if some of them don't make much sense...
Everyone, kneel before the true power of marketing!!! :(
SpajdrEX