AMD's FSR 3 to Remain Exclusive to Company's Graphics Cards

When will we finally see even a 10-second snippet of this?
Well, I'm curious to see how 4 interpolated frames instead of 1 are dealing with ghosting (which at times is a major issue with all those techs).
fantaskarsef:

Well, I'm curious to see how 4 interpolated frames instead of 1 are dealing with ghosting (which at times is a major issue with all those techs).
Much worse than only using 1 frame, that's for sure. Any misprediction will be far more obvious when only 1/5th of the frames are "real", rather than half of them.
More fake frames for everyone! Yummy...
Can't wait for AMD to block out DLSS3/XeSS in more games
nevcairiel:

Much worse than only using 1 frame, that's for sure. Any misprediction will be far more obvious when only 1/5th of the frames are "real", rather than half of them.
I suspect the same. Effectively, if you have 1 extra frame after every originally rendered one, it's already at least 50% "interpolated" content / information, right? Imagine this at say 4/5 or 80% (!) interpolated information... I could imagine that they are using 1 generated frame for low natively rendered FPS, but ultimately if you're running 100fps rendered, they can easily make it 400 fps with interpolated ones? Because nobody sees a single frame here? Maybe that's what they are trying to get at?
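(For reference, the ratios being discussed work out as below. This is just a sketch of the arithmetic with illustrative numbers; nothing here comes from AMD.)

```python
# Share of displayed frames that are interpolated when N generated frames
# are inserted after every natively rendered frame, and the resulting output rate.
def interpolated_share(generated_per_real: int) -> float:
    return generated_per_real / (generated_per_real + 1)

def output_fps(rendered_fps: float, generated_per_real: int) -> float:
    return rendered_fps * (generated_per_real + 1)

for n in (1, 2, 3, 4):
    print(f"{n} generated per real frame -> {interpolated_share(n):.0%} interpolated; "
          f"100 fps rendered becomes {output_fps(100, n):.0f} fps shown")
# 1 per real -> 50% interpolated (100 -> 200 fps); 4 per real -> 80% (100 -> 500 fps).
# Turning 100 rendered fps into ~400 shown would correspond to 3 generated frames per real one.
```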
I'm glad FSR3 will be AMD cards only.
fantaskarsef:

Well, I'm curious to see how 4 interpolated frames instead of 1 are dealing with ghosting (which at times is a major issue with all those techs).
I am curious as well, though I don't think ghosting will be the primary issue. With more fake frames (not AI-generated), you would think the result drifts further from the developer's artistic intent, which may be why it's only up to 4 frames and not more. Edit: Second thoughts about ghosting... someone mentioned driver-side frame generation might mean no motion vectors in the equation, though maybe FSR3 has a way to handle this.
Krizby:

Can't wait for AMD to block out DLSS3/XeSS in more games
What goes around comes around
fantaskarsef:

I could imagine that they are using 1 generated frame for low natively rendered FPS, but ultimately if you're running 100fps rendered, they can easily make it 400 fps with interpolated ones? Because nobody sees a single frame here? Maybe that's what they are trying to get at?
I think that's the gist of it. If your machine can only do 20 fps then you have the option of:
- Adding 10 fps more: it will look a bit less chunky (though VRR should also help).
- Adding 20 fps more: it will make it better. The amount of "real" frames is still 20.
- But adding 80 fps will make it even smoother, while the 20 "real" frames are maintained.
I think the basic requirement is that the "real" frames don't get reduced, and that the "created" ones actually "interact" with user input, so as to get rid of the lag/sticky feeling.
PS: I'm assuming that the AI processor that comes with the newest APUs such as the 7940HS will help FSR3 do its job better.
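(Working the 20 fps example through, purely as an illustration of the arithmetic; the fps figures are the ones from the post, not measurements.)

```python
# Frame pacing for a fixed budget of 20 natively rendered fps plus a varying
# number of generated frames (numbers taken from the example above).
# This assumes generated frames do not sample player input, which is exactly
# the open question being debated in this thread.
real_fps = 20
for added_fps in (10, 20, 80):
    total_fps = real_fps + added_fps
    print(f"{real_fps} real + {added_fps} generated = {total_fps} fps shown; "
          f"a frame every {1000 / total_fps:.1f} ms, but new input is only "
          f"reflected every {1000 / real_fps:.1f} ms "
          f"({real_fps / total_fps:.0%} of frames are real)")
```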
fantaskarsef:

Well, I'm curious to see how 4 interpolated frames instead of 1 are dealing with ghosting (which at times is a major issue with all those techs).
Maybe that's why it's AMD-specific - it doesn't really make sense why this otherwise couldn't be platform-agnostic. Of course, I could be wrong, and considering AMD's answer to DLSS, I'm likely to be wrong.
heffeque:

I think that's the gist of it. If your machine can only do 20 fps then you have the option of: - Adding 10 fps more: it will look a bit less chunky (though VRR should also help) - Adding 20 fps more: it will make it better. The amount of "real" frames is still 20. - But adding 80 fps will make it even smoother, while the 20 "real" frames are maintained. I think the basic requirement is that the "real" frames don't get reduced, and that the "created" ones actually "interact" with user input, so as to get rid of the lag/sticky feeling. PS: I'm assuming that the AI processor that comes with the newest APUs such as the 7940HS will help FSR3 do its job better.
Well, that would make sense, only I have little hope that "we" can actually interact with generated frames... aren't they just interpolated mathematically, while interaction (input in FPS games, for instance) would mean they would have to be actually rendered frames to register e.g. mouse movement or clicks?
schmidtbag:

Maybe that's why it's AMD-specific - it doesn't really make sense why this otherwise couldn't be platform-agnostic. Of course, I could be wrong, and considering AMD's answer to DLSS, I'm likely to be wrong.
I honestly have to say I have no issue with it being done for AMD only, even if it would technically work on Intel or Nvidia as well. They put the work in (read: paid engineering hours), they should reap the benefits (read: sales of GPUs). That said, I can't see why it would be technically necessary unless it delves into deep architectural hooks... which I doubt, tbh. For any of these, DLSS, FSR, or XeSS, I doubt that.
heffeque:

I think that's the gist of it. If your machine can only do 20 fps then you have the option of: - Adding 10 fps more: it will look a bit less chunky (though VRR should also help) - Adding 20 fps more: it will make it better. The amount of "real" frames is still 20. - But adding 80 fps will make it even smoother, while the 20 "real" frames are maintained. I think the basic requirement is that the "real" frames don't get reduced, and that the "created" ones actually "interact" with user input, so as to get rid of the lag/sticky feeling. PS: I'm assuming that the AI processor that comes with the newest APUs such as the 7940HS will help FSR3 do its job better.
The issue is that the fake frames are generated only on the GPU, so there is no input from the player while the GPU is rendering those frames. That's why DLSS3 has higher input lag than the real frame rate alone would have. If AMD somehow makes it a 1:2 or 1:4 ratio of real vs fake frames, then it's going to be a slog to play, and it would feel stuttery. Not to mention that having fake frames on screen for so long will make artifacts much more obvious.
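(A rough model of where that extra lag comes from: to interpolate between two real frames, the newest real frame has to be held back until the in-between frames have been shown, so presentation runs at least one native frame-time behind. The sketch below only captures that lower bound and ignores the cost of generating the extra frames themselves.)

```python
# Minimum extra presentation latency added by interpolation-based frame
# generation: the latest real frame must exist before the in-between frames
# can be computed, so display lags by roughly one native frame-time.
# Simplified lower bound only; real pipelines add generation and queuing costs.
def added_latency_ms(rendered_fps: float) -> float:
    return 1000.0 / rendered_fps

for fps in (30, 60, 120):
    print(f"{fps} fps native -> at least ~{added_latency_ms(fps):.1f} ms of added latency")
# At a 30 fps native base that's ~33 ms on top of the usual render/display latency,
# which is why frame generation tends to feel worst exactly where it is needed most.
```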
heffeque:

I think that's the gist of it. If your machine can only do 20 fps then you have the option of: - Adding 10 fps more: it will look a bit less chunky (though VRR should also help) - Adding 20 fps more: it will make it better. The amount of "real" frames is still 20. - But adding 80 fps will make it even smoother, while the 20 "real" frames are maintained. I think the basic requirement is that the "real" frames don't get reduced, and that the "created" ones actually "interact" with user input, so as to get rid of the lag/sticky feeling. PS: I'm assuming that the AI processor that comes with the newest APUs such as the 7940HS will help FSR3 do its job better.
With so many BS frames being generated, my 6950XT is going to end up faster than a 4090!... 😱o_O:D
fantaskarsef:

I honestly have to say I have no issue with it being done for AMD only, even if it would technically work on Intel or Nvidia as well. They put the work in (read: paid engineering hours), they should reap the benefits (read: sales of GPUs).
Well, the article seems to imply the code is still open-source. The notion of reaping benefits from making something proprietary only works if you control a large percentage of the market, which AMD does not. AMD's (and Intel's) focus on open platforms actually reaps more benefits to them, because:
A. In the event they have the superior platform, they can look good on the charts. It's easy for Nvidia to look good with things like CUDA and DLSS when they're just competing against themselves, especially when there is no real competing technology (OpenCL never garnered enough attention, because AMD was lazy and Intel didn't have a GPU powerful enough to warrant optimization).
B. An open platform can be used by a wider market; if your developer resources are limited and you can only choose one technology, it makes more sense to pick the technology that applies to the widest audience possible, even if it isn't the best. Of course, this somewhat backfired for AMD since even though FSR can be adopted by anyone, Intel and Nvidia have no incentive to do so. Since FSR can be enabled without developer efforts, it makes more sense for devs to just focus on DLSS, since that doesn't work with just any game (and because Nvidia controls a majority of the GPU market).
C. They can mooch off the efforts of others, thereby reducing their own development costs. This has actually been greatly advantageous for AMD in the Linux world - not only have competitors (like Intel) developed code that AMD uses (granted, it works the other way around too) but Valve has actually done quite a lot to team up with AMD to improve their drivers. These are not small changes either - Valve's work has yielded great framerate increases. If AMD wasn't open with their drivers, Valve probably wouldn't have teamed up with them to make the Deck.
Having said all that, if FSR3 requires developers to implement it, we can basically consider it DOA, no matter how good it is.
That said, I can't see why it would be technically necessary unless it delves into deep architectural hooks... which I doubt, tbh. For any of these, DLSS, FSR, or XeSS, I doubt that.
I don't think the architecture is so much the issue, but rather, it's likely very specific in the rendering process. For example: You don't want a HUD or any kind of on-top-of-everything text to be frame interpolated. If you were to use a generic library to do frame interpolation, it might just interpolate the entire frame (which seems to be what Nvidia's approach does), because each driver has its own way of interpreting the instructions. I'm sure this varies drastically from game to game, but in a lot of cases, I'm sure all the driver has to do is see what assets are being loaded closest to the camera and render those on top of each interpolated frame, or perhaps which assets are rendered with an orthographic projection. In other words, by making this work at the driver level, you can more easily control what is or isn't interpolated.
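(To make the HUD idea concrete, here is a toy sketch of the compositing order being described: interpolate only the 3D scene, then draw overlay/orthographic elements on top of each generated frame untouched. All names and structures here are hypothetical placeholders, not anything AMD or Nvidia has documented.)

```python
from dataclasses import dataclass

@dataclass
class DrawCall:
    name: str
    orthographic: bool  # HUD/text is typically drawn with an orthographic projection

def interpolate_scene(prev_scene: str, next_scene: str, t: float) -> str:
    # Stand-in for the real motion-vector / optical-flow blend of the 3D scene.
    return f"blend({prev_scene}, {next_scene}, t={t:.2f})"

def generate_frames(prev_scene: str, next_scene: str,
                    draws: list[DrawCall], count: int) -> list[str]:
    frames = []
    for i in range(1, count + 1):
        t = i / (count + 1)  # position between the two real frames
        scene = interpolate_scene(prev_scene, next_scene, t)
        # Re-composite overlay-style draw calls on top, unmodified,
        # so the HUD is never smeared by the interpolation.
        hud = " + ".join(d.name for d in draws if d.orthographic)
        frames.append(f"{scene} with [{hud}] drawn on top")
    return frames

for frame in generate_frames("frame_N", "frame_N+1",
                             [DrawCall("healthbar", True), DrawCall("crosshair", True)],
                             count=3):
    print(frame)
```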
schmidtbag:

Well, the article seems to imply the code is still open-source. The notion of reaping benefits from making something proprietary only works if you control a large percentage of the market, which AMD does not. AMD's (and Intel's) focus on open platforms actually reaps more benefits to them, because: A. In the event they have the superior platform, they can look good on the charts. It's easy for Nvidia to look good with things like CUDA and DLSS when they're just competing against themselves, especially when there is no real competing technology (OpenCL never garnered enough attention, because AMD was lazy and Intel didn't have a GPU powerful enough to warrant optimization). B. An open platform can be used by a wider market; if your developer resources are limited and you can only choose one technology, it makes more sense to pick the technology that applies to the widest audience possible, even if it isn't the best. Of course, this somewhat backfired for AMD since even though FSR can be adopted by anyone, Intel and Nvidia have no incentive to do so. Since FSR can be enabled without developer efforts, it makes more sense for devs to just focus on DLSS, since that doesn't work with just any game (and because Nvidia controls a majority of the GPU market). C. They can mooch off the efforts of others, thereby reducing their own development costs. This has actually been greatly advantageous for AMD in the Linux world - not only have competitors (like Intel) developed code that AMD uses (granted, it works the other way around too) but Valve has actually done quite a lot to team up with AMD to improve their drivers. These are not small changes either - Valve's work has yielded great framerate increases. If AMD wasn't open with their drivers, Valve probably wouldn't have teamed up with them to make the Deck. Having said all that, if FSR3 requires developers to implement it, we can basically consider it DOA, no matter how good it is. I don't think the architecture is so much the issue, but rather, it's likely very specific in the rendering process. For example: You don't want a HUD or any kind of on-top-of-everything text to be frame interpolated. If you were to use a generic library to do frame interpolation, it might just interpolate the entire frame (which seems to be what Nvidia's approach does), because each driver has its own way of interpreting the instructions. I'm sure this varies drastically from game to game, but in a lot of cases, I'm sure all the driver has to do is see what assets are being loaded closest to the camera and render those on top of each interpolated frame, or perhaps which assets are rendered with an orthographic projection. In other words, by making this work at the driver level, you can more easily control what is or isn't interpolated.
I agree with you that it only works if you've got the bigger market share, so devs actually have an incentive to adopt it. The thing is though, to sell GPUs, it's better to have a solution other companies can't use (if it's technologically superior / on the same level). So that's why I think, if they are doing better than Nvidia in that regard, they should try to protect that advantage.
I fancy how these days outlets like DF do in depth comparisons for the different methods, since there you can at least see how the tech is adapted in those games. I just think that AMD could make it a proprietary tech (only to be used with AMD hardware) and still have devs work on it... I don't see how that contradicts itself.
B. only comes into consideration if you have either the better tech, or offer to take off the work from devs... which ends up being what they all do these days. But that's also how Nvidia got where it is, and Jensen for a long time knew that and Nvidia has done that for the last 10 years at least. But it's a question of cash to pay engineering hours to "donate" them to devs. But you are right in the regard that it makes sense to concentrate on the bigger market share GPU. Although, iirc, isn't DLSS just a tick box to activate with UE, for example? If so, again, if both FSR and DLSS are just to "activate" with no work needed, you need the better tech (image quality, stable frame rate, less ghosting, more FPS gain) to actually score a win and get an advantage.
C. I completely agree.
I agree with you that taking the full frame and interpolating it gives them issues, like we see with HUDs, as you mentioned. DLSS3 has improved somewhat (I saw a DF video on that topic), but it did not improve in every game (which shows us it's probably still a game-by-game adaptation). But that's the thing, if the GPU has to look into the frame again, and check for what's rendered how and where, wouldn't it be more like VRS (variable rate shading) than true frame interpolation? Not that this couldn't still be a good thing, but it would require an extra step in calculations, and thus at least a tiny bit of performance.
fantaskarsef:

I agree with you that it only works if you've got the bigger market share, so devs actually have an incentive to adopt it. The thing is though, to sell GPUs, it's better to have a solution other companies can't use (if it's technologically superior / on the same level). So that's why I think, if they are doing better than Nvidia in that regard, they should try to protect that advantage.
Not necessarily - what matters is having a compelling offer. While in some cases that would be having a [noteworthy] technology that competitors don't have, it isn't limited to that. In this particular context, Nvidia already has frame interpolation, so AMD isn't really doing anything particularly exclusive, unless of course their implementation is better.
I fancy how these days outlets like DF do in depth comparisons for the different methods, since there you can at least see how the tech is adapted in those games. I just think that AMD could make it a proprietary tech (only to be used with AMD hardware) and still have devs work on it... I don't see how that contradicts itself.
The problem comes down to convincing devs it is worth their time, even if it is better. No matter how good something like FSR 3 is, it won't be enough to convince people to switch. AMD needs to pull off something revolutionary and do it well if they expect something proprietary to be a financial success.
B. only comes into consideration if you have either the better tech, or offer to take off the work from devs... which ends up being what they all do these days. But that's also how Nvidia got where it is, and Jensen for a long time knew that and Nvidia has done that for the last 10 years at least. But it's a question of cash to pay engineering hours to "donate" them to devs. But you are right in the regard that it makes sense to concentrate on the bigger market share GPU. Although, iirc, isn't DLSS just a tick box to activate with UE, for example? If so, again, if both FSR and DLSS are just to "activate" with no work needed, you need the better tech (image quality, stable frame rate, less ghosting, more FPS gain) to actually score a win and get an advantage.
Exactly - and that's where FSR 3 becomes questionable, because it probably isn't going to be better, and AMD tends to not really work with game devs directly anywhere near as much as Nvidia does. As for DLSS, the AI-generated stuff isn't a tick box, but I think the rest of it is.
I agree with you that taking the full frame and interpolating it gives them issues, like we see with HUDs, as you mentioned. DLSS3 has improved somewhat (I saw a DF video on that topic), but it did not improve in every game (which shows us it's probably still a game-by-game adaptation). But that's the thing, if the GPU has to look into the frame again, and check for what's rendered how and where, wouldn't it be more like VRS (variable rate shading) than true frame interpolation? Not that this couldn't still be a good thing, but it would require an extra step in calculations, and thus at least a tiny bit of performance.
I'm not really an expert on how drivers or the rendering process work, but I imagine it doesn't have to be too complicated. It's basically just checking whether something should be held back and rendered later. Of course, every calculation has an impact on performance, but in this case I imagine it'd be nearly negligible.
[youtube=psd55fzacrc]