AMD Could Do DLSS Alternative with Radeon VII through DirectML API
There is an interesting development. As you know, GeForce RTX graphics cards have Tensor cores, dedicated cores for optimizing and accelerating anything AI and deep learning. One gaming spin-off of that technology is DLSS, which NVIDIA is marketing strongly. With the October 2018 Update for Windows 10, Microsoft released the DirectML API for DirectX 12.
ML stands for Machine Learning, and the API makes it possible to run trained neural networks on any DX12 graphics card. Basically, the deep-learning algorithm, if you will, becomes a shader that can be run on the traditional shader engine, without a need for Tensor cores. In an interview, AMD mentioned that the team is working on this, and the Radeon VII seems very well suited to the new API. Japanese website 4gamer.net spoke with AMD marketing manager Adam Kozak: AMD is currently testing DirectML with the Radeon VII and was positively impressed by the results. That is exciting news, as it offers AMD an AI/DL alternative.
While AMD is testing this on the Radeon VII, logic dictates that it should work well on the Radeon RX Vega 64 and Radeon RX Vega 56 as well. This would allow, for example, an alternative implementation to NVIDIA's DLSS.
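To make the idea concrete, here is a minimal sketch (in Python with numpy, not actual DirectML code) of why this works on any GPU: running a trained network is just arithmetic over arrays, the same fused multiply-adds an ordinary compute shader executes. The 3x3 kernel and the tiny frame below are made-up placeholders, not real DLSS weights.

```python
import numpy as np

def relu(x):
    # Standard activation function: clamp negatives to zero.
    return np.maximum(x, 0.0)

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution: one multiply-add per tap, the same
    primitive a GPU shader engine would execute in parallel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A made-up "trained" layer: a mild sharpen kernel followed by ReLU.
kernel = np.array([[ 0., -1.,  0.],
                   [-1.,  5., -1.],
                   [ 0., -1.,  0.]])

frame = np.random.rand(8, 8)          # stand-in for a small rendered tile
feature_map = relu(conv2d(frame, kernel))
print(feature_map.shape)              # (6, 6)
```

Nothing here requires dedicated matrix hardware; Tensor cores merely accelerate the same math, which is the trade-off discussed below.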
Only 1+1=2
Of course, should this become an actual supported feature, it can't be addressed by AMD alone; the question remains: will game developers actually implement, experiment with, and integrate support for DirectML in their games?
It also has to be said that this works both ways: DirectML could also make use of NVIDIA's Tensor cores, which would certainly give them an advantage. A Radeon card would take a performance hit on its shader engine, whereas the RTX cards can simply offload the algorithm to their Tensor cores. Time will tell, but this certainly is an interesting development, as long as your graphics card is powerful enough, of course.
Senior Member
Posts: 14278
Joined: 2014-07-21
None of what you wrote is wrong. Only that it's not what this is about... especially not in this thread. We're talking about AI algorithm execution; DLSS has nothing to do with ray tracing, so your benchmark shows little in terms of comparable performance, or anything in general, about DLSS.
Also, why should any 2080 user run OpenCL ray tracing in games when they have DXR / RTX... that simply doesn't make any sense as of now. Sure, if at some point OpenCL RT is the thing, Nvidia will have a problem (as it has happened with some things in the past between red and green too), but right now, this benchmark is useless besides being able to brag about it, or in professional environments (where I guess it will be more of a matchup between Nvidia's and AMD's professional cards, in which you'd have to compare Vega2's daddy to a Titan, not the 2080).
Like @dr_rus said, AMD has to "invent" a way to do DLSS first, which probably takes quite some time; then it has to work together with the game devs to test it, then have the game devs submit their game so it can be pushed through its AI training parcours on the big computers, and only then is AMD where Nvidia says it is right now. So, like I said: right now, nice to know, but this news article will probably only show its significance half a year from now or later.
Member
Posts: 73
Joined: 2015-03-25
DLSS is free, Tensor cores however are not :p
Senior Member
Posts: 362
Joined: 2015-06-18
That's questionable... because when not using DLSS or RTX, the RT cores are worthless while at the same time taking up A LOT of die space.
There are not many ways to test DLSS so far, but from what GN tested in FF XV, to be honest, DLSS looked considerably worse than native 4K with TAA.
Senior Member
Posts: 14038
Joined: 2004-05-16
That's questionable... because when not using DLSS or RTX, the RT cores are worthless while at the same time taking up A LOT of die space.
There are not many ways to test DLSS so far, but from what GN tested in FF XV, to be honest, DLSS looked considerably worse than native 4K with TAA.
I keep seeing this idea that the RT/Tensor cores take up "a lot of space", but I really don't see any evidence of that at all. Turing has the same CUDA cores/mm2 as GP100, but it does so with Tensor cores, RT cores, double the cache, and twice as many dispatch units, on a process with the same density. They take up space, sure; they definitely don't take up "a lot of space". Regardless, I'm responding to people comparing this to FreeSync vs. G-Sync: RPM has a fixed die cost as well and has sat idle with the exception of Far Cry 5, so looking at it your way, they both cost die space for a feature used in relatively few titles.
As far as quality goes, DLSS utilizes an autoencoder, which is basically the same implementation that Microsoft demonstrated for their upscaler on DirectML early last year, and will most likely be the same approach AMD uses. You can tweak the weights, train longer, etc., to improve quality. With only one example, on a game that seems to be somewhat abandoned, it's hard to say what DLSS or any AI upscaler will be like.
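The autoencoder-style upscaler mentioned above can be sketched roughly as follows. This is an illustrative toy in Python/numpy with untrained placeholder weights and made-up layer sizes; a real DLSS/DirectML model would learn its weights offline from high-resolution ground-truth frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def upscale2x_nearest(x):
    # Cheap nearest-neighbour 2x upsample: the baseline the network refines.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def dense(x, w, b):
    # One fully connected layer with a tanh nonlinearity.
    return np.tanh(x @ w + b)

lo_res = rng.random((32, 32))        # stand-in for a low-resolution frame tile
flat = lo_res.reshape(-1)            # 1024 input values

# Tiny encoder/decoder with made-up dimensions (1024 -> 256 -> 4096).
# In a trained model these weights would encode learned image detail.
w_enc, b_enc = rng.standard_normal((1024, 256)) * 0.01, np.zeros(256)
w_dec, b_dec = rng.standard_normal((256, 4096)) * 0.01, np.zeros(4096)

code = dense(flat, w_enc, b_enc)                     # bottleneck representation
residual = dense(code, w_dec, b_dec).reshape(64, 64) # predicted detail

# Final "upscaled" tile: cheap upsample plus the network's predicted detail.
hi_res = upscale2x_nearest(lo_res) + residual
print(hi_res.shape)   # (64, 64)
```

The point of the exercise: everything above is plain matrix math, so it maps onto any DX12 shader engine via DirectML, with or without Tensor cores.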
Senior Member
Posts: 8878
Joined: 2007-06-17
The problem with this tech is that you have to bake it, so procedurally generated graphics may not work.
I do think it's a step backwards, not forwards, because of its limitations.
Similar to how A.I.-centric game engines need set boundaries, that would be a limitation of the hardware and not necessarily of ray tracing implementations. Of course, procedurally generated content could itself contain specific boundaries, negating any disadvantages.
However, I do think it could then become a question of artistic style vs. bland environments.