Microsoft Eyeing DirectML as DLSS alternative on Xbox

Yes please. Looks great. But we already knew that AI image reconstruction is capable of great quality; minimizing the performance hit is the key issue. https://creativecoding.soe.ucsc.edu/QW-Net/ "Recently, a combined real-time image reconstruction technique called Deep Learning Super Sampling (DLSS) [Liu 2020] was introduced, but the details of the underlying network are unknown. Concurrent to our work, Xiao introduced a reconstruction technique based on U-Net. Using an optimized inference implementation they reconstruct a 1080p image in 18 to 20 ms on a high-end GPU. In comparison, DLSS reconstructs a 4K image in under 2 ms. Both these approaches can reconstruct images at a higher resolution than the input render." https://abload.de/img/dlah1otjf2.png
To be clear, DirectML is a substitute for NGX, not DLSS. DLSS is an application built on top of Nvidia's NGX, just like whatever AI-based upscaler AMD/Microsoft builds will be an application on top of Windows/DirectML.
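To make the layering concrete, here is a minimal sketch of what "an application on top of DirectML" can look like in practice. It assumes the onnxruntime-directml package and a hypothetical pre-trained super-resolution network saved as upscaler.onnx; none of this is Microsoft's actual implementation, it's just an illustration of a model running through DirectML on any DX12-capable GPU.

```python
# Hedged sketch: an "upscaler application" executed through DirectML via ONNX Runtime.
# "upscaler.onnx" and the tensor shapes are placeholders, not a real shipped model.
import numpy as np
import onnxruntime as ort

# Ask ONNX Runtime to schedule the network on DirectML (falling back to CPU).
session = ort.InferenceSession(
    "upscaler.onnx",  # hypothetical pre-trained super-resolution model
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Feed one low-resolution frame (NHWC float32) and read back the upscaled frame.
low_res = np.random.rand(1, 540, 960, 3).astype(np.float32)
input_name = session.get_inputs()[0].name
(up_res,) = session.run(None, {input_name: low_res})  # assumes a single output
print(up_res.shape)  # e.g. (1, 1080, 1920, 3) if the model does 2x upscaling
```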
Thank frack that M$ is stepping up where AMD hasn't yet.
So better quality at the cost of latency and performance.
Stormyandcold:

So better quality at the cost of latency and performance.
At the cost of performance? Not quite the DLSS alternative.
We have been hearing about a DLSS-type API for AMD's new cards; I wonder if this is it?
Neo Cyrus:

Thank frack that M$ is stepping up where AMD hasn't yet.
In short - YES. AMD needs to step up. MS has been "eyeing" DirectML image upscaling at least since 2018. But MS can't do it alone: in order to have acceptable performance, hardware partners need to provide architecture-specific optimizations. The images from the Forza demo you're seeing in the article were produced using an Nvidia TensorFlow model converted to DirectML and further accelerated using Nvidia-specific optimizations. https://abload.de/img/dag231knp.png https://abload.de/img/dfg5kxkkh.png
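For anyone wondering what "a TensorFlow model converted to DirectML" might involve, below is a hedged sketch of one plausible route: export the network to ONNX with tf2onnx, then hand the .onnx file to a DirectML-backed runtime (as in the earlier snippet). The tiny Keras model and tensor shapes are placeholders, not the actual Forza network, and this is not necessarily the exact pipeline Microsoft used.

```python
# Illustrative only: a placeholder Keras "upscaler" exported to ONNX via tf2onnx.
# The resulting upscaler.onnx could then be executed through DirectML,
# e.g. with ONNX Runtime's DmlExecutionProvider.
import tensorflow as tf
import tf2onnx

# Stand-in for a trained super-resolution network (not the real Forza model).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(540, 960, 3)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.UpSampling2D(size=2),          # naive 2x upscale
    tf.keras.layers.Conv2D(3, 3, padding="same"),
])

spec = (tf.TensorSpec((1, 540, 960, 3), tf.float32, name="low_res"),)
# from_keras returns the ONNX ModelProto; output_path also writes it to disk.
onnx_model, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="upscaler.onnx"
)
```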
Wow, I miss the no-AA days. TAA must be destroyed.
Noisiv:

Yes please. Looks great. But we already knew that AI image reconstruction is capable of great quality; minimizing the performance hit is the key issue. https://creativecoding.soe.ucsc.edu/QW-Net/ "Recently, a combined real-time image reconstruction technique called Deep Learning Super Sampling (DLSS) [Liu 2020] was introduced, but the details of the underlying network are unknown. Concurrent to our work, Xiao introduced a reconstruction technique based on U-Net. Using an optimized inference implementation they reconstruct a 1080p image in 18 to 20 ms on a high-end GPU. In comparison, DLSS reconstructs a 4K image in under 2 ms. Both these approaches can reconstruct images at a higher resolution than the input render." https://abload.de/img/dlah1otjf2.png
Should MS succeed, this could make Xbox a more appealing platform. Sure, PS5 gets the better load times, but I doubt Xbox is going to load slowly enough to annoy people.
Neo Cyrus:

Thank frack that M$ is stepping up where AMD hasn't yet.
Right... because developing an AI to basically come up with lost information by itself is totally something they can pull off in just 2 years.... I'm sure DLSS was in development for a while, especially since Nvidia wasn't exactly in a rush - the idea of it didn't really exist yet. The first iteration of it was unappealing enough that many didn't care to use it. So if Nvidia had all the time in the world and a chip dedicated to processing it and still didn't yield ideal results, I find it rather unreasonable to expect AMD to come up with a compelling response in a timely manner. It's not a matter of them "stepping up"; the problem is this isn't a simple task.
Stormyandcold:

So better quality at the cost of latency and performance.
Consoles tend to play games at 30FPS. The latency is already garbage. So really, the latency difference shouldn't be noticed - it should just yield better graphical detail.
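To put rough numbers on that frame budget, here is a back-of-the-envelope sketch using the reconstruction times quoted from the QW-Net page earlier in the thread (their figures for a high-end desktop GPU, not Xbox measurements):

```python
# Rough frame-budget arithmetic, assuming the ~2 ms (DLSS) and ~20 ms (U-Net)
# reconstruction times quoted above. Not measured on any console hardware.
for fps in (30, 60):
    frame_ms = 1000 / fps  # time available per frame
    for recon_ms in (2, 20):
        print(f"{fps} FPS: {recon_ms} ms reconstruction = "
              f"{recon_ms / frame_ms:.0%} of the {frame_ms:.1f} ms frame budget")
```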
schmidtbag:

Consoles tend to play games at 30FPS. The latency is already garbage. So really, the latency difference shouldn't be noticed - it should just yield better graphical detail.
I don't agree; the latency will be noticeable enough to feel like you're playing streamed content. Imho, this tech needs hardware support, period. There's a reason why Nvidia did it their way with dedicated hardware; they've obviously done all the tests. This is also why they're willing to donate it to DirectML: they already know it runs like crap there, while running great on their hardware. Totally sound decision business-wise. Now, if the Switch Pro (an obvious use case) supports DLSS, then low-powered hardware could produce much better image quality...
schmidtbag:

Right... because developing an AI to basically come up with lost information by itself is totally something they can pull off in just 2 years....
Yes, and they knew about it for far longer than 2 years; they were just betting on it being a fad, which was stupid. I don't think you understand how enormous AMD is, even if they're tiny compared to the others. They most definitely could have, and should have, had something by now.

DLSS matters so much at this point that it makes people buy nVidia instead of waiting for AMD. My local PC shop telling me that (so far) there is no pre-ordering of 6800 XTs, combined with CP 2077 being weeks away and supporting DLSS, made me snap and order a 3080, despite how much I loathe nVidia in recent years and their filthy 10GB BS granting me a nice 1GB downgrade in capacity. It'll most likely arrive next week.

Off topic-ish, but a question for anyone who knows: are GPUs binned at all anymore? I don't think they are, as nVidia just craps out dies without the binning they used to do, and I haven't heard of the partners doing binning lately. I went full retard and ordered the TUF 3080 OC, $50 over the base model and $100 over the MSRP of the 3080 FE, because I have no idea how much of a difference the different BIOS will make, when proper flashing tools will be available if it matters, and whether there is any binning whatsoever. It already costs a stupid amount; I wasn't about to cheap out on $50 on a gamble that they're the same level of GPU die... which they should be. Basically feels like I flushed $50 down the toilet.
Neo Cyrus:

Yes, and they knew about it for far longer than 2 years; they were just betting on it being a fad, which was stupid. I don't think you understand how enormous AMD is, even if they're tiny compared to the others. They most definitely could have, and should have, had something by now.
Well, if the first version was anything to go by, they'd have been right. AMD is large, but they struggle to make competent drivers for Windows and they've had other woes with Ryzen AGESAs. On top of that, they struggle to make hardware that can compete with Nvidia on performance. So I'm not sure why you expected them to have the resources to develop an AI when, for the past 2 years, they've been playing catch-up on multiple fronts.
DLSS matters so much at this point that it makes people buy nVidia instead of waiting for AMD. My local PC shop telling me that (so far) there is no pre-ordering of 6800 XTs, combined with CP 2077 being weeks away and supporting DLSS, made me snap and order a 3080, despite how much I loathe nVidia in recent years and their filthy 10GB BS granting me a nice 1GB downgrade in capacity. It'll most likely arrive next week.
You do not represent everyone. DLSS is one of the best ideas Nvidia came up with but most people don't see it as a dealbreaker, because it doesn't work for everything. You're better off paying more to reliably get more performance, than to pay less for something that might have great performance with slightly compromised quality. Don't get me wrong, I want AMD to get DirectML working ASAP too, I just think you have very unrealistic demands for a company that is only just recently able to actually compete against Nvidia in an apples-to-apples comparison.
Off topic-ish, but a question for anyone who knows: Are GPUs binned at all anymore?
Yes, they are. That's how you get the OC and super OC variants from AIB partners. But GPUs are so complicated these days that even if you have 2 of the same model from the same brand, you'll get different results.
Basically feels like I flushed $50 down the toilet.
Probably. At least you didn't flush $250 by buying from a scalper.
The story is simple to understand: they've done the job on the CPU side, with new CPUs dropping on a roughly one-year cycle, and they're going to try to do the same on the GPU side. Give them 2 or 3 years (counting from the RDNA 1 release) before maybe leading nVidia by a small inch. RDNA 1 was July 2019, RDNA 2 is November 2020 (doubling or almost doubling performance while increasing power consumption by roughly 50%), and RDNA 3 should be very late 2021 or early 2022, which might give them an edge on nVidia until Hopper is released. nVidia has been on a 2-year cycle between generations; AMD might take the performance crown if they can follow the same pattern and roadmap execution as on the CPU side.

nVidia did have a weird launch this year, and the fact that their communication is weirdly handled also seems to mean they didn't expect AMD to be that close (yeah, those 3070 Ti and 3080 20GB cards). nVidia did a fair job improving performance each generation (while improving efficiency by a lot) and introducing ray tracing and DLSS, but those are still tied to only a few games, giving AMD some more time to catch up in these two specific areas. And AMD did a pretty good job on efficiency over two GPU generations.

Supporting DirectML/Super Resolution rather than developing its own solution is not a bad thing: it's open and it's bound to DirectX. It should work with more games than DLSS could, and game developers won't have to bother supporting both; they will go straight for DirectML, and nVidia will support it through Tensor Cores. The only question is what Vulkan/Khronos is going to do.
fuck that. I'm not gonna use fake resolutions. #purist #nofakepixels
I'm liking the look of that. Bring on the textures and ray tracing.
schmidtbag:

only just recently able to actually compete against Nvidia in an apples-to-apples comparison.
It's not apples to apples though; they're still on different nodes. Nvidia clearly miscalculated and should've stuck with TSMC. What we're seeing is apples and oranges, with AMD being on a smaller node; it would've been a disaster for AMD if they still couldn't compete even with that advantage.
Strange Times:

Wow, I miss the no-AA days. TAA must be destroyed.
Those days when games had 32x32 textures at most? Days when AF was buggy and its effect changed depending on surface angle? Days when Matrox had better IQ than AMD and AMD had better IQ than nVidia? I do not miss those days. Sure, I had my GPU OCed by 40% and had the Quadro softmod. But I do miss the actual ability to get AA of my choice into any game.

And I agree, TAA in almost all the forms it has been implemented to date should die. Per-pixel IQ loss just to get rid of potential shimmering in a game that does not suffer from it anyway... Then someone comes along and makes an IQ comparison of bad TAA against DLSS. At least here it is shown for what it is in terms of SNR: TAA is often much worse than a no-AA image rendered at a lower resolution. TAA on the Zen Garden example is a plain "WTF?". If someone took those cutouts and showed each to 1000 random people, most of them would fail to recognize the greenery after TAA messed it all up. And the paradoxical fact is that SSIM tells you the TAA image is 99.3% similar, while the no-AA image is 96.4% similar to the reference and looks much better.

I think they could use something like early discard, because the unprocessed image has reasonable IQ on most surfaces and needs help mainly with edges, while each of those methods blurs surfaces to some degree. It is nice that they used 256x SSAA for the reference images, but when the downsampling method is not sharp enough to preserve fine detail, the reference image is nothing to write home about.
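For reference, the SSIM figures being quoted are the standard structural-similarity metric. A quick sketch of how such a comparison is typically computed, using scikit-image with placeholder file names (not the actual article crops):

```python
# Rough sketch of how SSIM numbers like "99.3% vs. 96.4%" are typically produced,
# using scikit-image. File names are placeholders for the TAA / no-AA crops and
# the 256x SSAA reference, not the actual article images.
from skimage import io
from skimage.metrics import structural_similarity

reference = io.imread("reference_256x_ssaa.png")   # hypothetical reference crop
taa       = io.imread("crop_taa.png")              # hypothetical TAA crop
no_aa     = io.imread("crop_no_aa.png")            # hypothetical no-AA crop

# channel_axis=-1 treats the last dimension as color; data_range=255 for 8-bit images.
for name, img in [("TAA", taa), ("no AA", no_aa)]:
    score = structural_similarity(reference, img, channel_axis=-1, data_range=255)
    print(f"SSIM vs. reference ({name}): {score:.3f}")
```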
I never liked TAA. The first time I saw it was in Skyrim SE and I had it on, till I realized it blurred things in motion, so I turned it off and went back to FXAA, which is the only shader-based AA I still like (and if done properly I think it's really good). TXAA was supposed to fix the blur issue with TAA, but I never saw it, so I don't know. I think MSAA is a myth at this point; I rarely if ever see it in game options these days, most of them are shader-based now.
@Fox2232 "Days when Matrox had better IQ than AMD and AMD had better IQ than nVidia?" No no no no! AMD just had CPUs back then! "Days when Matrox had better IQ than ATI and ATI had better IQ than nVidia?" There, fixed! Hehe, I just had to, brother. I'm pretty sure you know that era well; force of habit made you write AMD 😛
Venix:

@Fox2232 "Days when Matrox had better IQ than AMD and AMD had better IQ than nVidia?" No no no no! AMD just had CPUs back then! "Days when Matrox had better IQ than ATI and ATI had better IQ than nVidia?" There, fixed! Hehe, I just had to, brother. I'm pretty sure you know that era well; force of habit made you write AMD 😛
Sure I do. It's force of habit by now. I've told people so many times that ATi is no more and it's AMD now, that I simply treat them as one entity 🙂