Microsoft Eying DirectML as DLSS alternative on Xbox

Fox2232:

Those days when games had 32x32 textures at most? Days when AF was buggy and its effect changed depending on surface angle? Days when Matrox had better IQ than AMD, and AMD had better IQ than nVidia? I do not miss those days. Sure, I had my GPU overclocked by 40% and a Quadro softmod, but I do miss the ability to force the AA of my choice into any game.

And I agree, TAA in almost every form it has been implemented to date should die. Per-pixel IQ loss just to get rid of potential shimmering, in games that do not suffer from it anyway... Then someone comes along and makes an IQ comparison of bad TAA against DLSS. At least here it is shown for what it is in terms of SNR: TAA is often much worse than a no-AA image rendered at a lower resolution. TAA on the Zen garden example is a plain "WTF?". If someone took those crops and showed each to 1000 random people, most of them would fail to recognize the greenery after TAA messed it all up. And the paradoxical part is that SSIM tells you the TAA image is 99.3% similar to the reference, while the no-AA image is 96.4% similar and looks much better.

I think they could use something like early discard, because the unprocessed image has reasonable IQ on most surfaces and needs help mainly with edges, while each of these methods blurs surfaces to some degree. It is nice that they used 256x SSAA for the reference images, but when the downsampling method is not sharp enough to preserve fine detail, the reference image is nothing to write home about.
I was referring to games from 2016 and earlier.
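For context on the SSIM percentages quoted above: figures like 99.3% are per-crop structural similarity scores against the supersampled reference. Below is a minimal sketch of how such numbers are typically computed with scikit-image; it assumes aligned, same-sized screenshot crops, and the file names are placeholders rather than anything from the actual comparison.

```python
# Editorial sketch (not from the article): computing per-crop SSIM against a reference.
# File names are placeholders; crops are assumed to be aligned and the same size.
from skimage import io, color
from skimage.metrics import structural_similarity

def ssim_against_reference(reference_path: str, test_path: str) -> float:
    """Load two aligned crops, convert to grayscale, and return their SSIM in [0, 1]."""
    ref = color.rgb2gray(io.imread(reference_path)[..., :3])   # drop alpha if present
    test = color.rgb2gray(io.imread(test_path)[..., :3])
    return structural_similarity(ref, test, data_range=1.0)

if __name__ == "__main__":
    # Placeholder names for the 256x SSAA reference and the TAA / no-AA crops.
    for name in ("taa_crop.png", "no_aa_crop.png"):
        score = ssim_against_reference("ssaa_reference_crop.png", name)
        print(f"{name}: SSIM = {score:.3f}")
```

Because SSIM is dominated by local luminance, contrast, and structure statistics, a heavily blurred crop can still score higher than a noisier but visually sharper one, which is the paradox the comment points at.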
First thing that comes to my mind: if ML upscaling was an option, why did Nvidia sink so much money into R&D and allocate whole server farms for DLSS? It seems astoundingly wasteful, even if the goal was to make it proprietary.
Stormyandcold:

It's not apples to apples though; it's still a different node. Nvidia clearly miscalculated and should've stuck with TSMC. What we're seeing is apples and oranges: with AMD being on a smaller node, it would've been a disaster for AMD if they still couldn't compete even with that advantage.
Nodes hardly make a difference these days, especially when you consider Samsung's is only off by 1nm (maybe not even that much, depending on how you measure node sizes). People make fun of Intel's 14nm+++++, but when they aren't compensating for their lack of innovation with more clock speed, their efficiency is actually still very competitive against AMD. The main reason to shrink the node now has more to do with squeezing more product out of a single wafer. So if you really want to get pedantic, it's more like an apples to pears comparison.
My only concern is that, unlike DLSS, DML implementations take 20 ms.
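To put that figure in perspective, a quick back-of-the-envelope check against common frame-time targets (editorial sketch; the 20 ms number is the commenter's, the rest is simple arithmetic):

```python
# Rough frame-budget arithmetic for the 20 ms figure quoted above (editorial sketch,
# not a DirectML measurement): compare the pass cost against common frame-time targets.
UPSCALE_MS = 20.0  # cost of the upscaling pass, as quoted in the comment

for fps in (30, 60, 120):
    budget_ms = 1000.0 / fps  # total per-frame budget at this target framerate
    verdict = "fits" if UPSCALE_MS < budget_ms else "does not fit"
    print(f"{fps:>3} fps -> {budget_ms:5.1f} ms/frame; a {UPSCALE_MS:.0f} ms pass {verdict}")
```

At 30 fps the whole frame budget is 33.3 ms, so a 20 ms pass could squeeze in; at 60 fps the budget is 16.7 ms, so such a pass would already exceed the entire frame on its own.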
schmidtbag:

it's more like an apples to pears comparison.
OK, I can accept that.
Well, comparing it to bilinear is like comparing your new 2020 car against the original Ford Model T... I really don't understand the appeal of this kind of tech. Used as AA at the same output resolution it sounds reasonable, like many other techniques with their own drawbacks. But spending GPU power to upscale a lower-res render instead of using it to actually render things? If that actually "solves" anyone's problem, I think it screams that the problem is self-generated/silly in the first place (not saying it would be easy to solve, but doable). Problems like a really, really bad driver<->API stack, really aged rendering tech, screens with resolutions above what users can actually see from a couch, etc. I'd rather have nvidia/amd/ms/vulkan-consortium working full steam on more and better variable rate shading, better ray tracing integration, etc.