Codemasters Integrates AMD's Image Sharpening Tech in F1 2019

Denial:

I don't think anything is necessary until it is - I mean at some point a caveman got tired of being a caveman when everyone else was fine with it, right?
Sure, sometimes. But most of the time it's the exact opposite: technology is invented because it solves a problem. If early Homo sapiens hadn't come up with tools, we'd have lost out to the Neanderthals, who were more physically resilient. Neanderthals went extinct because our intelligence let us do things they couldn't, and as a result they were unable to adapt and compete. DLSS isn't a solution to a problem; it's a solution looking for a problem. Meanwhile, something like real-time ray tracing is a solution to a problem, because we've pretty much reached the limits of what computer graphics can do without it. Those limits mean 3D scenes are never quite visually realistic. They're maybe 99% there, but that last 1% is the difference between knowing you're in a virtual space and questioning what's real.
Denial:

We had a technology, machine learning, that was being used for all kinds of image processing - so Nvidia decided to pivot that technology into games. They built an SDK, which does more than just DLSS, and said "let's let developers see what they can do with machine learning in video games." It could have led to better AI, DLSS could have been god's gift to gaming, it could lead to things I can't even think of - or it could lead to nothing. Either way, I'm a fan of innovation - it drives the industry forward. NGX drove Microsoft to create DirectML - AMD is already talking about DLSS alternatives on DirectML. So regardless of whether it was necessary at the start, it's pushing developers out of caves and potentially starting something great.
Don't get me wrong, innovation is crucial. Like I said before, it's what gave rise to modern humans. But it doesn't matter how fancy or complex a technology is if the implementation is impractical for what it intends to improve. I could give dozens of examples of products that seemed revolutionary or really creative but failed to take off because a simpler technology could get the job done faster, easier, more reliably, with fewer resources, or at a lower cost. Technology for the sake of technology is not a good reason to make something. So not only do I welcome machine learning and AI, I think it's a necessity for humans to reach whatever our next evolutionary stage is (which IMO is interplanetary/interstellar travel). I think Nvidia's investment in stuff like tensor cores is fantastic, and I look forward to what can be accomplished with it. But DLSS is not a step forward; it's basically just leaning forward with your feet planted in the same spot. It's a woefully overcomplicated tool (and an expensive one, if it's the only use you get out of the ML hardware) that not many people have access to, all to yield a visual improvement that can really only be appreciated when you stop what you're doing and stick your face right up to the screen. I'm not saying DLSS shouldn't exist, but rather explaining why it isn't grabbing anyone's attention.
Denial:

DLSS is built on NGX - DirectML is essentially an alternative framework to NGX. Nvidia had to build NGX because DirectML didn't exist. The rest of the stuff you wrote I'm aware of, but it doesn't have anything to do with NGX vs DirectML, or with the argument of whether Nvidia could have done DLSS in a way AMD could benefit from. I don't think anything is necessary until it is - I mean, at some point a caveman got tired of being a caveman when everyone else had been fine with it for millions of years.

We had a technology, machine learning, that was being used for all kinds of image processing - so Nvidia decided to pivot that technology into games. They built an SDK, which does more than just DLSS, and said "let's let developers see what they can do with machine learning in video games." It could have led to better AI, DLSS could have been god's gift to gaming, it could lead to things I can't even think of - or it could lead to nothing. Either way, I'm a fan of innovation - it drives the industry forward. NGX drove Microsoft to create/finish DirectML - it drove AMD to pursue DLSS alternatives on DirectML. So regardless of whether it was necessary at the start, it's pushing developers out of caves and potentially starting something great.

This is something Nvidia has done for a while now, whether gamers like them or not. AMD's GPUOpen alternatives are all in response to GameWorks. AMD wasn't talking about FreeSync or using vblank to sync frames to the GPU prior to G-Sync. AMD wasn't investing in ray-collision hardware until Nvidia pushed RTX. Obviously it goes both ways, with AMD pushing Nvidia on certain things (async compute) - I wish Nvidia would be open with their tech and libraries, but regardless, it's still innovative and drives the industry forward.
I can agree with everything except the claim that AMD did not work on RT until Nvidia introduced RTX. AMD did work on a hardware-level implementation and had patents before Nvidia's RTX introduction. Their patent on the use of upgraded TMUs is from December 2017, and it's apparent that you first have to work things out before you go and file a patent.
Fox2232:

I can agree with everything except the claim that AMD did not work on RT until Nvidia introduced RTX. AMD did work on a hardware-level implementation and had patents before Nvidia's RTX introduction. Their patent on the use of upgraded TMUs is from December 2017, and it's apparent that you first have to work things out before you go and file a patent.
Eh, they worked on stuff, but Nvidia definitely telegraphed it all first:

https://patents.google.com/patent/US20160071313A1
https://patents.google.com/patent/US20160071310A1

Nvidia filed their patents for BVH acceleration in 2015. RTTrace existed in OptiX in 2016 (ray intersection on the GPU). They had AI denoising in Iray in 2017. RTX as a brand, sure - AMD had stuff before that, but Nvidia's work on getting to real-time ray tracing has been a thing since around 2013-2014. And don't get me wrong, AMD does a lot of cool, innovative things first, and when they follow up on something Nvidia did, they usually do it better, in a way that's more developer-friendly - I just think a lot of people write Nvidia's innovations off because they're proprietary, but the reality is they really push the industry forward with all that.
Free, sharper IQ is awesome, but... even though I normally notice these sorts of things, that is a really tiny difference - almost not even noticeable. Maybe it's not a good example?
[QUOTE="I just think a lot of people write Nvidia's innovations off because they are proprietary but the reality is they really push the industry forward with all that. Fully agree on this they are proprietary unfortunately but yeah they push everything forward with raytracing especially... been a long time coming... come on AMD... 🙁