AMD Announces DirectX 12 Game Engine Developer Partnerships

hehe rest assured, heavy asynchronous compute inside 🙂
hehe rest assured, heavy asynchronous compute inside 🙂
I find it kinda funny that AMD GPU owners called foul over games using lots of tessellation, and now this. Well, if you can't beat them, join them; it's only fair. :)
I find it kinda funny that AMD GPU owners called foul over games using lots of tessellation, and now this. Well, if you can't beat them, join them; it's only fair. :)
hehe very friendly wording, but there is some truth to this.
And why do you sound concerned? Nvidia already stated their GPUs fully support asynchronous compute.
And why do you sound concerned? Nvidia already stated their GPUs fully support asynchronous compute.
Made me smile 😀
I find it kinda funny that AMD GPU owners called foul over games using lots of tessellation, and now this. Well, if you can't beat them, join them; it's only fair. :)
I'll call foul play against the customer any day when you place 80 thousand polygons onto a completely flat surface, or when a game developer uses something like 10 polygons per pixel. And you should too, because while nVidia GPU owners were not hit that hard by that "marketing strategy", their performance was still degraded.
They forgot to mention Crytek and CE 5.
This is nice propaganda if you only look at one side 😀
They forgot to mention Crytek and CE 5.
What about Crytek and Cry Engine 5?
I'll call foul play against the customer any day when you place 80 thousand polygons onto a completely flat surface, or when a game developer uses something like 10 polygons per pixel. And you should too, because while nVidia GPU owners were not hit that hard by that "marketing strategy", their performance was still degraded.
I guess I kind of agree if devs are doing that, but tessellation, when used correctly, is pretty powerful. It reduces memory bandwidth because it reduces vertex data, it cuts the number of draw calls in a scene, and it allows for scalable LODs. Most people see something like HairWorks with tessellation and go "kills performance, nothing like Async, which actually speeds it up", but what if HairWorks didn't use tessellation? It would probably be like a 3-4x performance hit.

Conservative Rasterization is another one. There was a thread the other day where, again, people were saying that it's not as important as Async because it doesn't speed up graphics. And yet Nvidia uses it to do real-time raytraced shadows in The Division, which I don't think has ever been done before, and they look great as well. So yeah, if devs are smothering it everywhere, then screw 'em, but if they use it correctly, those techniques definitely add detail while either having a minimal performance hit or actually increasing performance in some cases.

I will also say, though, that I disagree with what they said about Async. It should be used as long as it's being used properly. If it's detracting from Nvidia's performance, then AFAIC it's not being used correctly; just disable it altogether on Nvidia's hardware in that case. Async is as good a tool as any other and it should be utilized.
What about Crytek and Cry Engine 5?
Pretty sure AMD is partnered with them too.
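The "just disable it on Nvidia's hardware" point above maps fairly directly to how a DX12 renderer submits work. Below is a minimal, hypothetical sketch (not from any shipping engine): compute work goes to a dedicated compute queue when async compute is wanted, or to a plain direct queue when it isn't. The `useAsyncCompute` flag and the function name are illustrative assumptions; only the D3D12 types and calls are real.

```cpp
// Minimal sketch, error handling omitted. A COMPUTE queue lets compute work
// overlap graphics work (async compute); routing the same work through a
// DIRECT queue effectively serialises it, i.e. "async compute disabled".
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical helper: the bool would come from a per-GPU/per-vendor config;
// it is not a real D3D12 or driver flag.
ComPtr<ID3D12CommandQueue> CreateComputeSubmissionQueue(ID3D12Device* device,
                                                        bool useAsyncCompute)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type  = useAsyncCompute ? D3D12_COMMAND_LIST_TYPE_COMPUTE
                                 : D3D12_COMMAND_LIST_TYPE_DIRECT;
    desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```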
What a punch in NVIDIA's face, but I feel it's either too early or too late. I'm hoping to see some reaction, but I don't want games with thousands of people just to hit performance on Nvidia... That's just as bad as their practice.
I guess I kind of agree if devs are doing that, but tessellation, when used correctly, is pretty powerful. It reduces memory bandwidth because it reduces vertex data, it cuts the number of draw calls in a scene, and it allows for scalable LODs. Most people see something like HairWorks with tessellation and go "kills performance, nothing like Async, which actually speeds it up", but what if HairWorks didn't use tessellation? It would probably be like a 3-4x performance hit. Conservative Rasterization is another one. There was a thread the other day where, again, people were saying that it's not as important as Async because it doesn't speed up graphics. And yet Nvidia uses it to do real-time raytraced shadows in The Division, which I don't think has ever been done before, and they look great as well. So yeah, if devs are smothering it everywhere, then screw 'em, but if they use it correctly, those techniques definitely add detail while either having a minimal performance hit or actually increasing performance in some cases. I will also say, though, that I disagree with what they said about Async. It should be used as long as it's being used properly. If it's detracting from Nvidia's performance, then AFAIC it's not being used correctly; just disable it altogether on Nvidia's hardware in that case. Async is as good a tool as any other and it should be utilized.
Pretty sure AMD is partnered with them too.
I agree that tessellation is important, but instead of having the most polygon-heavy flat roadblocks in any game ever, Crysis could have had much higher environment complexity (higher grass density, better trees, more clutter on the ground). With W3 HairWorks, the problem was again not tessellation itself, but having it so dense that even 4K very close-up screenshots would not show a difference between x64 and x32. And that's not even asking whether it was actually limited to x64 to begin with; maybe they used x128 originally. At 1080p I could see a difference between x16 and x8 if I moved the camera close enough. As you said, technologies have to be used smartly to improve, not to cripple. And the result was bad anyway: half of the people who could run HairWorks at a decent fps still disabled it, as it looked weird on Geralt, and on many animals it looked plain wrong and creepy (nothing like those cinematic screenshots).
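To put the x64/x32/x16 factor talk in concrete terms, here is a small, hypothetical C++ helper in the spirit of "use it smartly": scale the tessellation factor with camera distance and clamp it to a cap, much like the limit a driver-level tessellation override imposes. The function name and constants are illustrative only, not taken from HairWorks or any real engine.

```cpp
// Hypothetical distance-based tessellation factor with a user cap.
// Up close you get the full factor (e.g. 16x or 32x); far away it falls
// towards 1 (no tessellation) instead of staying at 64x everywhere.
#include <algorithm>

float ChooseTessFactor(float distanceToCamera, float maxFactor /* e.g. 16.0f */)
{
    const float fullDetailRange = 2.0f;  // distance (metres) at which max detail is still kept
    const float clampedDistance = std::max(distanceToCamera, fullDetailRange);
    const float falloff         = fullDetailRange / clampedDistance;  // 1.0 up close, shrinking with distance
    return std::clamp(maxFactor * falloff, 1.0f, maxFactor);          // requires C++17 for std::clamp
}
```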
Nice marketing, AMD. Reading this article, you would think the only feature of DX12 is asynchronous compute.
Nothing special... Only Deus Ex looks interesting, and that was an AMD game in DX11 too, but it ran perfectly fine on NV, if not even better later on.
What about Crytek and Cry Engine 5?
They have become AMD's exclusive GPU partner for DX12 and VR, ergo Monster Hunter Online is AMD now, and possibly King of Wushu.
Nothing special... Only Deus Ex looks interesting, and that was an AMD game in DX11 too, but it ran perfectly fine on NV, if not even better later on.
well you are a bit butt-hurt
They have become AMD's exclusive GPU partner for DX12 and VR, ergo Monster Hunter Online is AMD now, and possibly King of Wushu.
Exclusive partner for DX12? King of Wushu... Where do you get this stuff? Linkie? https://developer.nvidia.com/content/nvidia-gameworks-chinas-kung-fu-games All I can find is AMD partnering with Crytek for VR First (should probably be called 6th of eight 😀). Remember AMD partnering with Square Enix to deliver Rise of the Tomb Raider with Gameworks/VXAO/HBAO+? Well, that's nothing compared to AMD partnering with Dell and Oculus for VR-ready machines, with every single PC that Oculus and Dell are offering being Nvidia/Intel powered: https://www.oculus.com/en-us/oculus-ready-pcs/
What I like the most is the AMD logo on the box and the NVidia one when the game starts... (even for a game bought with a redeem code from AMD). It is just propaganda, and nearly every major game uses both the same way... on this and that point it works better with NVidia, and on this and that other point it works better with AMD... but in the end it is almost the same.
well you are a bit butt-hurt
He shouldn't be; I still play DX9 stuff and enjoy it 🙂 I prefer a good game in DX9 or DX11 over a bad one in DX12; the quality doesn't come from the higher version of DX used.
I'll call foul play against the customer any day when you place 80 thousand polygons onto a completely flat surface, or when a game developer uses something like 10 polygons per pixel. And you should too, because while nVidia GPU owners were not hit that hard by that "marketing strategy", their performance was still degraded.
What I meant was that both will do whatever they can to be King given the chance. I'm not saying it's good for customers, but both do it, even though only one company ever gets the blame.