AMD: There is no such thing as full support for DX12 today

Moderator
This is all pretty pathetic and just proves my point about PC gamers being so hypocritical. Then there's the absolutely childish behaviour and utter delusion from a large portion of AMD loyalists. The competition has been leading the way for how long now and, without question, still is, constantly leaving AMD to play catch-up. Just one meaningless AMD-sponsored benchmark, the competition's lack of support for a feature that has been and still is of no real use, on an API that is still meaningless until we have actual games, and you smell blood and this is how you act. ****ing disgraceful.
Have you read what you just said in context?
Have you read what you just said in context?
Not quite sure what you mean; I think he has a point, to be honest... This whole DX12 thing smacks of unsubstantiated claims and trolling, then people jumping on the bandwagon thinking they can give credibility to what is most likely made-up garbage. There may be some issues, but until DX12 is launched officially I could claim anything... Wait, are they trying to influence buying habits, or is it just petty "your GPU can't do this, so there... my games are gonna get a 5 fps improvement over yours" or "my hair/dust particles are gonna look amazing"?
Have you read what you just said in context?
****, wrong thread. loooool.
Lol ... noobs keep changing the chart. The primary DX12 feature level chart can be found at the link below and is modified daily by people in the field. https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D
While it is inconvenient that some active fella keeps changing it over and over, he only adds stuff to it and is not altering the information that was originally present. He added async shaders, cross-node sharing, UAVs at every stage, maximum sample count for UAV-only rendering, and logical blend operations. Well, and he removed MS's reference. But looking at what he added, the entries are mostly the same for AMD and nV, so it does not look like he is taking sides. (Though I would mark async shaders on nV as partial, since it works, just not as intended => which may mean a poor implementation.)
Something to look at: there's a bit of a problem on the MS front. Many features that were optional for 12_1 have suddenly become required, and even some 12_0 features that were optional have suddenly moved back to non-optional, or vice versa... (let alone the problem with Tier 1/2/3), which move up and down from required to optional. Maybe MS should have looked at 12_0/12_1 a bit differently, taken all the features supported by everyone, and put the rest on the optional side. Even today, there's not one driver from AMD, Intel or Nvidia that exposes all the features available on their architecture. With each driver we find new cap bits that were not exposed, or only exposed partially. Example: Cat 15.8 beta: Geometry Shader bypass performance cap bit finally exposed.
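For anyone who wants to check what their own driver actually exposes rather than trusting the chart, below is a minimal sketch (assuming a Windows 10 SDK toolchain with d3d12.lib linked; the handful of caps printed is just an illustrative selection, including some of the ones discussed above) that asks ID3D12Device::CheckFeatureSupport for the reported feature level and a few optional tiers/cap bits:

```cpp
// Minimal sketch: query the feature level and a few optional D3D12 caps that the
// installed driver reports, instead of relying on a third-party feature-level chart.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
    {
        std::printf("No D3D12-capable device/driver found.\n");
        return 1;
    }

    // Highest feature level the driver claims to support.
    D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
                                   D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS fl = {};
    fl.NumFeatureLevels = 4;
    fl.pFeatureLevelsRequested = levels;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &fl, sizeof(fl));
    std::printf("Max feature level: 0x%x\n", fl.MaxSupportedFeatureLevel);

    // Optional caps: tiers and cap bits that can change from one driver release to the next.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));
    std::printf("Resource binding tier:           %d\n", opts.ResourceBindingTier);
    std::printf("Tiled resources tier:            %d\n", opts.TiledResourcesTier);
    std::printf("Conservative rasterization tier: %d\n", opts.ConservativeRasterizationTier);
    std::printf("ROVs supported:                  %d\n", opts.ROVsSupported);
    std::printf("Typed UAV load (extra formats):  %d\n", opts.TypedUAVLoadAdditionalFormats);
    std::printf("Output merger logic ops:         %d\n", opts.OutputMergerLogicOp);
    std::printf("Cross-node sharing tier:         %d\n", opts.CrossNodeSharingTier);
    return 0;
}
```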
While it is inconvenient that some active fella keeps changing it over and over, he only adds stuff to it and is not altering the information that was originally present. He added async shaders, cross-node sharing, UAVs at every stage, maximum sample count for UAV-only rendering, and logical blend operations. Well, and he removed MS's reference. But looking at what he added, the entries are mostly the same for AMD and nV, so it does not look like he is taking sides. (Though I would mark async shaders on nV as partial, since it works, just not as intended => which may mean a poor implementation.)
Currently there is only one source for the table, and that is the Wiki. A slew of experts from MS, Intel, Nvidia and AMD modify the Wiki table on a daily basis. Do you really think his "cut & paste" interpretation of what the table should look like is in some way superior to theirs? Considering his resulting table, it's understandable why he did not include a link to the source.
It is precisely this sort of thing that makes me appreciate the stupidity of the typical consumer. You buy something and expect it to fully support a feature which goes live a year later, and even then it's insignificant because the games won't even exist for some time after that. But the "green team" and "red team" go on the defensive/offensive, muddying the issue with feelings and uneducated opinions. Yeah, not many quantitative facts in my post either, but seriously, can your GPU play current-gen games perfectly? Yes. Does AMD show an advantage in one single AMD-preferred demo? Absolutely. Does it matter at all?
Lol that reddit thread, did you guys even read the comments? Seems like no one even knows how to interpret the results including the guy who wrote the program.
Really? I thought I read that Maxwell is meant to be able to do 32, while GCN 1.0 is 2, GCN 1.1 is 4 and GCN 1.2 is 8.
It is 1 + 2 mixed, 1 + 8, and 1 + 8, and compute-only 2, 8 and 8 for GCN 1.0, 1.1 and 1.2 respectively (though the 260-series GCN 1.1 seems to be 1 + 2 while the 290 is 1 + 8). But yes, Maxwell should have 1 + 31 mixed and 32 for compute-only.
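Just to put those numbers in context: on the API side an application never sees the queue counts at all. A rough D3D12 sketch below (assuming you already have a valid ID3D12Device*; function and variable names are made up for illustration) shows that "async compute" is simply a second COMPUTE-type command queue created alongside the DIRECT one, and it is the driver/hardware that decides how many of the hardware queues above actually service it concurrently.

```cpp
// Sketch: in D3D12, async compute is expressed as an extra command queue of type
// COMPUTE; how it maps onto the GPU's hardware queues is entirely driver-controlled.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HRESULT CreateQueues(ID3D12Device* device,
                     ComPtr<ID3D12CommandQueue>& graphicsQueue,
                     ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // DIRECT queue: accepts graphics, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    HRESULT hr = device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));
    if (FAILED(hr)) return hr;

    // COMPUTE queue: compute/copy only; work submitted here may overlap the DIRECT queue.
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));
}
```

Synchronisation between the two queues is then done with fences, and whether the work really overlaps in time is exactly what the benchmark in this thread is trying to measure.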
Lol that reddit thread, did you guys even read the comments? Seems like no one even knows how to interpret the results including the guy who wrote the program.
The program was an attempt to produce some data; it is not definitive, and I believe it was done by a graphics developer (a lot of the folk where the program originated are game developers) -- it's basically a work in progress and may continue to be so well after the first DX12 game (Fable Legends) is released in Oct. 2015.
The program was an attempt to produce some data; it is not definitive, and I believe it was done by a graphics developer (a lot of the folk where the program originated are game developers) -- it's basically a work in progress and may continue to be so well after the first DX12 game (Fable Legends) is released in Oct. 2015.
I'm familiar with Beyond3D. I'm just pointing out that there is a huge argument going on in that thread about how to interpret the results of that program. So posting the reddit thread here and saying anything definitive about the results is misleading.
I'm familiar with Beyond3D. I'm just pointing out that there is a huge argument going on in that thread about how to interpret the results of that program. So posting the reddit thread here and saying anything definitive about the results is misleading.
I agree but it is interesting
I bought a 960 because of DX12. Damn, could've gotten a refurb 290 or a new 280X. No real difference now, is there? And now the AMD cards will get even more powerful because of better async support and lower latency, and the consoles will get a boost too. #PCMR will end soon.
Does any of this even matter? Seriously, you guys are at each other's necks over HYPOTHETICAL DETAILS! FFS, even if Nvidia did support every goddamn thing in DX12(_1) and AMD only supported the most basic things, it wouldn't really matter much today. These architectures are literally a year old at this point (even older if you take into account that they are revisions), and will be two years old by the time some real DX12 games come around; not to mention how much longer they were in development before the final DX12 spec. If you bought either of these year-old architectures, both known to have varying levels of support right now, to play hypothetical games a year from now with future features, then you are just throwing away your money. Pascal is what you should be screaming at if it doesn't support nearly everything, and the same goes for whatever AMD's next architecture is.
Pascal is like a ghost; nobody knows anything. And HBM2 is still in AMD & Hynix's hands. Until it comes to market we still have the old-"gen" DX11.2/DX12_0 cards, which are still good.
I'm familiar with Beyond3D. I'm just pointing out that there is a huge argument going on in that thread about how to interpret the results of that program. So posting the reddit thread here and saying anything definitive about the results is misleading.
That is because they have a lot of tests for compute running alone, from 1 to 128 in depth. Then there is just ONE value for graphics rendering time, so one should not overlook it. And finally a 1~128 test for both graphics + compute together. The proper display for that data is 3 graphs:
1st: the 1~128 values for compute only (the lowest values; the best-case scenario if async shaders were magical).
2nd: the 1~128 theoretical result, taking the graphics rendering time and adding the values from the 1st graph to it (the highest values; the worst-case scenario where execution is not done in parallel at all).
3rd: the 1~128 values for compute + graphics running at once (the real-world result, comparable against the best/worst-case scenarios).
And I would add a 4th graph showing, for each 1~128 value, the percentage difference: how far it moved from the worst-case towards the best-case scenario.
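If it helps, here is a rough sketch of that 4th-graph calculation (all names and numbers are made-up placeholders, not values from the actual benchmark; in practice you would paste in the measured millisecond values for depths 1~128). Following the description above, it takes the compute-only time as the best case, graphics time plus compute time as the worst case, and reports how far the measured combined run sits between the two:

```cpp
// Sketch of the proposed 4th graph: percentage of the way from the serial worst case
// to the "magical async" best case. Placeholder data only.
#include <cstdio>
#include <vector>

int main()
{
    // Example placeholder data for depths 1..8 (the real test runs 1..128).
    double graphicsOnly = 10.0;                                            // single graphics-only time (ms)
    std::vector<double> computeOnly = { 2, 4, 6, 8, 10, 12, 14, 16 };      // 1st graph: compute-only times (ms)
    std::vector<double> combined    = { 11, 12, 13, 15, 17, 20, 22, 25 };  // 3rd graph: measured graphics+compute (ms)

    for (size_t i = 0; i < computeOnly.size(); ++i)
    {
        double best     = computeOnly[i];                // best case, per the 1st graph above
        double worst    = graphicsOnly + computeOnly[i]; // 2nd graph: fully serial sum
        double measured = combined[i];                   // 3rd graph: what actually happened
        // 4th graph: 0% = no overlap at all, 100% = compute completely hidden.
        double pct = (worst - measured) / (worst - best) * 100.0;
        std::printf("depth %2zu: best %.1f ms, worst %.1f ms, measured %.1f ms -> %4.1f%%\n",
                    i + 1, best, worst, measured, pct);
    }
    return 0;
}
```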