NVIDIA Will Fully Implement Async Compute Via Driver Support

Excellent.
Just going to increase latency like the AMD one, perhaps even worse since it's partially software based. I doubt this will yield any real benefits. Then again, it's not like it matters: AoS is essentially the same thing as the 3DMark draw call test, and the Fury X and the 980 Ti tie in performance. Why people care about this stuff so much is beyond me. The console guys see 30% increases because those systems are already CPU starved. Go look at AoS benchmarks that pair slow processors with fast GPUs (PC Perspective has a good example): the worse the processor, the bigger the difference DX12 makes. But in the meantime you have people posting stupid bull****, including technical review sites, like the Ars Technica article that compares the 290X to the 980 Ti and circle-jerks over the fact that a $300 card performs the same as a $650 one in the AoS benchmark. What they fail to mention is that it also performs the same as the Fury X. But that doesn't fit the current narrative, so they don't mention it. Similarly, Nvidia is also ****ing stupid for not just getting an engineer to explain it at all. Have Tom Petersen sit down with some slides so people can understand what goes on with this stuff. AMD should also probably put a leash on some of their employees; the technical marketing guy who made all those posts on Reddit is looking pretty stupid right now.
+1. If Nvidia comes out on top with this driver, AMD is going to look ridiculous, at least until heavier async games come out, as I assume a software implementation will only get Nvidia so far. If this driver doesn't make all that much of a difference, Nvidia is going to have a hard time selling cards if AMD plays its cards right.
Increasing latency? Where do you get that? I'm hoping it's not from the little test posted on Beyond3D, because that little piece of code was absolutely not intended to be a benchmark; it was only meant to show whether async is on or off. It's not code written for GCN either; in fact it will run really badly on any AMD GPU, but that wasn't its purpose. I don't even understand what latency you're talking about: if latency increased, it would mean the task takes longer to complete, so where you get that from, I don't know. The whole point of async is precisely to reduce the latency of running compute + graphics commands serially. That said, I wish them all the best in simulating hardware scheduling in software, i.e. in the driver.
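For anyone following the thread who isn't sure what "async compute" actually means at the API level: in D3D12 it simply means the application creates a compute-type command queue next to its normal direct (graphics) queue and lets the GPU and driver decide how much of the two workloads to overlap. Below is a minimal sketch of that API usage, assuming a device and pre-recorded command lists already exist (the names are illustrative); how a given driver actually schedules the two queues internally, in hardware or in software, is exactly the part under debate here.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a direct (graphics) queue and an additional compute queue on the
// same device. Work submitted to the two queues is ALLOWED to overlap;
// whether it actually does is up to the hardware and the driver.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& directQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC direct = {};
    direct.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    device->CreateCommandQueue(&direct, IID_PPV_ARGS(&directQueue));

    D3D12_COMMAND_QUEUE_DESC compute = {};
    compute.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&compute, IID_PPV_ARGS(&computeQueue));
}

// Submit graphics and compute work "asynchronously": the compute list goes to
// the compute queue, and a fence makes the direct queue wait only at the point
// where it actually consumes the compute results.
void SubmitFrame(ID3D12CommandQueue* directQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList* computeList,
                 ID3D12CommandList* graphicsList,
                 ID3D12Fence* fence, UINT64& fenceValue)
{
    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence, ++fenceValue);           // mark compute as done

    directQueue->ExecuteCommandLists(1, &graphicsList);  // free to run concurrently
    directQueue->Wait(fence, fenceValue);                // GPU-side wait, not a CPU stall
    // ...any later graphics work that reads the compute output goes after the Wait...
}
```

If a driver ends up running the two queues back to back, the submission above still works; it just behaves like the serial case being described, which is why the "supported" question is really a question about scheduling, not about the API accepting the calls.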
I have little idea why you would choose Madigan's comments (a former ATI employee) to explain something regarding Oxide and Nvidia's architecture on the main Guru3D front-page news. It's like asking AMD to do Nvidia's marketing. Why not provide the complete Oxide developer's comments instead?
I haven't linked any of Madigan's posts here, only Oxide's. The two quotes in the OP are from Kollock of Oxide. 2nd: I did; it's in the link within the article I quoted from DSO. Look for "As Oxide's developer 'Kollock' wrote on Overclock.net..". The link points to Overclock.net. Besides, I'm not going to paste an entire post like that into a new thread as the OP; people can go to the source if they want more info. So can you edit your post please? What you've quoted is already linked above.
NVidiA .... meant to be PLAYED.... Love U /// GEFORCE
Sorry, I was referring to the comment on the main Guru3D news page. I thought these were the comments related to that...
My only question is why Nvidia exposes a feature as available in their drivers when it is not ready to be used.
Perhaps an accident or an oversight by the driver team? Who knows; only Nvidia does, and that's another reason why they need to make a statement on this. Staying silent isn't going to help, but Nvidia tends to keep wars of words to a minimum, while AMD plays the look-at-us card all the time.
Wow! I'm so confused. The comments in here are linked to that article on the main page: completely different OPs but using the same comments. :infinity:
Maybe the feature is supported in hardware, but the software doesn't expose it yet, hence the driver update to support it. That would make sense to me in this situation. Either way, it will take game developers time to get these features into their games anyway.
The Beyond3D thread for that test has completely changed from an Nvidia async thread into an AMD latency thread; there are multiple posts and developers talking about AMD's async latency in it. Is it a problem? No, I don't think it is, but neither is Nvidia's solution to the same problem. You can sit here and say "Nvidia's implementation of async is wrong, it's serial, etc., blah blah", but it's irrelevant, because in a test that sends 1000x more draw calls than you'll ever see in a real game, Nvidia's solution matches AMD's in performance. And let's be real: if Nvidia/Oxide are claiming that Nvidia is going to fix this in a driver, does it really matter whether it's software or hardware? I highly doubt Nvidia is going to release a driver that either cripples performance or merely side-grades it.

Again, people keep talking about the implementation and the method being used, but they completely ignore the final result, which is what everyone should care about: the performance. In AoS, from 1080p through 4K, the difference in performance between a 980 Ti and a Fury X is negligible; they perform essentially the same. And the game is literally designed as a draw call benchmark, so it's not a real insight into how DX12 will perform in an actual title. Proof of this is that the 290X and the 980 perform nearly identically to a Fury X/980 Ti. Obviously this game isn't testing a graphics card's performance, it's testing the bottleneck of the scheduler. What annoys me is that aside from PC Perspective's podcast and Hilbert (through multiple posts), no one is talking about this. Everyone is just circle-jerking each other over a bunch of irrelevant bull****. Three days ago there was a post on Reddit like "CONFIRMED, NVIDIA DOESN'T SUPPORT DX12 ASYNC SHADERS"; today there is "OXIDE CONFIRMS NVIDIA SUPPORTS ASYNC". Or, the example I already gave: Ars Technica writes an entire article whose conclusion is that AMD's $300 card outperforms Nvidia's $650 one, yet they don't even ****ing investigate why, or compare it to AMD's $650 offerings.

Like I said in my other post, Nvidia is to blame too. They should be way more open and upfront about this stuff, and not only this: about the Kepler bugs in The Witcher 3, about the SLI memory issues (now fixed), about why their drivers ****ing suck on Windows 10, and so on. They don't even post changelogs in their drivers anymore. The Nvidia guy posted something like "I'd have to talk to the docs team"; like, wtf is that? They have an entire team dedicated to documentation, and they release a ****ing driver with zero ****ing documentation. It was literally copied and pasted from the previous release with a find-and-replace to change the driver number.

Tech review sites really need to step up too. I get that some don't focus on the low-level, technical stuff, but the fact that I have to go to some forum and read through 70 pages of AMD/Nvidia fanboy nonsense just to understand what is happening is bull****. I really miss Anand from AnandTech; the guy would have had a 10-page breakdown of the entire architecture, what's occurring, why it's occurring, with actual developers commenting on it. But since Ryan or whoever sold it to Tom's Hardware, that site has literally turned into a giant ****ing billboard of advertising garbage; there is more ad space than article space on it now. Anyway, I'm ranting, but it's frustrating. People keep posting their uneducated, uninformed opinions based on what some armchair-googling nerd thinks is happening.
I'm not going to pretend to understand how the async designs of AMD's and Nvidia's architectures differ, and I'm not going to pretend that I know why AMD's or Nvidia's implementation is better or worse. What I do know and can see is the performance of both in one specific title, one that tests a specific part of the card, and they are EQUAL. So the fanboy circle jerk of "Nvidia is better" or "AMD is better" can end now.
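The "little test" from Beyond3D that keeps coming up works, at its core, by comparing how long the same compute workload takes when it is forced to run after the graphics work versus when it is submitted on a separate compute queue. A rough sketch of that idea follows; it is not the actual Beyond3D code, it assumes the device, queues, command lists and fence already exist, and CPU wall-clock timing like this only gives a coarse signal of whether any overlap happened.

```cpp
#include <d3d12.h>
#include <windows.h>
#include <chrono>
#include <cstdint>

// Block the CPU until everything previously submitted to `queue` has finished.
static void FlushQueue(ID3D12CommandQueue* queue, ID3D12Fence* fence,
                       uint64_t& fenceValue, HANDLE event)
{
    queue->Signal(fence, ++fenceValue);
    if (fence->GetCompletedValue() < fenceValue) {
        fence->SetEventOnCompletion(fenceValue, event);
        WaitForSingleObject(event, INFINITE);
    }
}

// Time graphics + compute submitted either strictly one after the other or
// on two queues at once. Comparing the two numbers hints at whether the
// compute work actually overlapped the graphics work.
double TimeSubmission(bool concurrent,
                      ID3D12CommandQueue* directQueue,
                      ID3D12CommandQueue* computeQueue,
                      ID3D12CommandList* graphicsList,
                      ID3D12CommandList* computeList,
                      ID3D12Fence* fence, uint64_t& fenceValue, HANDLE event)
{
    using Clock = std::chrono::steady_clock;
    const auto t0 = Clock::now();

    directQueue->ExecuteCommandLists(1, &graphicsList);
    if (concurrent) {
        computeQueue->ExecuteCommandLists(1, &computeList);  // free to overlap
    } else {
        FlushQueue(directQueue, fence, fenceValue, event);    // force graphics to finish first
        computeQueue->ExecuteCommandLists(1, &computeList);   // then run compute alone
    }

    FlushQueue(directQueue, fence, fenceValue, event);
    FlushQueue(computeQueue, fence, fenceValue, event);

    const auto t1 = Clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

If the concurrent path is not meaningfully faster than the serial one, the compute work is probably being serialized somewhere in the driver or hardware, which is roughly the kind of signal that test was written to expose, not a measure of real-game performance.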
I wanna hug and kiss you. #nohomo But what you said is the absolute truth, beyond fanboyism and jumping to conclusions. Everyone is rushing to conclusions nowadays. I also agree 100% about Nvidia's failings lately: their communication has totally broken down. It began with the GTX 970 memory fiasco, and after that they did exactly what you described. And yes, they even stopped posting a changelog in their driver release-notes PDF. I suspect that so much is broken now that they decided to conceal the weak spots by simply not publishing them. I feel you can hardly find the truth on any tech site these days.
With a comment like "the worse the CPU, the more difference DX12 makes", isn't it obvious that comparisons will be made between less expensive gear and top enthusiast gear? Leaving AMD and Nvidia aside, DX12, along with G-Sync and FreeSync, lets mainstream gear appear to deliver the same visual experience as top enthusiast gear. If this trend keeps going, top gear like Fury Xs and Titans will not have a place in the market. Phones and tablets may even have a shot at becoming the new must-have gaming rigs. The only thing left to differentiate the experience will be the display device used; GPUs and CPUs will be rendered moot, and one will just end up needing "good enough".
Even though Nvidia's documentation is lacking with regard to the specifics of async compute and other DX12 features, the link below gives a good idea of the concept. It is currently assumed this feature works the same on Maxwell 2 hardware as it does on Kepler/Maxwell 1 hardware, so keep in mind it may or may not function the same once official documentation is released. The article (written 3 days ago) should at least provide a good starting point if you are interested in the concepts. http://wccftech.com/nvidia-amd-directx-12-graphic-card-list-features-explained/4/
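On the earlier question about features being "exposed in the driver": D3D12 does let an application ask the driver what it claims to support through ID3D12Device::CheckFeatureSupport, but as far as I can tell there is no dedicated capability bit for async compute; any D3D12 device will accept a compute queue, and how concurrently that queue runs next to graphics is left to the hardware and driver. The sketch below only shows the general query mechanism (a small subset of the reported fields), so it tells you what a driver reports, not how well a feature actually performs.

```cpp
#include <d3d12.h>
#include <cstdio>

// Ask the driver which feature level and option tiers it reports.
// Note: there is no "async compute" flag here -- any D3D12 device will accept
// a compute queue; whether that work truly overlaps graphics is an
// implementation detail of the hardware and driver.
void PrintReportedFeatures(ID3D12Device* device)
{
    D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels))))
        printf("Max feature level: 0x%x\n",
               (unsigned)levels.MaxSupportedFeatureLevel);

    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options))))
    {
        printf("Resource binding tier:       %d\n", (int)options.ResourceBindingTier);
        printf("Tiled resources tier:        %d\n", (int)options.TiledResourcesTier);
        printf("Conservative rasterization:  %d\n", (int)options.ConservativeRasterizationTier);
    }
}
```

That is part of why this disagreement is being argued with benchmarks and developer statements rather than a spec-sheet checkbox.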
Uh... DX12 is somewhat helping to alleviate the CPU "bottleneck". It doesn't eliminate it, nor does this single benchmark likely reflect the future of gaming. Phones and tablets cannot give you the immersive experience of a desktop gaming rig; Angry Birds and Arkham Knight are different gaming paradigms and will remain that way. "Good enough" is a subjective term: my "good enough" is probably a bit higher than others'.
Tbh I've been thinking you're the most sensible guy here, and that thought still stands. Very good post, man, very good.
IMO, DX12 is still a new concept. Everyone is worried about current Maxwell or Fury cards, but they'll suck compared to the "real DX12" cards that will be released in one to two years, when there are a lot of games supporting DX12 natively.
Point taken.