Quick comparison: GeForce 378.66 versus 378.78 DirectX 12 performance

https://forums.guru3d.com/data/avatars/m/250/250418.jpg
*cof* "33% improvement" *cof*
https://forums.guru3d.com/data/avatars/m/56/56686.jpg
Eh, I don't see differences; it all looks to be within margin of error. Maybe they forgot to actually put the improvements "in" the drivers?
https://forums.guru3d.com/data/avatars/m/209/209146.jpg
*cof* "33% improvement" *cof*
"Up to" I'm guessing, pair a 1080Ti or whatever with a weaker CPU and it might show more substantial gains from this optimization but eh I have no idea what the driver side of things look like for DX12, wasn't it meant to move over to the developers. (As expected multi-GPU supported lagging behind but there are some gains here and there for both AMD and Nvidia GPU's even if overhead problems for AMD probably makes it look like a bigger win for AMD with DX12 or how to say.) I do like what DOOM did with Vulkan though but I'm using a AMD GPU and well for D3D11 on AMD you "only" have a variable driver overhead issue with CPU usage or some such whereas OpenGL 4.x is well it seems OGL (All versions?) on AMD is semi-effed or something far as I've read so understandably Vulkan showed some really nice improvements if that's anywhere near true. :P As for AMD overhead I don't really know how big of a thing that is for D3D11, testing with something like a 5960X you'd think that would eliminate it quite handily but maybe not? (Then again games are just starting to benefit a bit from hexa core so eh, finally? :P ) EDIT: Nice to see D3D12 can be optimized via drivers of course, whether it's AMD or Nvidia, gains are always a plus. 😀
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
Nvidia was also comparing it to the 1080 release driver... which definitely makes the 16% slightly misleading. But whatever, I'll take any increase I can get.
data/avatar/default/avatar03.webp
I'll be getting the 1080 Ti once 3rd-party cards with better cooling arrive. Until then, can someone confirm any performance boost with this driver vs the older one with Maxwell cards, like the 970, 980, 980 Ti and Titan X?
https://forums.guru3d.com/data/avatars/m/242/242134.jpg
So Guru3D is comparing it to a different driver than what Nvidia used, only to turn around and say the gains are not there. Right... I mean, why not do both, a recent one and the old driver? This way we can check Nvidia's claim, but also compare the gains vs a newer release.
https://forums.guru3d.com/data/avatars/m/16/16662.jpg
Administrator
So Guru3D is comparing it to a different driver than what Nvidia used, only to turn around and say the gains are not there. Right... I mean, why not do both, a recent one and the old driver? This way we can check Nvidia's claim, but also compare the gains vs a newer release.
Because it's 10 PM here and at some point I do need some free time? Perhaps I'll have a peek tomorrow and compare to the launch driver, as yes, obviously nobody ever updates their drivers and everyone is still using the May 2016 driver, right?
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
Yeah, I don't get why Nvidia compared it to the launch driver... unless it was just out of convenience, because they already had the launch driver FPS numbers in a spreadsheet somewhere or something.
data/avatar/default/avatar32.webp
They seem to have ironed out the issues with Hitman; everything else seems within the margin of error.
However, the question is: what's the FPS in DX11 on a GTX 1080? Because if DX11 is still faster than this (for Nvidia cards, obviously; I know AMD does get a nice boost with DX12 in Hitman), then it's not really an improvement. It's just slightly less bad.
https://forums.guru3d.com/data/avatars/m/239/239175.jpg
Using a "normal" CPU should probably show actual improvements. Top-end enthusiasts won't see much benefit. But "average joe" gamers with normal CPUs should. And I think that's the area that should be tested. This has been the case with previous drivers already. Switching from OGL to Vulkan for Doom 4 for example on a Sandy Bridge i5 makes the game go from 110FPS to 150FPS. Doing the same with top-end CPUs reportedly doesn't result in improvements (and in some cases people report slight performance drops even.)
https://forums.guru3d.com/data/avatars/m/63/63215.jpg
I was expecting this driver. Already knew what it was compared to, but it's not enough, Nvidia. Try harder. <------ Greedy b*****d.
https://forums.guru3d.com/data/avatars/m/259/259654.jpg
NVIDIA is now supposedly wholeheartedly into the whole low-level API thing; I wonder what that would mean for titles they sponsor. Probably nothing, I have a feeling.
https://forums.guru3d.com/data/avatars/m/54/54823.jpg
NVIDIA is now supposedly wholeheartedly into the whole low-level API thing; I wonder what that would mean for titles they sponsor. Probably nothing, I have a feeling.
It won't get marketed like crazy until Volta. The Pascal architecture just isn't all there. :/
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
It won't get marketed like crazy until Volta. The Pascal architecture just isn't all there. :/
Yeah, it's just not all there. Okay.
data/avatar/default/avatar01.webp
These results are much different from mine; I saw much larger improvements. My FPS nearly doubled in some parts of the Ashes of the Singularity benchmark. Still testing other games. What CPU was being used for these tests?
https://forums.guru3d.com/data/avatars/m/183/183421.jpg
What CPU was being used for these tests?
A fast-clocked 8-core / 16-thread processor (5960X @ 4.3 GHz).
data/avatar/default/avatar25.webp
So guru is comparing it to a different driver than what Nv used, only to turn around and say the gains are not there. Right.. I mean, why not do both, a recent one and the old driver ? This way we can check on Nvs claim, but also compare the gains vs a newer release.
http://cdn.mos.cms.futurecdn.net/PagR5uepvmCDmhWMgXdU8o-650-80.png

I don't agree with Nvidia's testing methodology of measuring gains across titles against the GTX 1080 launch driver from May last year. Fact is, some of these games have seen numerous updates from the developers' side too; the executable they used to evaluate performance initially is not the same executable being used to compare with this driver. That's a huge variable. And you'd have to be living under a rock to own a GTX 1080, have stuck to the launch drivers, and never updated since.

Rise of the Tomb Raider didn't have a DirectX 12 patch until July, and it took quite a few patches after the initial DirectX 12 patch to get things working; in May last year, when the GTX 1080 launched, there wasn't a DirectX 12 Rise of the Tomb Raider codepath available, at least not to the public. Tom Clancy's The Division got its DirectX 12 support last December, so how Nvidia got those DirectX 12 numbers back in May with their GTX 1080 launch drivers is beyond me. Same goes for Gears of War 4, which launched on October 11, 2016. And I highly doubt the Ashes of the Singularity executable from back in May 2016 hasn't received updates since then that improved performance.

Even if you install the launch GTX 1080 driver (which probably isn't a good idea on a Windows 10 Anniversary installation), some of the games Nvidia mentions as improved won't have a working "Game Ready" profile within that driver and might crash or glitch. :infinity:
data/avatar/default/avatar37.webp
A fast-clocked 8-core / 16-thread processor (5960X @ 4.3 GHz).
Well, that explains everything. I have to say this is not what I would expect from a professional site like Guru3D. Obviously the results are going to be better on more CPU-limited setups.
data/avatar/default/avatar04.webp
Well, that explains everything. I have to say this is not what I would expect from a professional site like Guru3D. Obviously the results are going to be better on more CPU-limited setups.
http://i.imgur.com/7FcGZsZ.png Come again? So it's professional for a site to bench GPUs in a CPU-bottlenecked scenario, right? Judging from your opinion, Hilbert should have opted for a Core i3-2100 instead.
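For what it's worth, both sides of this argument fit in one toy model: the delivered frame rate is capped by whichever of the CPU side (game plus driver/API submission) or the GPU side takes longer per frame, so a driver that trims CPU overhead only turns into visible FPS once the CPU side is the limiter. A minimal sketch, with every millisecond figure invented purely for illustration:

```python
# Toy model: frame rate is limited by the slower of the CPU-side work
# (game logic + driver/API submission) and the GPU-side render time.
# All millisecond values below are invented for illustration, not measured.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Delivered FPS when CPU and GPU work overlap: the longer side sets the pace."""
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms = 10.0        # hypothetical GPU render time per frame (~100 FPS ceiling)
driver_saving = 2.0  # hypothetical CPU time a leaner driver shaves off each frame

# Fast CPU (think 5960X-class): the CPU side is already well under the GPU time,
# so removing driver overhead changes nothing you can see.
fast_cpu_ms = 6.0
print(fps(fast_cpu_ms, gpu_ms), "->", fps(fast_cpu_ms - driver_saving, gpu_ms))  # 100.0 -> 100.0

# Slower CPU: the CPU side is the bottleneck, so the same saving shows up as real frames.
slow_cpu_ms = 14.0
print(fps(slow_cpu_ms, gpu_ms), "->", fps(slow_cpu_ms - driver_saving, gpu_ms))  # ~71.4 -> ~83.3
```

On that reading the 5960X results aren't "unprofessional", they just isolate the GPU-bound case, while a slower CPU is the setup where driver-side DX12 gains would have room to show up.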