GeForce 399.07 WHQL driver download

Videocards - NVIDIA GeForce Windows 10 | 11 - Updated by Hilbert Hagedoorn

Cheers bud 🙂
weird... just got an incompatibility error on the latest Win 10 (GTX 1070 laptop)... it said the OS was incompatible. Weirdness, but y'know, I'm going to give it another go... this time on the desktop (1080 Ti)
Thank you for the heads up Boss !
tunejunky:

weird... just got an incompatibility error on the latest Win 10 (GTX 1070 laptop)... it said the OS was incompatible. Weirdness, but y'know, I'm going to give it another go... this time on the desktop (1080 Ti)
OK, so far it seems that the "Windows Insider" builds have an incompatibility issue... will check further in depth later. Can't do the desktop (1080 Ti) yet as it's the start of the workday.
Thanks boss.
Might as well wait for the hotfix that will come out tomorrow 😛
RavenMaster:

Might as well wait for the hotfix that will come out tomorrow 😛
😱 😱:D:D;)
I find a tendency for Nvidia users to stay silent about the problems they have: keep it under wraps and hide any problems to disguise their experience, giving the impression all is well when in fact it's a crazy mess.
I dunno, I'm more excited about the 400 branch. I feel like something is gonna change.
Wonder if RTX will have its own driver set and not have to put these relics in with it? Be a much smaller download! (Kidding of course)
RavenMaster:

Might as well wait for the hotfix that will come out tomorrow 😛
Honestly I'd have to give them credit if it were out anytime soon. I give it a week.
Neo Cyrus:

Honestly I'd have to give them credit if it were out anytime soon. I give it a week.
Yeah, unfortunately I think you're right. The Nvidia forums were full of side-stepping and responsibility-avoiding.
Pimpiklem:

I find a tendency for Nvidia users to stay silent about the problems they have: keep it under wraps and hide any problems to disguise their experience, giving the impression all is well when in fact it's a crazy mess.
I will sing and dance until the cows come home that getting a second GTX 1080 was without a doubt the worst decision I have made in computing. Next to nothing supports SLI now, and if it does, it has terrible scaling. I have tried quite a few games; Fallout 4 has horrendous scaling, about 40% GPU usage, and sinks to 40 fps in Boston... how is that even possible!? My cards are working fine, they get over 13.3K in Time Spy. All the games that used to support SLI and get good scaling now either support it with terrible scaling or flat out don't support it. Every single Assassin's Creed game supported it (AFAIK), yet Odyssey flat out does not have an SLI profile. I have tried Nvidia Inspector, but that can just add more problems than it's really worth. Rainbow Six Siege and GTA V have excellent performance though, by far the best I have tried. I may just sell these cards and get a 1080 Ti. I'm putting my foot down and I am 100% not jumping on the 2000 series bandwagon.
Multi-GPU is pretty much dead.
Paulo Narciso:

Multi-GPU is pretty much dead.
Might come back a bit with raytracing because its scaling is near perfect - but yeah, probably dead.
Denial:

Might come back a bit with raytracing because its scaling is near perfect - but yeah, probably dead.
Part of me still has high hopes for multi-GPU with Vulkan. DX12 has some support; Ryse: Son of Rome has great support for it, and that used DX12 (I think). But Vulkan has yet to add the correct extensions, or libraries, or whatever it is called. But with a company like id basically flat out using Vulkan only now, I have hopes that they can at least show its true potential. With DX12 and Vulkan they have a new feature that allows the game engines to see multiple GPUs as one entity (as one big GPU); this leads to the memory scaling up as well, not mirrored like it is now. This could seriously help games at higher resolutions as you are doubling up the memory capacity, and theoretically the bandwidth too. Maybe we will see this being used, and maybe it could be the first thing to really use up that PCIe 3.0 bandwidth and require PCIe 4.0 or a new standard.
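For reference, here is a minimal sketch of what that "one big GPU" idea looks like on the API side, assuming the mechanism meant is Vulkan 1.1 device groups (the example and its details are mine, not something from the posts above): an engine enumerates groups of linked physical GPUs and can then create one logical device spanning them.

```cpp
// Hedged sketch: enumerate Vulkan 1.1 device groups, the feature that lets an
// engine treat several linked GPUs as one logical device. Assumes a Vulkan 1.1
// loader/driver is installed; this only lists groups, it doesn't render anything.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;                 // device groups are core in 1.1
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
    VkPhysicalDeviceGroupProperties init{};
    init.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount, init);
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (uint32_t i = 0; i < groupCount; ++i) {
        // A group with more than one physical device is what an explicit multi-GPU
        // engine would target (via VkDeviceGroupDeviceCreateInfo at device creation).
        std::printf("device group %u: %u GPU(s)\n", i, groups[i].physicalDeviceCount);
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

Whether memory actually pools across a group is still up to the engine, which is the caveat discussed in the replies below.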
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
CPC_RedDawn:

With DX12 and Vulkan they have a new feature that allows the game engines to see multiple GPUs as one entity (as one big GPU); this leads to the memory scaling up as well, not mirrored like it is now. This could seriously help games at higher resolutions as you are doubling up the memory capacity, and theoretically the bandwidth too. Maybe we will see this being used, and maybe it could be the first thing to really use up that PCIe 3.0 bandwidth and require PCIe 4.0 or a new standard.
PCIe bandwidth is irrelevant now that these cards have NVLink, but that doesn't matter because the latency on NVLink isn't anywhere near fast enough to do true memory pooling. There may be some instances where developers can utilize it to increase performance a bit, but it will never be a true doubling of memory capacity and/or bandwidth. The latency just isn't there - NVLink, while significantly better than PCIe, is still operating at 30 microseconds per transaction; it would need to be sub-100 ns for viable performance off a memory read from a different GPU. That, and the GPU architecture itself would have to be designed in a way that the scheduler knows which GPU the memory is located on and takes that into consideration. In fact, Tom Peterson from Nvidia just stated it again in a recent interview last night:
HH: Will NVLink combine frame buffers in game? (e.g. 2x 8GB GPUs will be seen as a 16GB GPU)
TP: I think that sets a wrong expectation. That style of memory scaling will require app work. It's true you can set up the memory map to do that, but it will be a terrible performance thing. The game will need to know there is latency to access that 2nd chunk of memory and it's not the same. While it's true this is a memory-to-memory link, I don't think of it as magically doubling the frame buffer. It's more nuanced than that today. It's going to take time for people to understand how people think of mGPU setups, and maybe they will look at new techniques. NVLink is laying a foundation for future mGPU setups.
That being said, he did also state this:
HH: Is microstuttering a thing of the past with NVLink?
TP: NVLink will improve microstuttering but I don't think it will go away. It's more about the current technique of doing mGPU scaling, which is called AFR. It sends one frame to one GPU and another frame to another GPU. If there is a delay between the 2 GPUs, by the time you combine those frames together and send it to the user, there is a little bit of shift (i.e. microstutter). A lot of it is how the AFR algorithm works to distribute work and combine it together. Turing has all the previous improvements with SLI (e.g. metering) but it won't go away entirely. More importantly, there are other mGPU technologies that are much more compelling in the future, but we're not ready to talk about them today. Think about NVLink as a platform and investment that will be with us for a long time. It brings the 2 GPUs much closer to each other and allows them to communicate with each other without worrying about what's happening on PCIe.
It seems like they do have some other mGPU tech in the pipeline that can maybe be a bit better than what we have... but I still think it's going to be messy. There are too many shaders in modern games looking for previous/adjacent pixel data to accelerate single-card setups, and in mGPU systems that data is often on the other GPU. Unless they can get latency down to 100 ns or better, it just isn't worth trying to grab that data from the other GPU. In those cases mGPU is useless and everything ends up on 1 card, which is why the scaling goes to shit.
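To put rough numbers on why that per-transaction latency matters, here is a back-of-the-envelope sketch of my own, reusing the 30 µs and 100 ns figures quoted above (the frame budget and the arithmetic are assumptions for illustration, not figures from the interview):

```cpp
// Editor's back-of-the-envelope sketch (assumed scenario, thread's latency figures):
// how many serialized remote-GPU reads fit in a 60 fps frame budget at ~30 us per
// transaction versus a hypothetical ~100 ns link.
#include <cstdio>

int main() {
    const double frame_budget_us = 1e6 / 60.0;  // ~16,667 us per frame at 60 fps
    const double nvlink_txn_us   = 30.0;        // per-transaction figure quoted above
    const double target_txn_us   = 0.1;         // ~100 ns target mentioned above

    std::printf("serial remote reads that fit in one 60 fps frame:\n");
    std::printf("  at 30 us each : %.0f\n", frame_budget_us / nvlink_txn_us);  // ~556
    std::printf("  at 100 ns each: %.0f\n", frame_budget_us / target_txn_us);  // ~166,667
    return 0;
}
```

A few hundred serialized round trips per frame is nothing for shaders that want neighbouring pixel data every frame, which is the point above about scaling collapsing onto one card.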
CPC_RedDawn:

I will sing and dance until the cows come home that getting a second GTX 1080 was without a doubt the worst decision I have made in computing. Next to nothing supports SLI now, and if it does, it has terrible scaling. I have tried quite a few games; Fallout 4 has horrendous scaling, about 40% GPU usage, and sinks to 40 fps in Boston... how is that even possible!? My cards are working fine, they get over 13.3K in Time Spy. All the games that used to support SLI and get good scaling now either support it with terrible scaling or flat out don't support it. Every single Assassin's Creed game supported it (AFAIK), yet Odyssey flat out does not have an SLI profile. I have tried Nvidia Inspector, but that can just add more problems than it's really worth. Rainbow Six Siege and GTA V have excellent performance though, by far the best I have tried. I may just sell these cards and get a 1080 Ti. I'm putting my foot down and I am 100% not jumping on the 2000 series bandwagon.
The multi-card setup/concept (aka SLI/CrossFire) is probably the biggest scam in PC hardware history. Cunningly designed by you-know-which companies (gangs).
DLD:

The multi-card setup/concept (aka SLI/CrossFire) is probably the biggest scam in PC hardware history. Cunningly designed by you-know-which companies (gangs).
I have had a few multi-GPU setups in my time and most have been great, tbh. I had two HD 7970s and they performed as expected. I also had two GTX 970s and they performed as expected, minus the 3.5GB issue, but that affected single cards too. I also had the HD 4870X2, a single card with two HD 4870 GPUs on it, and that thing was the best multi-GPU I have ever owned. Sure, the drivers from ATI themselves were total dog shit and I was forced to use TwL's modified drivers, which used to be right here on the G3D forums. They worked wonders: I went from Battlefield: Bad Company 2 at ultra settings (can't remember the res) running at like 50-60 fps with a mix of 50-70% GPU usage on both cores, to TwL's drivers and getting 99% usage on both cores and around 90-120 fps! I have just kitted my whole PC out with all new Noctua fans. Finally able to afford them with my current job; I got the new revision that just came out, five of the 120s and one 140, and the temp difference on the two GPUs is pretty insane. Dropped the top card by around 10-12C at full load, which keeps clock speeds stable at around 2088MHz on each core. Most of my games work fine with these GTX 1080s, and Vulkan not long ago added support for AFR and multi-GPU, so we should at least see some good performance in the new Doom game, which I suspect will be 100% Vulkan, and it's also confirmed for the new Metro Exodus too. I just got done with around 6 hours in The Witcher 3, and at Ultra 1440p with Hairworks on Ultra and 4x MSAA it's rocking around 90-110 fps with 90-99% usage... It truly is a thing of beauty when they work like this together. Just that the hassles can be a pain in the ass.
Denial:

PCIe bandwidth is irrelevant now that these cards have NVLink, but that doesn't matter because the latency on NVLink isn't anywhere near fast enough to do true memory pooling. There may be some instances where developers can utilize it to increase performance a bit, but it will never be a true doubling of memory capacity and/or bandwidth. The latency just isn't there - NVLink, while significantly better than PCIe, is still operating at 30 microseconds per transaction; it would need to be sub-100 ns for viable performance off a memory read from a different GPU. That, and the GPU architecture itself would have to be designed in a way that the scheduler knows which GPU the memory is located on and takes that into consideration. In fact, Tom Peterson from Nvidia just stated it again in a recent interview last night: That being said, he did also state this: It seems like they do have some other mGPU tech in the pipeline that can maybe be a bit better than what we have... but I still think it's going to be messy. There are too many shaders in modern games looking for previous/adjacent pixel data to accelerate single-card setups, and in mGPU systems that data is often on the other GPU. Unless they can get latency down to 100 ns or better, it just isn't worth trying to grab that data from the other GPU. In those cases mGPU is useless and everything ends up on 1 card, which is why the scaling goes to crap.
Thanks for the explanation, bro. I didn't know latency was such a big issue like that. Wouldn't that require a new standard like PCIe 4.0? With the wider bandwidth, wouldn't that bring the latency down? I suspect PCIe 4.0 will double PCIe 3.0 bandwidth, and currently nothing comes even close to saturating PCIe 3.0. Or is bandwidth not tied to latency?
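To illustrate the bandwidth-versus-latency distinction the question raises, here is a rough model of my own (assumed payload size and link speeds, not a reply from the thread): the time for one transfer is roughly a fixed latency plus payload divided by bandwidth, so doubling bandwidth only shrinks the second term.

```cpp
// Editor's rough model (assumed numbers, not from the thread): total time for a
// small remote read ~= fixed latency + payload / bandwidth. Doubling link
// bandwidth (PCIe 3.0 -> 4.0) barely moves a latency-dominated transfer.
#include <cstdio>

int main() {
    const double latency_us  = 30.0;   // per-transaction latency figure quoted above
    const double payload_kib = 4.0;    // a small remote read
    const double bw3_gbps    = 16.0;   // rough x16 PCIe 3.0 throughput, GB/s
    const double bw4_gbps    = 32.0;   // rough x16 PCIe 4.0 throughput, GB/s

    auto total_us = [&](double gbps) {
        double transfer_us = (payload_kib * 1024.0) / (gbps * 1e9) * 1e6;
        return latency_us + transfer_us;
    };
    std::printf("4 KiB read over a PCIe 3.0-ish link: %.2f us\n", total_us(bw3_gbps)); // ~30.26 us
    std::printf("4 KiB read over a PCIe 4.0-ish link: %.2f us\n", total_us(bw4_gbps)); // ~30.13 us
    return 0;
}
```

So wider links help big bulk copies, but they don't reduce the fixed per-access latency that the memory-pooling discussion above hinges on.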