GeForce 399.07 WHQL driver download

Posted by: Hilbert Hagedoorn on: 08/27/2018 04:41 PM [ 21 comment(s) ]



Download the GeForce 399.07 WHQL driver as released by NVIDIA. The driver includes optimizations for Battlefield V Open Beta, F1 2018, Immortal: Unchained, Pro Evolution Soccer 2019, Strange Brigade, and Switchblade.

Game Ready Drivers provide the best possible gaming experience for all major new releases, including Virtual Reality games. Prior to a new title's launch, our driver team works up until the last minute to ensure every performance tweak and bug fix is included for the best gameplay on day one. Please note: Effective April 2018, Game Ready Driver upgrades, including performance enhancements, new features, and bug fixes, will be available only for Kepler, Maxwell, Pascal and Volta series GPUs. Critical security updates will be available for Fermi series GPUs through January 2019. We have a discussion thread open on this driver here in our Nvidia driver discussion forums.

Game Ready - Provides the optimal gaming experience for Battlefield V Open Beta, F1 2018, Immortal: Unchained, Pro Evolution Soccer 2019, Strange Brigade, and Switchblade.

Application SLI Profiles - Added or updated:

  • F1 2018
  • Immortal: Unchained


3D Vision Profiles:

  • F1 2018 - Good
  • Strange Brigade - Not recommended
     

Supported cards:

NVIDIA TITAN Series
NVIDIA TITAN V, NVIDIA TITAN Xp, NVIDIA TITAN X (Pascal), GeForce GTX TITAN, GeForce GTX TITAN X, GeForce GTX TITAN Black, GeForce GTX TITAN Z

GeForce 10 Series
GeForce GTX 1080 Ti, GeForce GTX 1080, GeForce GTX 1070 Ti, GeForce GTX 1070, GeForce GTX 1060, GeForce GTX 1050 Ti, GeForce GTX 1050, GeForce GT 1030

GeForce 900 Series
GeForce GTX 980 Ti, GeForce GTX 980, GeForce GTX 970, GeForce GTX 960, GeForce GTX 950

GeForce 700 Series
GeForce GTX 780 Ti, GeForce GTX 780, GeForce GTX 770, GeForce GTX 760, GeForce GTX 760 Ti (OEM), GeForce GTX 750 Ti, GeForce GTX 750, GeForce GTX 745, GeForce GT 740, GeForce GT 730, GeForce GT 720, GeForce GT 710, GeForce GT 705

GeForce 600 Series
GeForce GTX 690, GeForce GTX 680, GeForce GTX 670, GeForce GTX 660 Ti, GeForce GTX 660, GeForce GTX 650 Ti BOOST, GeForce GTX 650 Ti, GeForce GTX 650, GeForce GTX 645, GeForce GT 645, GeForce GT 640, GeForce GT 635, GeForce GT 630, GeForce GT 620, GeForce GT 610, GeForce 605

 

 

 






Download Locations

  • Download Windows 10 64-bit (Desktop) [ 19922 downloads ]
  • Download Windows 7 / 8.1 64-bit (Desktop) [ 6382 downloads ]
  • Download Windows 10 64-bit (Notebook) [ 5645 downloads ]
  • Download Windows 7 / 8.1 64-bit (Notebook) [ 3614 downloads ]
  • Download release notes [ 3373 downloads ]

    CPC_RedDawn
    Senior Member
    Posts: 9356
    Joined: 2008-01-06

    #5579473 Posted on: 08/30/2018 08:51 AM
Might come back a bit with raytracing because its scaling is near perfect - but yeah, probably dead.


Part of me still has high hopes for multi-GPU with Vulkan. DX12 has some support; Ryse: Son of Rome had great support for it, and that used DX12 (I think). But Vulkan has yet to add the correct extensions, or libraries, or whatever it is called. Still, with a company like id Software basically using Vulkan exclusively now, I have hopes that they can at least show its true potential. With DX12 and Vulkan there is a new feature that allows game engines to see multiple GPUs as one entity (as one big GPU); this lets the memory scale up as well instead of being mirrored like it is now. That could seriously help games at higher resolutions, since you are doubling the memory capacity and, theoretically, the bandwidth too. Maybe we will see this being used, and maybe it could be the first thing to really use up PCIe 3.0 bandwidth and require PCIe 4.0 or a new standard.
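For what it's worth, the "one big GPU" behaviour described above maps onto what Vulkan 1.1 calls device groups (VK_KHR_device_group before that). A minimal, hedged sketch of what the application side looks like, assuming a valid VkInstance already exists, that at least one linked group is reported, and that queue family 0 is usable:

```cpp
// Sketch only: create one logical device that spans a linked GPU group.
// Error handling, feature/extension selection and proper queue-family
// queries are omitted; values marked "assumption" are illustrative.
#include <vulkan/vulkan.h>
#include <vector>

VkDevice createDeviceGroup(VkInstance instance) {
    // 1. Ask which physical devices can be linked into one group
    //    (e.g. two cards joined by an SLI/NVLink bridge).
    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(
        groupCount, { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES });
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    const VkPhysicalDeviceGroupProperties& group = groups[0];  // assumption: a group exists

    // 2. Chain VkDeviceGroupDeviceCreateInfo so the logical device spans
    //    every physical device in the group ("one big GPU" from the API's view).
    VkDeviceGroupDeviceCreateInfo groupInfo{ VK_STRUCTURE_TYPE_DEVICE_GROUP_DEVICE_CREATE_INFO };
    groupInfo.physicalDeviceCount = group.physicalDeviceCount;
    groupInfo.pPhysicalDevices    = group.physicalDevices;

    // Minimal single-queue setup so device creation has something valid.
    float priority = 1.0f;
    VkDeviceQueueCreateInfo queueInfo{ VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO };
    queueInfo.queueFamilyIndex = 0;  // assumption: query the real graphics family in real code
    queueInfo.queueCount       = 1;
    queueInfo.pQueuePriorities = &priority;

    VkDeviceCreateInfo deviceInfo{ VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO };
    deviceInfo.pNext                = &groupInfo;
    deviceInfo.queueCreateInfoCount = 1;
    deviceInfo.pQueueCreateInfos    = &queueInfo;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(group.physicalDevices[0], &deviceInfo, nullptr, &device);

    // Note: memory is NOT automatically pooled. Each allocation is still placed on
    // specific GPUs via device masks (VkMemoryAllocateFlagsInfo::deviceMask), which
    // is why "double the VRAM" doesn't fall out of this for free.
    return device;
}
```

Whether an engine actually uses this path (rather than classic driver-side AFR) is still entirely up to the developer, which is the catch the post is pointing at.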

    Denial
    Senior Member
    Posts: 13757
    Joined: 2004-05-16

    #5579563 Posted on: 08/30/2018 01:31 PM
With DX12 and Vulkan there is a new feature that allows game engines to see multiple GPUs as one entity (as one big GPU); this lets the memory scale up as well instead of being mirrored like it is now. That could seriously help games at higher resolutions, since you are doubling the memory capacity and, theoretically, the bandwidth too. Maybe we will see this being used, and maybe it could be the first thing to really use up PCIe 3.0 bandwidth and require PCIe 4.0 or a new standard.


PCIe bandwidth is irrelevant now that these cards have NVLink, but that doesn't matter, because NVLink's latency isn't anywhere near low enough to do true memory pooling. There may be some instances where developers can utilize it to increase performance a bit, but it will never be a true doubling of memory capacity and/or bandwidth. The latency just isn't there: NVLink, while significantly better than PCIe, still operates at around 30 microseconds per transaction, and it would need to be sub-100 ns for viable performance on a memory read from a different GPU. On top of that, the GPU architecture itself would have to be designed so that the scheduler knows which GPU the memory is located on and takes that into consideration.
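Taking the 30 µs and 100 ns figures above at face value, the gap is easy to see in clock cycles; the core clock below is an assumption purely for the arithmetic:

```cpp
// Back-of-the-envelope sketch of the latency argument above.
// All inputs are illustrative assumptions, not measured values.
#include <cstdio>

int main() {
    const double gpu_clock_hz     = 1.8e9;   // assumed ~1.8 GHz core clock
    const double nvlink_latency_s = 30e-6;   // ~30 microseconds per transaction (figure from the post)
    const double local_latency_s  = 100e-9;  // ~100 ns, the ballpark quoted as "viable"

    // Cycles a GPU would spend waiting on a single dependent remote read:
    printf("read over NVLink  : ~%.0f cycles stalled\n", nvlink_latency_s * gpu_clock_hz); // ~54000
    printf("sub-100 ns target : ~%.0f cycles stalled\n", local_latency_s * gpu_clock_hz);  // ~180
    return 0;
}
```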

In fact, Tom Petersen from Nvidia just stated it again in a recent interview last night:

HH: Will NVLink combine frame buffers in-game? (e.g. will two 8 GB GPUs be seen as a 16 GB GPU?)

    TP: I think that sets a wrong expectation. That style of memory scaling will require app work. It's true you can set up the memory map to do that but it will be a terrible performance thing. The game will need to know there is latency to access that 2nd chunk of memory and it's not the same. While it's true this is a memory to memory link, I don't think of it as magically doubling the frame buffer. It's more nuanced than that today. It's going to take time for people to understand how people think of mGPU setup and maybe they will look at new techniques. NVLink is laying a foundation for future mGPU setup.



    That being said, he did also state this:

    HH: Is Microstuttering a thing of a past with NVLink?

TP: NVLink will improve microstuttering, but I don't think it will go away. It's more about the current technique of doing mGPU scaling, which is called AFR. It sends one frame to one GPU and another frame to another GPU. If there is a delay between the two GPUs, by the time you combine those frames together and send them to the user, there is a little bit of shift (i.e. microstutter). A lot of it is how the AFR algorithm works to distribute work and combine it together. Turing has all the previous improvements of SLI (e.g. metering), but it won't go away entirely. More importantly, there are other mGPU technologies that are much more compelling in the future, but we're not ready to talk about them today. Think of NVLink as a platform and investment that will be with us for a long time. It brings the two GPUs much closer to each other and allows them to communicate without worrying about what's happening on PCIe.


It seems like they do have some other mGPU tech in the pipeline that can maybe be a bit better than what we have... but I still think it's going to be messy. There are too many shaders in modern games looking for previous/adjacent pixel data to accelerate single-card setups, and in mGPU systems that data is often on the other GPU. Unless they can get latency down to 100 ns or better, it just isn't worth trying to grab that data from the other GPU. In those cases mGPU is useless and everything ends up on one card, which is why the scaling goes to shit.

    DLD
    Senior Member
    Posts: 887
    Joined: 2002-09-14

    #5580411 Posted on: 09/01/2018 10:44 PM
I will sing and dance until the cows come home that getting a second GTX 1080 was without a doubt the worst decision I have made in computing. Next to nothing supports SLI now, and when something does, it has terrible scaling. I have tried quite a few games; Fallout 4 has horrendous scaling, about 40% GPU usage, and sinks to 40 fps in Boston... how is that even possible!? My cards are working fine; they get over 13.3K in Time Spy. All the games that used to support SLI with good scaling now either support it with terrible scaling or flat out don't support it. Every single Assassin's Creed game supported it (AFAIK), yet Odyssey flat out does not have an SLI profile.

I have tried Nvidia Inspector, but that can add more problems than it's really worth.

Rainbow Six Siege and GTA V have excellent performance though, by far the best I have tried. I may just sell these cards and get a 1080 Ti; I'm putting my foot down and I am 100% not jumping on the 2000-series bandwagon.

Multi-card setup/concept (aka SLI/CrossFire) is probably the biggest scam in PC hardware history. Cunningly designed by you-know-which companies (gangs).

    CPC_RedDawn
    Senior Member
    Posts: 9356
    Joined: 2008-01-06

    #5581606 Posted on: 09/05/2018 06:06 PM
Multi-card setup/concept (aka SLI/CrossFire) is probably the biggest scam in PC hardware history. Cunningly designed by you-know-which companies (gangs).


    I have had a few multi GPU setups in my time and most have been great tbh.

I had two HD 7970s and they performed as expected. I also had two GTX 970s and they performed as expected, minus the 3.5 GB issue, but that affected single cards too.

I also had the HD 4870 X2, a single card with two HD 4870 GPUs on it, and that thing was the best multi-GPU I have ever owned. Sure, the drivers from ATI themselves were total dog shit and I was forced to use TwL's modified drivers, which used to be right here on the G3D forums. They worked wonders: I went from Battlefield: Bad Company 2 at ultra settings (can't remember the res) running at like 50-60 fps with a mix of 50-70% GPU usage on both cores, to TwL's drivers and getting 99% usage on both cores and around 90-120 fps!!

I have just kitted my whole PC out with all new Noctua fans, finally able to afford them with my current job. I got the new revision that just came out, five 120s and one 140, and the temperature difference on the two GPUs is pretty insane. It dropped the top card by around 10-12°C at full load, which keeps clock speeds stable at around 2088 MHz on each core.

Most of my games work fine with these GTX 1080s, and Vulkan not long ago added support for AFR and multi-GPU, so we should at least see some good performance in the new Doom game, which I suspect will be 100% Vulkan, and it's also confirmed for the new Metro Exodus too.

I just got done with around 6 hours in The Witcher 3, and at Ultra 1440p with HairWorks Ultra and 4x MSAA it's rocking around 90-110 fps with 90-99% usage... It truly is a thing of beauty when they work together like this.

It's just that the hassles can be a pain in the ass.

    CPC_RedDawn
    Senior Member
    Posts: 9356
    Joined: 2008-01-06

    #5581609 Posted on: 09/05/2018 06:10 PM
PCIe bandwidth is irrelevant now that these cards have NVLink, but that doesn't matter, because NVLink's latency isn't anywhere near low enough to do true memory pooling. There may be some instances where developers can utilize it to increase performance a bit, but it will never be a true doubling of memory capacity and/or bandwidth. The latency just isn't there: NVLink, while significantly better than PCIe, still operates at around 30 microseconds per transaction, and it would need to be sub-100 ns for viable performance on a memory read from a different GPU. On top of that, the GPU architecture itself would have to be designed so that the scheduler knows which GPU the memory is located on and takes that into consideration.


It seems like they do have some other mGPU tech in the pipeline that can maybe be a bit better than what we have... but I still think it's going to be messy. There are too many shaders in modern games looking for previous/adjacent pixel data to accelerate single-card setups, and in mGPU systems that data is often on the other GPU. Unless they can get latency down to 100 ns or better, it just isn't worth trying to grab that data from the other GPU. In those cases mGPU is useless and everything ends up on one card, which is why the scaling goes to crap.

Thanks for the explanation bro, I didn't know latency was such a big issue like that. Wouldn't that require a new standard like PCIe 4.0? With the wider bandwidth, wouldn't that bring the latency down? I suspect PCIe 4.0 will double PCIe 3.0 bandwidth, and currently nothing comes even close to saturating PCIe 3.0. Or is bandwidth not tied to latency?
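For a rough feel of why bandwidth and latency are separate knobs (all figures below are round-number assumptions, not PCIe spec values): a transfer costs roughly latency plus size divided by bandwidth, so doubling bandwidth only shrinks the second term, and the tiny reads a shader issues are dominated by the first.

```cpp
// Illustrative sketch: doubling PCIe bandwidth barely moves small-transfer times.
// transfer_time ≈ fixed_latency + payload / bandwidth (all numbers are assumptions).
#include <cstdio>

double transfer_us(double bytes, double latency_us, double gbytes_per_s) {
    return latency_us + bytes / (gbytes_per_s * 1e9) * 1e6;
}

int main() {
    const double latency_us = 1.0;   // assumed round-trip latency, unchanged between generations
    const double pcie3_gbs  = 16.0;  // rough x16 PCIe 3.0 throughput
    const double pcie4_gbs  = 32.0;  // rough x16 PCIe 4.0 throughput (double the bandwidth)

    // Tiny per-pixel style fetch: latency dominates, doubling bandwidth changes almost nothing.
    printf("128 B : PCIe3 %.3f us vs PCIe4 %.3f us\n",
           transfer_us(128.0, latency_us, pcie3_gbs),
           transfer_us(128.0, latency_us, pcie4_gbs));

    // Big bulk copy: bandwidth dominates, so doubling it really does halve the time.
    printf("64 MB : PCIe3 %.0f us vs PCIe4 %.0f us\n",
           transfer_us(64e6, latency_us, pcie3_gbs),
           transfer_us(64e6, latency_us, pcie4_gbs));
    return 0;
}
```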
