NVIDIA nForce 790i Ultra SLI review (eVGA)



4 - PCIe 2.0 | CPU and GPU comm optimizations and more

PCI Express 2.0



I've mentioned it a couple of times already: two of the three PCIe slots are based on the new 2.0 standard, which brings a significant boost to the PCIe interface. PCI Express 2.0 doubles the signaling rate of the old standard from 2.5 GT/s to 5 GT/s per lane, so an x16 connector can now achieve transfers as high as 16 GB/s in total (8 GB/s in each direction). It is backwards compatible with PCIe 1.1 cards, making it simple for motherboard manufacturers to transition to it in the future. Various other improvements were made as well, including dynamic link speed management and more interactive ways for the bus to communicate with software. Cool stuff, yet it's so high-end that at this time you will simply not notice a performance difference. The two latest G92 graphics cards are already PCIe 2.0 compatible though.
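If you like to see where those bandwidth numbers come from, here is a quick back-of-the-envelope calculation, assuming the 8b/10b line encoding that PCIe 1.x and 2.0 use (10 bits travel over the wire for every 8 bits of payload):

```python
# Back-of-the-envelope PCIe 2.0 bandwidth estimate (8b/10b encoding assumed).
signaling_rate_gtps = 5.0          # PCIe 2.0: 5 GT/s per lane, per direction
encoding_efficiency = 8 / 10       # 8b/10b: 10 bits on the wire per 8 bits of data
lanes = 16                         # an x16 slot

per_lane_gbps = signaling_rate_gtps * encoding_efficiency   # 4 Gbit/s of payload per lane
per_direction_gbs = per_lane_gbps * lanes / 8               # ~8 GB/s one way
total_gbs = per_direction_gbs * 2                           # ~16 GB/s bidirectional

print(f"x16 PCIe 2.0: {per_direction_gbs:.0f} GB/s per direction, "
      f"{total_gbs:.0f} GB/s total")
```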

New Enhanced CPU and GPU Communications

Two new features have been added that yours truly is actually pretty thrilled about. Name one situation where you are CPU limited and thought it would be nice to have some sort of optimization enabled; you feeling me already? Exactly, we discussed it recently in our SLI and 3-way SLI articles: at lower resolutions in combination with multiple GPUs we always run into a drop in framerate compared to a "single" graphics card. See, if you have 2, 3 or even 4 GPUs in your PC, the CPU has to transmit much more data to your graphics card driver, as all these GPUs need to be fed. That produces a bit of CPU overhead [Ed - CPU overhead is the amount of CPU time a particular device or task consumes, usually measured as a percentage of your total CPU]. The nForce 790i and 790i Ultra SLI MCPs pioneer a new communication protocol which reduces that overhead and optimizes bandwidth utilization for CPU-to-GPU and GPU-to-GPU data.

Two new features to be precise:

  • Direct GPU-to-GPU communication (PWShort)
  • Broadcast support

GPU-to-GPU Direct Link (PWShort)
Inside the nForce 790i and 790i Ultra SLI MCPs sits a PCIe controller, and that controller can now forward a message from one GPU directly to its target destination, a technique called posted-write shortcut (PWShort). This helps greatly in fighting off data latency in SLI mode. So if GPU A needs to transmit data to GPU B, here we have the first optimization: according to NVIDIA's spec sheet the GPUs constantly need to send small updates to each other in order to keep their video memory synchronized. This is what we call a point-to-point approach that reduces traffic latency between the graphics processors in use.
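To make that a bit more tangible, here's a tiny illustrative model of why forwarding a posted write inside the chipset saves hops compared to bouncing the data through the CPU. This is purely a conceptual sketch with made-up path names, not NVIDIA's actual protocol:

```python
# Illustrative hop-count model of a GPU-to-GPU write, with and without a
# posted-write shortcut inside the chipset. Hypothetical paths for illustration only.

def hops_without_shortcut():
    # GPU A -> chipset -> CPU/driver -> chipset -> GPU B
    return ["GPU A", "chipset", "CPU (driver)", "chipset", "GPU B"]

def hops_with_pwshort():
    # GPU A -> chipset (forwards the posted write directly) -> GPU B
    return ["GPU A", "chipset (PWShort forward)", "GPU B"]

for name, path in (("classic route", hops_without_shortcut()),
                   ("PWShort route", hops_with_pwshort())):
    print(f"{name}: {' -> '.join(path)}  ({len(path) - 1} hops)")
```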
 

Broadcast
The second feature is partly new; I say partly because SLI technology was always based on transmitting the same data to the multiple GPUs, just in a slightly different fashion. Compare this with a network hub and a switch: a hub transmits data packets to all ports, while a switch transmits a data packet to one specific port. A hub may be a pretty dumb way of getting data from point A to point B, but when everybody needs the same data it's a very efficient one.

The same principle applies in an environment with multiple GPUs. The processor (CPU) often has to send the same data to all of these GPUs (like the hub does), as in SLI all GPUs need to receive the same geometry, textures, and other data from the CPU. According to NVIDIA, commands and data can now be broadcast to all GPUs simultaneously, and let me quote them: "Instead of having to send the same data and commands to each GPU consecutively, only one data-segment is sent across the FSB to the chipset, which replicates it in parallel to all GPUs. This greatly reduces congestion across the FSB and reduces latencies for CPU-to-GPU messages".
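A toy comparison of what that means in terms of FSB transfers, assuming for the sake of illustration that one command buffer equals one transfer:

```python
# Toy comparison of FSB transfers for sending the same command buffer to N GPUs:
# once per GPU (consecutive unicast) versus once in total (chipset broadcast).
# Illustrative only; real traffic patterns are obviously more complicated.

def fsb_transfers(num_gpus: int, broadcast: bool) -> int:
    return 1 if broadcast else num_gpus

for gpus in (2, 3, 4):
    print(f"{gpus} GPUs: unicast = {fsb_transfers(gpus, False)} FSB transfers, "
          f"broadcast = {fsb_transfers(gpus, True)}")
```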

The only thing that you should remember is that, in theory, these two new functions should help fight off the CPU overhead we see in SLI gaming, as fewer CPU cycles are needed to achieve the same objective.

3-way SLI
Not really a big surprise anymore, but if you've got a fortune to spend, NVIDIA recently launched its 3-way SLI technology. Technology for the rich and famous right now, as only the most expensive flagship products carry the second SLI finger needed to let three cards render your games with some more punch. Eventually (I hope) it may support 3-way configurations of other GPUs as well. In the future you could even think of using two graphics cards for rendering and a third one for Physics/CUDA (in case you missed who bought Ageia... -> NVIDIA).

You can check out our 3-way SLI preview here.

RAID Storage

Both nForce 780i & 790I offer you six native SATA 3GB/s drives. Quite an important number as this mainboard is RAID compatible (0,1, 0+1, 5). As you can see you can make a dual RAID 5 solution as RAID 5 requires a minimum of three drives. RAID 5 is the version most often recommended as it takes a number of disks, stripes them together, and puts a parity section on each disk. Therefore if one drive went down, the information is still stored on another disk using parity.

We actually use this RAID configuration on some of the Guru3D.com servers.
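For those curious how that parity trick actually recovers a lost drive, here is a minimal sketch of a single RAID 5 stripe with XOR parity; real RAID 5 rotates the parity block across all member disks, but the math is the same:

```python
from functools import reduce

# One stripe across a 4-disk RAID 5 set: three data blocks plus one XOR parity block.
data_blocks = [b"AAAA", b"BBBB", b"CCCC"]

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

parity = reduce(xor_blocks, data_blocks)      # parity = A ^ B ^ C

# Simulate losing the disk that held b"BBBB" and rebuilding it from the survivors:
surviving = [data_blocks[0], data_blocks[2], parity]
rebuilt = reduce(xor_blocks, surviving)       # A ^ C ^ (A ^ B ^ C) = B

assert rebuilt == data_blocks[1]
print("rebuilt block:", rebuilt)
```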

The RAID unit/SATA connections are all hot swappable, meaning you can pull out or insert the plug while the system is powered on. The minute a drive fails or gets unplugged, the NVIDIA MediaShield sentinel will pop up and show you which drive/connection has gone bad. The image shows the hard drive connector ports and provides a visual indication of the location and status of the drives as follows:

  • Red denotes a failed drive.
  • Green denotes a healthy hard drive.
  • Yellow denotes a member of a failed RAID array that is not itself the cause of the failure.
  • Unconnected ports have no color indication.

Networking

Just like the 680i and 780i boards, the 790 series has some noteworthy Ethernet functions. The 790 mainboards have two Gbit/s NICs harbored in the MCP under the name DualNet.

What's pretty cool is that you can bridge the two NICs together; NVIDIA calls this Teaming. Both NICs will then work as one big 2 Gbit/s controller, and it actually works pretty darn well. I never understood why NVIDIA did not market this as SLI Ethernet :) Configuring it is really easy: you simply hook up both connections to a switch and enable Network Teaming in the NVIDIA CP (Control Panel), which binds the two NICs into one.

In the past we did manage to test this: the main (and thus "teamed") PC was functioning as a server, and from six other clients massive amounts of data were sent bi-directionally. Extremely impressive, as the network speed indeed neared 2 Gbit/sec. Realistically however, in a SOHO environment you will probably never have a situation at home where you need 2 gigabit of full-duplex bandwidth at your disposal. There is however another upside to teaming, as it also provides network redundancy through fail-over capability. In short, if one line dies because your little brother found out what an axe is for, the second NIC takes over without any "downtime" for the processes running on your PC. It's actually quite a cool server feature when you think about it. Or hey, handy if you just want to create a separate VLAN inside the network.
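Conceptually, teaming behaves like the little model below: traffic is spread over both links while they are up and falls back to the surviving one when a link dies. This is just a toy sketch of link aggregation with fail-over, not how NVIDIA's DualNet driver is actually implemented:

```python
# Toy model of NIC teaming: packets are round-robined over the healthy links,
# and everything moves to the surviving link when one fails.
from itertools import cycle

class Team:
    def __init__(self, nics):
        self.nics = {nic: True for nic in nics}      # nic name -> link up?

    def fail(self, nic):
        self.nics[nic] = False                        # cable pulled / axe incident

    def send(self, packets):
        up = [n for n, ok in self.nics.items() if ok]
        if not up:
            raise RuntimeError("no link available")
        for pkt, nic in zip(packets, cycle(up)):      # round-robin over healthy NICs
            print(f"packet {pkt} -> {nic}")

team = Team(["eth0", "eth1"])
team.send(range(4))      # load-balanced over both NICs
team.fail("eth0")
team.send(range(4, 8))   # fail-over: everything goes out over eth1
```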

The last feature I'd like to bring up is FirstPacket. When you are playing online or on a LAN, or hey, what about VoIP? Don't you hate it that you can't have any other outbound traffic running, from for example an FTP or BitTorrent client, without your ping time going AWOL? That outbound traffic can kill your ping times, making it nearly impossible to play online properly. In the NVIDIA control panel you can prioritize your network traffic, so if you give a game like Crysis a higher priority, your ping times will remain good as all other traffic sits lower in the data packet queue. It does sound a lot like internal QoS though.
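The underlying idea is plain priority queueing on the outbound side. The sketch below shows the principle with made-up traffic classes; it is a generic QoS illustration, not NVIDIA's FirstPacket code:

```python
# Minimal sketch of outbound traffic prioritization: a priority queue drains
# game/VoIP packets before bulk FTP/BitTorrent traffic. Assumed priorities only.
import heapq
from itertools import count

GAME, VOIP, BULK = 0, 1, 2             # lower number = drained first
queue, order = [], count()

def enqueue(priority, packet):
    heapq.heappush(queue, (priority, next(order), packet))

enqueue(BULK, "BitTorrent chunk")
enqueue(GAME, "Crysis update")
enqueue(BULK, "FTP upload block")
enqueue(VOIP, "VoIP frame")

while queue:
    _, _, packet = heapq.heappop(queue)
    print("sending:", packet)           # game and VoIP packets leave first
```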

HD Audio

The nForce 790 series comes with integrated audio. And you know what, it's not even half bad. Like the nForce 500 and 600 line-ups before it, the 700 series offers full support for the various "Azalia" based High Definition Audio codecs (here an ALC888S). The choice of HDA codec and the associated circuitry can still greatly impact audio quality and performance, and it's not exactly an X-Fi or Auzentech X-Meridian, but the Azalia HD codecs are a lot better than the AC'97 solutions offered in the past. Connectivity wise you'll get 7.1 analog outputs and, more interestingly, one optical TOSLINK output.
