NVIDIA Launches NVLink - High-Speed GPU Interconnect
NVIDIA announced that it plans to integrate a high-speed interconnect, called NVIDIA NVLink, into its future GPUs, enabling GPUs and CPUs to share data 5 to 12 times faster than they can today. This will eliminate a longstanding bottleneck and help pave the way for a new generation of exascale supercomputers that are 50 to 100 times faster than today's most powerful systems.
NVIDIA will add NVLink technology into its Pascal GPU architecture -- expected to be introduced in 2016 -- following this year's new NVIDIA Maxwell compute architecture. The new interconnect was co-developed with IBM, which is incorporating it in future versions of its POWER CPUs.
"NVLink technology unlocks the GPU's full potential by dramatically improving data movement between the CPU and GPU, minimizing the time that the GPU has to wait for data to be processed," said Brian Kelleher, senior vice president of GPU Engineering at NVIDIA.
"NVLink enables fast data exchange between CPU and GPU, thereby improving data throughput through the computing system and overcoming a key bottleneck for accelerated computing today," said Bradley McCredie, vice president and IBM Fellow at IBM. "NVLink makes it easier for developers to modify high-performance and data analytics applications to take advantage of accelerated CPU-GPU systems. We think this technology represents another significant contribution to our OpenPOWER ecosystem."
With NVLink technology tightly coupling IBM POWER CPUs with NVIDIA Tesla® GPUs, the POWER data center ecosystem will be able to fully leverage GPU acceleration for a diverse set of applications, such as high performance computing, data analytics and machine learning.
Advantages Over PCI Express 3.0
Today's GPUs are connected to x86-based CPUs through the PCI Express (PCIe) interface, which limits the GPU's ability to access the CPU memory system and is four to five times slower than typical CPU memory systems. PCIe is an even greater bottleneck between the GPU and IBM POWER CPUs, which have more bandwidth than x86 CPUs. As the NVLink interface will match the bandwidth of typical CPU memory systems, it will enable GPUs to access CPU memory at its full bandwidth.
This high-bandwidth interconnect will dramatically improve accelerated software application performance. Because of memory system differences -- GPUs have fast but small memories, and CPUs have large but slow memories -- accelerated computing applications typically move data from the network or disk storage to CPU memory, and then copy the data to GPU memory before it can be crunched by the GPU. With NVLink, the data moves between the CPU memory and GPU memory at much faster speeds, making GPU-accelerated applications run much faster.
Unified Memory Feature
Faster data movement, coupled with another feature known as Unified Memory, will simplify GPU accelerator programming. Unified Memory allows the programmer to treat the CPU and GPU memories as one block of memory. The programmer can operate on the data without worrying about whether it resides in the CPU's or GPU's memory.
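As an illustrative sketch of that programming-model difference (not NVIDIA sample code -- the kernel, names, and sizes below are hypothetical, and a CUDA 6-style toolchain with `cudaMallocManaged` is assumed), compare the traditional explicit-copy pattern with Unified Memory:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Hypothetical element-wise kernel: multiplies every element by a factor.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void) {
    const int n = 1 << 20;

    // Traditional model: separate host and device allocations,
    // with explicit copies in each direction:
    //   float *h = (float *)malloc(n * sizeof(float));
    //   float *d; cudaMalloc((void **)&d, n * sizeof(float));
    //   cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    //   ... launch kernel ...
    //   cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Unified Memory: one allocation visible to both CPU and GPU.
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; i++) data[i] = 1.0f;   // CPU writes directly

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n); // GPU uses the same pointer
    cudaDeviceSynchronize();                        // wait before CPU touches data

    printf("data[0] = %f\n", data[0]);              // CPU reads without an explicit copy
    cudaFree(data);
    return 0;
}
```

The runtime (and, on NVLink-class hardware, the interconnect) handles the data migration that the programmer previously expressed as explicit `cudaMemcpy` calls.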
Although future NVIDIA GPUs will continue to support PCIe, NVLink technology will be used for connecting GPUs to NVLink-enabled CPUs as well as providing high-bandwidth connections directly between multiple GPUs. Also, despite its very high bandwidth, NVLink is substantially more energy efficient per bit transferred than PCIe.
NVIDIA has designed a module to house GPUs based on the Pascal architecture with NVLink. This new GPU module is one-third the size of the standard PCIe boards used for GPUs today. Connectors at the bottom of the Pascal module enable it to be plugged into the motherboard, improving system design and signal integrity.
The NVLink high-speed interconnect will enable the tightly coupled systems that present a path to highly energy-efficient and scalable exascale supercomputers, running at 1,000 petaflops (1 x 10^18 floating point operations per second), or 50 to 100 times faster than today's fastest systems.
Senior Member
Posts: 167
Joined: 2013-10-13
NV just means Nvidia
Also, I'm fairly certain this is mostly intended for large datacenter / supercomputer applications. Not so much PCs.
Senior Member
Posts: 6361
Joined: 2005-02-25
Considering it's called NVLink, without any definition as to what NV means, it's easy to assume this is proprietary. What confuses me is whether it's actually intended for use with x86 - IBM doesn't support x86, and I haven't heard any sources saying Intel or AMD will support it.
But what I don't get is why this is necessary. As far as I'm aware, we haven't even saturated PCIe 2.0 yet (at least with modern single-GPU cards). I could understand this fixing latency issues, but I never got the impression latency was all that bad. PCIe should have plenty of bandwidth.
I'm not against the idea of having a successor to AGP (PCIe is more of a successor to PCI), but I don't see nvidia getting very far with this in the PC market if they can't at least get intel on board with this.
I hope both AMD and intel support it, because this (and DDR4 memory) make a good milestone for me to do a complete system upgrade. I only do full upgrades whenever there's a major system-wide change.
NVLink = CAPI from IBM (both NVIDIA and IBM developed it, even if they call it different things); it's basically like AMD's HyperTransport link or Intel's QPI. Anyway, AMD has a different system for interconnecting systems (with Supermicro professional systems).
It can also be used for GPU-to-GPU connections, but I don't think it will be used that way for SLI (too costly and useless).
This technology really doesn't look made for desktop GPUs; it's for supercomputers or professional HPC systems. You have a "mezzanine" cable that links one system's motherboard to another, to transfer information faster between CPUs, or between GPUs, across the whole system...
I don't know what to think about it; comparable systems already exist, and if IBM was already developing this, it means all the other players in supercomputing and HPC are doing the same, if they haven't already.
Senior Member
Posts: 917
Joined: 2006-04-13
Nice one, but still, this is NOT FOR DESKTOP PCs, and NVLink = CAPI is a bit too much I think - NVLink is way better!
Senior Member
Posts: 6361
Joined: 2005-02-25
NVIDIA and IBM teamed up to develop it, as NVIDIA mentioned during the conference; IBM will still call it CAPI (even if by 2016 it will be CAPI "2.0", or who knows what version)...
Senior Member
Posts: 17906
Joined: 2012-05-18
Probably something proprietary, NVIDIA G-Sync style.