NVIDIA Announces Pascal-Powered DGX SaturnV AI Supercomputer

NVIDIA’s new DGX SATURNV supercomputer is ranked the world’s most efficient and 28th fastest overall on the Top500 list of supercomputers released Monday.



The SATURNV supercomputer, powered by new Tesla P100 GPUs, delivers 9.46 gigaflops/watt — a 42 percent improvement over the 6.67 gigaflops/watt delivered by the most efficient machine on the Top500 list released just last June. Compared with a supercomputer of similar performance, the Camphor 2 system, which is powered by Intel Xeon Phi (Knights Landing) processors, SATURNV is 2.3x more energy efficient. A single DGX-1 unit can be picked up for US$129,000.
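As a rough sanity check, the short Python sketch below recomputes those efficiency claims from the numbers quoted above; the implied Camphor 2 figure is an inference from the stated 2.3x ratio, not a value reported in the article.

```python
# Sanity check of the efficiency figures quoted above (values from the article).
saturnv_eff = 9.46    # gigaflops per watt, DGX SATURNV (Tesla P100)
prev_best_eff = 6.67  # gigaflops per watt, most efficient system on the June Top500 list

improvement = (saturnv_eff - prev_best_eff) / prev_best_eff
print(f"Improvement over previous best: {improvement:.0%}")   # -> 42%

# The article quotes a 2.3x efficiency advantage over the Camphor 2 system,
# which implies (our inference, not a stated figure) roughly:
camphor2_eff = saturnv_eff / 2.3
print(f"Implied Camphor 2 efficiency: {camphor2_eff:.2f} gigaflops/watt")  # -> ~4.11
```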

That efficiency is key to building machines capable of reaching exascale speeds — that’s 1 quintillion, or 1 billion billion, floating-point operations per second. Such a machine could help design efficient new combustion engines, model clean-burning fusion reactors, and achieve new breakthroughs in medical research.
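To see why efficiency matters at that scale, here is a back-of-the-envelope calculation (ours, not the article's) of the power an exascale machine would draw if it only matched SATURNV's 9.46 gigaflops/watt:

```python
# Power required to sustain 1 exaflops at SATURNV's measured efficiency.
exaflops = 1e18           # 1 quintillion floating-point operations per second
flops_per_watt = 9.46e9   # 9.46 gigaflops per watt

power_megawatts = exaflops / flops_per_watt / 1e6
print(f"Power at exascale: {power_megawatts:.0f} MW")  # -> ~106 MW
```

Roughly 100 megawatts at today's best efficiency is why pushing gigaflops-per-watt higher remains a precondition for practical exascale systems.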

GPUs — with their massively parallel architecture — have long powered some of the world’s fastest supercomputers. More recently, they’ve been key to an AI boom that’s given us machines that perceive the world as we do, understand our language and learn from examples in ways that exceed our own (see "Accelerating AI with GPUs: A New Computing Model").

We’re convinced AI can give every company a competitive advantage. That’s why we’ve assembled the world’s most efficient — and one of the most powerful — supercomputers to aid us in our own work.

Assembled by a team of a dozen engineers using 124 DGX-1s — the AI supercomputer in a box we unveiled in April — SATURNV helps us build the autonomous driving software that’s a key part of our NVIDIA DRIVE PX 2 self-driving vehicle platform.

We’re also training neural networks to understand chip design and very-large-scale integration (VLSI), so our engineers can work more quickly and efficiently. Yes, we’re using GPUs to help us design GPUs.

Most importantly, SATURNV’s power will give us the ability to train — and design — new deep learning networks quickly.

