NVIDIA adds 80GB graphics memory version of the A100 workstation GPU

This one slipped through the cracks yesterday: NVIDIA has added a powerful 80GB video memory model of the "A100 PCIe" GPU for PCI-Express connected workstations.



The "A100 80GB PCIe," which doubles the video memory capacity to the HBM2e 80GB, has been added to the "A100 PCIe," a PCI-Express connection GPU for data centers that uses the Ampere architecture. With "A100 80GB PCIe", the memory bandwidth has also been expanded by about 25% to achieve 1,935GB / sec. The large-capacity, wide-band video memory allows more data and larger neural networks to be stored in memory, minimizing node-to-node communication and energy consumption.

Computational performance is unchanged from the existing 40GB model: FP64 is 9.7 TFLOPS, FP64 Tensor Core is 19.5 TFLOPS, FP32 is 19.5 TFLOPS, Tensor Float 32 is 156 TFLOPS (312 TFLOPS with sparsity), BFloat16 Tensor Core is 312 TFLOPS (624 TFLOPS with sparsity), FP16 Tensor Core is 312 TFLOPS (624 TFLOPS with sparsity), and INT8 Tensor Core is 624 TOPS (1,248 TOPS with sparsity).
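The sparse figures are exactly double the dense ones, reflecting the Ampere Tensor Cores' 2:4 structured-sparsity support. As a rough illustration only, the sketch below uses a hypothetical square matrix multiplication and the peak rates listed above to estimate the ideal execution time; real kernels reach only a fraction of peak.

```cuda
// Back-of-the-envelope sketch: ideal time for a large FP16 Tensor Core GEMM
// at the peak rates quoted in the article (hypothetical matrix dimensions).
#include <cstdio>

int main() {
    const double M = 16384, N = 16384, K = 16384;   // hypothetical square GEMM
    const double flops = 2.0 * M * N * K;           // each multiply-add counted as 2 FLOPs
    const double dense_tflops  = 312.0;             // FP16 Tensor Core, dense
    const double sparse_tflops = 624.0;             // with 2:4 structured sparsity

    std::printf("GEMM work: %.2f TFLOP\n", flops / 1e12);
    std::printf("Ideal time, dense : %.2f ms\n", flops / (dense_tflops  * 1e12) * 1e3);
    std::printf("Ideal time, sparse: %.2f ms\n", flops / (sparse_tflops * 1e12) * 1e3);
    return 0;
}
```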

