
THE UNIVERSAL SYSTEM FOR AI INFRASTRUCTURE
Enterprises need AI infrastructure that improves upon traditional approaches. Developers, data scientists, and researchers alike need a platform that unifies all AI workloads, simplifying infrastructure and accelerating ROI.
NVIDIA DGX™ A100 is the universal system for all AI workloads—from analytics to training to inference. DGX A100 sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor, replacing legacy compute infrastructure with a single, unified system.
Available with up to 640 gigabytes (GB) of total GPU memory, which increases performance in large-scale training jobs by up to 3X and doubles the size of Multi-Instance GPU (MIG) instances, DGX A100 can tackle the largest and most complex jobs as well as the smallest and simplest.
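To illustrate how MIG partitioning is typically driven on a DGX A100, the following is a minimal sketch, not taken from this datasheet, that calls nvidia-smi from Python. The GPU index and the 3g.40gb profile name (available on the 80 GB A100) are assumptions chosen for illustration; actual profiles and privileges depend on the system configuration.

# Minimal sketch: partitioning one A100 GPU in a DGX A100 into MIG instances.
# Assumes nvidia-smi with MIG support and sufficient privileges; the GPU index
# and profile names below are illustrative, not prescribed by the datasheet.
import subprocess

def run(cmd):
    """Run an nvidia-smi command and print its output."""
    print("$", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)

# Enable MIG mode on GPU 0 (may require a GPU reset to take effect).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# List the GPU instance profiles available on this GPU; the 80 GB A100
# exposes larger profiles (e.g. 3g.40gb) than the 40 GB model (3g.20gb).
run(["nvidia-smi", "mig", "-i", "0", "-lgip"])

# Create two 3g.40gb GPU instances with default compute instances (-C).
run(["nvidia-smi", "mig", "-i", "0", "-cgi", "3g.40gb,3g.40gb", "-C"])

# Confirm the resulting MIG devices.
run(["nvidia-smi", "-L"])

The same workflow applies on the 320 GB configuration; only the available profile sizes differ, which is what the datasheet means by the 640 GB model doubling the size of MIG instances.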
NVIDIA DGX A100 is more than a server. It’s a complete hardware and software platform built upon the knowledge gained from the world’s largest DGX proving ground—NVIDIA DGX SATURNV—and backed by thousands of DGXperts at NVIDIA.
SYSTEM SPECIFICATIONS

|               | NVIDIA DGX A100 640GB                | NVIDIA DGX A100 320GB                |
| GPUs          | 8x NVIDIA A100                       | 8x NVIDIA A100                       |
| GPU Memory    | 640 GB total                         | 320 GB total                         |
| Performance   | 5 petaFLOPS AI                       | 5 petaFLOPS AI                       |
| CPU           | Dual AMD Rome 7742, 128 cores total  | Dual AMD Rome 7742, 128 cores total  |
| System Memory | 2 TB                                 | 1 TB                                 |
| Storage       | 30 TB (8x 3.84 TB)                   | 15 TB (4x 3.84 TB)                   |