| Category | Detail |
|---|---|
| GPU Name | GA100 |
| Architecture | Ampere |
| Process Size | 7 nm |
| Transistors | 54,200 million |
| Release Date | Nov 16th, 2020 |
| Base Clock | 1275 MHz |
| Boost Clock | 1410 MHz |
| Memory Size | 80 GB |
| Memory Type | HBM2e |
| Bandwidth | 2.04 TB/s |
| Tensor Cores | 432 |
| FP64 | 9.7 TFLOPS |
| FP64 Tensor Core | 19.5 TFLOPS |
| FP32 | 19.5 TFLOPS |
| Tensor Float 32 (TF32) | 156 TFLOPS (312 TFLOPS with sparsity) |
| BFLOAT16 Tensor Core | 312 TFLOPS (624 TFLOPS with sparsity) |
| FP16 Tensor Core | 312 TFLOPS (624 TFLOPS with sparsity) |
| INT8 Tensor Core | 624 TOPS (1,248 TOPS with sparsity) |
| GPU Memory | 80 GB HBM2e |
| GPU Memory Bandwidth | 2,039 GB/s |
| Max Thermal Design Power (TDP) | 400 W |
| Multi-Instance GPU | Up to 7 MIGs @ 10GB |
| Form Factor | SXM |
| Interconnect | NVLink: 600 GB/s; PCIe Gen4: 64 GB/s |
| Server Options | NVIDIA HGX™ A100 partner and NVIDIA-Certified Systems with 4, 8, or 16 GPUs; NVIDIA DGX™ A100 with 8 GPUs |
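
As a rough sanity check on the table: the 312 TFLOPS dense FP16 Tensor Core figure follows from 108 SMs × 4 Tensor Cores per SM × 512 FLOPs per clock × 1.41 GHz boost clock, and the 2,039 GB/s bandwidth figure comes from the 5,120-bit HBM2e interface running at roughly 3.2 Gb/s per pin. The sketch below is a minimal CUDA runtime example (not tied to any code shipped with this document) that queries the device properties and recomputes the memory bandwidth; it uses standard `cudaDeviceProp` fields, and the values in the comments assume an A100 80GB SXM.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int device = 0;
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, device);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // On an A100 80GB SXM these should line up with the table above.
    printf("Name:           %s\n", prop.name);                               // e.g. "NVIDIA A100-SXM4-80GB"
    printf("Compute cap.:   %d.%d\n", prop.major, prop.minor);               // 8.0 for GA100
    printf("SM count:       %d\n", prop.multiProcessorCount);                // 108 SMs (432 Tensor Cores)
    printf("Global memory:  %.1f GiB\n",
           prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));                // ~80 GB HBM2e
    printf("Memory bus:     %d-bit\n", prop.memoryBusWidth);                 // 5120-bit
    printf("Boost clock:    %.0f MHz\n", prop.clockRate / 1000.0);           // ~1410 MHz

    // Peak HBM2e bandwidth from the reported memory clock (kHz) and bus width (bits),
    // doubled for DDR signaling: ~2039 GB/s on the 80 GB part.
    double bw_gbs = 2.0 * (prop.memoryClockRate * 1e3) * (prop.memoryBusWidth / 8.0) / 1e9;
    printf("Peak bandwidth: %.0f GB/s\n", bw_gbs);
    return 0;
}
```

Saved as `device_query.cu`, this should build with `nvcc device_query.cu -o device_query` and print the memory size, SM count, and bandwidth reported by the driver, which is a quick way to confirm which A100 variant (40 GB HBM2 vs. 80 GB HBM2e) a given node actually has.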