The NVIDIA H200 NVL, based on the Hopper architecture, is NVIDIA's premium AI accelerator for large language model workloads. Announced in November 2024, it features 141 GB of HBM3e memory with 4.8 TB/s of memory bandwidth. Its fourth-generation Tensor Cores with the Transformer Engine deliver up to 1,979 TFLOPS of FP16 compute (with sparsity), making it a preferred solution for enterprise AI training and inference in data centers worldwide.
You can rent the H200 by the hour with prices ranging from $2.35/hr to $3.53/hr. Try the H200 now.
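To put the hourly range in perspective, a small sketch of the cost arithmetic, assuming a GPU kept reserved around the clock (~730 hours per month; the helper name and month length are illustrative, not part of any billing API):

```python
# Quick cost arithmetic for hourly H200 rental at the quoted range.
LOW_RATE, HIGH_RATE = 2.35, 3.53  # $/hr, from the article

def monthly_cost(rate_per_hr: float, hours: float = 730) -> float:
    """Cost of keeping one GPU reserved for a month (~730 hours)."""
    return rate_per_hr * hours

print(f"${monthly_cost(LOW_RATE):,.2f} - ${monthly_cost(HIGH_RATE):,.2f} per GPU-month")
```

Actual bills depend on the provider's granularity and any sustained-use or reserved discounts.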
| Category | Detail |
| --- | --- |
| GPU Name | GH100 |
| Architecture | Hopper |
| Process Size | 5 nm |
| Transistors | 80,000 million |
| Release Date | Nov 18th, 2024 |
| Base Clock | 1365 MHz |
| Boost Clock | 1785 MHz |
| Memory Size | 141 GB |
| Memory Type | HBM3e |
| Bandwidth | 4.8 TB/s |
| Tensor Cores | 528 |
| FP16 (half) | 241.3 TFLOPS (4:1 vs FP32) |
| FP32 (float) | 60.32 TFLOPS |
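The memory bandwidth in the table matters because single-batch LLM decoding is usually bandwidth-bound: each generated token must stream the model weights from HBM. A minimal back-of-the-envelope sketch, assuming one full weight read per token and ignoring the KV cache, compute time, and kernel overheads (so it is an upper bound, not a benchmark):

```python
# Rough upper bound on memory-bandwidth-bound LLM decode throughput
# for an H200 NVL. Assumes each decoded token streams all model
# weights from HBM once; real throughput will be lower.

HBM_BANDWIDTH_TBPS = 4.8  # H200 NVL memory bandwidth (TB/s)

def max_decode_tokens_per_s(params_billion: float, bytes_per_param: int = 2) -> float:
    """Upper bound on tokens/s when decoding is purely bandwidth-bound."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return HBM_BANDWIDTH_TBPS * 1e12 / model_bytes

# Example: a 70B-parameter model in FP16 (2 bytes per parameter)
print(round(max_decode_tokens_per_s(70)))  # ≈ 34 tokens/s ceiling
```

The same estimate shows why the 141 GB capacity is the other headline figure: a 70B FP16 model (~140 GB of weights) just fits on a single card, avoiding multi-GPU communication entirely.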