NVIDIA H100 PCIe vs. H100 SXM: a comprehensive comparison for AI, machine learning, and high-performance computing workloads.
Hopper Architecture
| Specification | NVIDIA H100 PCIe | NVIDIA H100 SXM |
|---|---|---|
| Architecture | Hopper | Hopper |
| Release Year | 2022 | 2022 |
| VRAM | 80 GB | 80 GB |
| Memory Type | HBM3 | HBM3 |
| Memory Bandwidth | 2000 GB/s | 3350 GB/s (+67.5%) |
| FP32 Performance | 51 TFLOPS | 60 TFLOPS (+17.6%) |
| FP16 Performance | 102 TFLOPS | 120 TFLOPS (+17.6%) |
| INT8 Performance | 2040 TOPS | 2400 TOPS (+17.6%) |
| Tensor Cores | 456 | 528 |
| CUDA Cores | 14592 | 16896 |
| TDP | 350W | 700W |
| Form Factor | PCIe | SXM |
| NVLink Support | Bridge only (2 GPUs) | Yes (900 GB/s) |
| Avg. Price/Hour | $1.40 | $1.50 (+7.1%) |
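For reference, the deltas in the table can be reproduced with a few lines of Python. This is a minimal sketch; the spec values are simply copied from the table above, and the percentage gain is computed against the PCIe baseline:

```python
# Derive the SXM-over-PCIe deltas shown in the table above.
# Values are copied from the table; adjust if specs change.
SPECS = {
    # metric: (H100 PCIe, H100 SXM)
    "Memory bandwidth (GB/s)": (2000, 3350),
    "FP32 (TFLOPS)": (51, 60),
    "FP16 (TFLOPS)": (102, 120),
    "INT8 (TOPS)": (2040, 2400),
}

for metric, (pcie, sxm) in SPECS.items():
    # Percent gain relative to the PCIe baseline.
    gain = (sxm / pcie - 1) * 100
    print(f"{metric}: H100 SXM is {gain:.1f}% higher")
```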
- FP32 (single-precision floating point, general compute): the H100 SXM is 17.6% faster.
- FP16 (half-precision, optimized for deep learning training): the H100 SXM is 17.6% faster.
- INT8 (integer, for efficient model inference and deployment): the H100 SXM is 17.6% faster.
- Memory bandwidth (data transfer speed between GPU and memory): the H100 SXM delivers 67.5% more bandwidth.
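To see why the bandwidth gap often matters more than the FLOPS gap, a rough roofline-style estimate can compare each card's compute-to-bandwidth ridge point. This is a sketch using the peak FP16 and bandwidth figures from the table; real kernels rarely reach peak on either axis:

```python
# Roofline sketch: a kernel whose arithmetic intensity (FLOPs per byte
# moved) falls below the ridge point is memory-bandwidth-bound rather
# than compute-bound.
CARDS = {
    # card: (peak FP16 TFLOPS, memory bandwidth in GB/s), from the table
    "H100 PCIe": (102, 2000),
    "H100 SXM": (120, 3350),
}

for card, (tflops, bw_gbs) in CARDS.items():
    flops = tflops * 1e12        # peak FP16 FLOP/s
    bytes_per_s = bw_gbs * 1e9   # peak memory traffic in bytes/s
    ridge = flops / bytes_per_s  # FLOPs per byte at the crossover
    print(f"{card}: ridge point ~ {ridge:.0f} FLOPs/byte")
```

Kernels below the ridge point are limited by memory traffic rather than compute, which is why memory-bound workloads such as small-batch LLM inference tend to gain more from the SXM card's extra bandwidth than the raw TFLOPS delta suggests.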
Enterprise-grade infrastructure
Get a custom quote in 24 hours for reserved GPU clusters with high-speed interconnect, any region, any GPU model, and any number of GPUs you need.
- Any GPU: choose your hardware
- Any Quantity: scale as needed
- Any Region: global availability
- Interconnect: high-speed networking
Go from comparison to running workload in under 60 seconds. No complex setup required.
Only pay for what you use. Stop instances anytime. No hidden fees or long-term commitments.
Enterprise-grade infrastructure with 99.9% uptime. Trusted by AI teams worldwide.
Explore more GPU comparisons to find the perfect match for your workload.