NVIDIA A100 PCIe vs NVIDIA H200
Detailed specifications, performance benchmarks, and pricing comparison to help you choose the right GPU for your AI workloads.
NVIDIA A100 PCIe
Ampere · 2020
NVIDIA H200
Hopper · 2023
Specifications · Comparison
Side-by-side specs.
| Spec | NVIDIA A100 PCIe | NVIDIA H200 | H200 advantage |
|---|---|---|---|
| Architecture | Ampere | Hopper | |
| VRAM | 80 GB | 141 GB | +76.3% |
| Memory Type | HBM2e | HBM3e | |
| Memory Bandwidth | 1935 GB/s | 4800 GB/s | +148.1% |
| FP32 Performance | 19.5 TFLOPS | 67 TFLOPS | +243.6% |
| FP16 Performance | 78 TFLOPS | 134 TFLOPS | +71.8% |
| INT8 Performance | 624 TOPS | 2680 TOPS | +329.5% |
| TDP | 300W | 700W | |
| Form Factor | PCIe | SXM | |
| Price (avg/hr) | $0.98 | $2.25 | |
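The percentage advantages follow directly from the raw spec values in the table. A minimal sketch of the arithmetic, with the values hardcoded from the table above:

```python
# Spec values from the comparison table: (A100 PCIe, H200).
specs = {
    "VRAM (GB)": (80, 141),
    "Memory bandwidth (GB/s)": (1935, 4800),
    "FP32 (TFLOPS)": (19.5, 67),
    "FP16 (TFLOPS)": (78, 134),
    "INT8 (TOPS)": (624, 2680),
}

for name, (a100, h200) in specs.items():
    ratio = h200 / a100
    # Advantage as a multiple and as a percentage over the A100 value.
    print(f"{name}: H200 is {ratio:.2f}x the A100 ({(ratio - 1) * 100:+.1f}%)")
```

Note that the multiple and the percentage are two views of the same ratio: 4800 / 1935 ≈ 2.48×, which is the same as +148% over the A100.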
Performance · Analysis
Performance breakdown.
Compute (FP32)
Raw single-precision floating-point throughput
NVIDIA H200 is ~3.4× faster (67 vs 19.5 TFLOPS)
Training (FP16)
Half-precision performance for deep learning training
NVIDIA H200 is ~1.7× faster (134 vs 78 TFLOPS)
Inference (INT8)
Integer performance for model inference workloads
NVIDIA H200 is ~4.3× faster (2680 vs 624 TOPS)
Memory Bandwidth
Data transfer rate between memory and compute units
NVIDIA H200 is ~2.5× faster (4800 vs 1935 GB/s)
Best Compute
NVIDIA H200
Most Memory
NVIDIA H200
Best Training
NVIDIA H200
Best Value
NVIDIA A100 PCIe
Pricing · Cost
Cost comparison.
Hourly
Save $1.27 with NVIDIA A100 PCIe
Daily
Save $30.48 with NVIDIA A100 PCIe
Monthly
Save $914.40 with NVIDIA A100 PCIe
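The savings figures above follow from the hourly rates in the spec table. A minimal sketch, assuming continuous 24×7 usage and a 30-day month:

```python
# Average hourly rates from the spec table.
a100_per_hr = 0.98  # NVIDIA A100 PCIe
h200_per_hr = 2.25  # NVIDIA H200

hourly_savings = h200_per_hr - a100_per_hr
daily_savings = hourly_savings * 24    # assumes round-the-clock usage
monthly_savings = daily_savings * 30   # assumes a 30-day month

print(f"Hourly:  ${hourly_savings:.2f}")   # $1.27
print(f"Daily:   ${daily_savings:.2f}")    # $30.48
print(f"Monthly: ${monthly_savings:.2f}")  # $914.40
```

For bursty workloads billed per hour, actual savings scale with hours actually used, not with calendar time.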
Use Cases · Workloads
Best for your workload.
NVIDIA A100 PCIe
Cost-sensitive training and inference where 80 GB of HBM2e is sufficient and the $0.98/hr rate matters most.
NVIDIA H200
Large-model training and memory-bound inference that benefit from 141 GB of HBM3e and 4.8 TB/s of bandwidth.
Platform · Benefits
Why Runcrate.
Instant Deployment
Get your GPU instance running in minutes with pre-configured AI environments. No setup complexity.
Pay Per Hour
Only pay for the compute you actually use. Prepaid credits with transparent, per-hour billing.
Reliable Infrastructure
Enterprise-grade reliability with automatic failover and data persistence across sessions.
Related · Comparisons
Compare other GPUs.
FAQ · Questions
Common questions.
Deploy NVIDIA A100 PCIe or NVIDIA H200
Get started with GPU cloud computing in minutes. No setup complexity, no long-term commitments.