AMD MI300X vs NVIDIA H100 SXM
Detailed specifications, performance benchmarks, and pricing comparison to help you choose the right GPU for your AI workloads.

AMD MI300X
CDNA 3 · 2023
NVIDIA H100 SXM
Hopper · 2022
Specifications · Comparison
Side-by-side specs.
| Spec | AMD MI300X | NVIDIA H100 SXM |
|---|---|---|
| Architecture | CDNA 3 | Hopper |
| VRAM | 192 GB (+140.0%) | 80 GB |
| Memory Type | HBM3 | HBM3 |
| Memory Bandwidth | 5300 GB/s (+58.2%) | 3350 GB/s |
| FP32 Performance | 163 TFLOPS (+171.7%) | 60 TFLOPS |
| FP16 Performance | 326 TFLOPS (+171.7%) | 120 TFLOPS |
| INT8 Performance | 2610 TOPS (+8.8%) | 2400 TOPS |
| TDP | 750W | 700W |
| Form Factor | OAM | SXM |
| Price (avg/hr) | $2.50 | $1.50 |
Performance · Analysis
Performance breakdown.
Compute (FP32)
Raw single-precision floating point throughput
AMD MI300X is 171.7% faster
Training (FP16)
Half-precision performance for deep learning training
AMD MI300X is 171.7% faster
Inference (INT8)
Integer performance for model inference workloads
AMD MI300X is 8.8% faster
Memory Bandwidth
Data transfer rate between memory and compute units
AMD MI300X is 58.2% faster
Best Compute
AMD MI300X
Most Memory
AMD MI300X
Best Training
AMD MI300X
Best Value
NVIDIA H100 SXM
Pricing · Cost
Cost comparison.
Hourly
Save $1.00 with NVIDIA H100 SXM
Daily
Save $24.00 with NVIDIA H100 SXM
Monthly
Save $720.00 with NVIDIA H100 SXM
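The savings figures follow directly from the average hourly rates in the spec table, assuming continuous usage and a 30-day (720-hour) billing month:

```python
MI300X_RATE = 2.50  # avg $/hr from the pricing table
H100_RATE = 1.50    # avg $/hr from the pricing table

hourly = MI300X_RATE - H100_RATE
daily = hourly * 24
monthly = hourly * 24 * 30  # 720 billable hours per month

print(f"Hourly:  ${hourly:.2f}")   # $1.00
print(f"Daily:   ${daily:.2f}")    # $24.00
print(f"Monthly: ${monthly:.2f}")  # $720.00
```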
Use Cases · Workloads
Best for your workload.
Platform · Benefits
Why Runcrate.
Instant Deployment
Get your GPU instance running in minutes with pre-configured AI environments. No setup complexity.
Pay Per Hour
Only pay for the compute you actually use. Prepaid credits with transparent, per-hour billing.
Reliable Infrastructure
Enterprise-grade reliability with automatic failover and data persistence across sessions.
Related · Comparisons
Compare other GPUs.
FAQ · Questions
Common questions.
Deploy AMD MI300X or NVIDIA H100 SXM
Get started with GPU cloud computing in minutes. No setup complexity, no long-term commitments.