NVIDIA's flagship Hopper AI GPU. 141 GB HBM3e, 989 TFLOPS FP16, 4.8 TB/s of memory bandwidth — on AWS and Azure, billed per hour in Canadian dollars.
| SPECIFICATION | H200 SXM |
|---|---|
| Architecture | NVIDIA Hopper |
| GPU Memory | 141 GB HBM3e |
| Memory Bandwidth | 4.8 TB/s |
| FP16 / BF16 Performance | 989 TFLOPS |
| FP8 Performance | 1,979 TFLOPS |
| TF32 Performance | 494 TFLOPS |
| FP64 Performance | 33.5 TFLOPS |
| NVLink Bandwidth | 900 GB/s |
| PCIe Bandwidth | 128 GB/s |
| TDP (Power) | 700W |
| Transformer Engine | Yes (Hopper) |
| CUDA Cores | 16,896 |
| SScoreCompute Price | $4.10 CAD/hr |
| Cloud Platform | AWS + Azure |
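At the on-demand rate in the table, total job cost is simple arithmetic. A minimal sketch — the function name and the example run length are illustrative, not quoted SScoreCompute figures:

```python
# Hypothetical cost estimate at the listed on-demand rate.
RATE_CAD_PER_HR = 4.10  # from the spec table above

def estimate_cost(hours: float, gpus: int = 1) -> float:
    """Total on-demand cost in CAD for `gpus` H200s running `hours` hours."""
    return round(RATE_CAD_PER_HR * hours * gpus, 2)

# Example: an 8-GPU run lasting 24 hours (illustrative numbers).
print(estimate_cost(24, gpus=8))  # 787.2 CAD
```

Because billing is hourly with no contracts, the rate applies the same to a 1-hour experiment and a week-long training run.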
Real-world throughput on SScoreCompute H200 SXM instances: Llama 3 70B, batch size 8.
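Throughput figures like this boil down to generated tokens divided by wall-clock time. A hedged sketch of that bookkeeping — the helper name and the stand-in token counts are illustrative, not measured SScoreCompute results:

```python
import time

def tokens_per_second(generate, n_requests: int) -> float:
    """Time `generate()` over `n_requests` calls; each call returns a token count."""
    start = time.perf_counter()
    total_tokens = sum(generate() for _ in range(n_requests))
    elapsed = time.perf_counter() - start
    return total_tokens / elapsed

# Stand-in for a real model call (e.g. one batch-8 Llama 3 70B decode pass).
fake_generate = lambda: 128
print(tokens_per_second(fake_generate, n_requests=8))
```

Swapping `fake_generate` for an actual inference call against the deployed model reproduces the benchmark setup described above.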
On-demand, by the hour, in CAD. No contracts. No waitlists. Live in 60 seconds.