The world's most deployed AI GPU. 80GB HBM3, 989 TFLOPS FP16, 3.35 TB/s memory bandwidth — on AWS and Azure, billed per hour in Canadian dollars.
| SPECIFICATION | H100 SXM |
|---|---|
| Architecture | NVIDIA Hopper |
| GPU Memory | 80 GB HBM3 |
| Memory Bandwidth | 3.35 TB/s |
| FP16 / BF16 Performance | 989 TFLOPS |
| FP8 Performance | 1,979 TFLOPS |
| TF32 Performance | 494 TFLOPS |
| FP64 Performance | 33.5 TFLOPS |
| NVLink Bandwidth | 900 GB/s |
| PCIe Bandwidth | 128 GB/s |
| TDP (Power) | 700W |
| Transformer Engine | Yes — 1st Gen (introduced with Hopper) |
| CUDA Cores | 16,896 |
| SScoreCompute Price | $3.14 CAD/hr |
| Cloud Platform | AWS + Azure |
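As a quick illustration of hourly billing, a minimal Python sketch using the on-demand rate from the table ($3.14 CAD/hr); the job sizes in the example are hypothetical:

```python
# Estimate the CAD cost of a job at the on-demand rate listed above.
HOURLY_RATE_CAD = 3.14  # SScoreCompute H100 SXM, per GPU per hour

def job_cost_cad(gpu_hours: float, num_gpus: int = 1) -> float:
    """Total cost in CAD for num_gpus GPUs running gpu_hours each."""
    return round(HOURLY_RATE_CAD * gpu_hours * num_gpus, 2)

# Hypothetical example: an 8-GPU fine-tuning run for 24 hours.
print(job_cost_cad(24, num_gpus=8))  # 3.14 * 24 * 8 = 602.88
```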
Benchmark: real-world throughput on SScoreCompute H100 SXM instances (LLaMA-3 70B, batch size 8).
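For context on what the hardware allows, here is a back-of-envelope, memory-bandwidth-bound decode ceiling under simplifying assumptions: FP16 weights streamed once per decode step, and KV-cache traffic and multi-GPU sharding ignored (a 70B FP16 model exceeds a single 80 GB card, so real deployments shard across GPUs). This is an illustrative upper bound, not a measured SScoreCompute result:

```python
# Idealized, bandwidth-bound decode estimate (illustrative assumptions).
BANDWIDTH_TBPS = 3.35     # H100 SXM HBM3 bandwidth, from the table
PARAMS_B = 70             # LLaMA-3 70B parameter count, billions
BYTES_PER_PARAM = 2       # FP16/BF16 weights
BATCH = 8                 # one token per sequence per decode step

weight_bytes = PARAMS_B * 1e9 * BYTES_PER_PARAM         # ~140 GB of weights
steps_per_s = BANDWIDTH_TBPS * 1e12 / weight_bytes      # decode steps per second
tokens_per_s = steps_per_s * BATCH
print(f"~{tokens_per_s:.0f} tokens/s theoretical ceiling")  # ~191 tokens/s
```

Measured throughput will land below this ceiling; the estimate only shows why HBM bandwidth, not raw TFLOPS, typically bounds batch-8 decoding.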
On-demand, by the hour, in CAD. No contracts. No waitlists. Live in 60 seconds.