The next generation of AI compute. 192 GB HBM3e, 4,500 TFLOPS FP8, 8.0 TB/s memory bandwidth: 2.4× the VRAM and over 2× the dense FP8 throughput of the H100.
| SPECIFICATION | B300 BLACKWELL |
|---|---|
| Architecture | NVIDIA Blackwell |
| GPU Memory | 192 GB HBM3e |
| Memory Bandwidth | 8.0 TB/s |
| FP8 Performance | 4,500 TFLOPS |
| FP16 / BF16 Performance | 2,250 TFLOPS |
| FP4 Performance | 9,000 TFLOPS |
| NVLink Bandwidth | 1,800 GB/s |
| TDP (Power) | 1,000W |
| Transformer Engine | Yes — 2nd Gen |
| Second-Gen Sparsity | Yes |
| SScoreCompute Price | $6.71 CAD/hr |
| Cloud Platform | AWS + Azure |
The most powerful per-GPU compute available on SScoreCompute. CAD pricing, no contracts, live in 60 seconds.
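The spec table supports some quick back-of-envelope sizing. A minimal Python sketch using only the figures listed above; the roofline balance point and cost arithmetic are generic planning estimates, not SScoreCompute tooling or benchmarks:

```python
# Back-of-envelope sizing from the B300 spec table above.
# All inputs come straight from the table; outputs are rough
# planning estimates, not measured performance.

FP8_TFLOPS = 4_500      # FP8 tensor throughput (TFLOPS)
MEM_BW_TBS = 8.0        # HBM3e memory bandwidth (TB/s)
VRAM_GB = 192           # GPU memory (GB)
PRICE_CAD_HR = 6.71     # SScoreCompute hourly rate (CAD)

# Roofline balance point: FP8 FLOPs available per byte of HBM traffic.
# Kernels below this arithmetic intensity are memory-bandwidth bound.
balance_flops_per_byte = (FP8_TFLOPS * 1e12) / (MEM_BW_TBS * 1e12)

# Minimum time to stream all 192 GB of HBM once (e.g. one decode step
# over weights that fill VRAM). GB divided by TB/s yields milliseconds,
# since the unit prefixes differ by a factor of 1000.
full_hbm_read_ms = VRAM_GB / MEM_BW_TBS

# Cost of a 24-hour run in CAD.
day_cost_cad = round(PRICE_CAD_HR * 24, 2)

print(balance_flops_per_byte)  # 562.5 FLOPs/byte
print(full_hbm_read_ms)        # 24.0 ms
print(day_cost_cad)            # 161.04 CAD
```

The 562.5 FLOPs/byte balance point is why FP8 (and FP4) matter for serving workloads: lower-precision formats halve the bytes moved per parameter, pushing memory-bound decode closer to the compute roof.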