GPU Architecture: Hopper (based on GH100)
Memory: 141 GB HBM3e
Memory Bandwidth: ~4.8 TB/s
Bus Interface: PCIe Gen5 x16 (NVL variant)
Form Factor: Dual-slot, air-cooled, PCIe variant for data center use
FP64 Performance (Double Precision): ~30 TFLOPS (NVL)
FP64 Tensor Core: ~60 TFLOPS
FP32 Performance: ~60 TFLOPS
TF32 (Tensor Core): ~835 TFLOPS (peak, with sparsity)
BFLOAT16 & FP16 (Tensor Cores): ~1,671 TFLOPS (peak, with sparsity)
FP8 / INT8 (Tensor Cores): ~3,341 TFLOPS / TOPS (peak, with sparsity)
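The tensor-core figures above follow a regular pattern: each halving of precision (TF32 → FP16 → FP8) roughly doubles peak throughput. A minimal sanity check using the TFLOPS values from the list above:

```python
# Peak tensor-core throughput (TFLOPS) taken from the spec list above.
tf32 = 835
fp16 = 1671
fp8 = 3341

# Each step down in precision roughly doubles peak throughput.
print(fp16 / tf32)  # ~2.0
print(fp8 / fp16)   # ~2.0
```

The same 2x relationship holds for FP64 vs. FP64 Tensor Core (~30 vs. ~60 TFLOPS) earlier in the list.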
Multi‑Instance GPU (MIG) Support: Up to 7 GPU instances (~16.5 GB of memory each)
NVLink Interconnect: Supports 2‑ or 4‑way NVLink bridges (900 GB/s) for multi‑GPU setups
Thermal Design Power (TDP / Power Envelope): ~600 W (configurable) for the PCIe/NVL variant
Decoders / Video Engines: 7 NVDEC units and 7 NVJPEG decoders
Confidential Computing / Secure Execution Support: Supported
vGPU / Virtualization & MIG Slicing: Supported; the GPU can be shared via NVIDIA vGPU software or partitioned into isolated instances with MIG
Display Outputs: None; as a data center GPU it has no video ports and is not intended for driving monitors
Dimensions: ~267 mm length, 111 mm width, dual-slot thickness (PCIe variant)
Memory Bus Width: 6,144‑bit
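The ~4.8 TB/s bandwidth figure is consistent with the 6,144‑bit bus. Assuming an HBM3e per-pin data rate of 6.25 Gbps (an assumed value, not stated in this list), peak bandwidth is bus width × pin rate ÷ 8:

```python
bus_width_bits = 6144   # memory bus width from the spec above
pin_rate_gbps = 6.25    # assumed HBM3e data rate per pin (Gbps)

# Peak memory bandwidth in GB/s: bits transferred per second, divided by 8.
peak_gbs = bus_width_bits * pin_rate_gbps / 8
print(peak_gbs)  # 4800.0 GB/s, i.e. ~4.8 TB/s
```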
Shading Units / Compute Units: 16,896 shader cores, 528 TMUs, 24 ROPs
PLEASE NOTE: The NVIDIA H200 GPU is subject to U.S. export regulations and cannot be shipped to restricted countries, including China, Russia, and others specified under U.S. Department of Commerce rules. Buyers are responsible for ensuring compliance with all applicable export laws and regulations.