NVIDIA H100 Tensor Core GPU

  • Launched: March 21st, 2023
  • Graphics Processor: GH100
  • CUDA Cores: 14,592
  • TMUs: 456
  • ROPs: 24
  • Memory Bus Width: 5120-bit

  • Price: USD $40,600.00 (RRP)


NVIDIA H100 Tensor Core GPU

Unprecedented performance, scalability, and security for every data center

An Order-of-Magnitude Leap for Accelerated Computing

Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA® H100 Tensor Core GPU. With the NVIDIA NVLink® Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads. The GPU also includes a dedicated Transformer Engine to solve trillion-parameter language models. The H100’s combined technology innovations can speed up large language models (LLMs) by an incredible 30X over the previous generation to deliver industry-leading conversational AI.

Start configuring your GP-GPU Server now!

S6U | D54N-3U


AHG1 | ESC8000A-E12P

Supercharge Large Language Model Inference

For LLMs up to 175 billion parameters, the PCIe-based H100 NVL with NVLink bridge uses the Transformer Engine, NVLink, and 188GB of HBM3 memory to provide optimal performance and easy scaling across any data center, bringing LLMs to the mainstream. Servers equipped with H100 NVL GPUs increase GPT-175B model performance up to 12X over NVIDIA DGX™ A100 systems while maintaining low latency in power-constrained data center environments.

Securely Accelerate Workloads From Enterprise to Exascale

Transformational AI Training

H100 features fourth-generation Tensor Cores and a Transformer Engine with FP8 precision that provides up to 4X faster training over the prior generation for GPT-3 (175B) models. The combination of fourth-generation NVLink, which offers 900 gigabytes per second (GB/s) of GPU-to-GPU interconnect; NDR Quantum-2 InfiniBand networking, which accelerates communication by every GPU across nodes; PCIe Gen5; and NVIDIA Magnum IO™ software delivers efficient scalability from small enterprise systems to massive, unified GPU clusters.
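
As a rough illustration of how FP8 training is typically enabled on H100, here is a minimal sketch using NVIDIA's open-source Transformer Engine library for PyTorch. The layer sizes and recipe settings are illustrative placeholders, not a tuned configuration.

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Illustrative FP8 recipe: the HYBRID format uses E4M3 for the forward
# pass and E5M2 for gradients in the backward pass.
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

# te.Linear is a drop-in replacement for torch.nn.Linear; sizes are arbitrary.
model = te.Linear(4096, 4096, bias=True).cuda()
inp = torch.randn(512, 4096, device="cuda")

# Matmuls inside this context run on the H100's FP8 Tensor Cores.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

out.sum().backward()  # gradients flow back as usual
```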

Deploying H100 GPUs at data center scale delivers outstanding performance and brings the next generation of exascale high-performance computing (HPC) and trillion-parameter AI within the reach of all researchers.

Real-Time Deep Learning Inference

AI solves a wide array of business challenges using an equally wide array of neural networks. A great AI inference accelerator has to deliver not only the highest performance but also the versatility to accelerate all of these networks.

H100 extends NVIDIA's market-leading position in inference with several advancements that accelerate inference by up to 30X and deliver the lowest latency. Fourth-generation Tensor Cores speed up all precisions, including FP64, TF32, FP32, FP16, INT8, and now FP8, to reduce memory usage and increase performance while still maintaining accuracy for LLMs.
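
For a flavor of how reduced-precision inference is driven from framework code, the sketch below runs a toy PyTorch model in FP16 via autocast; the model itself is a stand-in, and FP8 or INT8 inference would normally go through TensorRT or Transformer Engine rather than this path.

```python
import torch
import torch.nn as nn

# Toy stand-in model; any trained network is handled the same way.
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)
).cuda().eval()
x = torch.randn(64, 1024, device="cuda")

# Run inference in FP16 on the Tensor Cores without converting weights by hand.
with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    y = model(x)

print(y.dtype)  # torch.float16
```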

Exascale High-Performance Computing

The NVIDIA data center platform consistently delivers performance gains beyond Moore’s law. And H100’s new breakthrough AI capabilities further amplify the power of HPC+AI to accelerate time to discovery for scientists and researchers working on solving the world’s most important challenges.

H100 triples the floating-point operations per second (FLOPS) of double-precision Tensor Cores, delivering 60 teraflops of FP64 computing for HPC. AI-fused HPC applications can also leverage H100’s TF32 precision to achieve one petaflop of throughput for single-precision matrix-multiply operations, with zero code changes. 
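
The "zero code changes" claim refers to TF32 executing ordinary FP32 matrix math on Tensor Cores. In PyTorch, for instance, this amounts to flipping global switches rather than touching the model; a minimal sketch (note that the defaults vary between PyTorch releases):

```python
import torch

# Allow FP32 matmuls and convolutions to run in TF32 on Tensor Cores.
# Inputs and outputs remain FP32 tensors; only the internal math changes.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

a = torch.randn(8192, 8192, device="cuda")
b = torch.randn(8192, 8192, device="cuda")
c = a @ b  # dispatched to TF32 Tensor Core kernels
```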

H100 also features new DPX instructions that deliver 7X higher performance over A100 and 40X speedups over CPUs on dynamic programming algorithms such as Smith-Waterman for DNA sequence alignment and protein alignment for protein structure prediction.
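
For context, Smith-Waterman is a dynamic-programming recurrence over a scoring matrix. The plain CPU reference below (with illustrative scoring constants) shows the max-plus inner loop that DPX instructions accelerate in hardware.

```python
import numpy as np

def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    """CPU reference for local sequence alignment scoring.

    DPX instructions target exactly this kind of max-plus recurrence.
    """
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=np.int32)
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            # Best of: start fresh, extend diagonal, or open/extend a gap.
            H[i, j] = max(0, H[i - 1, j - 1] + s,
                          H[i - 1, j] + gap, H[i, j - 1] + gap)
            best = max(best, H[i, j])
    return best

print(smith_waterman("GATTACA", "GCATGCU"))
```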

Accelerated Data Analytics

Data analytics often consumes the majority of time in AI application development. Since large datasets are scattered across multiple servers, scale-out solutions with commodity CPU-only servers get bogged down by a lack of scalable computing performance.

Accelerated servers with H100 deliver the compute power—along with 3 terabytes per second (TB/s) of memory bandwidth per GPU and scalability with NVLink and NVSwitch™—to tackle data analytics with high performance and scale to support massive datasets. Combined with NVIDIA Quantum-2 InfiniBand, Magnum IO software, GPU-accelerated Spark 3.0, and NVIDIA RAPIDS™, the NVIDIA data center platform is uniquely able to accelerate these huge workloads with unparalleled levels of performance and efficiency.
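
As a flavor of what GPU-accelerated analytics looks like in practice, here is a minimal RAPIDS cuDF sketch; the file and column names are hypothetical, and cuDF deliberately mirrors the pandas API.

```python
import cudf  # RAPIDS; requires an NVIDIA GPU and the RAPIDS stack

# Hypothetical dataset and columns, for illustration only.
df = cudf.read_parquet("transactions.parquet")

# Pandas-style groupby/aggregation, executed on the GPU.
summary = df.groupby("customer_id").agg(
    {"amount": ["sum", "mean"], "order_id": "count"}
)
print(summary.head())
```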

Enterprise-Ready Utilization

IT managers seek to maximize utilization (both peak and average) of compute resources in the data center. They often employ dynamic reconfiguration of compute to right-size resources for the workloads in use.

Second-generation Multi-Instance GPU (MIG) technology in H100 maximizes the utilization of each GPU by securely partitioning it into as many as seven separate instances. With confidential computing support, H100 allows secure, end-to-end, multi-tenant usage, making it ideal for cloud service provider (CSP) environments.

H100 with MIG lets infrastructure managers standardize their GPU-accelerated infrastructure while having the flexibility to provision GPU resources with greater granularity to securely provide developers the right amount of accelerated compute and optimize usage of all their GPU resources.
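
Provisioning follows the standard nvidia-smi MIG workflow; the sketch below simply wraps those commands in Python. It assumes administrative rights on GPU 0, and 1g.10gb is assumed as the smallest slice on the 80GB H100 (confirm with the profile listing, since profile names and IDs differ by product).

```python
import subprocess

def run(cmd):
    """Echo and execute an nvidia-smi invocation."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Enable MIG mode on GPU 0 (may require a GPU reset to take effect).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# List the GPU instance profiles this GPU supports.
run(["nvidia-smi", "mig", "-lgip"])

# Carve the GPU into seven 1g.10gb GPU instances and create the
# matching compute instances in one step (-C).
run(["nvidia-smi", "mig", "-cgi", ",".join(["1g.10gb"] * 7), "-C"])
```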

Built-In Confidential Computing

Traditional confidential computing solutions are CPU-based, which is too limited for compute-intensive workloads like AI and HPC. NVIDIA Confidential Computing is a built-in security feature of the NVIDIA Hopper™ architecture that makes H100 the world’s first accelerator with confidential computing capabilities. Users can protect the confidentiality and integrity of their data and applications in use while accessing the unsurpassed acceleration of H100 GPUs. It creates a hardware-based trusted execution environment (TEE) that secures and isolates the entire workload running on a single H100 GPU, multiple H100 GPUs within a node, or individual MIG instances. GPU-accelerated applications can run unchanged within the TEE and don’t have to be partitioned. Users can combine the power of NVIDIA software for AI and HPC with the security of a hardware root of trust offered by NVIDIA Confidential Computing.

Unparalleled Performance for Large-Scale AI and HPC

The Hopper Tensor Core GPU will power the NVIDIA Grace Hopper CPU+GPU architecture, purpose-built for terabyte-scale accelerated computing and providing 10X higher performance on large-model AI and HPC. The NVIDIA Grace CPU leverages the flexibility of the Arm® architecture to create a CPU and server architecture designed from the ground up for accelerated computing. The Hopper GPU is paired with the Grace CPU using NVIDIA's ultra-fast chip-to-chip interconnect, delivering 900GB/s of bandwidth, 7X faster than PCIe Gen5. This innovative design will deliver up to 30X higher aggregate system memory bandwidth to the GPU compared to today's fastest servers and up to 10X higher performance for applications running on terabytes of data.

Start configuring your GP-GPU Server now!

Form Factor | H100 SXM | H100 PCIe | H100 NVL¹
--- | --- | --- | ---
FP64 | 34 teraFLOPS | 26 teraFLOPS | 68 teraFLOPS
FP64 Tensor Core | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS
FP32 | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS
TF32 Tensor Core | 989 teraFLOPS² | 756 teraFLOPS² | 1,979 teraFLOPS²
BFLOAT16 Tensor Core | 1,979 teraFLOPS² | 1,513 teraFLOPS² | 3,958 teraFLOPS²
FP16 Tensor Core | 1,979 teraFLOPS² | 1,513 teraFLOPS² | 3,958 teraFLOPS²
FP8 Tensor Core | 3,958 teraFLOPS² | 3,026 teraFLOPS² | 7,916 teraFLOPS²
INT8 Tensor Core | 3,958 TOPS² | 3,026 TOPS² | 7,916 TOPS²
GPU memory | 80GB | 80GB | 188GB
GPU memory bandwidth | 3.35TB/s | 2TB/s | 7.8TB/s³
Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG | 14 NVDEC, 14 JPEG
Max thermal design power (TDP) | Up to 700W (configurable) | 300-350W (configurable) | 2x 350-400W (configurable)
Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each | Up to 14 MIGs @ 12GB each
Form factor | SXM | PCIe, dual-slot air-cooled | 2x PCIe, dual-slot air-cooled
Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s
Server options | NVIDIA HGX H100 Partner and NVIDIA-Certified Systems with 4 or 8 GPUs; NVIDIA DGX H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1-8 GPUs | Partner and NVIDIA-Certified Systems with 2-4 pairs
NVIDIA AI Enterprise | Add-on | Included | Add-on

¹ Preliminary specifications; subject to change.
² With sparsity.
³ Aggregate HBM bandwidth across both GPUs in the NVL pair.

Tags: NVIDIA, H100, 80GB, GP-GPU, Hopper architecture, AI, Omniverse Enterprise, Rendering, 3D Graphics, Data Science