Hopper GPU - H100 Overview 
The Hopper GPU, introduced as the Nvidia H100 Tensor Core GPU, is fabricated on the 4N process of the Taiwanese semiconductor company TSMC, customized for Nvidia, and contains 80 billion transistors. The H100 architecture includes several noteworthy advances.
The custom Nvidia H100 SXM5 module houses the H100 GPU and HBM3 memory chips and connects to other systems via fourth-generation NVLink and PCIe Gen 5 ports (Figure 3). Like the A100, these are data center modules and therefore do not include display connectors, NVIDIA RT Cores for ray tracing acceleration, or an NVENC encoder.
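To make the inter-GPU connectivity concrete, the short host-side sketch below (a minimal example, not taken from the cited material; it assumes a multi-GPU node with the standard CUDA runtime available) reports whether each pair of visible GPUs can access one another's memory directly, which is the peer-to-peer capability carried over the NVLink/PCIe fabric.

\begin{verbatim}
// Minimal sketch (assumes a multi-GPU node with the CUDA runtime installed):
// report whether each pair of visible GPUs can directly access each other's
// memory, i.e. the peer-to-peer capability carried over NVLink or PCIe.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count < 2) {
        std::printf("Fewer than two GPUs visible; nothing to check.\n");
        return 0;
    }
    for (int a = 0; a < count; ++a) {
        for (int b = 0; b < count; ++b) {
            if (a == b) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, a, b);
            std::printf("GPU %d -> GPU %d peer access: %s\n",
                        a, b, canAccess ? "yes" : "no");
        }
    }
    return 0;
}
\end{verbatim}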
A full GH100 GPU contains up to 144 Streaming Multiprocessors (SMs), each with many performance and efficiency improvements over earlier generations.
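Since shipping H100 boards enable fewer than the full 144 SMs, a quick way to see what a given part actually exposes is to query the device at runtime. The sketch below is a minimal example assuming only the standard CUDA runtime API; it prints the SM count, compute capability (9.0 on Hopper), and global memory of device 0.

\begin{verbatim}
// Minimal sketch (standard CUDA runtime API only): query the SM count,
// compute capability, and global memory of device 0.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }
    std::printf("Device 0: %s\n", prop.name);
    std::printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
    std::printf("  Streaming Multiprocessors: %d\n", prop.multiProcessorCount);
    std::printf("  Global memory: %.1f GiB\n",
                prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
\end{verbatim}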
Key new features useful for scientific computing are described below \cite{architecture}.
An overview comparing the V100, A100, and H100 architectures is shown in the following table.
Hopper Features useful for Scientific Computing