Nvidia announced the H200, its new flagship AI GPU with 141 GB of HBM3e memory
November 15, 2023

The H100 will no longer be Nvidia's top AI GPU: the company has announced the H200, its first AI GPU to support HBM3e memory. The headline upgrade is the larger, faster memory needed to accelerate generative artificial intelligence, large language models and HPC applications: where the H100 carries 80 GB, the new GPU has 141 GB of HBM3e memory.

The Hopper-based H200 pairs that memory with 4.8 TB/s of bandwidth, roughly twice the capacity and 2.4 times the bandwidth of the previous flagship, Nvidia's Ampere-based A100. The new compute accelerator also gains an edge over its H100 predecessor through software improvements such as Nvidia TensorRT-LLM.
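For context, TensorRT-LLM is Nvidia's open-source library for optimizing LLM inference on its GPUs. The snippet below is a minimal sketch of running a model through the library's high-level Python API; the model checkpoint and parameter values are illustrative, and the exact API surface depends on the TensorRT-LLM version installed rather than on anything in the announcement.

```python
# Minimal sketch of LLM inference via TensorRT-LLM's high-level Python API.
# Assumes `pip install tensorrt-llm` on a supported Nvidia GPU; the model
# name below is illustrative, not taken from the announcement.
from tensorrt_llm import LLM, SamplingParams

# Building/loading the optimized TensorRT engine happens behind this call.
llm = LLM(model="meta-llama/Llama-2-7b-hf")

sampling = SamplingParams(max_tokens=64, temperature=0.8)

for output in llm.generate(["What does HBM3e memory improve?"], sampling):
    print(output.outputs[0].text)
```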

All of this should deliver a significant performance boost: Nvidia says the H200 nearly doubles the inference speed of large language models such as Llama 2, a 70-billion-parameter model, compared with the H100.

Along with the new AI GPU, Nvidia also introduced the HGX H200, a server platform built around the newly introduced Tensor Core GPUs. In its eight-GPU configuration, the platform should offer up to 32 petaflops of FP8 deep-learning compute as well as 1.1 TB of aggregate high-bandwidth memory.
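Those platform figures follow directly from the per-GPU numbers above; the back-of-the-envelope check below simply multiplies them out, using the eight-GPU count Nvidia quotes for the HGX configuration.

```python
# Sanity check of the eight-GPU HGX H200 figures against the per-GPU
# specs quoted earlier in the article.
GPUS = 8
HBM3E_PER_GPU_GB = 141     # per-GPU HBM3e capacity
PLATFORM_FP8_PFLOPS = 32   # quoted FP8 deep-learning compute

total_memory_tb = GPUS * HBM3E_PER_GPU_GB / 1000
fp8_per_gpu = PLATFORM_FP8_PFLOPS / GPUS

print(f"Aggregate HBM3e: {total_memory_tb:.2f} TB")      # ~1.13 TB, rounded to 1.1 TB
print(f"Implied FP8 per GPU: {fp8_per_gpu:.0f} PFLOPS")  # ~4 PFLOPS
```

That implied ~4 petaflops per GPU matches the H100's peak FP8 throughput with sparsity, which suggests the H200's gains come from the memory subsystem rather than new compute silicon.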

The H200 is designed to be deployed in any kind of data center, whether on-premises or in the cloud. Nvidia announced that Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first to offer H200-based instances in their cloud services, starting in 2024.

Server partners such as ASRock, Asus, Dell, Gigabyte, HPE, Lenovo and others are reportedly slated to launch their own H200-based systems over the next year as well. Incidentally, Nvidia buys its memory from South Korean partner SK Hynix, which recently announced shipments of a new LPDDR5T DRAM chip to mobile phone manufacturers.