Made for ChatGPT, NVIDIA Introduces H100 NVL Dual-GPU AI Accelerator

NVIDIA, a leading manufacturer of graphics processing units (GPUs), has announced a new variant of its Hopper GPU designed specifically for large language models (LLMs) like ChatGPT. The new card, known as the H100 NVL, uses the best-binned silicon in NVIDIA’s Hopper range and was designed with one purpose in mind: to accelerate AI language models like ChatGPT.

In technical terms, NVL stands for NVLink, the interconnect that bridges the two GPUs on the card. The variant has several advantages over existing H100 boards, starting with memory capacity: each GPU carries six stacks of HBM3 memory, giving the dual-GPU card a total of 188 GB of high-speed buffer memory. That figure is unusual because only 94 GB is enabled per GPU, rather than the 96 GB that six full stacks would provide.
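As a quick sanity check on those capacity numbers, here is a minimal sketch in Python. The 16 GB-per-stack figure is an assumption, inferred from six stacks nominally totaling 96 GB; the 94 GB enabled per GPU is the figure NVIDIA quotes.

```python
# Back-of-the-envelope check of the H100 NVL's memory capacity.
STACKS_PER_GPU = 6
GB_PER_STACK = 16        # assumed nominal HBM3 stack capacity (6 x 16 = 96 GB)
ENABLED_GB_PER_GPU = 94  # capacity NVIDIA actually exposes per GPU
GPUS_PER_CARD = 2        # the NVL is a dual-GPU card

nominal_per_gpu = STACKS_PER_GPU * GB_PER_STACK      # 96 GB nominal
total_enabled = ENABLED_GB_PER_GPU * GPUS_PER_CARD   # 188 GB across the card

print(f"Nominal per GPU: {nominal_per_gpu} GB, enabled: {ENABLED_GB_PER_GPU} GB")
print(f"Total across the card: {total_enabled} GB")
```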

H100 NVL

The H100 NVL boasts a full 6144-bit memory interface per GPU (1024 bits per HBM3 stack) and a memory speed of up to 5.1 Gbps per pin. That works out to roughly 3.9 TB/s of bandwidth per GPU, or about 7.8 TB/s for the card as a whole, more than twice that of the H100 SXM. Large language models need both large buffers and high bandwidth, so the H100 NVL’s improvements target exactly what these workloads demand.
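The bandwidth figure follows directly from the interface width and per-pin data rate; a minimal sketch of that arithmetic:

```python
# Peak memory bandwidth = interface width (bits) * per-pin data rate (Gbps) / 8.
BITS_PER_STACK = 1024
STACKS_PER_GPU = 6
PIN_SPEED_GBPS = 5.1   # per-pin HBM3 data rate
GPUS_PER_CARD = 2

interface_bits = BITS_PER_STACK * STACKS_PER_GPU      # 6144-bit interface
per_gpu_gbs = interface_bits * PIN_SPEED_GBPS / 8     # ~3916.8 GB/s per GPU
per_card_tbs = per_gpu_gbs * GPUS_PER_CARD / 1000     # ~7.8 TB/s for the pair

print(f"Interface width: {interface_bits} bits")
print(f"Per GPU: {per_gpu_gbs / 1000:.2f} TB/s; per card: {per_card_tbs:.2f} TB/s")
```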

NVIDIA plans to launch the H100 NVL in the second half of this year but has not yet provided further details. The release of this new GPU will undoubtedly have an impact on the development of AI language models and their applications: the increased capacity and higher bandwidth will help accelerate the training and fine-tuning of these models, improving their accuracy and effectiveness.

As the demand for AI applications and language models continues to grow, it is encouraging to see companies like NVIDIA invest in developing specialized hardware to support these technologies. With the release of the H100 NVL GPU, we can expect even more progress in this field in the coming years.

