
ThinkSystem NVIDIA H200 141GB GPUs

Product Guide

Updated
16 Dec 2024
Form Number
LP1944
PDF size
19 pages, 223 KB

Abstract

The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities.

This product guide provides essential presales information to understand the NVIDIA H200 GPU and its key features, specifications, and compatibility. This guide is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about the GPU and consider its use in IT solutions.

Change History

Changes in the December 16, 2024 update:

  • Removed the vGPU and Omniverse software part numbers, as they are not supported with the H200 GPUs - NVIDIA GPU software section

Introduction

The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. H200 is the newest addition to NVIDIA’s leading AI and high-performance data center GPU portfolio, bringing massive compute to data centers.

The NVIDIA H200 141GB 700W GPU is offered in the ThinkSystem SR680a V3 server, with eight SXM5 form factor GPU modules and NVIDIA® NVLink® Fabric to create an 8-FC (fully connected) NVLink topology per baseboard. The NVIDIA H200 GPU is also offered as a 4-GPU board in the ThinkSystem SD665-N V3 with four SXM5 GPU modules fully connected using NVLink connections.

Leveraging the power of H200 multi-precision Tensor Cores, an eight-way HGX H200 provides over 32 petaFLOPS of FP8 deep learning compute and over 1.1TB of aggregate HBM memory for the highest performance in generative AI and HPC applications.
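As a rough cross-check, the aggregate figures above follow from the per-GPU values listed later in this guide (Table 2). A minimal sketch, assuming simple multiplication across eight GPUs with no interconnect overhead:

```python
# Sketch: derive the eight-way HGX H200 aggregate figures quoted above
# from the per-GPU specifications in Table 2 of this guide.
NUM_GPUS = 8
FP8_TFLOPS_SPARSE = 3958   # FP8 Tensor Core, with structural sparsity (teraFLOPS)
HBM_GB = 141               # HBM3e capacity per GPU (GB)

total_fp8_pflops = NUM_GPUS * FP8_TFLOPS_SPARSE / 1000   # teraFLOPS -> petaFLOPS
total_hbm_tb = NUM_GPUS * HBM_GB / 1000                  # GB -> TB (decimal)

print(f"FP8 compute: {total_fp8_pflops:.1f} PFLOPS")  # ~31.7, rounded to "32 petaFLOPS" in marketing figures
print(f"Aggregate HBM: {total_hbm_tb:.2f} TB")        # ~1.13 TB, quoted as "over 1.1TB"
```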

ThinkSystem NVIDIA HGX H200 141GB 700W 8-GPU Board
Figure 1. ThinkSystem NVIDIA HGX H200 141GB 700W 8-GPU Board in the ThinkSystem SR680a V3 server

Did you know?

To maximize compute performance, the H200 is the world’s first GPU with HBM3e memory, delivering 4.8TB/s of memory bandwidth, a 1.4X increase over H100. The H200 also nearly doubles GPU memory capacity to 141GB. The combination of faster and larger HBM memory accelerates performance of computationally intensive generative AI and HPC applications, while meeting the evolving demands of growing model sizes.

Part number information

The following table shows the part numbers for the 8-GPU and 4-GPU boards. The feature codes contain all H200 GPUs in the SXM form factor plus the NVLink high-speed interconnections between the GPUs.

The table also indicates which GPUs include a 5-year subscription to NVIDIA AI Enterprise Software (NVAIE).

Table 1. Ordering information
Part number Feature code Description Includes NVAIE* Controlled GPU status
HGX form factor GPUs
CTO only C1HM ThinkSystem NVIDIA HGX H200 141GB 700W 8-GPU Board No Controlled
CTO only C3V2 ThinkSystem NVIDIA HGX H200 141GB 700W 4-GPU Board No Controlled
Double-wide PCIe adapter form factor
CTO only C3V3 ThinkSystem NVIDIA H200 NVL 141GB PCIe GPU Gen5 Passive GPU Included* Controlled
CTO only C3V0 ThinkSystem NVIDIA 4-way bridge for H200 NVL - -

* ThinkSystem NVIDIA H200 NVL 141GB PCIe GPU Gen5 Passive GPU, C3V3 includes a 5-year subscription to NVIDIA AI Enterprise Software (NVAIE). See the NVIDIA AI Enterprise Software section.

The NVIDIA H200 GPU is Controlled, which means the GPU is not offered in certain markets, as determined by the US Government.

Features

The NVIDIA H200 Tensor Core GPU supercharges generative AI and HPC with game-changing performance and memory capabilities. As the first GPU with HBM3e, H200’s faster, larger memory fuels the acceleration of generative AI and LLMs while advancing scientific computing for HPC workloads.

NVIDIA HGX™ H200, the world’s leading AI computing platform, features the H200 GPU for the fastest performance. An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications.

Key AI and HPC workload features:

  • Unlock Insights With High-Performance LLM Inference

    In the ever-evolving landscape of AI, businesses rely on large language models to address a diverse range of inference needs. An AI inference accelerator must deliver the highest throughput at the lowest TCO when deployed at scale for a massive user base. H200 doubles inference performance compared to H100 when handling LLMs such as Llama2 70B.

  • Optimize Generative AI Fine-Tuning Performance

    Large language models can be customized to specific business case needs with fine-tuning, low-rank adaptation (LoRA), or retrieval-augmented generation (RAG) methods. These methods bridge the gap between general pretrained results and task-specific solutions, making them essential tools for industry and research applications.

    NVIDIA H200’s Transformer Engine and fourth-generation Tensor Cores speed up fine-tuning by 5.5X over A100 GPUs. This performance increase allows enterprises and AI practitioners to quickly optimize and deploy generative AI to benefit their business. Compared to fully training foundation models from scratch, fine-tuning offers better energy efficiency and the fastest access to customized solutions needed to grow business.

  • Industry-Leading Generative AI Training

    The era of generative AI has arrived, and it requires billion-parameter models to take on the paradigm shift in business operations and customer experiences.

    NVIDIA H200 GPUs feature the Transformer Engine with FP8 precision, which provides up to 5X faster training over A100 GPUs for large language models such as GPT-3 175B. The combination of fourth-generation NVLink, which offers 900GB/s of GPU-to-GPU interconnect, PCIe Gen5, and NVIDIA Magnum IO™ software, delivers efficient scalability from small enterprise to massive unified computing clusters of GPUs. These infrastructure advances, working in tandem with the NVIDIA AI Enterprise software suite, make the NVIDIA H200 the most powerful end-to-end generative AI and HPC data center platform.

  • Supercharged High-Performance Computing

    Memory bandwidth is crucial for high-performance computing applications, as it enables faster data transfer and reduces complex processing bottlenecks. For memory-intensive HPC applications like simulations, scientific research, and artificial intelligence, H200’s higher memory bandwidth ensures that data can be accessed and manipulated efficiently, leading to up to a 110X faster time to results.

    The NVIDIA data center platform consistently delivers performance gains beyond Moore’s Law. And H200’s breakthrough AI capabilities further amplify the power of HPC+AI to accelerate time to discovery for scientists and researchers working on solving the world’s most important challenges.

  • Reduced Energy and TCO

    In a world where energy conservation and sustainability are top of mind, the concerns of business leaders and enterprises have evolved. Enter accelerated computing, a leader in energy efficiency and TCO, particularly for workloads that thrive on acceleration, such as HPC and generative AI.

    With the introduction of H200, energy efficiency and TCO reach new levels. This cutting-edge technology offers unparalleled performance, all within the same power profile as H100. AI factories and at-scale supercomputing systems that are not only faster but also more eco-friendly deliver an economic edge that propels the AI and scientific community forward. For at-scale deployments, H200 systems provide 5X greater energy savings and 4X better total cost of ownership than the NVIDIA Ampere architecture generation.

Technical specifications

The following table lists the NVIDIA H200 GPU specifications.

Table 2. GPU specifications
Specification NVIDIA H200 SXM NVIDIA H200 NVL PCIe
Form Factor SXM5 DW PCIe
FP64 34 teraFLOPS 30 teraFLOPS
FP64 Tensor Core 67 teraFLOPS 60 teraFLOPS
FP32 67 teraFLOPS 60 teraFLOPS
TF32 Tensor Core 495 / 989 teraFLOPS* 418 / 835 teraFLOPS*
BFLOAT16 Tensor 990 / 1,979 teraFLOPS* 836 / 1,671 teraFLOPS*
FP16 Tensor Core 990 / 1,979 teraFLOPS* 836 / 1,671 teraFLOPS*
FP8 Tensor Core 1,979 / 3,958 teraFLOPS* 1,671 / 3,341 teraFLOPS*
INT8 Tensor Core 1,979 / 3,958 TOPS* 1,671 / 3,341 TOPS*
GPU Memory 141 GB HBM3e 141 GB HBM3e
GPU Memory Bandwidth 4.8 TB/s 4.8 TB/s
Total Graphics Power (TGP) or Continuous Electrical Design Point (EDPc) 700W 600W
Multi-Instance GPUs Up to 7 MIGs @ 18 GB Up to 7 MIGs @ 16.5 GB
Interconnect NVLink: 900 GB/s; PCIe Gen5: 128 GB/s NVLink: 900 GB/s; PCIe Gen5: 128 GB/s

* Without / with structural sparsity enabled
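The paired values marked with an asterisk reflect NVIDIA 2:4 structured sparsity, which doubles peak Tensor Core throughput by allowing two of every four consecutive weights to be pruned to zero. A minimal sketch of that relationship, using the dense H200 SXM values from Table 2 (the published figures differ from an exact 2X by about 1 teraFLOP because NVIDIA rounds each figure separately):

```python
# Sketch: relate the "without / with structural sparsity" column pairs in
# Table 2. With NVIDIA 2:4 structured sparsity, two of every four
# consecutive weights are zero, doubling peak Tensor Core throughput.
dense_tflops = {  # dense (without sparsity) H200 SXM values from Table 2
    "TF32": 495,
    "BF16": 990,
    "FP16": 990,
    "FP8": 1979,
}
# Peak throughput with sparsity is 2x the dense figure.
sparse_tflops = {precision: 2 * v for precision, v in dense_tflops.items()}
print(sparse_tflops["FP8"])  # 3958, matching the FP8 Tensor Core entry
```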

Server support

The following tables list the ThinkSystem servers that are compatible with these GPUs.

Table 3. Server support (Part 1 of 4)
Part Number Description AMD V3 2S Intel V3/V4 4S 8S Intel V3 Multi Node V3/V4 1S V3
SR635 V3 (7D9H / 7D9G)
SR655 V3 (7D9F / 7D9E)
SR645 V3 (7D9D / 7D9C)
SR665 V3 (7D9B / 7D9A)
ST650 V3 (7D7B / 7D7A)
SR630 V3 (7D72 / 7D73)
SR650 V3 (7D75 / 7D76)
SR630 V4 (7DG8 / 7DG9)
SR850 V3 (7D97 / 7D96)
SR860 V3 (7D94 / 7D93)
SR950 V3 (7DC5 / 7DC4)
SD535 V3 (7DD8 / 7DD1)
SD530 V3 (7DDA / 7DD3)
SD550 V3 (7DD9 / 7DD2)
ST45 V3 (7DH4 / 7DH5)
ST50 V3 (7DF4 / 7DF3)
ST250 V3 (7DCF / 7DCE)
SR250 V3 (7DCM / 7DCL)
C1HM ThinkSystem NVIDIA HGX H200 141GB 700W 8-GPU Board N N N N N N N N N N N N N N N N N N
C3V2 ThinkSystem NVIDIA HGX H200 141GB 700W 4-GPU Board N N N N N N N N N N N N N N N N N N
C3V3 ThinkSystem NVIDIA H200 NVL 141GB PCIe GPU Gen5 Passive GPU N N N N N N N N N N N N N N N N N N
Table 4. Server support (Part 2 of 4)
Part Number Description GPU Rich Edge Super Computing 1S Intel V2
SR670 V2 (7Z22 / 7Z23)
SR675 V3 (7D9Q / 7D9R)
SR680a V3 (7DHE)
SR685a V3 (7DHC)
SR780a V3 (7DJ5)
SE350 (7Z46 / 7D1X)
SE350 V2 (7DA9)
SE360 V2 (7DAM)
SE450 (7D8T)
SE455 V3 (7DBY)
SC750 V4 (7DDJ)
SC777 V4 (7DKA)
SD665 V3 (7D9P)
SD665-N V3 (7DAZ)
SD650 V3 (7D7M)
SD650-I V3 (7D7L)
SD650-N V3 (7D7N)
ST50 V2 (7D8K / 7D8J)
ST250 V2 (7D8G / 7D8F)
SR250 V2 (7D7R / 7D7Q)
C1HM ThinkSystem NVIDIA HGX H200 141GB 700W 8-GPU Board N N 11 11 N N N N N N N N N N N N N N N N
C3V2 ThinkSystem NVIDIA HGX H200 141GB 700W 4-GPU Board N 1 N N N N N N N N N N N 1 N N 1 N N N
C3V3 ThinkSystem NVIDIA H200 NVL 141GB PCIe GPU Gen5 Passive GPU N 8 N N N N N N N N N N N N N N N N N N
  1. Contains 8 separate GPUs connected via high-speed interconnects
Table 5. Server support (Part 3 of 4)
Part Number Description 2S Intel V2 AMD V1 Dense V2 4S V2 8S 4S V1
ST650 V2 (7Z75 / 7Z74)
SR630 V2 (7Z70 / 7Z71)
SR650 V2 (7Z72 / 7Z73)
SR635 (7Y98 / 7Y99)
SR655 (7Y00 / 7Z01)
SR655 Client OS
SR645 (7D2Y / 7D2X)
SR665 (7D2W / 7D2V)
SD630 V2 (7D1K)
SD650 V2 (7D1M)
SD650-N V2 (7D1N)
SN550 V2 (7Z69)
SR850 V2 (7D31 / 7D32)
SR860 V2 (7Z59 / 7Z60)
SR950 (7X11 / 7X12)
SR850 (7X18 / 7X19)
SR850P (7D2F / 2D2G)
SR860 (7X69 / 7X70)
C1HM ThinkSystem NVIDIA HGX H200 141GB 700W 8-GPU Board N N N N N N N N N N N N N N N N N N
C3V2 ThinkSystem NVIDIA HGX H200 141GB 700W 4-GPU Board N N N N N N N N N N N N N N N N N N
C3V3 ThinkSystem NVIDIA H200 NVL 141GB PCIe GPU Gen5 Passive GPU N N N N N N N N N N N N N N N N N N
Table 6. Server support (Part 4 of 4)
Part Number Description 1S Intel V1 2S Intel V1 Dense V1
ST50 (7Y48 / 7Y50)
ST250 (7Y45 / 7Y46)
SR150 (7Y54)
SR250 (7Y52 / 7Y51)
ST550 (7X09 / 7X10)
SR530 (7X07 / 7X08)
SR550 (7X03 / 7X04)
SR570 (7Y02 / 7Y03)
SR590 (7X98 / 7X99)
SR630 (7X01 / 7X02)
SR650 (7X05 / 7X06)
SR670 (7Y36 / 7Y37)
SD530 (7X21)
SD650 (7X58)
SN550 (7X16)
SN850 (7X15)
C1HM ThinkSystem NVIDIA HGX H200 141GB 700W 8-GPU Board N N N N N N N N N N N N N N N N
C3V2 ThinkSystem NVIDIA HGX H200 141GB 700W 4-GPU Board N N N N N N N N N N N N N N N N
C3V3 ThinkSystem NVIDIA H200 NVL 141GB PCIe GPU Gen5 Passive GPU N N N N N N N N N N N N N N N N

Operating system support

Operating system support is based on that of the supported servers. See the SR680a V3 server product guide for details: https://lenovopress.lenovo.com/lp1909-thinksystem-sr680a-v3-server

NVIDIA GPU software

This section lists the NVIDIA software that is available from Lenovo.

NVIDIA AI Enterprise Software

Lenovo offers the NVIDIA AI Enterprise (NVAIE) cloud-native enterprise software. NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software, optimized, certified, and supported by NVIDIA to run on VMware vSphere and bare metal with NVIDIA-Certified Systems™. It includes key enabling technologies from NVIDIA for rapid deployment, management, and scaling of AI workloads in the modern hybrid cloud.

NVIDIA AI Enterprise is licensed on a per-GPU basis. NVIDIA AI Enterprise products can be purchased as either a perpetual license with support services, or as an annual or multi-year subscription.

  • The perpetual license provides the right to use the NVIDIA AI Enterprise software indefinitely, with no expiration. NVIDIA AI Enterprise with perpetual licenses must be purchased in conjunction with one-year, three-year, or five-year support services. A one-year support service is also available for renewals.
  • The subscription offerings are an affordable option that allows IT departments to better manage the flexibility of license volumes. NVIDIA AI Enterprise software products with a subscription include support services for the duration of the software’s subscription license.

The features of NVIDIA AI Enterprise Software are listed in the following table.

Table 7. Features of NVIDIA AI Enterprise Software (NVAIE)
Features Supported in NVIDIA AI Enterprise
Per GPU Licensing Yes
Compute Virtualization Supported
Windows Guest OS Support No support
Linux Guest OS Support Supported
Maximum Displays 1
Maximum Resolution 4096 x 2160 (4K)
OpenGL and Vulkan In-situ Graphics only
CUDA and OpenCL Support Supported
ECC and Page Retirement Supported
MIG GPU Support Supported
Multi-vGPU Supported
NVIDIA GPUDirect Supported
Peer-to-Peer over NVLink Supported
GPU Pass Through Support Supported
Baremetal Support Supported
AI and Data Science applications and Frameworks Supported
Cloud Native ready Supported

Note: Maximum 10 concurrent VMs per product license

The following table lists the ordering part numbers and feature codes.

Table 8. NVIDIA AI Enterprise Software (NVAIE)
Part number Feature code NVIDIA part number Description
CTO part number: 7S02CTO1WW
AI Enterprise Perpetual License  
7S02001BWW S6YY 731-AI7004+P3CMI60 NVIDIA AI Enterprise Perpetual License and Support per GPU Socket, 5 Years
7S02001EWW S6Z1 731-AI7004+P3EDI60 NVIDIA AI Enterprise Perpetual License and Support per GPU Socket, EDU, 5 Years
AI Enterprise Subscription License  
7S02001FWW S6Z2 731-AI7003+P3CMI12 NVIDIA AI Enterprise Subscription License and Support per GPU Socket, 1 Year
7S02001GWW S6Z3 731-AI7003+P3CMI36 NVIDIA AI Enterprise Subscription License and Support per GPU Socket, 3 Years
7S02001HWW S6Z4 731-AI7003+P3CMI60 NVIDIA AI Enterprise Subscription License and Support per GPU Socket, 5 Years
7S02001JWW S6Z5 731-AI7003+P3EDI12 NVIDIA AI Enterprise Subscription License and Support per GPU Socket, EDU, 1 Year
7S02001KWW S6Z6 731-AI7003+P3EDI36 NVIDIA AI Enterprise Subscription License and Support per GPU Socket, EDU, 3 Years
7S02001LWW S6Z7 731-AI7003+P3EDI60 NVIDIA AI Enterprise Subscription License and Support per GPU Socket, EDU, 5 Years

Find more information in the NVIDIA AI Enterprise Sizing Guide.

NVIDIA HPC Compiler Software

Table 9. NVIDIA HPC Compiler
Part number Feature code Description
CTO part number: 7S09CTO6WW
HPC Compiler Support Services
7S090014WW S924 NVIDIA HPC Compiler Support Services, 1 Year
7S090015WW S925 NVIDIA HPC Compiler Support Services, 3 Years
7S09002GWW S9UQ NVIDIA HPC Compiler Support Services, 5 Years
7S090016WW S926 NVIDIA HPC Compiler Support Services, EDU, 1 Year
7S090017WW S927 NVIDIA HPC Compiler Support Services, EDU, 3 Years
7S09002HWW S9UR NVIDIA HPC Compiler Support Services, EDU, 5 Years
7S090018WW S928 NVIDIA HPC Compiler Support Services - Additional Contact, 1 Year
7S09002JWW S9US NVIDIA HPC Compiler Support Services - Additional Contact, 3 Years
7S09002KWW S9UT NVIDIA HPC Compiler Support Services - Additional Contact, 5 Years
7S090019WW S929 NVIDIA HPC Compiler Support Services - Additional Contact, EDU, 1 Year
7S09002LWW S9UU NVIDIA HPC Compiler Support Services - Additional Contact, EDU, 3 Years
7S09002MWW S9UV NVIDIA HPC Compiler Support Services - Additional Contact, EDU, 5 Years
HPC Compiler Premier Support Services
7S09001AWW S92A NVIDIA HPC Compiler Premier Support Services, 1 Year
7S09002NWW S9UW NVIDIA HPC Compiler Premier Support Services, 3 Years
7S09002PWW S9UX NVIDIA HPC Compiler Premier Support Services, 5 Years
7S09001BWW S92B NVIDIA HPC Compiler Premier Support Services, EDU, 1 Year
7S09002QWW S9UY NVIDIA HPC Compiler Premier Support Services, EDU, 3 Years
7S09002RWW S9UZ NVIDIA HPC Compiler Premier Support Services, EDU, 5 Years
7S09001CWW S92C NVIDIA HPC Compiler Premier Support Services - Additional Contact, 1 Year
7S09002SWW S9V0 NVIDIA HPC Compiler Premier Support Services - Additional Contact, 3 Years
7S09002TWW S9V1 NVIDIA HPC Compiler Premier Support Services - Additional Contact, 5 Years
7S09001DWW S92D NVIDIA HPC Compiler Premier Support Services - Additional Contact, EDU, 1 Year
7S09002UWW S9V2 NVIDIA HPC Compiler Premier Support Services - Additional Contact, EDU, 3 Years
7S09002VWW S9V3 NVIDIA HPC Compiler Premier Support Services - Additional Contact, EDU, 5 Years

Regulatory approvals

The NVIDIA H200 GPU has the following regulatory approvals:

  • RCM
  • BSMI
  • CE
  • FCC
  • ICES
  • KCC
  • cUL, UL
  • VCCI

Warranty

The NVIDIA H200 GPU assumes the server’s base warranty and any warranty upgrades.

Seller training courses

The following sales training courses are offered for employees and partners (login required). Courses are listed in date order.

  1. Partner Technical Webinar - NVIDIA Portfolio
    2024-11-06 | 60 minutes | Employees and Partners

    In this 60-minute replay, Jason Knudsen of NVIDIA presented the NVIDIA Computing Platform. Jason talked about the full portfolio, from GPUs to networking to AI Enterprise and NIMs.

    Published: 2024-11-06
    Length: 60 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: 110124
  2. NVIDIA Data Center GPU Portfolio
    2024-09-26 | 11 minutes | Employees and Partners

    This course equips Lenovo and partner technical sellers with the knowledge to effectively communicate the positioning of NVIDIA's data center GPU portfolio, enhancing your ability to showcase its key advantages to clients.

    Upon completion of this training, you will be familiar with the following:
    • Data Center GPUs for AI and HPC
    • Data Center GPUs for Graphics
    • GPU comparisons

    Published: 2024-09-26
    Length: 11 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: DAINVD201
  3. Q2 Solutions Launch TruScale GPU Next Generation Management in the AI Era Quick Hit
    2024-09-10 | 6 minutes | Employees and Partners

    This Quick Hit focuses on Lenovo announcing additional ways to help you build, scale, and evolve your customer’s private AI faster for improved ROI with TruScale GPU as a Service, AI-driven systems management, and infrastructure transformation services.

    Published: 2024-09-10
    Length: 6 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: SXXW2543a
  4. VTT AI: The NetApp AIPod with Lenovo for NVIDIA OVX
    2024-08-13 | 38 minutes | Employees and Partners

    AI, for some organizations, is out of reach, due to cost, integration complexity, and time to deployment. Previously, organizations relied on frequently retraining their LLMs with the latest data, a costly and time-consuming process. The NetApp AIPod with Lenovo for NVIDIA OVX combines NVIDIA-Certified OVX Lenovo ThinkSystem SR675 V3 servers with validated NetApp storage to create a converged infrastructure specifically designed for AI workloads. Using this solution, customers will be able to conduct AI RAG and inferencing operations for use cases like chatbots, knowledge management, and object recognition.

    Topics covered in this VTT session include:
    •  Where Lenovo fits in the solution
    •  NetApp AIPod with Lenovo for NVIDIA OVX Solution Overview
    •  Challenges/pain points that this solution solves for enterprises deploying AI
    •  Solution value/benefits of the combined NetApp, Lenovo, and NVIDIA OVX-Certified Solution

    Published: 2024-08-13
    Length: 38 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: DVAI206
  5. Introduction to Artificial Intelligence
    2024-08-02 | 11 minutes | Employees and Partners

    IMPORTANT: If you receive the following error message:
    "There is an issue with this slide content. Please contact your administrator”, please change your VPN location setting and try again. We are actively working on fixing this issue. Thank you for your understanding!


    This NVIDIA course aims to answer questions such as:

    • What is AI?
    • Why are enterprises so interested in it?
    • How does AI happen?
    • Why are GPUs so important for it?
    • What does a good AI solution look like?


    Course Objectives:

    By the end of this training, you should be able to:
    1. Describe AI on a high level and list a few common enterprise use cases
    2. List how enterprises benefit from AI
    3. Distinguish between Training and Inference
    4. Say how GPUs address known bottlenecks in a typical AI pipeline
    5. Tell a customer why NVIDIA’s AI solutions are well-respected in the market

    Published: 2024-08-02
    Length: 11 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: DAINVD104r2
  6. GPU Fundamentals
    2024-08-02 | 10 minutes | Employees and Partners

    This NVIDIA course introduces you to two devices that a computer typically uses to process information – the CPU and the GPU. We’ll discuss their differences and look at how the GPU overcomes the limitations of the CPU. We will also talk about the value GPUs bring to modern-day enterprise computing.

    Course Objectives:

    By the end of this training, you should be able to:
    1. Distinguish between serial and parallel processing
    2. Explain what a GPU is and what it does at a high level
    3. Articulate the value of GPU computing for enterprises
    4. List three typical GPU-accelerated workloads and a few use cases
    5. Recommend the appropriate NVIDIA GPU for its corresponding enterprise computing workloads

    Published: 2024-08-02
    Length: 10 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: DAINVD103r2
  7. Key NVIDIA Use Cases for Industry Verticals
    2024-08-02 | 32 minutes | Employees and Partners

    In this NVIDIA course, you will learn about key AI use cases driving innovation and change across Automotive, Financial Services, Energy, Healthcare, Higher Education, Manufacturing, Retail and Telco industries.


    Course Objectives:
    By the end of this training, you should be able to:
    1. Discuss common AI use cases across a broad range of industry verticals
    2. Explain how NVIDIA’s AI software stack speeds up time to production for AI projects in multiple industry verticals

    Published: 2024-08-02
    Length: 32 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: DAINVD108
  8. Generative AI Overview
    2024-08-02 | 17 minutes | Employees and Partners

    Since ChatGPT’s debut in November 2022, it has become clear that Generative AI has the potential to revolutionize many aspects of our personal and professional lives. This NVIDIA course aims to answer questions such as:

    • What are the Generative AI market trends?
    • What is generative AI and how does it work?


    Course Objectives:

    By the end of this training, you should be able to:
    1. Discuss the Generative AI market trends and the challenges in this space with your customers.
    2. Explain what Generative AI is and how the technology works to help enterprises to unlock new opportunities for the business.
    3. Present a high-level overview of the steps involved in building a Generative AI application.

    Published: 2024-08-02
    Length: 17 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: DAINVD106r2
  9. Retrieval Augmented Generation
    2024-08-02 | 15 minutes | Employees and Partners

    In this NVIDIA course, Dave Barry, Senior Solutions Architect, talks about a technique known as Retrieval Augmented Generation (RAG). It is a powerful tool for enhancing the accuracy and reliability of Generative AI models with facts fetched from external sources.

    This course requires prior knowledge of Generative AI concepts, such as the difference between model training and inference. Please refer to relevant courses within this curriculum.


    Course Objectives:

    By the end of this training, you should be able to:
    1. Explain the limitations of large language models to customers
    2. Articulate the value of RAG to enterprises
    3. Demo an NVIDIA RAG workflow with a video
    4. Drive TCO conversations using an authentic use case

    Published: 2024-08-02
    Length: 15 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: DAINVD107
  10. AI Industry Use Cases & Solutions
    2024-08-02 | 25 minutes | Employees and Partners

    This NVIDIA course aims to answer the question:

    • How does NVIDIA bring AI solutions to market with and through the partner ecosystem?


    Course Objectives:

    By the end of this training, you should be able to:
    1. Think of solutions in terms of an industry and use case approach
    2. Develop solutions that address the industry-specific challenges (with FSI as the illustrative model)
    3. Engage customers in conversation and advance deals with stakeholders’ concerns in mind
    4. Replicate NVIDIA’s best practices and ecosystem engagement strategies appropriately

    Published: 2024-08-02
    Length: 25 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: DAINVD105r2
  11. Partner Technical Webinar - NVIDIA Smart Spaces
    2024-07-24 | 60 minutes | Employees and Partners

    In this 60-minute replay, Alex Pazos, NVIDIA BDM for Smart Spaces, reviewed the NVIDIA AI for Smart Spaces framework and use cases. Alex covered the Metropolis framework and the Smart Spaces ecosystem, then walked through several use cases including sports stadiums, warehouses, airports, and roadways.

    Published: 2024-07-24
    Length: 60 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: 071924
  12. Guidance for Selling NVIDIA Products at Lenovo for ISG
    2024-07-01 | 25 minutes | Employees and Partners

    This course gives key talking points about the Lenovo and NVIDIA partnership in the Data Center. Details are included on where to find the products that are included in the partnership and what to do if NVIDIA products are needed that are not included in the partnership. Contact information is included if help is needed in choosing which product is best for your customer. At the end of this session sellers should be able to explain the Lenovo and NVIDIA partnership, describe the products Lenovo can sell through the partnership with NVIDIA, help a customer purchase other NVIDIA product, and get assistance with choosing NVIDIA products to fit customer needs.

    Published: 2024-07-01
    Length: 25 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: DNVIS102
  13. Think AI Weekly: Lenovo AI PCs & AI Workstations
    2024-05-23 | 60 minutes | Employees Only

    Join Mike Leach, Sr. Manager, Workstations Solutions, and Pooja Sathe, Director Commercial AI PCs, as they discuss why Lenovo AI Developer Workstations and AI PCs are the most powerful, where they fit into the device-to-cloud ecosystem, and this week’s Microsoft announcement, Copilot+ PC.

    Published: 2024-05-23
    Length: 60 minutes

    Start the training:
    Employee link: Grow@Lenovo

    Course code: DTAIW105
  14. VTT Cloud Architecture: NVIDIA Using Cloud for GPUs and AI
    2024-05-22 | 60 minutes | Employees Only

    Join JD Dupont, NVIDIA Head of Americas Sales, Lenovo partnership, and Veer Mehta, NVIDIA Solution Architect, for an interactive discussion about cloud to edge, designing cloud solutions with NVIDIA GPUs, and minimizing private/hybrid cloud OPEX with GPUs. Discover how you can use what is done at big public cloud providers for your customers. We will also walk through use cases and see a demo you can use to help your customers.

    Published: 2024-05-22
    Length: 60 minutes

    Start the training:
    Employee link: Grow@Lenovo

    Course code: DVCLD212
  15. Partner Technical Webinar - NVIDIA
    2023-12-11 | 60 minutes | Employees and Partners

    In this 60-minute replay, Brad Davidson of NVIDIA helps us recognize AI trends and discusses industry verticals marketing.

    Published: 2023-12-11
    Length: 60 minutes

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: 120823

Related product families

Product families related to this document are the following:

Trademarks

Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.

The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
ServerProven®
ThinkAgile®
ThinkSystem®

The following terms are trademarks of other companies:

AMD is a trademark of Advanced Micro Devices, Inc.

Intel® is a trademark of Intel Corporation or its subsidiaries.

Linux® is the trademark of Linus Torvalds in the U.S. and other countries.

Windows® is a trademark of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.