Updated: 9 Aug 2024
Form Number: LP1732
PDF size: 27 pages, 707 KB
Abstract
The ThinkSystem NVIDIA H100 GPU delivers unprecedented performance, scalability, and security for every workload. The GPUs use breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading conversational AI, speeding up large language models by 30X over the previous generation.
This product guide provides essential presales information to understand the NVIDIA H100 GPU, its key features, specifications, and compatibility. This guide is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about the GPUs and consider their use in IT solutions.
Change History
Changes in the August 9, 2024 update:
- The following GPU is withdrawn from marketing:
- ThinkSystem NVIDIA H100 80GB PCIe Gen5 Passive GPU, 4X67A82257
Introduction
The ThinkSystem NVIDIA H100 GPU delivers unprecedented performance, scalability, and security for every workload. The GPUs use breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading conversational AI, speeding up large language models by 30X over the previous generation.
The NVIDIA H100 GPU features fourth-generation Tensor Cores and the Transformer Engine with FP8 precision, further extending NVIDIA’s market-leading AI leadership with up to 9X faster training and an incredible 30X inference speedup on large language models. For high-performance computing (HPC) applications, the GPUs triple the floating-point operations per second (FLOPS) of FP64 and add dynamic programming (DPX) instructions to deliver up to 7X higher performance.
The following figure shows the ThinkSystem NVIDIA H100 GPU in the double-width PCIe adapter form factor.
Figure 1. ThinkSystem NVIDIA H100 NVL 94GB PCIe Gen5 Passive GPU
Did you know?
The NVIDIA H100 family is available in both double-wide PCIe adapter form factor and in SXM form factor. The latter is used in Lenovo's Neptune direct-water-cooled ThinkSystem SD665-N V3 server for the ultimate in GPU performance and heat management.
The NVIDIA H100 NVL Tensor Core GPU is optimized for Large Language Model (LLM) Inferences, with its high compute density, high memory bandwidth, high energy efficiency, and unique NVLink architecture.
Part number information
The following table shows the part numbers for the ThinkSystem NVIDIA H100 GPU.
Not available in China, Hong Kong and Macau: The H100 GPUs are not available in China, Hong Kong and Macau. For these markets, the H800 is available. See the NVIDIA H800 product guide for details: https://lenovopress.lenovo.com/LP1814.
The PCIe option part numbers include the following:
- One GPU with full-height (3U) adapter bracket attached
- Documentation
The following figure shows the NVIDIA H100 SXM5 8-GPU Board with heatsinks installed in the ThinkSystem SR680a V3 and ThinkSystem SR685a V3 servers.
Figure 2. NVIDIA H100 SXM5 8-GPU Board in the ThinkSystem SR680a V3 and SR685a V3 servers
Features
The ThinkSystem NVIDIA H100 GPU delivers high performance, scalability, and security for every workload. The GPU uses breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading conversational AI, speeding up large language models (LLMs) by 30X over the previous generation.
The PCIe versions of the NVIDIA H100 GPUs include a five-year software subscription, with enterprise support, to the NVIDIA AI Enterprise software suite, simplifying AI adoption with the highest performance. This ensures organizations have access to the AI frameworks and tools they need to build accelerated AI workflows such as AI chatbots, recommendation engines, vision AI, and more.
The NVIDIA H100 GPU features fourth-generation Tensor Cores and the Transformer Engine with FP8 precision, further extending NVIDIA’s market-leading AI leadership with up to 9X faster training and an incredible 30X inference speedup on large language models. For high-performance computing (HPC) applications, the GPU triples the floating-point operations per second (FLOPS) of FP64 and adds dynamic programming (DPX) instructions to deliver up to 7X higher performance. With second-generation Multi-Instance GPU (MIG), built-in NVIDIA confidential computing, and NVIDIA NVLink Switch System, the NVIDIA H100 GPU securely accelerates all workloads for every data center from enterprise to exascale.
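In practice, the MIG partitioning described above is driven from the host with the nvidia-smi tool. The following is a hedged sketch of that workflow; profile IDs and instance sizes vary by GPU model and driver version, so the IDs shown here are illustrative only and the supported profiles should always be listed first.

```shell
# Sketch: partitioning an H100 with MIG via nvidia-smi.
# Requires root and a MIG-capable driver; profile IDs below are illustrative.

# 1. Enable MIG mode on GPU 0 (takes effect after a GPU reset).
sudo nvidia-smi -i 0 -mig 1

# 2. List the GPU instance profiles this particular GPU/driver supports.
sudo nvidia-smi mig -lgip

# 3. Create GPU instances (here: two instances of an example profile ID 9)
#    and matching compute instances in one step with -C.
sudo nvidia-smi mig -cgi 9,9 -C

# 4. Confirm the MIG devices are now enumerated.
nvidia-smi -L
```

Each MIG instance then appears to applications as an independent CUDA device, which is how the per-tenant isolation and quality of service described above is delivered.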
Key features of the NVIDIA H100 GPU:
- NVIDIA H100 Tensor Core GPU
Built with 80 billion transistors using a cutting-edge TSMC 4N process custom tailored for NVIDIA’s accelerated compute needs, the H100 is the world’s most advanced chip. It features major advances to accelerate AI, HPC, memory bandwidth, interconnect, and communication at data center scale.
- Transformer Engine
The Transformer Engine uses software and Hopper Tensor Core technology designed to accelerate training for models built from the world’s most important AI model building block, the transformer. Hopper Tensor Cores can apply mixed FP8 and FP16 precisions to dramatically accelerate AI calculations for transformers.
- NVLink Switch System
The NVLink Switch System enables the scaling of multi-GPU input/output (IO) across multiple servers. The system delivers up to 9X higher bandwidth than InfiniBand HDR on the NVIDIA Ampere architecture.
- NVIDIA Confidential Computing
NVIDIA Confidential Computing is a built-in security feature of Hopper that makes NVIDIA H100 the world’s first accelerator with confidential computing capabilities. Users can protect the confidentiality and integrity of their data and applications in use while accessing the unsurpassed acceleration of H100 GPUs.
- Second-Generation Multi-Instance GPU (MIG)
The Hopper architecture’s second-generation MIG supports multi-tenant, multi-user configurations in virtualized environments, securely partitioning the GPU into isolated, right-sized instances to maximize quality of service (QoS) for up to 7X more secure tenants.
- DPX Instructions
Hopper’s DPX instructions accelerate dynamic programming algorithms by 40X compared to CPUs and 7X compared to NVIDIA Ampere architecture GPUs. This leads to dramatically faster times in disease diagnosis, real-time routing optimizations, and graph analytics.
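To make the DPX claim concrete, the algorithms it targets share a common shape: an inner loop that combines a small add with a two- or three-way min/max, as in sequence alignment (Smith-Waterman) or edit distance. The following is a minimal pure-Python sketch of that recurrence structure for illustration only; the DPX instructions themselves are CUDA hardware intrinsics and are not shown here.

```python
# Minimal dynamic-programming example: Levenshtein edit distance.
# DPX-style instructions accelerate exactly this kind of inner loop
# (add fused with a multi-way min); this sketch only shows the recurrence.
def edit_distance(a: str, b: str) -> int:
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            # The add + three-way min is the operation DPX fuses in hardware:
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[len(b)]

print(edit_distance("kitten", "sitting"))  # prints 3
```

Disease diagnosis (genome alignment), routing optimization, and graph analytics all reduce to variants of this table-filling pattern, which is why hardware support for it yields the speedups quoted above.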
The following figure shows the NVIDIA H100 SXM5 4-GPU Board installed in the ThinkSystem SD665-N V3 server.
Figure 3. NVIDIA H100 SXM5 4-GPU Board in the ThinkSystem SD665-N V3 server
Technical specifications
The following table lists the GPU processing specifications and performance of the NVIDIA H100 GPU.
* With structural sparsity enabled
Server support
The following tables list the ThinkSystem servers that are compatible with the H100 GPU.
NVLink server support: The NVLink Ampere bridge is supported with additional NVIDIA A-series and H-series GPUs. As a result, there are additional servers listed as supporting the bridge that don't support the H100 GPU.
- Contains 8 separate GPUs connected via high-speed interconnects
- Contains 4 separate GPUs connected via high-speed interconnects
- Contains 4 separate GPUs connected via high-speed interconnects
Operating system support
The following table lists the supported operating systems.
Tip: These tables are automatically generated based on data from Lenovo ServerProven.
1 Ubuntu 22.04.3 LTS/Ubuntu 22.04.4 LTS
2 For limitations, refer to Support Tip TT1064
3 For limitations, refer to Support Tip TT1591
4 Hardware is not supported with EPYC 7002 processors.
NVIDIA GPU software
This section lists the NVIDIA software that is available from Lenovo.
- NVIDIA vGPU Software (vApps, vPC, RTX vWS)
- NVIDIA Omniverse Software (OVE)
- NVIDIA AI Enterprise Software
- NVIDIA HPC Compiler Software
The PCIe adapter H100 GPUs include a five-year software subscription, including enterprise support, to the NVIDIA AI Enterprise software suite:
- ThinkSystem NVIDIA H100 NVL 94GB PCIe Gen5 Passive GPU, 4X67A89325
- ThinkSystem NVIDIA H100 80GB PCIe Gen5 Passive GPU, 4X67A82257
This license is equivalent to part number 7S02001HWW listed in the NVIDIA AI Enterprise Software section below.
To activate the NVIDIA AI Enterprise license, see the following page:
https://www.nvidia.com/en-us/data-center/activate-license/
SXM GPUs: The NVIDIA AI Enterprise software suite is not included with the SXM H100 GPUs and will need to be ordered separately if needed.
NVIDIA vGPU Software (vApps, vPC, RTX vWS)
Lenovo offers the following virtualization software for NVIDIA GPUs:
- Virtual Applications (vApps)
For organizations deploying Citrix XenApp, VMware Horizon RDSH, or other RDSH solutions. NVIDIA Virtual Applications allows users to access any Windows application at full performance on any device, anywhere. This edition is suited for users who want to virtualize applications using XenApp or other RDSH solutions. Windows Server hosted RDSH desktops are also supported by vApps.
- Virtual PC (vPC)
This product is ideal for users who want a virtual desktop but need a great user experience with PC Windows® applications, browsers, and high-definition video. NVIDIA Virtual PC delivers a native experience to users in a virtual environment, allowing them to run all their PC applications at full performance.
- NVIDIA RTX Virtual Workstation (RTX vWS)
NVIDIA RTX vWS is the only virtual workstation that supports NVIDIA RTX technology, bringing advanced features like ray tracing, AI-denoising, and Deep Learning Super Sampling (DLSS) to a virtual environment. Supporting the latest generation of NVIDIA GPUs unlocks the best performance possible, so designers and engineers can create their best work faster. IT can virtualize any application from the data center with an experience that is indistinguishable from a physical workstation — enabling workstation performance from any device.
The following license types are offered:
- Perpetual license
A non-expiring, permanent software license that can be used on a perpetual basis without the need to renew. Each Lenovo part number includes a fixed number of years of Support, Upgrade and Maintenance (SUMS).
- Annual subscription
A software license that is active for a fixed period as defined by the terms of the subscription license, typically yearly. The subscription includes Support, Upgrade and Maintenance (SUMS) for the duration of the license term.
- Concurrent User (CCU)
A method of counting licenses based on active user VMs. If the VM is active and the NVIDIA vGPU software is running, then this counts as one CCU. A vGPU CCU is independent of the connection to the VM.
The following table lists the ordering part numbers and feature codes.
NVIDIA Omniverse Software (OVE)
NVIDIA Omniverse™ Enterprise is an end-to-end collaboration and simulation platform that fundamentally transforms complex design workflows, creating a more harmonious environment for creative teams.
NVIDIA and Lenovo offer a robust, scalable solution for deploying Omniverse Enterprise, accommodating a wide range of professional needs. This document details the critical components, deployment options, and support available, ensuring an efficient and effective Omniverse experience.
Deployment options cater to varying team sizes and workloads. Using Lenovo NVIDIA-Certified Systems™ and Lenovo OVX nodes which are meticulously designed to manage scale and complexity, ensures optimal performance for Omniverse tasks.
Deployment options include:
- Workstations: NVIDIA-Certified Workstations with RTX 6000 Ada GPUs for desktop environments.
- Data Center Solutions: Deployment with Lenovo OVX nodes or NVIDIA-Certified Servers equipped with L40, L40S or A40 GPUs for centralized, high-capacity needs.
NVIDIA Omniverse Enterprise includes the following components and features:
- Platform Components: Kit, Connect, Nucleus, Simulation, RTX Renderer.
- Foundation Applications: USD Composer, USD Presenter.
- Omniverse Extensions: Connect Sample & SDK.
- Integrated Development Environment (IDE)
- Nucleus Configuration: Workstation, Enterprise Nucleus Server (supports up to 8 editors per scene); Self-Service Public Cloud Hosting using Containers.
- Omniverse Farm: Supports batch workloads up to 8 GPUs.
- Enterprise Services: Authentication (SSO/SSL), Navigator Microservice, Large File Transfer, User Accounts SAML/Account Directory.
- User Interface: Workstation & IT Managed Launcher.
- Support: NVIDIA Enterprise Support.
- Deployment Scenarios: Desktop to Data Center: Workstation deployment for building and designing, with options for physical or virtual desktops. For batch tasks, rendering, and SDG workloads that require headless compute, Lenovo OVX nodes are recommended.
The following part numbers are for a subscription license, which is active for a fixed period as noted in the description. The license is for a named user, meaning it is assigned to a named authorized user who may not re-assign or share the license with any other person.
NVIDIA AI Enterprise Software
Lenovo offers the NVIDIA AI Enterprise (NVAIE) cloud-native enterprise software. NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software, optimized, certified, and supported by NVIDIA to run on VMware vSphere and bare-metal with NVIDIA-Certified Systems™. It includes key enabling technologies from NVIDIA for rapid deployment, management, and scaling of AI workloads in the modern hybrid cloud.
NVIDIA AI Enterprise is licensed on a per-GPU basis. NVIDIA AI Enterprise products can be purchased as either a perpetual license with support services, or as an annual or multi-year subscription.
- The perpetual license provides the right to use the NVIDIA AI Enterprise software indefinitely, with no expiration. NVIDIA AI Enterprise with perpetual licenses must be purchased in conjunction with one-year, three-year, or five-year support services. A one-year support service is also available for renewals.
- The subscription offerings are an affordable option that allows IT departments to better manage the flexibility of license volumes. NVIDIA AI Enterprise software products with subscription include support services for the duration of the software’s subscription license.
The features of NVIDIA AI Enterprise Software are listed in the following table.
Note: Maximum 10 concurrent VMs per product license
The following table lists the ordering part numbers and feature codes.
Find more information in the NVIDIA AI Enterprise Sizing Guide.
NVIDIA HPC Compiler Software
Auxiliary power cables
The power cables needed for the H100 SXM GPUs are included with the supported servers.
The H100 PCIe GPU option part number does not ship with auxiliary power cables. Cables are server-specific due to length requirements. For CTO orders, auxiliary power cables are derived by the configurator. For field upgrades, cables will need to be ordered separately as listed in the table below.
Auxiliary power cable needed with the SR650 V3, SR655 V3, SR665 V3, SR665, SR650 V2:
- 400mm 16-pin (2x6+4) cable. The option part numbers are for thermal kits and include other components needed to install the GPU. See the SR650 V3 product guide, SR655 V3 product guide, or SR665 V3 product guide for details.
Auxiliary power cable needed with the SR675 V3:
- 235mm 16-pin (2x6+4) cable. Option: 4X97A84510, ThinkSystem SR675 V3 Supplemental Power Cable for H100 GPU Option. Feature: BSD2. SBB: SBB7A65299. Base: SC17B39301. FRU: 03LE554
Auxiliary power cable needed with the SR850 V3, SR860 V3:
- 200mm 16-pin (2x6+4) cable. Option: 4X97A88016, ThinkSystem SR850 V3/SR860 V3 H100 GPU Power Cable Option Kit. Feature: BW28. SBB: SBB7A72759. Base: SC17B40604. FRU: 03LF915
Auxiliary power cable needed with the SR670 V2:
- 215mm 16-pin (2x6+4) cable. Option: 4X97A85027, ThinkSystem SR670 V2 H100/L40 GPU Option Power Cable. Feature: BRWL. SBB: SBB7A66339. Base: SC17B33046. FRU: 03KM845
Regulatory approvals
The NVIDIA H100 GPU has the following regulatory approvals:
- RCM
- BSMI
- CE
- FCC
- ICES
- KCC
- cUL, UL
- VCCI
Operating environment
The NVIDIA H100 GPU has the following operating characteristics:
- Ambient temperature
- Operational: 0°C to 50°C (-5°C to 55°C for short term*)
- Storage: -40°C to 75°C
- Relative humidity:
- Operational: 5-85% (5-93% short term*)
- Storage: 5-95%
* A period not more than 96 hours consecutive, not to exceed 15 days per year.
Warranty
One year limited warranty. When installed in a Lenovo server, the GPU assumes the server’s base warranty and any warranty upgrades.
Seller training courses
The following sales training courses are offered for employees and partners (login required). Courses are listed in date order.
-
Partner Technical Webinar - NVIDIA Portfolio
2024-11-06 | 60 minutes | Employees and Partners
In this 60-minute replay, Jason Knudsen of NVIDIA presented the NVIDIA Computing Platform. Jason talked about the full portfolio from GPUs to Networking to AI Enterprise and NIMs.
Published: 2024-11-06
Length: 60 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning -
NVIDIA Data Center GPU Portfolio
2024-09-26 | 11 minutes | Employees and Partners
This course equips Lenovo and partner technical sellers with the knowledge to effectively communicate the positioning of NVIDIA's data center GPU portfolio, enhancing your ability to showcase its key advantages to clients.
Published: 2024-09-26
Upon completion of this training, you will be familiar with the following:
• Data Center GPUs for AI and HPC
• Data Center GPUs for Graphics
• GPU comparisons
Length: 11 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning -
Q2 Solutions Launch TruScale GPU Next Generation Management in the AI Era Quick Hit
2024-09-10 | 6 minutes | Employees and Partners
This Quick Hit focuses on Lenovo announcing additional ways to help you build, scale, and evolve your customer’s private AI faster for improved ROI with TruScale GPU as a Service, AI-driven systems management, and infrastructure transformation services.
Published: 2024-09-10
Length: 6 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning -
VTT AI: The NetApp AIPod with Lenovo for NVIDIA OVX
2024-08-13 | 38 minutes | Employees Only
AI, for some organizations, is out of reach due to cost, integration complexity, and time to deployment. Previously, organizations relied on frequently retraining their LLMs with the latest data, a costly and time-consuming process. The NetApp AIPod with Lenovo for NVIDIA OVX combines NVIDIA-Certified OVX Lenovo ThinkSystem SR675 V3 servers with validated NetApp storage to create a converged infrastructure specifically designed for AI workloads. Using this solution, customers will be able to conduct AI RAG and inferencing operations for use cases like chatbots, knowledge management, and object recognition.
Published: 2024-08-13
Topics covered in this VTT session include:
• Where Lenovo fits in the solution
• NetApp AIPod with Lenovo for NVIDIA OVX Solution Overview
• Challenges/pain points that this solution solves for enterprises deploying AI
• Solution value/benefits of the combined NetApp, Lenovo, and NVIDIA OVX-Certified Solution
Length: 38 minutes
Start the training:
Employee link: Grow@Lenovo -
Introduction to Artificial Intelligence
2024-08-02 | 11 minutes | Employees and Partners
Published: 2024-08-02
IMPORTANT: If you receive the error message "There is an issue with this slide content. Please contact your administrator", change your VPN location setting and try again.
This NVIDIA course aims to answer questions such as:
• What is AI?
• Why are enterprises so interested in it?
• How does AI happen?
• Why are GPUs so important for it?
• What does a good AI solution look like?
Course Objectives:
By the end of this training, you should be able to:
1. Describe AI on a high level and list a few common enterprise use cases
2. List how enterprises benefit from AI
3. Distinguish between Training and Inference
4. Say how GPUs address known bottlenecks in a typical AI pipeline
5. Tell a customer why NVIDIA’s AI solutions are well-respected in the market
Length: 11 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning -
GPU Fundamentals
2024-08-02 | 10 minutes | Employees and Partners
Published: 2024-08-02
This NVIDIA course introduces you to two devices that a computer typically uses to process information – the CPU and the GPU. We’ll discuss their differences and look at how the GPU overcomes the limitations of the CPU. We will also talk about the value GPUs bring to modern-day enterprise computing.
Course Objectives:
By the end of this training, you should be able to:
1. Distinguish between serial and parallel processing
2. Explain what a GPU is and what it does at a high level
3. Articulate the value of GPU computing for enterprises
4. List three typical GPU-accelerated workloads and a few use cases
5. Recommend the appropriate NVIDIA GPU for its corresponding enterprise computing workloads
Length: 10 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning -
Key NVIDIA Use Cases for Industry Verticals
2024-08-02 | 32 minutes | Employees and Partners
Published: 2024-08-02
In this NVIDIA course, you will learn about key AI use cases driving innovation and change across Automotive, Financial Services, Energy, Healthcare, Higher Education, Manufacturing, Retail and Telco industries.
Course Objectives:
By the end of this training, you should be able to:
1. Discuss common AI use cases across a broad range of industry verticals
2. Explain how NVIDIA’s AI software stack speeds up time to production for AI projects in multiple industry verticals
Length: 32 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning -
Generative AI Overview
2024-08-02 | 17 minutes | Employees and Partners
Published: 2024-08-02
Since ChatGPT’s debut in November of 2022, it has become clear that Generative AI has the potential to revolutionize many aspects of our personal and professional lives. This NVIDIA course aims to answer questions such as:
• What are the Generative AI market trends?
• What is generative AI and how does it work?
Course Objectives:
By the end of this training, you should be able to:
1. Discuss the Generative AI market trends and the challenges in this space with your customers.
2. Explain what Generative AI is and how the technology works to help enterprises to unlock new opportunities for the business.
3. Present a high-level overview of the steps involved in building a Generative AI application.
Length: 17 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning -
Retrieval Augmented Generation
2024-08-02 | 15 minutes | Employees and Partners
Published: 2024-08-02
In this NVIDIA course, Dave Barry, Senior Solutions Architect, talks about a technique known as Retrieval Augmented Generation (RAG). It is a powerful tool for enhancing the accuracy and reliability of Generative AI models with facts fetched from external sources.
This course requires prior knowledge of Generative AI concepts, such as the difference between model training and inference. Please refer to relevant courses within this curriculum.
Course Objectives:
By the end of this training, you should be able to:
1. Explain the limitations of large language models to customers
2. Articulate the value of RAG to enterprises
3. Demo an NVIDIA RAG workflow with a video
4. Drive TCO conversations using an authentic use case
Length: 15 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning -
AI Industry Use Cases & Solutions
2024-08-02 | 25 minutes | Employees and Partners
Published: 2024-08-02
This NVIDIA course aims to answer the question:
• How does NVIDIA bring AI solutions to market with and through the partner ecosystem?
Course Objectives:
By the end of this training, you should be able to:
1. Think of solutions in terms of an industry and use case approach
2. Develop solutions that address the industry-specific challenges (with FSI as the illustrative model)
3. Engage customers in conversations and advance deals with stakeholders’ concerns in mind
4. Replicate NVIDIA’s best practices and ecosystem engagement strategies appropriately
Length: 25 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning -
Partner Technical Webinar - NVIDIA Smart Spaces
2024-07-24 | 60 minutes | Employees and Partners
In this 60-minute replay, Alex Pazos, NVIDIA BDM for Smart Spaces, reviewed the NVIDIA AI for Smart Spaces framework and use cases. Alex reviewed the Metropolis Framework and the Smart Spaces ecosystem, then covered several use cases including sports stadiums, warehouses, airports, and roadways.
Published: 2024-07-24
Length: 60 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning -
Guidance for Selling NVIDIA Products at Lenovo for ISG
2024-07-01 | 25 minutes | Employees and Partners
This course gives key talking points about the Lenovo and NVIDIA partnership in the data center. Details are included on where to find the products that are included in the partnership and what to do if NVIDIA products are needed that are not included in the partnership. Contact information is included if help is needed in choosing which product is best for your customer. At the end of this session, sellers should be able to explain the Lenovo and NVIDIA partnership, describe the products Lenovo can sell through the partnership with NVIDIA, help a customer purchase other NVIDIA products, and get assistance with choosing NVIDIA products to fit customer needs.
Published: 2024-07-01
Length: 25 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning -
Think AI Weekly: Lenovo AI PCs & AI Workstations
2024-05-23 | 60 minutes | Employees Only
Join Mike Leach, Sr. Manager, Workstations Solutions, and Pooja Sathe, Director Commercial AI PCs, as they discuss why Lenovo AI Developer Workstations and AI PCs are the most powerful, where they fit into the device-to-cloud ecosystem, and this week’s Microsoft announcement, Copilot+ PC.
Published: 2024-05-23
Length: 60 minutes
Start the training:
Employee link: Grow@Lenovo -
VTT Cloud Architecture: NVIDIA Using Cloud for GPUs and AI
2024-05-22 | 60 minutes | Employees Only
Join JD Dupont, NVIDIA Head of Americas Sales, Lenovo partnership, and Veer Mehta, NVIDIA Solution Architect, in an interactive discussion about cloud to edge, designing cloud solutions with NVIDIA GPUs, and minimizing private/hybrid cloud OPEX with GPUs. Discover how you can use what is done at big public cloud providers for your customers. We will also walk through use cases and see a demo you can use to help your customers.
Published: 2024-05-22
Length: 60 minutes
Start the training:
Employee link: Grow@Lenovo -
Partner Technical Webinar – NVIDIA
2023-12-11 | 60 minutes | Employees and Partners
In this 60-minute replay, Brad Davidson of NVIDIA helps us recognize AI trends and discusses industry verticals marketing.
Published: 2023-12-11
Length: 60 minutes
Start the training:
Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning
Related publications
For more information, refer to these documents:
- ThinkSystem and ThinkAgile GPU Summary:
  https://lenovopress.lenovo.com/lp0768-thinksystem-thinkagile-gpu-summary
- ServerProven compatibility:
  https://serverproven.lenovo.com/
- NVIDIA H100 product page:
  https://www.nvidia.com/en-us/data-center/h100/
- NVIDIA Hopper Architecture page:
  https://www.nvidia.com/en-us/data-center/technologies/hopper-architecture/
- ThinkSystem SD665-N V3 product guide:
  https://lenovopress.lenovo.com/lp1613-thinksystem-sd665-n-v3-server
- ThinkSystem SR680a V3 product guide:
  https://lenovopress.lenovo.com/lp1909-thinksystem-sr680a-v3-server
- ThinkSystem SR685a V3 product guide:
  https://lenovopress.lenovo.com/lp1910-thinksystem-sr685a-v3-server
Trademarks
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
Neptune®
ServerProven®
ThinkAgile®
ThinkSystem®
The following terms are trademarks of other companies:
AMD is a trademark of Advanced Micro Devices, Inc.
Intel® and Xeon® are trademarks of Intel Corporation or its subsidiaries.
Linux® is the trademark of Linus Torvalds in the U.S. and other countries.
Microsoft®, Windows Server®, and Windows® are trademarks of Microsoft Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Configure and Buy
Full Change History
Changes in the August 9, 2024 update:
- The following GPU is withdrawn from marketing:
- ThinkSystem NVIDIA H100 80GB PCIe Gen5 Passive GPU, 4X67A82257
Changes in the May 2, 2024 update:
- Corrections to the number of CUDA cores and Tensor cores - Technical specifications section
Changes in the April 23, 2024 update:
- Added the following 8-GPU offering:
- ThinkSystem NVIDIA HGX H100 80GB 700W 8-GPU Board, C1HL
Changes in the November 8, 2023 update:
- Clarified that the H100 GPUs in the PCIe form factor include a 5-year subscription to the NVIDIA AI Enterprise software suite - NVIDIA GPU software section
Changes in the September 12, 2023 update:
- The NVIDIA H800 GPUs are now covered in a separate product guide, https://lenovopress.lenovo.com/lp1814
Changes in the August 29, 2023 update:
- The following servers now support the H100 PCIe adapter - Server support section
- SR665, SR650 V2, SR670 V2
- Added the power cables for the new servers - Auxiliary power cables section
Changes in the August 1, 2023 update:
- New GPU for customers in China, Hong Kong and Macau:
- ThinkSystem NVIDIA H800 80GB PCIe Gen5 Passive GPU, 4X67A86451
First published: May 5, 2023