
Unleashing AI Excellence: Lenovo ThinkSystem Dominates MLPerf v5.1 with Exceptional Inference Performance

Article

Published: 11 Nov 2025
Form Number: LP2330
PDF size: 6 pages, 121 KB

Abstract

MLCommons® announced new results for its industry-standard MLPerf® Inference v5.1 benchmark suite, which provides insights into machine learning (ML) system performance. This article explores Lenovo’s contributions to MLPerf Inference v5.1. The combination of recent hardware and software advances optimized for generative AI has led to dramatic performance improvements over the past year.

Introduction

Artificial intelligence is transforming how enterprises innovate, compete, and grow—but scaling AI requires more than deploying a model. It demands enterprise-ready, sustainable, and outcome-driven infrastructure. Lenovo continues to lead the way, delivering Smarter AI for All through Lenovo Hybrid AI Advantage™, the industry’s most comprehensive hybrid AI platform.

The latest MLPerf® Inference v5.1 results from MLCommons® underscore Lenovo’s leadership in AI infrastructure. Lenovo ThinkSystem servers—powered by Intel® Xeon® 6 processors and NVIDIA GPUs—achieved exceptional performance across multiple inference scenarios, including generative AI, recommendation engines, and real-time applications. These systems delivered low latency, high throughput, and scalability, earning Top 3 rankings across their submissions and first place in fifteen categories.

Whether you’re a retailer optimizing ad ranking, a healthcare provider accelerating diagnostics, or a tech innovator deploying complex language models, Lenovo ThinkSystem servers deliver faster time to value and real-world business outcomes. With Lenovo and NVIDIA, organizations can build, train, and scale AI confidently—unlocking innovation and driving the future of smarter AI.

Lenovo’s achievements

Lenovo was one of twenty-seven participating organizations, alongside AMD, Azure, Broadcom, Cisco, Dell, Google, HPE, and Intel, and its results are impressive:

Across numerous MLPerf Inference tests, these results highlight the capabilities of Lenovo ThinkSystem servers in various AI inference scenarios. Whether you're developing cutting-edge AI models or processing large datasets, Lenovo's server configurations provide the scalability and performance you need to drive innovation forward.

The following table provides a breakdown of our results for Datacenter and Edge.

Table 1. Lenovo’s achievements
System                          Total Categories Submitted   First Place Finishes   Second Place Finishes   Third Place Finishes
ThinkEdge SE100                 12                           12                     0                       0
ThinkSystem SR680a V3           5                            2                      0                       1
ThinkSystem SR780a V3 (Intel)   9                            1                      2                       2

These benchmarks reflect Lenovo’s consistent improvement in delivering best-in-class results for our customers. The key results below show how our systems performed in each category; each entry lists the system’s placement out of the total number of submissions in that category, and a minimal sketch of how these benchmark scenarios are configured follows the list:

  • ThinkSystem SR680a V3 (8x B200-SXM-180GB, TensorRT)
    • llama2-70b-99 server (7/10)
    • llama2-70b-99 offline (1/10)
    • llama2-70b-99.9 server (3/11)
    • llama2-70b-99.9 offline (1/11)
    • whisper (7/14)
  • ThinkSystem SR780a V3 (8x B200-SXM-180GB, TensorRT)
    • llama3.1-8b server (4/9)
    • llama3.1-8b offline (1/9)
    • stable-diffusion-xl server (4/8)
    • stable-diffusion-xl offline (6/9)
    • llama3.1-405b server (2/13)
    • llama3.1-405b offline (9/13)
    • llama3.1-405b interactive (2/10)
    • rgat (4/5)
    • whisper (5/14)
  • ThinkEdge SE100 (1x RTX-2000E-Ada-16GB)
    • resnet50 singlestream (1/1 - no competitors)
    • resnet50 multistream (1/1 - no competitors)
    • resnet50 offline (1/1 - no competitors)
    • retinanet singlestream (1/1 - no competitors)
    • retinanet multistream (1/1 - no competitors)
    • retinanet offline (1/1 - no competitors)
    • 3d-unet singlestream (1/1 - no competitors)
    • 3d-unet offline (1/1 - no competitors)
    • 3d-unet-99.9 singlestream (1/1 - no competitors)
    • 3d-unet-99.9 offline (1/1 - no competitors)
    • stable-diffusion-xl singlestream (1/1 - no competitors)
    • stable-diffusion-xl offline (1/1 - no competitors)
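
Each of these categories pairs a benchmark model with a LoadGen scenario (Offline, Server, SingleStream, or MultiStream; the interactive results use a Server-scenario variant with tighter latency limits), which determines how queries are issued and which metric is reported. The following is a minimal, illustrative sketch of how a scenario is selected and driven through the MLCommons LoadGen Python bindings (mlperf_loadgen); it is not Lenovo’s submission harness, and the dummy system-under-test below simply returns empty responses so that only the harness structure is visible. Real submissions replace it with an optimized backend, such as TensorRT engines running on the GPUs listed above.

# Minimal MLPerf LoadGen harness sketch (not Lenovo's submission code).
# The dataset, sample counts, and the no-op SUT/QSL callbacks are
# placeholders for illustration only.
import array
import mlperf_loadgen as lg

TOTAL_SAMPLES = 1024   # size of a hypothetical dataset
PERF_SAMPLES = 256     # samples LoadGen keeps resident during the run

def issue_queries(query_samples):
    # A real harness would batch these samples and run inference on the
    # accelerator; here each query is immediately answered with a 1-byte
    # dummy response.
    for qs in query_samples:
        dummy = array.array("B", [0])
        addr, length = dummy.buffer_info()
        lg.QuerySamplesComplete([lg.QuerySampleResponse(qs.id, addr, length)])

def flush_queries():
    pass  # nothing is buffered in this sketch

def load_samples(indices):
    pass  # a real QSL would load these dataset samples into memory

def unload_samples(indices):
    pass

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline   # or .Server, .SingleStream, .MultiStream
settings.mode = lg.TestMode.PerformanceOnly

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(TOTAL_SAMPLES, PERF_SAMPLES, load_samples, unload_samples)
lg.StartTest(sut, qsl, settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)

In an actual run, LoadGen enforces the latency and accuracy constraints of the chosen scenario and writes the summary logs from which official results such as those above are derived.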

Conclusion

The insights from the latest MLPerf benchmarks are critical for stakeholders in the generative AI and machine learning ecosystem, from system architects to application developers. They provide a quantitative foundation for hardware selection and optimization, crucial for deploying scalable and efficient AI/ML systems. Future developments in hardware and software are expected to further influence these benchmarks, continuing the cycle of innovation and evaluation in the field of machine learning.

Professionals in the field are encouraged to consider these results in their future hardware procurement and system design strategies. For further discussion or consultation on leveraging these insights in specific use cases, engage with our expert team at aidiscover@lenovo.com.

For more information

For more information, see the following resources:

  • Lenovo MLPerf results
  • MLCommons®, the open engineering consortium and leading force behind MLPerf, which publishes the full results for the MLPerf benchmark suites

Author

David Ellison is the Chief Data Scientist for Lenovo ISG. Through Lenovo’s US and European AI Discover Centers, he leads a team that uses cutting-edge AI techniques to deliver solutions for external customers while internally supporting the overall AI strategy for the Worldwide Infrastructure Solutions Group. Before joining Lenovo, he ran an international scientific analysis and equipment company and worked as a Data Scientist for the US Postal Service. Prior to that, he received a PhD in Biomedical Engineering from Johns Hopkins University. He has numerous publications in top-tier journals, including two in the Proceedings of the National Academy of Sciences.

Related product families

Product families related to this document are the following:

Trademarks

Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.

The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
Lenovo Hybrid AI Advantage
ThinkEdge®
ThinkSystem®

The following terms are trademarks of other companies:

AMD is a trademark of Advanced Micro Devices, Inc.

Intel® and Xeon® are trademarks of Intel Corporation or its subsidiaries.

Azure® is a trademark of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.