Author

Updated: 1 Dec 2023
Form Number: LP0098
PDF size: 41 pages, 919 KB

Abstract
ConnectX-4 from Mellanox is a family of high-performance and low-latency Ethernet and InfiniBand adapters. The ConnectX-4 Lx EN adapters are available in 40 Gb and 25 Gb Ethernet speeds and the ConnectX-4 Virtual Protocol Interconnect (VPI) adapters support either InfiniBand or Ethernet.
This product guide provides essential presales information to understand the ConnectX-4 offerings and their key features, specifications, and compatibility. This guide is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about ConnectX-4 network adapters and consider their use in IT solutions.
Change History
Changes in the December 1, 2023 update:
- Updated the table of supported transceivers for the 25GbE adapters - Supported transceivers and cables section
Introduction
ConnectX-4 from Mellanox is a family of high-performance and low-latency Ethernet and InfiniBand adapters. The ConnectX-4 Lx EN adapters are available in 40 Gb and 25 Gb Ethernet speeds and the ConnectX-4 Virtual Protocol Interconnect (VPI) adapters support either InfiniBand or Ethernet.
These adapters address the challenges of virtualized infrastructure, delivering best-in-class performance to a variety of demanding markets and applications. By providing true hardware-based I/O isolation with scalability and efficiency, they offer a cost-effective and flexible solution for Web 2.0, cloud, data analytics, database, and storage platforms.
The following figure shows the Mellanox ConnectX-4 2x100GbE/EDR IB QSFP28 VPI Adapter (the standard heat sink has been removed in this photo).
Figure 1. Mellanox ConnectX-4 2x100GbE/EDR IB QSFP28 VPI Adapter (heatsink removed)
Did you know?
Virtual Protocol Interconnect (VPI) enables standard networking, clustering, storage, and management protocols to seamlessly operate over any converged network by leveraging a consolidated software stack. Each port can operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics, and supports Ethernet over InfiniBand (EoIB) as well as RDMA over Converged Ethernet (RoCE). VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.
Part number information
The following table shows the part numbers for adapters for ThinkSystem, System x and NeXtScale servers.
* 01GR250 and 4XC7A08249 are identical in function with the exception that 4XC7A08249 supports Secure Firmware Update
† MCX4111A-ACAT and MCX4121A-ACAT are the PCIe version of these ML2 form-factor adapters
The following table shows the part numbers for adapters supported on ThinkServer systems.
The part numbers include the following:
- One Mellanox adapter
- Low-profile (2U) and full-height (3U) adapter brackets
- Documentation
The following figure shows the ThinkSystem Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port OCP Ethernet Adapter.
Figure 2. ThinkSystem Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port OCP Ethernet Adapter
Supported transceivers and cables
This section lists the supported transceivers and cables:
EDR InfiniBand adapters
The Mellanox ConnectX-4 100GbE/EDR InfiniBand adapters support the InfiniBand cables listed in the following table.
The following table lists the supported fiber optic cables.
The following table lists the direct-attach copper (DAC) cables.
FDR InfiniBand adapters
The Mellanox ConnectX-4 FDR InfiniBand adapters support the cables listed in the following table.
100 Gb Ethernet adapters
The Mellanox ConnectX-4 100GbE/EDR IB Adapters also support the 100 Gb Ethernet QSFP28 optical transceivers and DAC cables listed in the following table.
* 7G17A03539 also supports 40Gb when installed in a Mellanox adapter.
40 Gb Ethernet adapter
The Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter supports the 40Gb DAC cables, transceiver, and optical cables that are listed in the following table.
In addition, the Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter also supports the 40Gb-to-10Gb QSFP to SFP+ adapter and 10Gb DAC cables and optics as shown in the following table.
25 Gb Ethernet adapters
The following table lists the supported transceivers.
The following table lists the supported cables.
25Gb transceivers: When installed in a 25Gb Ethernet adapter, 25Gb transceivers are designed to operate at either 25 Gb/s or 10 Gb/s, as listed in the description of the transceiver. The actual speed also depends on negotiation with the connected switch. In most configurations this negotiation is automatic; in some configurations, however, you may have to manually set the link speed or FEC mode.
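When auto-negotiation does not settle, the link speed and FEC mode can be pinned with standard ethtool commands. The sketch below only builds the command lines; the interface name "ens3f0" is an illustrative placeholder, and running the commands requires root privileges on a host with the adapter installed.

```python
# Sketch (assumption: Linux host with ethtool installed). Builds the
# ethtool invocations for forcing a fixed link speed and an FEC mode
# on a 25GbE port; it does not execute them.

def ethtool_speed_cmd(ifname: str, speed_mbps: int, autoneg: bool = False) -> list:
    """Build an ethtool command that pins the port to a fixed speed."""
    return ["ethtool", "-s", ifname,
            "speed", str(speed_mbps),
            "autoneg", "on" if autoneg else "off"]

def ethtool_fec_cmd(ifname: str, encoding: str = "rs") -> list:
    """Build an ethtool command that sets the FEC mode (e.g. RS-FEC)."""
    return ["ethtool", "--set-fec", ifname, "encoding", encoding]

if __name__ == "__main__":
    for cmd in (ethtool_speed_cmd("ens3f0", 25000), ethtool_fec_cmd("ens3f0")):
        print(" ".join(cmd))
        # To actually apply: subprocess.run(cmd, check=True)  (requires root)
```

Which FEC encoding ("rs", "baser", or "off") is correct depends on the transceiver or cable and on what the switch port is configured for.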
In addition, the 25Gb adapters can also share a connection to a 100 Gb switch by using a 4:1 breakout cable. Supported breakout cables (fiber optic and AOC) are listed in the following table.
In addition, the 25Gb adapters also support the following 10 GbE AOC/DAC cables.
The following figure shows the Mellanox ConnectX-4 Lx ML2 1x25GbE SFP28 Adapter.
Figure 3. Mellanox ConnectX-4 Lx 10/25GbE SFP28 1-port ML2 Adapter (heatsink removed)
Features
The ConnectX-4 family of adapters offers a number of performance features, including the following:
- ConnectX-4 Lx Ethernet adapters
The ConnectX-4 Lx adapters discussed in this product guide offer a high performance Ethernet adapter solution for Ethernet speeds up to 40 Gb/s, enabling seamless networking, clustering, or storage. The Lx adapters reduce application runtime, and offer the flexibility and scalability to make infrastructure run as efficiently and productively as possible.
- ConnectX-4 100 Gb Ethernet / EDR InfiniBand
ConnectX-4 with Virtual Protocol Interconnect (VPI) is the highest-throughput VPI adapter, supporting EDR 100 Gb/s InfiniBand and 100 Gb/s Ethernet, and enabling any standard networking, clustering, or storage protocol to operate seamlessly over any converged network using a consolidated software stack.
- I/O Virtualization
ConnectX-4 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 gives data center administrators better server utilization while reducing cost, power, and cable complexity, allowing more Virtual Machines and more tenants on the same hardware.
- Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-4 Lx effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol header as well as offloads TCP stateless activities on the encapsulated packet.
- RDMA over Converged Ethernet (RoCE)
ConnectX-4 adapters support the RoCE specification, delivering low latency and high performance over Ethernet networks. The ConnectX-4 VPI adapter also supports IBTA RDMA (Remote Direct Memory Access) for InfiniBand network performance. Leveraging data center bridging (DCB) capabilities as well as ConnectX-4 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
- Mellanox PeerDirect
PeerDirect communication provides high efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
- Secure Firmware Update
The Secure Firmware Update feature verifies the digital signatures of new firmware binaries to ensure that they are officially approved versions before installing and activating them. In addition to signature verification, it also checks that the binary is designated for the same device model, that the new firmware is itself secured, and that the new firmware version is not on a blacklist of forbidden versions. The firmware rejects binaries that do not meet these verification criteria. This feature is supported only on Mellanox ConnectX-4 Lx adapters 4XC7A08249 and 4XC7A08246.
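The Overlay Networks feature above exists because encapsulation adds a fixed outer header stack to every frame, which the NIC must parse to reach the inner TCP segment. A quick sketch of the arithmetic for VXLAN with IPv4 outer headers (header sizes per RFC 7348) shows the per-packet cost and the underlay MTU it implies:

```python
# Per-frame overhead added by VXLAN encapsulation (IPv4 outer headers):
# the inner Ethernet header plus the VXLAN, outer UDP, and outer IPv4
# headers. The outer Ethernet header is not counted against the MTU.
INNER_ETH, VXLAN_HDR, OUTER_UDP, OUTER_IPV4 = 14, 8, 8, 20

def vxlan_overhead() -> int:
    """Bytes added above the inner IP MTU by VXLAN encapsulation."""
    return INNER_ETH + VXLAN_HDR + OUTER_UDP + OUTER_IPV4  # 50 bytes

def min_underlay_mtu(inner_mtu: int = 1500) -> int:
    """Underlay MTU needed to carry an inner frame without fragmentation."""
    return inner_mtu + vxlan_overhead()

if __name__ == "__main__":
    print(vxlan_overhead())        # 50
    print(min_underlay_mtu(1500))  # 1550
```

The hardware offloads described above remove the cost of building and stripping these 50 bytes (and the associated stateless TCP work) from the host CPU.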
The following figure shows the Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter.
Figure 4. Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter (heatsink removed)
Technical specifications
PCIe 3.0 host interface:
- ConnectX-4 Lx Ethernet adapters: PCIe 3.0 x8 interface
- ConnectX-4 EDR InfiniBand / 100 Gb Ethernet adapter: PCIe 3.0 x16 interface
- Support for MSI/MSI-X mechanisms
External connectors:
- 25 Gb PCIe and ML2 adapters: SFP28
- 40 Gb and 100 Gb adapters: QSFP28
Ethernet standards (all adapters, except where noted):
- 25G Ethernet Consortium (25 Gb)
- 25G Ethernet Consortium (50 Gb) (100Gb/EDR adapter only)
- IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet (100Gb/EDR adapter only)
- IEEE 802.3ba 40 Gigabit Ethernet (100Gb/EDR and 40Gb adapters only)
- IEEE 802.3by 25 Gigabit Ethernet
- IEEE 802.3ae 10 Gigabit Ethernet
- IEEE 802.3az Energy Efficient Ethernet
- IEEE 802.3ap based auto-negotiation and KR startup
- Proprietary Ethernet protocols (20/40GBASE-R2) (40Gb adapter only)
- IEEE 802.3ad, 802.1AX Link Aggregation
- IEEE 802.1Q, 802.1P VLAN tags and priority
- IEEE 802.1Qau (QCN) – Congestion Notification
- IEEE 802.1Qaz (ETS)
- IEEE 802.1Qbb (PFC)
- IEEE 802.1Qbg
- IEEE 1588v2
- Jumbo frame support (9.6KB)
- IPv4 (RFC 791)
- IPv6 (RFC 2460)
InfiniBand protocols (VPI InfiniBand adapters only):
- InfiniBand: IBTA v1.3 Auto-Negotiation
- 1X/2X/4X SDR (2.5 Gb/s per lane)
- DDR (5 Gb/s per lane)
- QDR (10 Gb/s per lane)
- FDR10 (10.3125 Gb/s per lane)
- FDR (14.0625 Gb/s per lane) port
- EDR (25.78125 Gb/s per lane)
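The per-lane signalling rates above combine with the 4X link width and the line encoding (8b/10b for SDR/DDR/QDR, 64b/66b for the later generations) to give the usable data rate; for EDR the arithmetic works out to exactly 100 Gb/s. A minimal sketch of that calculation:

```python
# Per-lane signalling rates (Gb/s) from the list above.
LANE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0,
             "FDR10": 10.3125, "FDR": 14.0625, "EDR": 25.78125}

def data_rate_gbps(speed: str, lanes: int = 4) -> float:
    """Usable data rate for a link after line-encoding overhead.

    SDR/DDR/QDR use 8b/10b encoding; FDR10/FDR/EDR use 64b/66b.
    """
    enc = 64 / 66 if speed in ("FDR10", "FDR", "EDR") else 8 / 10
    return LANE_GBPS[speed] * lanes * enc

if __name__ == "__main__":
    # EDR 4X: 25.78125 * 4 * 64/66 = 100 Gb/s usable
    print(data_rate_gbps("EDR"))
```

This is why the 25.78125 Gb/s lane rate maps to the adapter's 100 Gb marketing number, while QDR 4X yields 32 Gb/s of usable bandwidth.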
InfiniBand features (VPI InfiniBand adapters only):
- RDMA, Send/Receive semantics
- Hardware-based congestion control
- Atomic operations
- 16 million I/O channels
- 256 to 4Kbyte MTU, 2Gbyte messages
Note: The 8 virtual lanes + VL15 feature is currently not supported.
Enhanced Features
- Hardware-based reliable transport
- Collective operations offloads
- Vector collective operations offloads
- PeerDirect RDMA (GPUDirect communication acceleration)
- 64/66 encoding
- Extended Reliable Connected transport (XRC)
- Dynamically Connected transport (DCT)
- Enhanced Atomic operations
- Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
- On demand paging (ODP) – registration free RDMA memory access
Storage Offloads
- RAID offload - erasure coding (Reed-Solomon) offload
- T10 DIF - Signature handover operation at wire speed, for ingress and egress traffic (100Gb/EDR adapter only)
Overlay Networks
- Stateless offloads for overlay networks and tunneling protocols
- Hardware offload of encapsulation and decapsulation of NVGRE and VXLAN overlay networks
Hardware-Based I/O Virtualization
- Single Root IOV (SR-IOV)
- Multi-function per port
- Address translation and protection
- Multiple queues per virtual machine
- Enhanced QoS for vNICs
- VMware NetQueue support
- Windows Hyper-V Virtual Machine Queue (VMQ)
Virtualization
- SR-IOV: Up to 256 Virtual Functions (VFs), 1 Physical Function (PF) per port
- SR-IOV on every Physical Function
- 1K ingress and egress QoS levels
- Guaranteed QoS for VMs
Note: NPAR (NIC partitioning) is currently not supported.
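On Linux, the Virtual Functions described above are exposed through the standard sysfs SR-IOV interface. The sketch below shows the conventional path and write; the interface name "ens3f0" and the VF count are illustrative placeholders, and the write requires root on a host where the adapter and driver are present.

```python
# Sketch (assumption: Linux host, mlx5 driver loaded). Uses the generic
# sysfs sriov_numvfs knob to enable SR-IOV Virtual Functions on a PF.
from pathlib import Path

def sriov_numvfs_path(ifname: str) -> Path:
    """sysfs file that controls how many VFs the Physical Function exposes."""
    return Path("/sys/class/net") / ifname / "device" / "sriov_numvfs"

def enable_vfs(ifname: str, num_vfs: int) -> None:
    """Write the requested VF count (requires root; writing 0 disables SR-IOV)."""
    sriov_numvfs_path(ifname).write_text(str(num_vfs))

if __name__ == "__main__":
    print(sriov_numvfs_path("ens3f0"))
    # enable_vfs("ens3f0", 8)  # uncomment on real hardware, as root
```

The requested count must not exceed the adapter's maximum (up to 256 VFs per port on these adapters, per the specification above).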
CPU Offloads
- RDMA over Converged Ethernet (RoCE)
- TCP/UDP/IP stateless offload
- LSO, LRO, checksum offload
- RSS (can be done on encapsulated packet), TSS, HDS, VLAN insertion / stripping, Receive flow steering
- Intelligent interrupt coalescence
Remote Boot
- Remote boot over InfiniBand (VPI InfiniBand adapters only)
- Remote boot over Ethernet
- Remote boot over iSCSI
- PXE and UEFI
Protocol Support
- OpenMPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI
- Platform MPI, UPC, Open SHMEM
- TCP/UDP, MPLS, VxLAN, NVGRE, GENEVE
- EoIB, IPoIB, SDP, RDS (VPI InfiniBand adapters only)
- iSER, NFS RDMA, SMB Direct
- uDAPL
Management and Control Interfaces
- NC-SI (25Gb ML2 adapter only)
- PLDM over MCTP over PCIe
- SDN management interface for managing the eSwitch
Server support - ThinkSystem
The following tables list the ThinkSystem servers that are compatible.
Server support - System x
The following tables list the System x and dense servers that are compatible.
Support for System x and dense servers with Xeon E5 v4 and E3 v5 processors
Support for System x and dense servers with Intel E5 v3 and E3 v3 processors
Support for System x servers with Intel Xeon v2 processors
The following figure shows the Mellanox ConnectX-4 Lx 2x25GbE SFP28 Adapter.
Figure 5. Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port PCIe Ethernet Adapter (heatsink removed)
Server support - ThinkServer
The following tables list the ThinkServer systems that are compatible.
Support for sd350: The ThinkServer sd350 is listed in Table 6.
Support for ThinkServer Generation 5 servers with E5 v4 and E3 v5/v6 processors
Support for ThinkServer Generation 5 servers with E5 v3 and E3 v3 processors
Operating system support
The Mellanox ConnectX-4 adapters support the following operating systems:
- Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port PCIe Ethernet Adapter, 01GR250
- ThinkSystem Mellanox ConnectX-4 Lx 10/25GbE 2-port PCIe Ethernet Adapter, 4XC7A08249
- ThinkSystem Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port OCP Ethernet Adapter, 4XC7A08246
- Mellanox ConnectX-4 Lx 10/25GbE SFP28 1-port ML2 Adapter, 00MN990
- ThinkSystem Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-Port ML2 Ethernet Adapter, 7ZT7A00507
- Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter, 00MM950
- ThinkSystem Mellanox ConnectX-4 PCIe FDR 1-Port QSFP VPI Adapter, 7XC7A05524
- ThinkSystem Mellanox ConnectX-4 PCIe FDR 2-Port QSFP VPI Adapter, 7ZT7A00500
- Mellanox ConnectX-4 2x100GbE/EDR IB QSFP28 VPI Adapter, 00MM960
- Mellanox ConnectX-4 1x100GbE/EDR IB QSFP28 VPI Adapter, 00KH924
InfiniBand mode not supported with VMware: With VMware, these adapters are supported only in Ethernet mode. InfiniBand is not supported.
Tip: These tables are automatically generated based on data from Lenovo ServerProven.
1 [in box driver support only]
1 The OS is not supported with EPYC 7003 processors.
2 ISG will not sell/preload this OS; compatibility and certification only.
1 An out-of-box driver is needed to support the InfiniBand feature.
2 [in box driver support only]
Regulatory approvals
The adapters meet the following regulatory standards:
- Safety: CB, cTUVus, CE
- EMC: CE, FCC, VCCI, ICES, RCM
- RoHS: RoHS-R6
Operating environment
Power consumption:
Maximum power through external connectors:
- 25Gb adapters: 1.5 W
- 40Gb adapter: 1.5 W
- 100Gb adapter: 3.5 W
Temperature:
- Operational 0°C to 55°C
- Non-operational -40°C to 70°C
Humidity: 90% relative humidity
Warranty
One year limited warranty. When installed in a Lenovo server, these cards assume the server’s base warranty and any warranty upgrades.
Related publications
For more information, refer to these documents:
- Networking Options for ThinkSystem Servers:
  https://lenovopress.com/lp0765-networking-options-for-thinksystem-servers
- ServerProven compatibility:
  http://www.lenovo.com/us/en/serverproven
- Mellanox User Manuals:
  - ConnectX-4 Lx Ethernet: https://docs.nvidia.com/networking/display/CX4LxEN
  - ConnectX-4 VPI: https://docs.nvidia.com/networking/display/ConnectX4IB
- Mellanox page for ConnectX-4 Lx adapter:
  https://www.nvidia.com/en-us/networking/ethernet/connectx-4-lx/
Trademarks
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
ServerProven®
System x®
ThinkServer®
ThinkSystem®
The following terms are trademarks of other companies:
AMD is a trademark of Advanced Micro Devices, Inc.
Intel® and Xeon® are trademarks of Intel Corporation or its subsidiaries.
Linux® is the trademark of Linus Torvalds in the U.S. and other countries.
Microsoft®, Hyper-V®, Windows Server®, and Windows® are trademarks of Microsoft Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Full Change History
Changes in the December 1, 2023 update:
- Updated the table of supported transceivers for the 25GbE adapters - Supported transceivers and cables section
Changes in the October 31, 2023 update:
- Added the following transceiver - Supported transceivers and cables section:
- ThinkSystem Finisar Dual Rate 10G/25G SR SFP28 Transceiver, 4TC7A88638
Changes in the September 17, 2023 update:
- The following adapters are withdrawn:
- Mellanox ConnectX-4 Lx 1x40GbE QSFP+ Adapter, 00MM950
- Mellanox ConnectX-4 Lx 10/25GbE SFP28 1-port ML2 Adapter, 00MN990
- Withdrawn cables are now hidden; click Show Withdrawn Products to view them - Supported transceivers and cables section
Changes in the December 15, 2022 update:
- Added the following transceiver - Supported transceivers and cables section:
- ThinkSystem Accelink 40G SR4 QSFP+ Ethernet transceiver, 4TC7A85336
Changes in the November 8, 2022 update:
- Added the following breakout cables - Supported transceivers and cables section:
- Lenovo 1.5m 100G to 4x25G Breakout SFP28 Breakout DAC Cable, 4Z57A85043
- Lenovo 2m 100G to 4x25G Breakout SFP28 Breakout DAC Cable, 4Z57A85044
Changes in the January 16, 2022 update:
- 100Gb transceiver 7G17A03539 also supports 40Gb when installed in a Mellanox adapter. - Supported transceivers and cables section
Changes in the December 14, 2021 update:
- Added support for the following transceivers - Supported transceivers and cables section:
- ThinkSystem Accelink 10G SR SFP+ Ethernet transceiver, 4TC7A78615
- Lenovo 25Gb SR SFP28 Ethernet Transceiver, 4M27A67041
Changes in the October 27, 2021 update:
- Indicated transceivers and cables that are now withdrawn from marketing - Supported transceivers and cables section
Changes in the October 20, 2021 update:
- Added a note that the 25GbE transceivers can operate at 25 Gb/s or 10 Gb/s speeds when installed in a 25GbE adapter - Supported transceivers and cables section
Changes in the August 20, 2021 update:
- The adapters only support 1 physical function (PF) per port - Technical specifications section
Changes in the June 16, 2021 update:
- Added 100Gb transceiver - Supported transceivers and cables section:
- Lenovo 100Gb SR4 QSFP28 Ethernet Transceiver, 4M27A67042
Changes in the June 6, 2021 update:
- Clarified the cable tables to indicate InfiniBand or Ethernet support - Supported cables and transceivers section
Changes in the February 28, 2021 update:
- The server support tables are now automatically updated - Server support section
Changes in the January 31, 2021 update:
- The following adapter is withdrawn from marketing:
- Mellanox ConnectX-4 1x100GbE/EDR IB QSFP28 VPI Adapter, 00KH924
Changes in the October 13, 2020 update:
- Added the SR850 V2 and SR860 V2 servers - Server support section (no support)
Changes in the July 10, 2020 update:
- Added OS support tables for these adapters:
Changes in the March 22, 2020 update:
- The LC-LC 0M3 Fiber Optic Cables support 25GbE when used with a 25GbE adapter and the Lenovo 25GBase-SR SFP28 Transceiver - 25 Gb Ethernet adapter cables section
Changes in the August 20, 2019 update:
- Added ThinkSystem SE350 - Server support section
Changes in the August 7, 2019 update:
- New ConnectX-4 Lx EN adapters:
- ThinkSystem Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port PCIe Ethernet Adapter, 4XC7A08249
- ThinkSystem Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port OCP Ethernet Adapter, 4XC7A08246
Changes in the April 16, 2019 update:
- The Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port PCIe Ethernet Adapter, 01GR250, is now supported in the ThinkSystem ST550
Changes in the March 20, 2019 update:
- Operating system support information is now listed as a set of tables which are automatically generated from ServerProven data - Operating system support section
Changes in the December 10, 2018 update:
- Added SFP+ 10Gb Active Optical Cables to the table of supported cables for 25 Gb Ethernet adapters - Table 12
Changes in the November 5, 2018 update:
- Added new servers: ThinkSystem ST50, ST250, SR150 and SR250
- Updated the list of supported operating systems
Changes in the September 22, 2018 update:
- Updated the cables and transceivers for the 25Gb adapters
Changes in the September 7, 2018 update:
- New product names for these 25 GbE adapters:
- 1-port ML2:
- Was: Mellanox ConnectX-4 Lx ML2 1x25GbE SFP28 Adapter
- Now: Mellanox ConnectX-4 Lx 10/25GbE SFP28 1-port ML2 Adapter
- 2-port ML2:
- Was: ThinkSystem Mellanox ConnectX-4 Lx ML2 25Gb 2-Port SFP28 Ethernet Adapter
- Now: ThinkSystem Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-Port ML2 Ethernet Adapter
- 2-port PCIe:
- Was: Mellanox ConnectX-4 Lx 2x25GbE SFP28 Adapter
- Now: Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port PCIe Ethernet Adapter
- Added the SR670 to the ThinkSystem server support section
- Updated the list of supported operating systems
- The ThinkServer adapter is withdrawn from marketing - Table 2
Changes in the September 4, 2018 update:
- Added the ThinkSystem SR670 server - Server support section
Changes in the July 17, 2018 update:
- New 1m Active Optical Cable for 100G adapters:
- Lenovo 1m 100G QSFP28 Active Optical Cable, 4Z57A10844
Changes in the April 9, 2018 update:
- The 25 Gb transceiver also supports 10 Gb when used with a Mellanox adapter - Table 10
Changes in the February 27, 2018 update:
- The Mellanox ConnectX-4 1x100GbE/EDR IB QSFP28 VPI Adapter is now supported in the SR860 and SR950 - ThinkSystem server support section
- The adapters support Windows Hyper-V Virtual Machine Queue (VMQ)
Changes in the January 30, 2018 update:
- New 1m 100G QSFP28 Active Optical Cable, 4Z57A10844 - Supported cables and transceivers section
Changes in the November 21, 2017 update:
- New FDR InfiniBand / 40 Gb Ethernet adapter:
- ThinkSystem Mellanox ConnectX-4 PCIe FDR 1-Port QSFP VPI Adapter, 7XC7A05524
- Updated the ThinkSystem server support table
Changes in the October 30, 2017 update:
- Updated the EDR InfiniBand cables section
Changes in the October 29, 2017 update:
Changes in the August 15, 2017 update:
- Clarified that with VMware, these adapters are supported only in Ethernet mode. InfiniBand is not supported.
Changes in the July 11, 2017 update:
- Added ThinkSystem part numbers
- Added ThinkSystem server support
- Added the supported 25 GbE transceiver and DAC cables
Changes in the May 23, 2017 update:
- Added new adapters:
- For System x & dense servers: Mellanox ConnectX-4 EDR IB VPI Single-port x16 PCIe 3.0 HCA, 00KH924
- For ThinkServer: ConnectX-4 Lx PCIe 25Gb 2 Port SFP28 Ethernet Adapter by Mellanox, 4XC0G88861
Changes in the November 10, 2016 update:
- Added Tables 3 and 4, cable part numbers for the 1x40GbE QSFP28 Adapter - Supported cables and transceivers section
First published: 14 June 2016