Author
Updated: 6 Dec 2023
Form Number: LP1692
PDF size: 13 pages, 204 KB
Abstract
The ThinkSystem NVIDIA ConnectX-7 NDR InfiniBand OSFP400 Adapters offer 400 Gb/s InfiniBand connectivity for HPC, cloud, storage, and machine learning applications.
This product guide provides essential presales information to understand the ConnectX-7 NDR adapters and their key features, specifications, and compatibility. This guide is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about the ConnectX-7 NDR adapters and consider their use in IT solutions.
Change History
Changes in the December 6, 2023 update:
- Removed the restriction that these adapters were only orderable in DCSC using the HPC & AI mode of the configurator. Now orderable via the General Purpose mode.
Introduction
The ThinkSystem NVIDIA ConnectX-7 NDR InfiniBand OSFP400 Adapters offer 400 Gb/s InfiniBand connectivity for HPC, cloud, storage, and machine learning applications.
The following figure shows the ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter (the standard heat sink has been removed in this photo).
Figure 1. ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter
Did you know?
The ConnectX-7 NDR adapters are optimized to deliver accelerated networking for modern cloud, artificial intelligence, and traditional enterprise workloads. ConnectX-7 provides a broad set of software-defined, hardware-accelerated networking, storage, and security capabilities which enable organizations to modernize and secure their IT infrastructures.
Part number information
The following table shows the part numbers for the adapters.
Part number |
Feature code |
NVIDIA equivalent |
Description |
---|---|---|---|
Air-cooled adapter | |||
4XC7A80289 | BQ1N | MCX75510AAS-NEAT | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter |
Water-cooled adapters | |||
4XC7A86671 | BKSN | MCX75510AAS-NEAT-DWC | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter DWC |
4XC7A86670 | BKSP | MCX75510AAS-NEAT-DWC | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter (SharedIO) DWC |
The non-DWC adapter part number includes the following:
- One NVIDIA adapter with full-height (3U) adapter bracket attached
- Low-profile (2U) adapter bracket
- Documentation
Supported transceivers and cables
The ConnectX-7 NDR adapters have an empty OSFP400 cage for connectivity. The following table lists the supported transceivers.
Part number | Feature code | Description |
---|---|---|
400Gb Transceivers | ||
4TC7A81826 | BQJT | ThinkSystem NDR OSFP400 IB Multi Mode Solo-Transceiver |
The following table lists the supported fiber optic cables and Active Optical Cables.
Part number | Feature code | Description |
---|---|---|
Mellanox NDR Multi Mode Fibre Optical Cables | ||
4X97A81748 | BQJN | Lenovo 3m NVIDIA NDR Multi Mode Optical Cable |
4X97A81749 | BQJP | Lenovo 5m NVIDIA NDR Multi Mode Optical Cable |
4X97A81750 | BQJQ | Lenovo 7m NVIDIA NDR Multi Mode MPO12 APC Optical Cable |
4X97A81751 | BQJR | Lenovo 10m NVIDIA NDR Multi Mode Optical Cable |
4X97A81752 | BQJS | Lenovo 20m NVIDIA NDR Multi Mode Optical Cable |
4X97A85349 | BSN6 | Lenovo 30m NVIDIA NDR Multi Mode MPO12 APC Optical Cable |
The following table lists the supported direct-attach copper (DAC) cables.
Part number | Feature code | Description |
---|---|---|
Mellanox NDRx2 OSFP800 to 2x NDR OSFP400 Splitter Copper Cables | ||
4X97A81827 | BQJV | Lenovo 1m NVIDIA NDRx2 OSFP800 to 2x NDR OSFP400 Passive Copper Splitter Cable |
4X97A81828 | BQJW | Lenovo 1.5m NVIDIA NDRx2 OSFP800 to 2x NDR OSFP400 Passive Copper Splitter Cable |
4X97A81829 | BQJX | Lenovo 2m NVIDIA NDRx2 OSFP800 to 2x NDR OSFP400 Passive Copper Splitter Cable |
4X97A81830 | BQJY | Lenovo 3m NVIDIA NDRx2 OSFP800 to 2x NDR OSFP400 Passive Copper Splitter Cable |
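As the splitter cable descriptions above indicate, each NDRx2 OSFP800 switch-side port can feed two NDR OSFP400 adapter ports. The following sketch illustrates the resulting fan-out arithmetic; the 32-port switch figure in the example is a hypothetical input, not a specification from this guide:

```python
# Illustrative sketch (not from this guide): each NDRx2 OSFP800-to-2x-OSFP400
# splitter cable connects one switch-side OSFP800 port to two adapter-side
# OSFP400 ports, so the adapter fan-out is double the switch port count.

def adapters_per_switch(osfp800_ports: int) -> int:
    """Number of ConnectX-7 OSFP400 adapter ports reachable with splitters."""
    return 2 * osfp800_ports

def aggregate_bandwidth_gbps(adapter_ports: int, rate_gbps: int = 400) -> int:
    """Total one-direction bandwidth across all adapter ports, in Gb/s."""
    return adapter_ports * rate_gbps

# Hypothetical example: a switch exposing 32 OSFP800 ports
ports = adapters_per_switch(32)                # 64 adapter connections
print(ports, aggregate_bandwidth_gbps(ports))  # 64 25600
```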
Features
The adapters have the following features:
- Accelerated Networking and Security
ConnectX-7 provides a broad set of software-defined, hardware-accelerated networking, storage, and security capabilities which enable organizations to modernize and secure their IT infrastructures. Moreover, ConnectX-7 empowers agile and high-performance solutions from edge to core data centers to clouds, all while enhancing network security and reducing the total cost of ownership.
- Accelerate Data-Driven Scientific Computing
ConnectX-7 provides ultra-low latency, extreme throughput, and innovative NVIDIA In-Network Computing engines to deliver the acceleration, scalability, and feature-rich technology needed for today’s modern scientific computing workloads.
- Accelerate Software-Defined Networking
NVIDIA ASAP2 technology accelerates software-defined networking, delivering line-rate performance with no CPU penalty.
- Enhance Storage Performance
ConnectX-7 enables high-performance and efficient data storage by leveraging RDMA/RoCE, GPUDirect Storage, and hardware-based NVMe-oF offload engines.
- SharedIO (4XC7A86670 only)
SharedIO (Shared I/O or Multi-Host) is an implementation of Sockets Direct that is offered in Lenovo direct water-cooled (DWC) servers such as the SD650 V3, where there are two server nodes per DWC tray. In this implementation, the ConnectX-7 adapter is installed in a PCIe slot in one node and the adapter is connected via a cable to a PCIe connector on the second node. The result is that the two nodes share the network connection of the adapter, with significant savings in both the cost of the adapters and the cost of switch ports.
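The SharedIO saving described above can be quantified with a simple sketch: because two DWC nodes per tray share one adapter, the adapter and switch-port counts are halved. The cluster size below is a hypothetical input, not a figure from this guide:

```python
import math

# Sketch of the SharedIO saving: two DWC nodes per tray share one ConnectX-7
# adapter, so adapter and switch-port counts are roughly halved versus one
# adapter per node.

def fabric_hardware(nodes: int, shared_io: bool) -> dict:
    """Count adapters and switch ports needed for a given node count."""
    adapters = math.ceil(nodes / 2) if shared_io else nodes
    return {"adapters": adapters, "switch_ports": adapters}

# Hypothetical 1024-node cluster
standard = fabric_hardware(1024, shared_io=False)
shared = fabric_hardware(1024, shared_io=True)
print(standard)  # {'adapters': 1024, 'switch_ports': 1024}
print(shared)    # {'adapters': 512, 'switch_ports': 512}
```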
Technical specifications
The adapters have the following technical specifications:
- Networking interfaces
- One OSFP400 cage for 400 Gb/s connectivity
- InfiniBand connectivity
- Support for InfiniBand NDR
- InfiniBand Trade Association Spec 1.5 compliant
- RDMA, send/receive semantics
- 16 million input/output (IO) channels
- 256-byte to 4 KB maximum transmission unit (MTU); messages up to 2 GB
- Enhanced InfiniBand Networking
- Hardware-based reliable transport
- Extended Reliable Connected (XRC)
- Dynamically Connected Transport (DCT)
- GPUDirect RDMA
- GPUDirect Storage
- Adaptive routing support
- Enhanced atomic operations
- Advanced memory mapping, allowing user mode registration (UMR)
- On-demand paging (ODP), including registration-free RDMA memory access
- Enhanced congestion control
- Burst buffer offload
- Single root IO virtualization (SR-IOV)
- Optimized for HPC software libraries including: NVIDIA HPC-X, UCX, UCC, NCCL, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS
- Collective operations offloads
- Support for NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)
- Rendezvous protocol offload
- In-network on-board memory
- Storage Accelerations
- Block-level encryption: XTS-AES 256/512-bit key
- NVMe over Fabrics (NVMe-oF)
- NVMe over TCP (NVMe/TCP)
- T10 Data Integrity Field (T10-DIF) signature handover
- SRP, iSER, NFS over RDMA, SMB Direct
- Management and Control
- NC-SI, MCTP over SMBus, and MCTP over PCIe
- PLDM for Monitor and Control DSP0248
- PLDM for Firmware Update DSP0267
- PLDM for Redfish Device Enablement DSP0218
- PLDM for FRU DSP0257
- SPDM DSP0274
- Serial Peripheral Interface (SPI) to flash
- JTAG IEEE 1149.1 and IEEE 1149.6
- Remote Boot
- Remote boot over InfiniBand
- Remote boot over iSCSI
- UEFI
- PXE
- Cybersecurity
- Platform security: secure boot with hardware root-of-trust, secure firmware update, flash encryption, and device attestation
- PCI Express Interface
- PCIe Gen 5.0 x16 host interface
- Support for PCIe bifurcation
- SharedIO (NVIDIA Multi-Host) supports connection of up to two hosts (on supported water-cooled nodes only)
- Transaction layer packet (TLP) processing hints (TPH)
- PCIe switch Downstream Port Containment (DPC)
- Support for MSI/MSI-X mechanisms
- Advanced error reporting (AER)
- Access Control Service (ACS) for peer-to-peer secure communication
- Process Address Space ID (PASID)
- Address translation services (ATS)
- Support for SR-IOV
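A quick calculation based on the MTU and message-size figures above shows why the adapter's hardware-based reliable transport matters: a single maximum-size message is segmented into over half a million packets, all of which are sequenced and acknowledged in hardware rather than by the host CPU.

```python
# Arithmetic on the specifications above: a maximum-size 2 GB message sent
# with the maximum 4 KB MTU is segmented into more than half a million
# packets, which the adapter's hardware-based reliable transport handles
# without host CPU involvement.

MTU_BYTES = 4 * 1024           # maximum MTU from the spec
MESSAGE_BYTES = 2 * 1024**3    # maximum message size from the spec

packets = -(-MESSAGE_BYTES // MTU_BYTES)  # ceiling division
print(packets)  # 524288
```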
NVIDIA Unified Fabric Manager
NVIDIA Unified Fabric Manager (UFM) is InfiniBand networking management software that combines enhanced, real-time network telemetry with fabric visibility and control to support scale-out InfiniBand data centers.
The two offerings available from Lenovo are as follows:
- UFM Telemetry for Real-Time Monitoring
The UFM Telemetry platform provides network validation tools to monitor network performance and conditions, capturing and streaming rich real-time network telemetry information, application workload usage, and system configuration to an on-premises or cloud-based database for further analysis.
- UFM Enterprise for Fabric Visibility and Control
The UFM Enterprise platform combines the benefits of UFM Telemetry with enhanced network monitoring and management. It performs automated network discovery and provisioning, traffic monitoring, and congestion discovery. It also enables job schedule provisioning and integrates with industry-leading job schedulers and cloud and cluster managers, including Slurm and Platform Load Sharing Facility (LSF).
The following table lists the subscription licenses available from Lenovo.
Part number | Feature code (7S02CTO1WW) | Description |
---|---|---|
UFM Telemetry | ||
7S02003HWW | S88D | UFM Telemetry 1-year License and Gold-Support for Lenovo clusters. Per node. |
7S02003JWW | S88E | UFM Telemetry 3-year License and Gold-Support for Lenovo clusters. Per node. |
7S02003KWW | S88F | UFM Telemetry 5-year License and Gold-Support for Lenovo clusters. Per node. |
UFM Enterprise | ||
7S02003LWW | S88G | UFM Enterprise 1-year License and Gold-Support for Lenovo clusters. Per node. |
7S02003MWW | S88H | UFM Enterprise 3-year License and Gold-Support for Lenovo clusters. Per node. |
7S02003NWW | S88J | UFM Enterprise 5-year License and Gold-Support for Lenovo clusters. Per node. |
For more information, see the following web page:
https://www.nvidia.com/en-us/networking/infiniband/ufm/
Server support
The following tables list the ThinkSystem servers that are compatible with these adapters.
Part Number | Description | SR635 V3 (7D9H / 7D9G) | SR655 V3 (7D9F / 7D9E) | SR645 V3 (7D9D / 7D9C) | SR665 V3 (7D9B / 7D9A) | ST650 V3 (7D7B / 7D7A) | SR630 V3 (7D72 / 7D73) | SR650 V3 (7D75 / 7D76) | SR850 V3 (7D97 / 7D96) | SR860 V3 (7D94 / 7D93) | SR950 V3 (7DC5 / 7DC4) | SD535 V3 (7DD8 / 7DD1) | SD530 V3 (7DDA / 7DD3) | SD550 V3 (7DD9 / 7DD2) | SR670 V2 (7Z22 / 7Z23) | SR675 V3 (7D9Q / 7D9R) | SR680a V3 (7DHE) | SR685a V3 (7DHC) | ST250 V3 (7DCF / 7DCE) | SR250 V3 (7DCM / 7DCL) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4XC7A80289 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter | Y | Y | Y | Y | N | Y | Y | Y | Y | N | Y | N | N | Y | Y | Y | Y | N | N |
4XC7A86671 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter DWC | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N |
4XC7A86670 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter (SharedIO) DWC | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N |
Part Number | Description | SE350 (7Z46 / 7D1X) | SE350 V2 (7DA9) | SE360 V2 (7DAM) | SE450 (7D8T) | SE455 V3 (7DBY) | SD665 V3 (7D9P) | SD665-N V3 (7DAZ) | SD650 V3 (7D7M) | SD650-I V3 (7D7L) | SD650-N V3 (7D7N) | ST50 V2 (7D8K / 7D8J) | ST250 V2 (7D8G / 7D8F) | SR250 V2 (7D7R / 7D7Q) | ST650 V2 (7Z75 / 7Z74) | SR630 V2 (7Z70 / 7Z71) | SR650 V2 (7Z72 / 7Z73) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4XC7A80289 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter | N | N | N | N | N | N | N | N | N | N | N | N | N | N | Y | Y |
4XC7A86671 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter DWC | N | N | N | N | N | Y | N | Y | Y | N | N | N | N | N | N | N |
4XC7A86670 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter (SharedIO) DWC | N | N | N | N | N | Y | N | Y | N | N | N | N | N | N | N | N |
Part Number | Description | SR635 (7Y98 / 7Y99) | SR655 (7Y00 / 7Z01) | SR655 Client OS | SR645 (7D2Y / 7D2X) | SR665 (7D2W / 7D2V) | SD630 V2 (7D1K) | SD650 V2 (7D1M) | SD650-N V2 (7D1N) | SN550 V2 (7Z69) | SR850 V2 (7D31 / 7D32) | SR860 V2 (7Z59 / 7Z60) | SR950 (7X11 / 7X12) | SR850 (7X18 / 7X19) | SR850P (7D2F / 7D2G) | SR860 (7X69 / 7X70) | ST50 (7Y48 / 7Y50) | ST250 (7Y45 / 7Y46) | SR150 (7Y54) | SR250 (7Y52 / 7Y51) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4XC7A80289 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter | N | N | N | Y | Y | N | N | N | N | Y | N | N | N | N | N | N | N | N | N |
4XC7A86671 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter DWC | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N |
4XC7A86670 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter (SharedIO) DWC | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N |
Part Number | Description | ST550 (7X09 / 7X10) | SR530 (7X07 / 7X08) | SR550 (7X03 / 7X04) | SR570 (7Y02 / 7Y03) | SR590 (7X98 / 7X99) | SR630 (7X01 / 7X02) | SR650 (7X05 / 7X06) | SR670 (7Y36 / 7Y37) | SD530 (7X21) | SD650 (7X58) | SN550 (7X16) | SN850 (7X15) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4XC7A80289 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter | N | N | N | N | N | N | N | N | N | N | N | N |
4XC7A86671 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter DWC | N | N | N | N | N | N | N | N | N | N | N | N |
4XC7A86670 | ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter (SharedIO) DWC | N | N | N | N | N | N | N | N | N | N | N | N |
Operating system support
The adapters support the operating systems listed in the following tables.
- ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter, 4XC7A80289
- ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter DWC, 4XC7A86671
- ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter (SharedIO) DWC, 4XC7A86670
Tip: These tables are automatically generated based on data from Lenovo ServerProven.
Operating systems | SR630 V3 (4th Gen Xeon) | SR630 V3 (5th Gen Xeon) | SR635 V3 | SR645 V3 | SR650 V3 (4th Gen Xeon) | SR650 V3 (5th Gen Xeon) | SR655 V3 | SR665 V3 | SR675 V3 | SR850 V3 | SR860 V3 | SR630 V2 | SR650 V2 | SR670 V2 | SR850 V2 | SR645 | SR665 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Microsoft Windows Server 2019 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Microsoft Windows Server 2022 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Red Hat Enterprise Linux 8.6 | Y | N | Y | Y | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Red Hat Enterprise Linux 8.7 | Y | N | Y | Y | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Red Hat Enterprise Linux 8.8 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Red Hat Enterprise Linux 8.9 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Red Hat Enterprise Linux 9.0 | Y | N | Y | Y | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Red Hat Enterprise Linux 9.1 | Y | N | Y | Y | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Red Hat Enterprise Linux 9.2 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Red Hat Enterprise Linux 9.3 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
SUSE Linux Enterprise Server 12 SP5 | N | N | N | N | N | N | N | N | N | N | N | Y | Y | Y | Y | Y | Y |
SUSE Linux Enterprise Server 15 SP3 | N | N | N | N | N | N | N | N | N | N | N | Y | Y | Y | Y | Y | Y |
SUSE Linux Enterprise Server 15 SP4 | Y | N | Y | Y | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
SUSE Linux Enterprise Server 15 SP5 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Ubuntu 22.04 LTS | Y | N | Y | Y | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Operating systems | SD650 V3 (4th Gen Xeon) | SD650 V3 (5th Gen Xeon) | SD650-I V3 (4th Gen Xeon) | SD650-I V3 (5th Gen Xeon) |
---|---|---|---|---|
Red Hat Enterprise Linux 8.6 | Y | N | Y | N |
Red Hat Enterprise Linux 8.8 | Y | Y | Y | Y |
Red Hat Enterprise Linux 9.0 | Y | N | N | N |
Red Hat Enterprise Linux 9.2 | Y | Y | Y | Y |
SUSE Linux Enterprise Server 15 SP4 | Y | N | Y | N |
SUSE Linux Enterprise Server 15 SP5 | Y | Y | Y | Y |
Ubuntu 22.04 LTS | Y | N | Y | N |
Ubuntu 22.04.3 LTS | N | Y | N | Y |
Operating systems | SD650 V3 (4th Gen Xeon) | SD650 V3 (5th Gen Xeon) | SD650-I V3 (4th Gen Xeon) | SD650-I V3 (5th Gen Xeon) |
---|---|---|---|---|
Red Hat Enterprise Linux 8.6 | Y | N | Y | N |
Red Hat Enterprise Linux 8.8 | Y | Y | Y | Y |
Red Hat Enterprise Linux 9.0 | Y | N | N | N |
Red Hat Enterprise Linux 9.2 | Y | Y | Y | Y |
SUSE Linux Enterprise Server 15 SP4 | Y | N | Y | N |
SUSE Linux Enterprise Server 15 SP5 | Y | Y | Y | Y |
Ubuntu 22.04 LTS | Y | N | Y | N |
Ubuntu 22.04.3 LTS | N | Y | N | Y |
Regulatory approvals
The adapters have the following regulatory approvals:
- Safety: CB / cTUVus / CE
- EMC: CE / FCC / VCCI / ICES / RCM / KC
- RoHS: RoHS Compliant
Operating environment
The adapters have the following operating characteristics:
- Maximum power available through OSFP port: 17W
- Temperature
- Operational: 0°C to 55°C
- Non-operational: -40°C to 70°C
- Humidity: 90% relative humidity
Warranty
One year limited warranty. When installed in a Lenovo server, the adapter assumes the server’s base warranty and any warranty upgrades.
Related publications
For more information, refer to these documents:
- Networking Options for ThinkSystem Servers:
  https://lenovopress.com/lp0765-networking-options-for-thinksystem-servers
- ServerProven compatibility:
  http://www.lenovo.com/us/en/serverproven
- NVIDIA InfiniBand product page:
  https://www.nvidia.com/en-us/networking/infiniband-adapters/
- ConnectX-7 user manual:
  https://docs.nvidia.com/networking/display/ConnectX7VPI/Specifications
Trademarks
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
ServerProven®
ThinkSystem®
The following terms are trademarks of other companies:
Intel® and Xeon® are trademarks of Intel Corporation or its subsidiaries.
Linux® is the trademark of Linus Torvalds in the U.S. and other countries.
Microsoft®, Windows Server®, and Windows® are trademarks of Microsoft Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Full Change History
Changes in the December 6, 2023 update:
- Removed the restriction that these adapters were only orderable in DCSC using the HPC & AI mode of the configurator. Now orderable via the General Purpose mode.
Changes in the September 29, 2023 update:
- The adapters support secure boot; however, they do not include embedded cryptography functions - Technical specifications section
First published: May 2, 2023