Compute Express Link? (CXL?) 3.0 is an open standard that defines a high-speed, cache-coherent interconnect and memory-expander interconnect for CPU-to-device and CPU-to-memory connections. It is built on the PCI Express? (PCIe?) 6.0 r1.0 specification and leverages the PCIe physical and electrical interfaces.
Artificial Intelligence (AI) and Machine Learning (ML) applications and widespread smart devices (e.g., autonomous vehicles) are driving exponentially rising requirements for high-performing data center units that combine CPUs with accelerator processors, memory-attached devices, and SmartNICs. These systems require low latency so that CPU-attached devices can perform compute-intensive operations on massive data sets while maintaining coherency. To meet the increasing performance and scale requirements of these systems, the CXL Consortium has evolved its standard through the introduction of CXL 3.0.
The figure below provides an overview of Flit formats in CXL 3.0. In addition to the 68B Flit mode carried over from CXL 2.0, CXL 3.0 introduces Standard and Latency-Optimized 256B formats.
With separate format definitions for CXL.io and CXL.cachemem, this sums up to four new formats. Slot encodings nearly double and are handled by the data link layer as it packs the payload. The Flit header bytes, Cyclic Redundancy Check (CRC) bytes, and Forward Error Correction (FEC) bytes are added by the FlexBus layer for error detection, correction, and retry.
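As a rough illustration of how these formats trade overhead for payload, the Python sketch below compares payload efficiency across Flit formats. The byte budgets used here (header, CRC, and FEC sizes) are simplified assumptions for illustration, not normative values from the specification.

```python
# Illustrative comparison of CXL Flit payload efficiency.
# The overhead byte counts below are simplified assumptions,
# not normative values from the CXL 3.0 specification.

from dataclasses import dataclass

@dataclass
class FlitFormat:
    name: str
    total_bytes: int
    header_bytes: int   # Flit header added by the FlexBus layer
    crc_bytes: int      # CRC for error detection
    fec_bytes: int      # FEC for error correction (256B modes only)

    @property
    def payload_bytes(self) -> int:
        return self.total_bytes - self.header_bytes - self.crc_bytes - self.fec_bytes

    @property
    def efficiency(self) -> float:
        return self.payload_bytes / self.total_bytes

FORMATS = [
    FlitFormat("68B (CXL 2.0-style)", 68, 2, 2, 0),       # assumed budget
    FlitFormat("Standard 256B", 256, 2, 8, 6),            # assumed budget
    FlitFormat("Latency-Optimized 256B", 256, 2, 12, 6),  # CRC per 128B half (assumed)
]

for f in FORMATS:
    print(f"{f.name:28s} payload={f.payload_bytes:3d}B  efficiency={f.efficiency:.1%}")
```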
The changes introduced with CXL 3.0 affect all layers, creating increased verification complexity.
At the physical layer, 64 GT/s speed support is achieved using PAM4 encoding, which adds FEC complexity alongside the existing CRC mechanism. The placement of the replay buffer in the Flex Bus layer introduces another design change for designs migrating from CXL 2.0 to CXL 3.0. This demands in-depth verification of sequence numbering, FLIT replay command handshakes, and partial/full replay mechanisms to guarantee FLIT transfer between link partners. Upper layers, including the ARB/Mux and IO/CM link layers, rely on this feature, as their respective ALMP and power management handshakes are now simplified.
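To make the retry concept concrete, here is a minimal Python sketch of a sequence-numbered replay buffer. It is a generic link-retry model under simplified assumptions (in-order delivery, a single cumulative ack), not the CXL 3.0 replay state machine itself.

```python
# Minimal sketch of a sequence-numbered replay buffer.
# Generic link-retry model; not the CXL 3.0 replay state machine.

from collections import OrderedDict

class ReplayBuffer:
    def __init__(self):
        self.next_seq = 0
        self.pending: "OrderedDict[int, bytes]" = OrderedDict()

    def send(self, flit: bytes) -> int:
        """Assign a sequence number and retain the FLIT until acked."""
        seq = self.next_seq
        self.pending[seq] = flit
        self.next_seq += 1
        return seq

    def ack(self, seq: int) -> None:
        """Cumulative ack: release every FLIT up to and including seq."""
        for s in list(self.pending):
            if s <= seq:
                del self.pending[s]

    def replay_from(self, seq: int) -> list:
        """Partial replay: retransmit all unacked FLITs from seq onward."""
        return [f for s, f in self.pending.items() if s >= seq]

buf = ReplayBuffer()
for payload in (b"flit0", b"flit1", b"flit2"):
    buf.send(payload)
buf.ack(0)                 # link partner confirms FLIT 0
print(buf.replay_from(1))  # NAK at 1 -> replay flit1 and flit2
```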
At the link layer, the new FLIT modes – standard 256B, latency-optimized 256B, and PBR 256B – demand different rules for packing protocol packets into FLITs, increasing design complexity.
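A simplified way to picture these packing rules is a greedy packer that fills fixed-size slots with variable-size protocol messages. The slot count and message sizes below are illustrative assumptions, not the packing rules defined by the specification.

```python
# Illustrative greedy packer: fit protocol messages into fixed 16B slots.
# Slot count and message sizes are assumptions, not the spec's packing rules.

SLOT_BYTES = 16
SLOTS_PER_FLIT = 15  # assumed payload slots in a 256B FLIT

def pack_flits(messages):
    """Pack (name, size_bytes) messages into FLITs of fixed-size slots."""
    flits, current, used = [], [], 0
    for name, size in messages:
        slots_needed = -(-size // SLOT_BYTES)  # ceiling division
        if used + slots_needed > SLOTS_PER_FLIT:
            flits.append(current)              # close the full FLIT
            current, used = [], 0
        current.append(name)
        used += slots_needed
    if current:
        flits.append(current)
    return flits

msgs = [("D2H_req", 16), ("MemData", 64), ("Snoop", 16),
        ("MemData", 64), ("MemData", 64), ("MemData", 64)]
for i, flit in enumerate(pack_flits(msgs)):
    print(f"FLIT {i}: {flit}")
```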
For backward compatibility, 256B FLITs are also supported at 8/16/32 GT/s speeds. This creates a parallel data path in the FlexBus layer: 68B and 256B FLITs using 128b/130b encoding, in addition to 256B FLITs using PAM4 encoding at 64 GT/s. The result is a multi-dimensional verification space.
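That verification space can be enumerated directly. The sketch below derives the (speed, encoding, FLIT format) combinations from the rules summarized above; treat it as an illustration of the combinational space per this article's summary, not an exhaustive reading of the specification.

```python
# Enumerate the (speed, encoding, FLIT format) verification space
# implied by the rules above. Pairings follow this article's summary,
# not an exhaustive reading of the specification.

SPEEDS_GT_S = [8, 16, 32, 64]

def legal_configs(speed):
    encoding = "PAM4" if speed == 64 else "128b/130b (NRZ)"
    formats = ["256B standard", "256B latency-optimized"]
    if speed < 64:
        formats.append("68B")  # CXL 2.0-style FLITs at legacy speeds
    return [(speed, encoding, f) for f in formats]

for speed in SPEEDS_GT_S:
    for cfg in legal_configs(speed):
        print(cfg)
```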
At the protocol layers, CXL 3.0 introduces a vast array of new features that are orthogonal to the link- and lower-layer updates. This adds a new dimension to verification: protocol features must be validated independently of design development and of upgrades to the lower layers.
To mitigate the verification complexity of CXL 3.0, 草榴社区 delivers the industry’s first CXL Subsystem Verification IP solution, available immediately.
[Image 1 – Solution Overview]
CXL 3.0 has several protocol layers orchestrating together to enable traffic flows. The 草榴社区 CXL VIP can observe the semantics at each of these layers, which eases debugging.
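As a conceptual illustration of why layer-aware observation helps (hypothetical code, not the 草榴社区 VIP API), a monitor can tag every decoded item with the layer it was captured at, so a single trace can be filtered per layer during debug:

```python
# Conceptual layer-tagged trace, illustrating per-layer visibility.
# Hypothetical sketch; not the 草榴社区 VIP API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TraceEvent:
    time_ns: int
    layer: str      # e.g. "FlexBus", "ARB/Mux", "Link", "Transaction"
    detail: str

@dataclass
class LayeredTrace:
    events: List[TraceEvent] = field(default_factory=list)

    def record(self, time_ns, layer, detail):
        self.events.append(TraceEvent(time_ns, layer, detail))

    def filter(self, layer):
        return [e for e in self.events if e.layer == layer]

trace = LayeredTrace()
trace.record(100, "FlexBus", "256B FLIT received, CRC ok")
trace.record(101, "ARB/Mux", "ALMP: active state request")
trace.record(102, "Link", "slot 0: CXL.cachemem header")
trace.record(103, "Transaction", "MemRd addr=0x1000")
for event in trace.filter("ARB/Mux"):
    print(event)
```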
“Verification solutions for new protocols such as PCIe 6.0, CXL 2.0, and now CXL 3.0 demonstrate 草榴社区’ continued support for innovations built on industry standards,” said Kurt Lender, CXL Consortium Marketing Working Group Co-Chair and IO Strategist, Intel Corporation. “The immediate availability of 草榴社区 Verification IP for CXL 3.0 strengthens the ecosystem and facilitates the adoption of the technology for compute-intensive workloads.”
The snapshots below show link-up at 64 GT/s and the transfer of 256B FLITs between link partners. You can also observe the details of APN status and transfer statistics.
[Image 2 – Waveform snippet showing 64 GT/s link-up and FLIT handshake]
[Image 3 – FLIT trace file from ARB/Mux showing CXL.io FLITs, CM FLITs, and ALMPs multiplexed in Standard 256B Flit mode]
[Image 4 – FLIT trace file from ARB/Mux showing CXL.io FLITs, CM FLITs, and ALMPs multiplexed in 68B Flit mode]
[Image 5 – Link statistics summary]
草榴社区 Verification IP for CXL is designed to address all the verification complexities of CXL 3.0. It provides easy-to-use APIs that ease migration from CXL 2.0/PCIe 6.0 environments to CXL 3.0.
CXL 3.0 targets system-level use cases, and running system-level payloads on SoCs requires a fast, hardware-based pre-silicon solution. 草榴社区 transactors based on 草榴社区 VIP enable fast hardware verification solutions, including 草榴社区 ZeBu? emulation systems and 草榴社区 HAPS? prototyping systems, for validation use cases.
草榴社区 protocol verification solutions are natively integrated with the 草榴社区 Verification Family of products, including 草榴社区 Verdi? for debug, and with 草榴社区 VC Execution Manager for regression management and automation.
To learn more about 草榴社区 VIP for CXL, please visit /verification/verification-ip/subsystems/compute-express-link.html.
In addition, 草榴社区 offers CXL IP solutions including a controller with IDE security and PHY, all of which support the CXL 3.0 specification.