Synopsys

From XSR to Long-Reach: Avoid Cloud Data Traffic Jams with High-Speed Interface IP

Scott Durrant, Strategic Marketing Manager, Synopsys

Introduction

Streaming media, monitoring and surveillance data, connected sensors, social media, online collaboration, remote learning, augmented and virtual reality, online gaming… the never-ending list of online applications has led to an explosion of online data. Annual data traffic is expected to increase more than 400-fold over the next 10 years (Figure 1). This rapid increase in data traffic will require significant improvements in the speed and latency of data interface IP, especially in cloud infrastructure. This article examines technology developments that will help accelerate and manage data movement within and between data centers, within servers, and within system-on-chip (SoC) packages.
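
For context, a 400-fold increase over 10 years implies a compound annual growth rate of roughly 82%, i.e., traffic nearly doubling every year. A minimal Python sketch of that arithmetic (only the 400x multiple and the 10-year horizon are taken from the forecast):

```python
# Back-of-the-envelope CAGR implied by the forecast above. The 400x
# multiple and the 10-year horizon are the only inputs taken from the
# IBS figure; everything else is plain arithmetic.
growth_factor = 400   # total traffic multiple over the forecast period
years = 10            # forecast horizon

cagr = growth_factor ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")   # ~82.1% per year
```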

Figure 1: Total Data Traffic Forecast through 2030. Source: "Impact of AI on Electronics and Semiconductor Industries", IBS, April 2020.

Long-reach Data Movement Within and Between Data Centers

Most of today’s large data centers use 100Gbps Ethernet infrastructure to move data over long distances (e.g., between racks and data centers). Long-reach infrastructures typically rely on 4 channels of 25 or 28 Gbps NRZ SerDes electrical connectivity. However, as data volume grows, higher speed infrastructure is needed to sustain data movement. 56 and 112 Gbps SerDes IP supporting PAM-4 signaling enables the 400Gbps Ethernet connectivity being deployed in hyperscale data centers today, as well as speeds up to 800Gbps in the future (Figure 2). Leading Ethernet switch vendors are already developing 800Gbps switches based on 112G SerDes IP, with plans to introduce 1.6Tbps Ethernet (using a faster, next-generation SerDes) within the next few years to meet the demands of increasing data volumes.
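
The lane arithmetic behind these rates is simple: aggregate bandwidth equals the number of electrical lanes times the symbol rate times the bits carried per symbol (one for NRZ, two for PAM-4). The sketch below uses round nominal rates and ignores FEC and line-coding overhead, so actual baud rates run slightly higher (e.g., about 26.56 GBd for a nominal 50G PAM-4 lane):

```python
# Nominal aggregate Ethernet rate from lane count and per-lane signaling.
# Simplified: uses round nominal rates and ignores FEC/line-coding
# overhead, so real symbol rates are slightly higher than shown.
BITS_PER_SYMBOL = {"NRZ": 1, "PAM-4": 2}

def aggregate_gbps(lanes: int, symbol_rate_gbd: float, modulation: str) -> float:
    """Aggregate link rate in Gbps across `lanes` electrical lanes."""
    return lanes * symbol_rate_gbd * BITS_PER_SYMBOL[modulation]

print(aggregate_gbps(4, 25, "NRZ"))    # 100.0 -> 100GbE (4 x 25G NRZ lanes)
print(aggregate_gbps(8, 25, "PAM-4"))  # 400.0 -> 400GbE (8 x 50G PAM-4 lanes)
print(aggregate_gbps(8, 50, "PAM-4"))  # 800.0 -> 800GbE (8 x 112G-class SerDes)
```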

Data communication between servers within a rack is managed by the Top-of-Rack (ToR) switch and network interface cards (NICs) within each server. The most common interface speed in cloud data centers at this level has been 25Gbps for the past few years. However, as infrastructure speeds increase to 400Gbps, Ethernet speed within the rack is increasing to 100Gbps.

As data rates increase, interface power (typically measured in picojoules per bit, or pJ/bit) and silicon area become increasingly important. Physical interface (PHY) IP that minimizes energy use while delivering data reliably across the required distances has distinct advantages in minimizing the cost of infrastructure power and cooling capacity. Area-efficient PHY solutions minimize SoC cost, which improves profitability for the SoC vendor.
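
A quick worked example shows why pJ/bit matters at data-center scale. The 5 pJ/bit and port-count figures below are hypothetical round numbers chosen for illustration, not measurements of any particular PHY:

```python
# Interface power from energy-per-bit: power = (J/bit) x (bits/s).
# The 5 pJ/bit figure is an assumed round number for illustration.
def interface_power_watts(pj_per_bit: float, rate_gbps: float) -> float:
    return pj_per_bit * 1e-12 * rate_gbps * 1e9

port_w = interface_power_watts(5, 400)   # one 400G port at 5 pJ/bit
print(f"{port_w:.1f} W per 400G port")   # 2.0 W
print(f"{port_w * 10_000 / 1_000:.0f} kW across 10,000 ports")  # 20 kW
```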

Figure 2: Hyperscale data center infrastructures are transitioning to 400+GbE

Data Movement Within Servers

Once all this data reaches the server, high-speed interfaces are needed to move it efficiently from device to device within the server. For example, as data arrives at the NIC at 100Gbps, it must quickly be moved to storage, system memory, or perhaps a graphics or AI accelerator for processing. This is the realm of PCI Express (PCIe), Compute Express Link (CXL), and similar protocols. To handle this rapid increase in traffic, the PCI-SIG released PCIe 5.0 in 2019, doubling the bandwidth of the prior generation, and is targeting a 2021 release of PCIe 6.0, which will again double the PCIe data rate to 64 GT/s (up to 128GB/s for a x16 card) (Figure 3).
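
The generation-over-generation doubling is easy to reproduce. This sketch computes approximate per-direction bandwidth for an x16 link from the per-lane transfer rate, treating encoding and framing overhead (128b/130b, FLIT) as negligible:

```python
# Approximate per-direction PCIe bandwidth for an x16 link. Each lane
# carries ~1 bit per transfer, so GB/s ~= GT/s x lanes / 8. Encoding
# overhead (8b/10b, 128b/130b, FLIT framing) is ignored here.
TRANSFER_RATE_GTS = {"3.0": 8, "4.0": 16, "5.0": 32, "6.0": 64}

def x16_gb_per_s(gen: str, lanes: int = 16) -> float:
    return TRANSFER_RATE_GTS[gen] * lanes / 8

for gen in TRANSFER_RATE_GTS:
    print(f"PCIe {gen} x16: ~{x16_gb_per_s(gen):.0f} GB/s per direction")
# PCIe 5.0 x16: ~64 GB/s; PCIe 6.0 x16: ~128 GB/s, matching Figure 3
```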

Figure 3: PCI Express bandwidth evolution, per lane.

Recent and continuing growth in the volume of data generated and processed by compute systems, especially unstructured data, has given rise to new architectures, often employing accelerators to facilitate data processing. Copying data from one processor domain to another is a resource-intensive process that can add significant latency to data processing. Cache-coherent solutions allow processors and accelerators to share memory without copying data from one memory space to the other, saving both memory resources and the time the copies would otherwise require.

CXL is a cache-coherent protocol that leverages the data rate and physical layer of PCIe to enable CPUs and accelerators to access each other’s memory. Integrating the CXL protocol effectively reduces the number of data copies that must occur with non-coherent protocols when multiple devices need to access a single data set, thereby reducing the number of transfers required within a system. Reducing the number of data copies helps reduce load on the already heavily subscribed memory and I/O interfaces in the system.
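
A toy model makes the copy-elimination argument concrete. In the non-coherent case, each consuming device gets its own copy of the data set, and every copy costs a read from the source memory plus a write to the destination; in the coherent case, all devices reference a single shared copy. The cost model and the 64GB/three-accelerator scenario below are illustrative assumptions, not measurements:

```python
# Toy model of copy traffic: non-coherent sharing duplicates the data
# set per consumer (read from source + write to destination), while a
# cache-coherent protocol such as CXL lets all devices access one copy
# in place. Demand reads of the working set are excluded in both cases.
def copy_traffic_gb(data_set_gb: float, consumers: int, coherent: bool) -> float:
    if coherent:
        return 0.0                       # no up-front bulk copies
    return data_set_gb * 2 * consumers   # read + write per copy

print(copy_traffic_gb(64, 3, coherent=False))  # 384.0 GB of copy traffic
print(copy_traffic_gb(64, 3, coherent=True))   # 0.0 GB of copy traffic
```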

Targeted at high-performance computational workloads, CXL significantly reduces latency compared to other peripheral interconnects. With latencies of just 50-80ns for CXL.cache and CXL.mem transactions, CXL latency is only a fraction of PCIe latency. Furthermore, CXL improves performance and reduces complexity through resource sharing, which can also lower overall system cost.

USR/XSR Data Movement Within SoCs

Many modern server SoCs utilize multiple die within a single package to deliver needed performance under design and manufacturing constraints. As a result, high-speed die-to-die (D2D) communication is needed to pass large data sets between die within a chip. Ultra Short Reach/Extra Short Reach (USR/XSR) SerDes make this possible, with current designs using 112Gbps SerDes and higher speeds likely to come within the next couple of years.

Multi-chip modules using D2D interface technology address multiple use cases, all of which reduce development time as well as development and manufacturing costs. Some use multiple heterogeneous die, or “chiplets,” that take advantage of reusable functional components, each built with a manufacturing technology optimal for its specific functionality. Other use cases focus on flexibility, creating a large, high-performance SoC from smaller homogeneous building blocks for improved yield and scalability.

Figure 4: Example use cases for die-to-die interconnects

Summary

The rapid growth of cloud data is driving demand for faster and more efficient interfaces to move data within the cloud infrastructure, from the network and system level down to chip-level data communications. New and developing interface technologies, including 400Gbps and faster Ethernet, PCIe 6.0 and CXL peripheral interconnects, and new high-speed SerDes for die-to-die communications, enable the infrastructure enhancements required to support evolving cloud data needs.

DesignWare® High-Speed SerDes and Ethernet IP by Synopsys enable next-generation data center networking solutions. DesignWare PCIe IP is used by 90% of leading semiconductor companies and is a stable, proven foundation for DesignWare CXL IP. DesignWare 112G USR/XSR SerDes IP provides a low-cost, energy-efficient die-to-die interface for multi-die SoCs. Synopsys provides a complete portfolio of silicon-proven DesignWare interface IP and the design and validation tools necessary to develop high-speed, low-power, highly reliable SoCs that support the evolving data movement needs of today’s and tomorrow’s cloud infrastructure.