If you want to learn more about Tom Petty, a web search on his name will serve up lists of the musician’s songs, videos, news articles, and photographs. Most people don’t think about the amount of data, its complexity, and all the processing inside hyperscale data centers involved in delivering, in just seconds, the most relevant information to a query. But if you’re designing data center SoCs, you’re keenly aware of the need for high bandwidth and low latency. These are critical characteristics for the AI and high-performance computing (HPC) applications that make web searches—along with so many other amazing applications—possible.
What’s essential for delivering high bandwidth and low latency in our increasingly hyperconnected, data-intensive, AI-fueled world?
The Ethernet protocol has been the data connectivity backbone of the internet for over five decades. Hyperscalers, with their massive data centers managing zettabytes (and more) of information, are playing an integral role in shaping the Ethernet roadmap. As the standard has evolved, its data transfer rates have increased with each generation. At a blazingly fast 1.6Tbps, the latest Ethernet iteration is poised to further transform data centers to meet our incessant demands for information at our fingertips.
While the IEEE, which oversees the Ethernet standard, is expected to finalize the 1.6TbE standard in 2026, a baseline set of features is expected to be completed by 2024 through the 802.3dj task force. So now is the time to understand what it takes to design for 1.6T Ethernet.
Cloud and edge computing have grown more pervasive in recent years, with AI quickly becoming the biggest driver of bandwidth. In response to these trends, data centers have transformed from basic servers dealing with a manageable amount of data to complex, sophisticated multi-rack systems that handle far more data than we ever imagined. One industry projection puts the world's data at as much as 175 zettabytes by 2025, split between the cloud and data centers.
Going forward, data centers are moving towards a disaggregated architecture, where pools of like resources—storage, compute, and networking—reside in separate boxes connected via electrical/optical interconnects. Network architectures themselves are flattening, delivering speed and scalability while driving demand for higher bandwidth and efficient connectivity across longer distances.
Ethernet provides the primary interface for box-to-box connections. It offers advantages such as speed negotiation and support for different kinds and classes of media (such as optical fiber, copper cables, and PCB backplane). With 1.6TbE, the initial applications will likely be compute clusters within data centers that are aimed at processing unstructured workloads on large language models (LLMs). LLMs burst onto the scene with the likes of ChatGPT. These models have trillions of parameters, and their number of parameters doubles every few months. Much of the data is kept in memory and must be processed together, so the system requires many processors connected via a low-latency network. To process such a workload, entire clusters would act as single compute devices, with multiple clusters together processing terabytes of data. Ethernet offers an ideal connectivity protocol for these massive systems (Figure 1).
Figure 1: This diagram highlights how hyperscale data center workloads are driving demand for 1.6T Ethernet.
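To see why cluster interconnects need this much bandwidth, consider a rough back-of-envelope calculation for moving LLM weights over a single link. The model size and precision below are illustrative assumptions, not figures from the standard:

```python
# Back-of-envelope: moving LLM weights across a 1.6T Ethernet link.
# Model size and precision are illustrative assumptions, not spec values.

PARAMS = 1e12           # one trillion parameters (assumed)
BYTES_PER_PARAM = 2     # FP16/BF16 weights (assumed)
LINK_RATE_BPS = 1.6e12  # 1.6 Tb/s raw MAC rate

model_bytes = PARAMS * BYTES_PER_PARAM        # 2 TB of weights
transfer_s = model_bytes * 8 / LINK_RATE_BPS  # seconds to stream over one link

print(f"{model_bytes / 1e12:.1f} TB of weights, ~{transfer_s:.0f} s per link")
```

Even at 1.6 Tb/s, a single link takes on the order of ten seconds to stream such a model, which is why these workloads shard data across many processors and links in parallel.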
The data connectivity infrastructure to support 1.6T Ethernet consists of the controller and the physical layer (PHY). The controller, which implements basic Ethernet protocol features within the silicon chip, is made up of the media access control (MAC) layer, the physical coding sublayer (PCS), and the physical medium attachment (PMA) layer. Once integrated, these elements must deliver optimal performance and latency. The PHY, consisting of the PMA and physical medium dependent (PMD) layers, is responsible for transmitting and receiving data. High performance and low latency are also essential for the PHY. Given the high-speed signals that travel across each physical link, forward error correction (FEC) to protect against signal degradation is an essential feature of the PCS. One thing to note: interoperability could be challenging if each of these sublayers comes from a different vendor. See Figure 2 for a depiction of a 1.6T Ethernet subsystem.
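The FEC overhead added by the PCS shows up directly in the per-lane line rate. As a sketch, assuming the RS(544,514) Reed-Solomon code used in recent 802.3 generations and eight PAM-4 lanes (the exact 802.3dj framing adds further overhead not modeled here):

```python
# Sketch: how RS(544,514) FEC overhead maps a 1.6 Tb/s MAC rate onto
# eight PHY lanes. RS(544,514) is the code used by recent 802.3
# generations; exact 802.3dj framing adds further overhead not shown.

MAC_RATE = 1.6e12        # bits/s at the MAC
FEC_N, FEC_K = 544, 514  # Reed-Solomon codeword / message symbols
LANES = 8                # 224G-class SerDes lanes
PAM4_BITS = 2            # bits per PAM-4 symbol

line_rate = MAC_RATE * FEC_N / FEC_K  # post-FEC aggregate rate
per_lane = line_rate / LANES          # bits/s per electrical lane
baud = per_lane / PAM4_BITS           # symbols/s per lane

print(f"per lane: {per_lane / 1e9:.1f} Gb/s, {baud / 1e9:.1f} GBd")
```

The result—roughly 212 Gb/s per lane—is why 1.6T Ethernet is paired with 224G-class SerDes.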
The IEEE 802.3dj task force has outlined the PHY and management parameters for operation at 1.6 terabits per second. To this end, the group has specified a maximum bit error rate (BER) of 10^-13 at the MAC layer and optional 16- and 8-lane attachment unit interfaces (AUI) for chip-to-module (C2M) and chip-to-chip (C2C) applications using 112G and 224G SerDes. For the PHY, the specification includes transmission over eight pairs of copper twinax cables in each direction with a reach of at least 1m, over eight pairs of fiber up to 500m, and over eight pairs of fiber up to 2km.
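To put the BER objective in perspective, a quick sketch of what a 10^-13 residual error rate means at full line rate:

```python
# Sketch: what a 1e-13 BER target means at 1.6 Tb/s -- the mean time
# between residual (post-FEC) bit errors at full line rate.

RATE_BPS = 1.6e12   # 1.6 Tb/s aggregate rate
TARGET_BER = 1e-13  # 802.3dj MAC-layer BER objective

errors_per_second = RATE_BPS * TARGET_BER
mean_seconds_between_errors = 1 / errors_per_second

print(f"~{mean_seconds_between_errors:.2f} s between residual bit errors")
```

At 1.6 Tb/s, even this tiny error rate corresponds to a residual bit error every few seconds, which is why strong FEC in the PCS is non-negotiable at these speeds.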
Silicon-proven Ethernet PHY and latency-optimized Ethernet controller IP can support the data transfer speeds and latency required for these designs, while mitigating interoperability concerns. The emergence of 224G SerDes technology, along with advancements in MAC and PCS IP, has resulted in the availability of complete, off-the-shelf solutions aligned to the evolving 1.6T Ethernet standard. As industry standards evolve, demonstrating interoperability across multiple channels, configurations, and vendors gives designers confidence in seamless ecosystem integration.
Figure 3: In the past two years, Synopsys 224G Ethernet PHY IP has demonstrated industry interoperability in six open platforms and multiple plugfests.
Synopsys offers a complete solution for 1.6T Ethernet, adopted by multiple customers, designed from the ground up and leveraging our experience and industry leadership developing IP for 400G and 800G. The solution has demonstrated the ability to reduce interconnect power consumption by up to 50% compared to existing implementations. With our expertise in optimizing subsystems, our complete solution enables designers to reduce turnaround time as well as power and latency. The Synopsys 224G Ethernet PHY IP offers excellent signal integrity and jitter performance, with zero post-FEC BER and additional margin for channel loss. It supports 4-level pulse-amplitude modulation (PAM-4) and non-return-to-zero (NRZ) signaling to deliver up to 1.6T Ethernet. The new multi-rate Ethernet MAC and PCS controllers with patented Reed-Solomon forward error correction (FEC) architecture cut latency by 40% compared to prior generations. Knowing that the configurable Ethernet PHY and controller IP are tested and interoperable, engineers can focus their time on differentiating their designs. Figure 3 shows interoperability demonstrations of the 224G Ethernet PHY IP at various industry events.
Synopsys also provides the industry’s first 1.6T Verification IP (VIP) enabling early RTL verification, SoC bring-up, and system-level validation, offering designers a fast path to design verification closure. Leveraging our technical expertise, participation in standards committees, and collaboration with key ecosystem and silicon partners, we continue to expand our portfolio, covering automotive BaseT, AVB/TSN, USXGMII, FlexE, and all speed modes up to 1.6T Ethernet. Figure 4 highlights our 1.6T Ethernet IP solution.
Figure 4: Synopsys offers the industry's first complete 1.6T Ethernet IP solution, featuring controller, PHY, and verification offerings.
As our appetite for data continues unabated, 1.6T Ethernet promises to deliver the high-speed connectivity to meet the demand. Ethernet IP is essential to this infrastructure, facilitating the high bandwidth and low latency that hyperscale data centers require. With our complete 1.6T Ethernet IP solution, built on years of experience with high-speed interfaces, Synopsys can help you get a jump on creating next-generation data center solutions today.