“Power is everything, limiting what can be built, what customers can deploy and what our planet can sustain”
Rakesh Chopra, Cisco Fellow, Cisco Common Hardware Group, speaking at OIF
The integration of optical technology in data center interfaces has attracted growing interest as rising data traffic, the push for low-latency networks, and the need for efficient resource aggregation push data center infrastructure to its limits. Optical interconnects are emerging as key players in data transport at both the board and chip levels, offering solutions to current data transfer limitations and boosting processing speeds with the potential for significant power savings. This technical bulletin article focuses on the benefits of implementing Linear Direct Drive (LDD) optical interfaces, the need for electrical-optical co-simulation methodologies for high-speed data interconnects, and results from a silicon demonstration pairing 草榴社区 112G Ethernet PHY IP with OpenLight’s optical engine.
In today's digital era, routine online activities such as movie streaming, video uploads, and online banking are just the tip of the iceberg when it comes to data center energy consumption. The introduction of sophisticated platforms like ChatGPT has drastically altered this landscape, making the energy usage of simpler web functions, such as basic Google searches, seem relatively minor. The modern digital domain is now shaped by the heavy power demands of cutting-edge cloud computing, artificial intelligence, the rapid expansion of 5G networks, the development of autonomous vehicles, and the energy-intensive world of cryptocurrency mining.
These growing power requirements have made networking hardware a significant factor in data centers' overall operational costs, driving a shift toward next-generation network components that are more power-efficient on a per-bit basis. As shown in Figure 1, over the last 12 years total system bandwidth has expanded by a factor of 80, while total system power has risen by a factor of only 22. As OIF noted in 2022, power efficiency is improving: the power consumed per bit (per second) has been steadily decreasing. However, this improvement is not keeping pace with demand. A notable trend in Figure 1 is the disproportionately rapid increase in SerDes power, both in the host and in pluggable optics modules, compared to other system components. This underscores the critical need to reduce SerDes power consumption as the next generation of higher-speed electrical interfaces is developed.
Figure 1: Relentless Advancement – 80x BW over 12 Years. Source: Cisco
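To put these figures in perspective, the back-of-the-envelope sketch below (in Python, with the 80x and 22x factors taken from Figure 1) shows that energy per bit improved only about 3.6x over the same 12 years, roughly 11% per year:

```python
# Back-of-the-envelope view of the Figure 1 trend: bandwidth grew 80x
# while system power grew 22x over 12 years, so energy per bit improved
# only ~3.6x overall -- far slower than traffic growth.
bw_growth, power_growth, years = 80, 22, 12

per_bit_improvement = bw_growth / power_growth        # ~3.6x
annual_rate = per_bit_improvement ** (1 / years) - 1  # compound annual gain

print(f"Energy/bit improved {per_bit_improvement:.1f}x "
      f"(~{annual_rate:.1%} per year over {years} years)")
```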
Moving the optics closer to the switch chip offers a shorter, lower-loss electrical channel and further power savings by eliminating the need for another retimer and reducing the need for complex equalization schemes. In this context, co-packaged optics (CPO) and linear pluggable optics (LPO) become increasingly crucial. Industry forecasts project that LPO/CPO ports will constitute over 30% of the total 800G and 1.6T ports installed between 2026 and 2028, highlighting their growing significance in the data center landscape.
“We are at the iPhone moment of AI.”
Nvidia Founder and CEO Jensen Huang
The surge in both business and consumer traffic, predominantly managed by hyperscale data center operators like Alibaba, Amazon, Facebook, Google, and Microsoft, is driving a substantial increase in network traffic. This growth necessitates advancements in data handling and transmission to accommodate high-throughput applications, particularly as computational demands from AI models, such as large language models, intensify, raising both power consumption and bandwidth requirements. To address this, optical interconnects that directly connect GPUs at the shelf level are emerging as a solution, reducing both power usage and latency. Meanwhile, the AI industry is undergoing a significant transformation: Dell'Oro Group predicts that by 2027, accelerated servers, essential for managing the billions or trillions of parameters in models like OpenAI's ChatGPT, will make up almost 50% of the server market.
Figure 2: AI Servers as a Percent of Total Market. Source: Dell'Oro Group Data Center IT
Figure 3: 5 to 10x increase in interconnects within racks. Source: Meta
In response to the growth of AI/ML applications, there is a strategic shift in data center network architectures toward a flattened structure, aiming to reduce latency for high-speed operations. This evolution involves moving from traditional multi-tier topologies to flatter configurations, as shown in Figure 3, a change particularly evident in cloud and hyperscale data centers. These data centers are increasingly large, modular, and homogeneous, with workloads distributed across many virtual machines and hosts. By simplifying the network hierarchy, flattening significantly reduces the number of hops between endpoints, thereby lowering latency and enhancing network performance.
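As a simplified illustration of the latency argument, the sketch below compares worst-case east-west switch traversals in a classic three-tier tree against a two-tier leaf-spine fabric; the per-hop latency constant is an assumed placeholder, not a measured value:

```python
# Simplified illustration of why flattening helps: worst-case east-west
# paths traverse fewer switches in a two-tier leaf-spine fabric than in
# a classic three-tier tree. PER_HOP_NS is an assumed placeholder for
# per-switch latency, chosen only to make the comparison concrete.
PER_HOP_NS = 500  # assumed switching latency per traversed switch, ns

topologies = {
    "three-tier (access-agg-core)": 5,  # access, agg, core, agg, access
    "two-tier (leaf-spine)":        3,  # leaf, spine, leaf
}

for name, switches in topologies.items():
    print(f"{name}: {switches} switches, ~{switches * PER_HOP_NS} ns")
```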
Figure 4: Optical interconnects through compute, storage, cache and switches
The move towards standardizing computational resources within racks calls for an improved method of resource aggregation, increasingly dependent on the high capacity and adaptability of optical interconnects. In a typical data center rack, where processors ranging from general-purpose CPUs to specialized GPUs and accelerators are interconnected to network interface cards through PCIe and CXL pathways, effective interconnects are crucial. Presently, direct-attach copper (DAC) cabling is commonly used for intra-rack connections, while optical modules handle wider network and external communications. Together, these interconnect components are estimated to account for approximately 25-30% of a data center's total power consumption.
Optical interconnects are increasingly crucial in data centers, especially as they address the limitations of electrical copper interconnects in high data rate environments approaching 224 Gbps, where copper's effectiveness diminishes. This leads to a need for denser interconnect networks, which in turn increases power consumption. Optical solutions, however, can extend reach and offer scalability in data center topologies. The industry is moving towards optical interconnects to reduce latency and signal integrity issues, facilitating data center expansion. Efforts are underway to lower the power consumption of pluggable optics from the current 8 pJ/bit to 6 pJ/bit with on-board optics, a significant 25% reduction. As shown in Figure 5, co-packaged optics promise further gains, targeting 3 pJ/bit.
Figure 5: Evolution of Optical Interconnects: Pluggable → On-Board → Co-Packaged
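To make these efficiency points concrete, the sketch below computes the optics power budget each pJ/bit figure implies for a hypothetical 51.2 Tbps switch; the switch bandwidth is an assumption chosen for illustration, while the 8/6/3 pJ/bit values are those quoted above:

```python
# Optics power implied by each efficiency point for a hypothetical
# 51.2 Tbps switch. The bandwidth is an assumed example; the pJ/bit
# values are the ones quoted in the text.
SWITCH_BW_BPS = 51.2e12  # assumed aggregate switch bandwidth, bits/s

efficiencies_pj_per_bit = {
    "pluggable optics":   8.0,
    "on-board optics":    6.0,
    "co-packaged optics": 3.0,
}

for name, pj in efficiencies_pj_per_bit.items():
    watts = SWITCH_BW_BPS * pj * 1e-12  # pJ/bit x bit/s -> W
    print(f"{name}: {pj} pJ/bit -> {watts:.0f} W")
```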
The market is increasingly demanding the integration of optical components as close to computational elements as possible, ideally within a single package. This trend towards integration presents significant challenges, particularly regarding the heat sensitivity of lasers in co-packaged optics. As electrical components heat up during operation, they can raise the ambient temperature, potentially surpassing the optimal operating range of around 70 degrees Celsius for lasers. This increase in temperature can adversely affect laser performance, posing a critical hurdle in the development and implementation of these integrated optical solutions.
Figure 6: Conventional retimed interface diagram (a) vs Direct Drive/Linear Interface (b)
Linear Drive, often referred to as Direct Drive, represents a significant evolution in optical interconnect technology. As illustrated in Figure 6, the anatomy of a Linear Drive system is distinctively simpler compared to traditional setups. In this configuration, the DSP typically found on pluggable optics is eliminated. Instead, the switch ASIC's PHY directly drives an optical engine on a pluggable module. This optical engine does not include retimers or DSPs but is equipped with linear amplifiers. This streamlined approach leads to a more compact and efficient design, making the system less complex yet highly functional.
One of the primary advantages of Linear Drive is a substantial reduction in power consumption. By removing the DSP from the pluggable optics, Linear Drive systems can achieve up to a 25% decrease in power usage. This efficiency is crucial in data center environments where power costs are a significant concern. Furthermore, the simplified design not only reduces complexity but also potentially lowers manufacturing costs. Maintaining the pluggability aspect of traditional systems, Linear Drive offers the ease of plug-and-play operations, making it a practical and efficient solution for modern data centers seeking to optimize their power efficiency without compromising on the flexibility and convenience of pluggable optics.
Despite its advantages, Linear Drive introduces several challenges that necessitate careful consideration. The shift to more complex SerDes to enable direct drive functionality demands advanced design and implementation skills. Managing signal integrity becomes more challenging without a dedicated DSP, putting additional pressure on the switch ASIC's PHY to maintain high performance. This situation also intensifies the need for effective heat management due to the potential for increased thermal loads. Ensuring compatibility between the switch ASIC's PHY and the optical module is another critical challenge, often requiring a coordinated strategy of co-simulation and co-design.
Traditionally, electronic chip designers used EDA tools to model a few photonic components, like lasers or modulators, alongside their electrical counterparts. These designs focused more on optimizing electronics, often simplifying the modeling of photonic components through electrical equivalents. However, this approach faces limitations at high data rates, like PCIe 6.0 and 224G SerDes. In complex photonic circuits, factors like optical reflections, crosstalk, noise, dispersion, and nonlinearities crucially influence performance. Simulating optical signals or components as electrical equivalents can risk inaccurate performance estimations, posing a threat to the commercial viability of these designs. Hence, a comprehensive E-O co-design approach, respecting the unique properties of both domains, is essential for accurate modeling and successful implementation of these advanced systems.
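The toy model below hints at why co-simulation matters: it treats the optical engine as a purely linear stage (as in linear drive), so all equalization must happen on the electrical side, and any unmodeled optical effect would directly corrupt the error-rate estimate. It is a deliberately minimal behavioral sketch with assumed tap values, noise level, and PAM4 levels, not the 草榴社区 OptoCompiler/OptSim flow:

```python
# Deliberately minimal behavioral model of an un-retimed E-O-E link.
# The optical engine is modeled as a purely linear stage (no DSP, no
# retimer), so the electrical-side FFE must absorb all channel ISI.
# Tap values, noise sigma, and PAM4 levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_symbols = 100_000

levels = np.array([-3.0, -1.0, 1.0, 3.0])       # PAM4 levels
tx = rng.choice(levels, size=n_symbols)

ffe = np.array([-0.10, 1.00, -0.25])            # assumed 3-tap Tx FFE
channel = np.array([0.10, 0.70, 0.20])          # lossy channel with ISI
rx = np.convolve(np.convolve(tx, ffe, mode="same"), channel, mode="same")

# Simple AGC: normalize to the combined main-cursor gain, then add
# lumped noise (thermal + RIN) after the linear optical stage.
rx /= np.convolve(ffe, channel).max()
rx += rng.normal(0.0, 0.15, size=n_symbols)

# Slicer: nearest-level decision, then symbol error ratio
decisions = levels[np.argmin(np.abs(rx[:, None] - levels), axis=1)]
print(f"symbol error ratio: {np.mean(decisions != tx):.2e}")
```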
草榴社区, in collaboration with OpenLight, a photonics venture formed with Juniper Networks, has demonstrated the optical eye performance of a linear electrical-optical-electrical (E-O-E) link transceiver. This demonstration, depicted in Figure 7, features 草榴社区' 112G Ethernet PHY IP, designed for long-reach (LR) channels, driving the OpenLight photonic integrated circuit (PIC). Operating at 106 Gbps, the Ethernet PHY IP compensates for over 13 dB of path loss using advanced equalization schemes, and the link achieved a TDECQ (Transmitter and Dispersion Eye Closure Quaternary) of 1.46 dB. The OpenLight PIC, adaptable for integration into both pluggable optics and co-packaged form factors, forms the core of the optical side of the transceiver. It utilizes OpenLight’s silicon photonics technology, incorporating an integrated laser and a high-speed electro-absorption modulator. The system achieved an impressive average running bit error ratio (BER) of 7.16x10^-7, showcasing exceptional end-to-end, un-retimed E-O-E link performance.
Figure 7: Linear drive demo showcased during ECOC 2023
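Some quick arithmetic puts the reported numbers in context: at 106 Gbps, a BER of 7.16x10^-7 corresponds to roughly 76,000 errors per second, so a statistically meaningful BER measurement takes only a fraction of a millisecond. The sketch below applies the standard zero-error confidence rule (n ≥ -ln(1-CL)/BER); the confidence level and target BER are illustrative choices, not part of the demonstration:

```python
# Sanity math around the reported link numbers. The line rate and BER
# come from the demonstration; the confidence level and target BER are
# illustrative choices for a standard zero-error BER confidence rule.
import math

line_rate = 106e9   # bits/s, from the demonstration
ber = 7.16e-7       # reported average running BER

print(f"expected errors/s: {line_rate * ber:,.0f}")  # ~75,900

confidence, target_ber = 0.95, 1e-6
bits_needed = -math.log(1 - confidence) / target_ber  # zero-error rule
print(f"bits for 95% confidence at BER <= {target_ber:g}: {bits_needed:.2e}")
print(f"...about {bits_needed / line_rate * 1e3:.3f} ms at line rate")
```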
Optical interconnects are becoming increasingly crucial as data rates reach 800 Gbps and beyond, offering solutions to the limitations of electrical interconnects in terms of latency, bandwidth, and power efficiency. 草榴社区 is at the forefront of linear drive optics with its silicon-proven 112G and 224G Ethernet PHY IP. To support these advancements, 草榴社区 offers tools such as OptoCompiler for unified electronic and photonic design and OptSim for photonic simulation, coupled with PrimeSim SPICE and PrimeSim HSPICE for electronic circuit simulation.