Rita Horner, Senior Technical Marketing Manager, Synopsys
The exponential growth of data traffic from smartphones, media applications, video streaming, and the broad range of connected devices has driven the construction of high-capacity hyperscale data centers that can scale quickly and respond to such dynamic workloads.
According to industry forecasts, hyperscale data centers are projected to grow to 628 centers by 2021 (up from 338 centers at the end of 2016), representing 53% of all installed data center servers. The same forecasts note that by 2021, traffic within hyperscale data centers will account for 55% of the total traffic within all data centers, and 94% of all workloads will be processed by cloud data centers versus only 6% by traditional data centers.
As hyperscale data centers transition to faster, flatter, and more scalable network architectures, such as the 2-tier leaf-spine architecture shown in Figure 1, the need for higher bandwidth with efficient connectivity increases.
Figure 1: Leaf-and-spine architecture
The leaf-spine architecture requires massive interconnects, as each leaf switch fans out to every spine switch, maximizing connectivity between servers. Hardware accelerators, artificial intelligence, and deep learning functions in data centers all consume high bandwidth, forcing high-end data centers to move quickly to next-generation interconnects operating at higher data rates. For this reason, the majority of hyperscale data centers using 100 Gb/s Ethernet links will need to transition to 200 Gb/s and 400 Gb/s Ethernet links to achieve higher throughput.
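To see why the interconnect count grows so quickly, consider a rough sketch of a 2-tier fabric: with every leaf connected to every spine, the number of uplinks is the product of the two switch counts. The sketch below assumes hypothetical switch counts (32 leaves, 8 spines) purely for illustration; only the full-mesh fan-out comes from the architecture described above.

```python
# Illustrative sketch (not from the article): counting the interconnects a
# 2-tier leaf-spine fabric needs when every leaf switch fans out to every
# spine switch. The switch counts and port speeds below are assumed examples.

def leaf_spine_links(num_leaves: int, num_spines: int) -> int:
    """Each leaf connects to every spine, so the fabric needs
    num_leaves * num_spines uplinks."""
    return num_leaves * num_spines

def fabric_bandwidth_tbps(num_leaves: int, num_spines: int, link_gbps: int) -> float:
    """Aggregate leaf-to-spine bandwidth in Tb/s for a given uplink speed."""
    return leaf_spine_links(num_leaves, num_spines) * link_gbps / 1000

if __name__ == "__main__":
    leaves, spines = 32, 8          # assumed pod size, for illustration only
    for speed in (100, 200, 400):   # Gb/s uplink options discussed in the article
        print(f"{leaves}x{spines} fabric at {speed} Gb/s: "
              f"{leaf_spine_links(leaves, spines)} links, "
              f"{fabric_bandwidth_tbps(leaves, spines, speed):.1f} Tb/s aggregate")
```

At 32x8 this is 256 uplinks, so moving each link from 100 Gb/s to 400 Gb/s raises the aggregate leaf-to-spine bandwidth from 25.6 Tb/s to 102.4 Tb/s without adding a single cable.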
The move toward 400 Gb/s Ethernet promises both power and area savings, as 400 Gb/s optical modules are expected to consume only 2.5x the power of 100 Gb/s modules while maintaining the same small form factors, increasing interconnect density.
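The power claim can be sanity-checked with simple arithmetic: 2.5x the power for 4x the bandwidth means the energy spent per bit drops by roughly 37.5%. The sketch below assumes a notional 3.5 W for a 100 Gb/s module; only the 2.5x ratio comes from the text.

```python
# Back-of-the-envelope check of the power claim above. The 3.5 W figure for a
# 100 Gb/s module is an assumed example; only the 2.5x ratio comes from the text.

p100_w = 3.5                      # assumed power of a 100 Gb/s optical module (W)
p400_w = 2.5 * p100_w             # 400 Gb/s module at 2.5x the power

pj_per_bit_100 = p100_w / 100e9 * 1e12   # picojoules per bit at 100 Gb/s
pj_per_bit_400 = p400_w / 400e9 * 1e12   # picojoules per bit at 400 Gb/s

print(f"100G: {pj_per_bit_100:.1f} pJ/bit, 400G: {pj_per_bit_400:.1f} pJ/bit")
print(f"Energy per bit reduced by {1 - pj_per_bit_400 / pj_per_bit_100:.1%}")
```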
With the completion of the 200 Gb/s and 400 Gb/s IEEE 802.3bs specification (now part of the IEEE 802.3-2018 release) and of the 50 Gb/s, 100 Gb/s, and 200 Gb/s IEEE 802.3cd specifications, hyperscale data centers will start moving to 200/400 Gb/s interconnects by the end of 2018. Given recently announced vendor demonstrations and the expected availability of components and optical modules, this transition is inevitable.
As shown in Figure 2, the majority of 100/200/400 Gb/s Ethernet links are based on multi-lane 25 Gb/s or 50 Gb/s electrical interfaces. Next-generation 100/200/400 Gb/s Ethernet rates will be based on the new 100 Gb/s serial specification that the IEEE 802.3ck working group is defining, enabling rates beyond 400 Gb/s Ethernet.
Figure 2: Evolution of Ethernet speeds
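The rates in Figure 2 follow directly from multiplying the lane count by the per-lane electrical speed. The illustrative mapping below reflects the multi-lane 25 Gb/s and 50 Gb/s generations mentioned above plus the single-lane 100 Gb/s electrical being defined by IEEE 802.3ck; it is a simplification, not an exhaustive list of defined port types.

```python
# Illustrative mapping of Ethernet rates to (lanes x per-lane rate), reflecting
# the 25 Gb/s and 50 Gb/s electrical lane generations described above and the
# 100 Gb/s serial lane being defined by IEEE 802.3ck.

LANE_CONFIGS = {
    "50GbE":  [(1, 50)],                       # single-lane 50G
    "100GbE": [(4, 25), (2, 50), (1, 100)],    # 4x25G today, 2x50G, then serial 100G
    "200GbE": [(4, 50), (2, 100)],
    "400GbE": [(8, 50), (4, 100)],
}

for rate, configs in LANE_CONFIGS.items():
    options = ", ".join(f"{lanes}x{gbps} Gb/s" for lanes, gbps in configs)
    print(f"{rate}: {options}")
```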
As 100 Gb/s Ethernet matures in hyperscale data centers, the cost-sensitive enterprise data center ecosystem will start taking advantage of current-generation technologies and begin its own transition from today's 10 Gb/s and 40 Gb/s links to 100 Gb/s and higher rates.
Network engineers can choose interconnects based on application space, length requirements, density, form factor, power consumption, and product availability. This article describes the different variants of 50 Gb/s, 100 Gb/s, 200 Gb/s, and 400 Gb/s interconnects that are based on single- and multi-lane 50 Gb/s electrical interfaces, including chip-to-chip and chip-to-module interfaces, twinaxial copper cable interconnects, and backplane interfaces.
Chip-to-chip (C2C) and chip-to-module (C2M) are the simplest forms of interconnect, consisting of a short printed circuit board (PCB) trace with one or no connector. The chip-to-chip electrical interface connects two ICs on the same PCB, while the chip-to-module interface connects a port ASIC to a module device with a signal-conditioning IC, such as a retimer, placed on a separate PCB.
The IEEE 802.3 standard has defined attachment unit interfaces (AUIs) based on 50 Gb/s per-lane electrical signaling for different types of optical modules. Depending on the interconnect length and throughput requirements, a network implementer may select different chip-to-module interfaces for connection to an optical module. For example:
50GAUI-1 is a single-lane C2M interface to a 50 Gb/s optical module
100GAUI-2 is a two-lane C2M interface to a 100 Gb/s optical module
200GAUI-4 is a four-lane C2M interface to a 200 Gb/s optical module
400GAUI-8 is an eight-lane C2M interface to a 400 Gb/s optical module
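As a simple illustration of the selection described above, the hypothetical helper below maps a target Ethernet rate to its 50 Gb/s-per-lane AUI and lane count. The interface names match the list, but the helper itself is not part of any standard or product API.

```python
# Hypothetical selection helper (not an IEEE or vendor API): picks the
# 50 Gb/s-per-lane AUI that matches a requested Ethernet rate.

AUI_BY_RATE_GBPS = {
    50: ("50GAUI-1", 1),
    100: ("100GAUI-2", 2),
    200: ("200GAUI-4", 4),
    400: ("400GAUI-8", 8),
}

def select_aui(rate_gbps: int) -> str:
    """Return the matching AUI name and lane configuration for a port rate."""
    if rate_gbps not in AUI_BY_RATE_GBPS:
        raise ValueError(f"No 50G-per-lane AUI defined for {rate_gbps} Gb/s")
    name, lanes = AUI_BY_RATE_GBPS[rate_gbps]
    return f"{name}: {lanes} lane(s) x {rate_gbps // lanes} Gb/s to the optical module"

print(select_aui(400))   # 400GAUI-8: 8 lane(s) x 50 Gb/s to the optical module
```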
Many optical module types are used in Ethernet links. Ideally, higher rate modules use the same form factors as previous generations for backward compatibility and ease of adoption. In the past, however, larger form-factor modules were initially introduced into the market; as technologies matured and form factors were optimized, they were replaced with smaller, denser, and lower cost alternatives. This was observed with the adoption of 25 Gb/s as well as 10 Gb/s Ethernet links, where the initial bulky modules were replaced by much smaller pluggable (SFP+) modules.
Module form factors are defined by different multi-source agreements (MSAs). The higher density pluggable optical transceivers that support 400 Gb/s Ethernet links are the QSFP-DD and the OSFP, shown in Figure 3.
Figure 3: 400 Gb/s transceiver form factors - the QSFP-DD (left) and OSFP (right)
In addition to the C2C and C2M port-side interfaces for optical connectivity, the IEEE 802.3 standard has also defined the electrical specifications for 50 Gb/s, 100 Gb/s, and 200 Gb/s transmission over single- or multi-lane twinaxial copper cabling (50GBASE-CR, 100GBASE-CR2, and 200GBASE-CR4).
These shorter copper-cable interconnects are the most cost-effective cabling solution for connectivity within a rack. Such cables are used for server connection to uplink switches that may be mounted either at the top of the rack or in the middle of the rack to minimize interconnect lengths.
Larger, more complex switches and servers are chassis-based, where interconnects are needed within the box. The IEEE 802.3cd specification defines the electrical requirements for 50 Gb/s, 100 Gb/s, and 200 Gb/s transmission across backplanes (50GBASE-KR, 100GBASE-KR2, and 200GBASE-KR4).
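Assuming the IEEE 802.3cd naming noted above, the minimal lookup below relates transmission medium and rate to the corresponding 50 Gb/s-per-lane copper-cable (-CR) and backplane (-KR) PHY names. It is a sketch for orientation, not a complete list of 802.3cd PHY types.

```python
# Minimal lookup of the 50 Gb/s-per-lane copper-cable (-CR) and backplane (-KR)
# PHYs named above; the usage notes in the prints summarize the article's
# description of where each medium is typically used.

PHYS = {
    ("copper cable", 50): "50GBASE-CR",
    ("copper cable", 100): "100GBASE-CR2",
    ("copper cable", 200): "200GBASE-CR4",
    ("backplane", 50): "50GBASE-KR",
    ("backplane", 100): "100GBASE-KR2",
    ("backplane", 200): "200GBASE-KR4",
}

def phy_for(medium: str, rate_gbps: int) -> str:
    """Return the 50G-per-lane PHY name for a medium/rate pair, if defined."""
    return PHYS.get((medium, rate_gbps), "no 50G-per-lane PHY defined")

print(phy_for("copper cable", 200))  # 200GBASE-CR4, e.g. in-rack server-to-switch links
print(phy_for("backplane", 100))     # 100GBASE-KR2, e.g. within a chassis
```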
The rising data rates required to process high-performance workloads such as deep learning and video streaming in hyperscale data centers are driving the need for faster, flatter, and more scalable network architectures operating at 400 Gb/s and beyond. The increased bandwidth demand is also driving changes in both Ethernet interconnects and PHY technologies. It is important for system and SoC designers to understand the characteristics of different types of interconnects and the PHY technologies for their target applications.
Designers need reliable interface IP that supports the different 50 Gb/s Ethernet electrical interfaces in a single PHY, is independent of the channel type, and is verified and licensed from a single IP vendor. This combination provides flexibility, optimal cost, and ROI, as well as short time-to-market windows. Synopsys' silicon-proven DesignWare 56G Ethernet PHY IP has the necessary features and capabilities to drive and aggregate all of the different 50GE/100GE/200GE/400GE interfaces necessary for ASIC applications in hyperscale data centers.