
Cloud Servers

The growth of cloud data is driving an increase in compute density, both in centrally located hyperscale data centers and in remote facilities at the network edge. Higher compute density in turn demands more energy-efficient CPUs that can deliver greater compute capability within the power and thermal budget of existing data center facilities, and that demand has produced a new generation of server CPUs optimized for performance per watt.

This same increase in data volume is also driving demand for faster server interfaces to move data within and between servers. Data movement inside the server can be a major bottleneck and source of latency, so minimizing data movement and providing high-bandwidth, low-latency interfaces for the movement that remains are key to maximizing performance and minimizing both latency and power consumption for cloud and HPC applications. To improve performance, all of the major internal server interfaces are being upgraded:

  • DDR5 interfaces are moving to 6400 Mbps
  • PCIe interfaces double their bandwidth with each generation, moving from 16GT/s in PCIe 4.0 to 64GT/s in PCIe 6.0 (a back-of-the-envelope bandwidth sketch follows this list)
  • Compute Express Link (CXL) provides a cache-coherent interface that runs over the PCIe electrical interface and reduces the amount of data movement required in a system by allowing multiple processors and accelerators to share data and memory efficiently
  • New high-speed SerDes technology at 56Gbps and 112Gbps, using NRZ and PAM4 encoding, along with supporting protocols, enables faster interfaces between devices including die, chips, accelerators, and backplanes
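For a rough sense of the scale of these upgrades, the sketch below works through the raw per-direction bandwidth of an x16 PCIe link at each generation and the peak bandwidth of a single 64-bit DDR5-6400 channel. It is an illustrative calculation only, not product documentation: the helper names are hypothetical, and encoding and protocol overhead (128b/130b, FLIT framing, refresh) are ignored, so delivered throughput is somewhat lower.

```python
# Back-of-the-envelope bandwidth sketch for the interface upgrades listed above.
# Raw figures only; encoding and protocol overhead are ignored.

def pcie_raw_gbps(gt_per_s: float, lanes: int = 16) -> float:
    """Raw unidirectional PCIe bandwidth in GB/s (1 GT/s ~ 1 Gb/s per lane at these generations)."""
    return gt_per_s * lanes / 8

def ddr_channel_gbps(mt_per_s: float, bus_bits: int = 64) -> float:
    """Peak DDR channel bandwidth in GB/s for a given data rate (MT/s) and bus width (bits)."""
    return mt_per_s * bus_bits / 8 / 1000

if __name__ == "__main__":
    for gen, rate in [("PCIe 4.0", 16), ("PCIe 5.0", 32), ("PCIe 6.0", 64)]:
        print(f"{gen} x16: ~{pcie_raw_gbps(rate):.0f} GB/s per direction")
    print(f"DDR5-6400, one 64-bit channel: ~{ddr_channel_gbps(6400):.1f} GB/s peak")
```

Under these assumptions an x16 link grows from roughly 32 GB/s at PCIe 4.0 to roughly 128 GB/s at PCIe 6.0 per direction, while a single DDR5-6400 channel peaks near 51 GB/s, which is why both interfaces are being upgraded together.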

"Compute Express Link is a key enabler for next-generation heterogeneous computing architectures, where CPUs and accelerators work together to deliver the most advanced solutions. With support from leading IP providers like 草榴社区, we're well on the way to a  that will benefit the whole industry."

Dr. Debendra Das Sharma | Intel Fellow & Director of I/O Technology & Standards, Intel

DesignWare IP for Servers

Highlights:

  • DDR memory interface controllers and PHYs support system performance up to 6400 Mbps and allow main memory to be shared with compute offload engines as well as network and storage I/O resources
  • USR/XSR IP solutions for reliable die-to-die connectivity leverage high-speed SerDes PHY technology up to 112G per lane and wide-parallel bus technology enabling 4Gbps per pin
  • 56G and 112G Ethernet PHYs and Ethernet controllers for up to 800G hyperscale data center SoCs (see the lane and pin sketch after this list)
  • High-performance, low-latency PCI Express controllers and PHYs supporting data rates up to 64GT/s to enable real-time data connectivity
  • CXL IP is built on silicon-proven DesignWare PCI Express 5.0 IP for reduced integration risk and supports persistent memory for speed approaching that of DRAM with SSD-like capacity and cost 
  • Highly integrated, standards-based security IP solutions enable the most efficient silicon design and highest levels of security
  • Low latency embedded memories with standard and ultra-low leakage libraries provide a power- and performance-efficient foundation for SoCs
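To put the lane and pin counts behind these highlights in perspective, the sketch below estimates how many 112G-class SerDes lanes a nominal 800G Ethernet port requires and how many data pins a wide-parallel die-to-die bus running at 4 Gbps per pin would need for a given aggregate bandwidth. The figures and helper functions are illustrative assumptions rather than product specifications; FEC, framing, forwarded clocks, and sideband pins are ignored.

```python
import math

# Illustrative lane/pin arithmetic for the highlights above.
# Overheads (FEC, framing, forwarded clocks, sideband pins) are ignored.

def ethernet_lanes(port_gbps: float, lane_payload_gbps: float = 100.0) -> int:
    """SerDes lanes for an Ethernet port, assuming ~100G of payload per 112G-class lane."""
    return math.ceil(port_gbps / lane_payload_gbps)

def d2d_data_pins(aggregate_gbps: float, gbps_per_pin: float = 4.0) -> int:
    """Data pins needed on a wide-parallel die-to-die bus at a given per-pin rate."""
    return math.ceil(aggregate_gbps / gbps_per_pin)

if __name__ == "__main__":
    print(f"800G Ethernet port: {ethernet_lanes(800)} lanes of 112G-class SerDes")
    print(f"1 Tbps die-to-die link at 4 Gbps/pin: ~{d2d_data_pins(1000)} data pins")
```

Under these assumptions, an 800G port maps onto eight 112G-class lanes, while a 1 Tbps wide-parallel die-to-die link needs on the order of 250 data pins, which illustrates the trade-off between narrow high-speed SerDes links and wide low-rate parallel buses.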

 

