
草榴社区 Hosts Optica’s Photonic Enabled Cloud Computing Summit

Mitch Heins, Jigesh Patel

Nov 05, 2024 / 4 min read

On October 21st and 22nd, 草榴社区 hosted Optica’s Photonic Enabled Cloud Computing Summit, bringing together over 200 photonic luminaries from around the world. These experts shared their groundbreaking work aimed at enabling faster and more power-efficient cloud computing environments. The spotlight of this year’s event was on the infrastructure needed to scale up and out to meet the increasing compute demands placed on data centers by AI-based applications such as ChatGPT.


Keynote Speakers

  • Jose Pozo, Optica
  • Jeff Maki, Juniper Networks
  • Karun Sharma, 草榴社区
  • Ashkan Seyedi, NVIDIA
  • Nicolas Viljoen, Meta
  • Marco Fiorentino, HPE
  • Nikhil Angra, AFL
  • Matt Traverso, Marvell
  • Andy Bechtolsheim, Arista
  • Ram Huggahalli, Microsoft
  • Thomas Liljeberg, Intel
  • Near Margalit, Broadcom
  • Hamid Arabzadeh, Ranovus
  • Rangchen Yu, Innolight
  • Michael Fortsch, Q.ANT
  • Manmeet Walia, 草榴社区
  • Vikas Gupta, Global Foundries
  • Ed Preisler, Tower Semi
  • Julie Adams, IBM
  • Azmina Somani, Jabil
  • Suresh Venkatesan, POET Technologies
  • Sanjay Gangadhara, Ansys
  • Peter Winzer, Nubis
  • David Lazovsky, Celestial AI
  • Daniel Perez-Lopez, iPRONICS
  • Amit Khanna, Teradyne
  • Rebecca Schaevitz, MixxTech
  • Pouya Dianat, QCI
  • Julie Eng, Coherent
  • Matt Sysak, Lumentum
  • Andrew McKee, Sivers
  • Atikem Haile, Lightwave Logic
  • Mian Zhang, Hyperlight
  • Christoph Pfistner, Avicena
  • Aleksandra Boskovic, Corning

The event featured dynamic presentations and Q&A sessions in which attendees actively challenged each other's opinions. Ashkan Seyedi of NVIDIA kicked things off by stating that optical links were still roughly three times more expensive than copper solutions for NVIDIA’s NVLink architecture. While acknowledging the clear advantages of optical links, he emphasized that the industry needed further maturation and cost reduction before NVIDIA commits to them for its AI GPU solutions. See Figure 1 below, Ashkan’s AI Hype Cycle.

Figure 1: Ashkan’s AI Hype Cycle presented by Ashkan Seyedi of NVIDIA. Courtesy of NVIDIA.

This was followed by numerous presentations aiming to counter Ashkan's perspective, including one from Hewlett Packard Enterprise, his previous employer. Marco Fiorentino of HPE asked why the industry should not go "Wide and Fast" instead of the traditional "Wide and Slow." Thomas Liljeberg of Intel argued that the scaling requirements of AI infrastructure necessitate a new solution for optical interconnects, presenting a highly integrated 4 Tbps PIC for co-packaged optical compute interconnect (OCI). Broadcom suggested deploying CPO on Ethernet switches used in scale-out applications as an initial step to build experience and confidence before migrating to CPO for scale-up architectures. Karun Sharma from 草榴社区 highlighted the need for a complete end-to-end solution for multi-die architectures, including photonics, and summarized 草榴社区’ role in enabling the integrated photonic manufacturing ecosystem.

Figure 2: Karun Sharma of 草榴社区
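
The trade-off behind Marco Fiorentino's "Wide and Fast" question is, at its core, simple arithmetic: a fixed aggregate bandwidth can be delivered by many slower lanes or by fewer faster ones. The short Python sketch below makes that concrete for a 4 Tbps interconnect such as the OCI device Intel presented; the per-lane symbol rates are hypothetical examples chosen for illustration, not figures quoted at the summit.

import math

AGGREGATE_GBPS = 4000  # a 4 Tbps optical compute interconnect (OCI)

# "Wide and Slow" vs. "Wide and Fast": the same aggregate can be reached with
# many slower lanes or with fewer faster ones. PAM4 carries 2 bits per symbol.
for symbol_rate_gbaud in (28, 56, 112):             # hypothetical per-lane rates
    lane_rate_gbps = symbol_rate_gbaud * 2          # PAM4: 2 bits per symbol
    lanes = math.ceil(AGGREGATE_GBPS / lane_rate_gbps)
    print(f"{symbol_rate_gbaud:>4} GBd PAM4 -> {lane_rate_gbps:>4} Gbps/lane, "
          f"{lanes:>3} lanes for {AGGREGATE_GBPS / 1000:.0f} Tbps")

Fewer, faster lanes cut the fiber and laser count but push more of the burden onto the modulators and SerDes, which is the crux of the "Wide and Fast" debate.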

Manmeet Walia of 草榴社区 provided an overview of linear optical interface variants, discussing the need for un-retimed (linear), half-retimed, and fully retimed implementations with UCIe, PCIe, and Ethernet. He invited the audience to a hardware demonstration of the 草榴社区 PCIe 7.0 end-to-end linear optical drive and 草榴社区 UCIe PHY IP. The demo, complemented by E-O co-simulation in the 草榴社区 OptoCompiler platform, attracted keen interest and discussion.

Figure 3: Hardware and simulation demonstration of 草榴社区 PCIe 7.0 showcasing excellent performance with off-the-shelf LPO modules 
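
For readers wondering what an electrical-optical co-simulation has to verify, the sketch below walks through a minimal link-budget check for a linear (un-retimed) optical lane in plain Python. It is a conceptual illustration only: the loss and sensitivity values are placeholder assumptions, and the script is not the OptoCompiler flow or any 草榴社区 API.

# Minimal, illustrative link-budget walk for a linear (un-retimed) optical lane.
# Every number below is a placeholder assumption, not measured or quoted data.

tx_power_dbm = 0.0            # assumed optical power launched into the link
losses_db = {
    "modulator insertion": 4.0,
    "fiber + connectors":  1.5,
    "coupling to PD":      1.0,
}
rx_sensitivity_dbm = -9.0     # assumed receiver sensitivity at the target BER

rx_power_dbm = tx_power_dbm - sum(losses_db.values())
margin_db = rx_power_dbm - rx_sensitivity_dbm

for name, loss in losses_db.items():
    print(f"{name:<22} -{loss:.1f} dB")
print(f"received power         {rx_power_dbm:+.1f} dBm")
print(f"link margin            {margin_db:+.1f} dB "
      f"({'positive, link closes' if margin_db > 0 else 'insufficient'})")

A full co-simulation goes far beyond this, modeling the electrical drivers, the E-O transfer functions, and the resulting eye diagrams together, but a budget like this captures the kind of end-to-end question such a demo is meant to answer.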

Key Themes and Takeaways

  • AI Training Strains Data Centers: AI training is significantly increasing power consumption in data centers, with interconnects becoming a large component.
  • Optical Links for Reach: Optical links enable the reach that copper cannot achieve without great expense, especially as GPU Pods outgrow single data centers.
  • Power Consumption: Optics power consumption in AI data centers could reach as high as 30% of the total power.
  • Linear Pluggable Optics (LPO): LPO solutions are gaining traction for short reaches, promising to remove power-hungry DSPs.
  • Co-Packaged and Near-Packaged Optics: These solutions promise super-high bandwidth density and power efficiencies despite concerns about cost, reliability, and serviceability.
  • Thin-Film Lithium Niobate (TFLN): TFLN was discussed for its low-Vπ modulators and its capabilities in the non-linear domain, which are useful for quantum computing.
  • Optical Circuit Switching (OCS): OCS is seen as a high-bandwidth, low-energy alternative to electrical switches, though concerns remain about the software stack and ecosystem.

Other general themes included the journey to sub-picojoule-per-bit energy consumption, the permanence of the PAM4 modulation format, the suitability of coherent optics for scale-out vs. scale-up connectivity, the need for evolving standards, and the importance of reliable optics for AI training applications.
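
The sub-picojoule-per-bit goal is easy to put into perspective with a unit conversion: power equals energy per bit multiplied by aggregate throughput. The short sketch below applies that identity to a few hypothetical per-bit energies at a switch-class 51.2 Tbps aggregate; the specific values are illustrative assumptions, not figures presented at the summit.

# Energy-per-bit to power: P [W] = E [J/bit] * throughput [bit/s].
# The aggregate throughput and pJ/bit values are illustrative assumptions.

AGGREGATE_TBPS = 51.2  # e.g., a switch-class aggregate bandwidth

for pj_per_bit in (15.0, 5.0, 1.0, 0.5):
    watts = pj_per_bit * 1e-12 * AGGREGATE_TBPS * 1e12
    print(f"{pj_per_bit:>5.1f} pJ/bit x {AGGREGATE_TBPS} Tbps "
          f"= {watts:6.1f} W of interconnect power")

Multiplied across the thousands of links in a GPU pod, this is how interconnect energy becomes a first-order share of the data-center power budget.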

Conclusion

In summary, the event was highly educational and information-rich, earning much praise from the engaged audience. A big thank you to Jose Pozo and the Optica team for making this event a success!


About Optica: Optica Industry Summits are two-day programs featuring panel presentations, in-person networking opportunities, and external company visits and tours. Optica, Advancing Optics and Photonics Worldwide, is dedicated to promoting the generation, application, archiving, and dissemination of knowledge in the field. Learn more on the Optica website.

Figure 4: Jose Pozo from Optica

