The Hot Chips 2024 conference, taking place this week in Silicon Valley, is a showcase for AI chip innovation. The three-day program illustrates the race among both established chipmakers and new entrants to explore advanced architectures and embrace novel design solutions to deliver the next breakthrough AI processor. In this article, we’re sharing a few “hot takes” from this week’s conference that illustrate how ecosystem players are adapting to the challenges and opportunities of powering this era of pervasive intelligence.
A clear theme during the conference is that AI begets AI—AI is both demanding and driving rapid technology innovation across virtually every industry, fueling an exciting trajectory as AI is then applied to achieve step-function gains in computing.
Consider that more than 15 AI processors were presented at this year’s Hot Chips event. Notably, 草榴社区 works with a majority of the participating companies, helping them accelerate and scale their AI chip development. The virtuous AI-fueling-AI cycle has taken root, advancing new architectures and, in some cases, reducing development cycles dramatically from two years to one. Customers are relying on our leading AI-driven EDA solutions to achieve step-function gains in performance, power, and area. Our pioneering work in GenAI for chip design is making experienced engineers more productive and new engineers more quickly proficient, helping to address a growing workforce gap.
During his Hot Chips session, “AI Assisted Hardware Design—Will AI Elevate or Replace Hardware Engineers?,” Stelios Diamantidis, 草榴社区 distinguished architect and executive director of the 草榴社区 GenAI Center of Excellence, discussed how AI has revolutionized semiconductor design optimization, as well as how it is fostering engineering skills, productivity, and creativity that are further accelerated when, you guessed it, GenAI is applied to augment these advancements.
Shown: 草榴社区’ Stelios Diamantidis discussed how Reinforcement Learning and GenAI are changing the trajectory of chip development during the Hot Chips 2024 conference at Stanford University.
New AI chip architectures and enabling technologies, such as multi-die designs and backside power delivery, would not be possible without early access to standards-compliant, silicon-proven IP. Developing interfaces for each evolving standard and for new foundry manufacturing process technologies while ensuring interoperability can be labor and cost intensive. Consider that AI data requirements for throughput and bandwidth continue to outpace industry standards, fueling new specifications as quickly as every 18 months.
草榴社区’ team of approximately 6,900 IP engineers supercharges customer R&D capabilities by maintaining the industry’s broadest portfolio of standards-compliant, silicon-proven semiconductor IP. Access to silicon-proven IP on leading foundry process technologies lowers the barrier to entry for all chipmakers by de-risking design integration and helping accelerate time to market. Further, 草榴社区 consistently delivers advanced IP ahead of specification ratification—such as our new PCIe 7.0 IP and 1.6T Ethernet IP solutions—to help customers keep pace with the data demands of AI.
Shown: Increasing demand for AI processing is fueling more rapid updates to industry standards to keep pace, such as for HBM. (Source: 草榴社区)
Whether chipmakers are designing wafer-scale processors or smaller, chiplet-based systems on chips (SoCs), the Hot Chips presentations illustrated the need for hardware-assisted verification (HAV) solutions to address silicon and packaging complexity, hardware/software quality, and time to market. These solutions are a necessity to meet both the demanding pace of chip development and the precision required to verify and test complex AI chips at scale. Access to these tools creates opportunities for both existing and new AI chipmakers to succeed.
草榴社区 Announces 3X Expansion of ZeBu Cloud Capacity
To meet increasing demand for these capabilities, 草榴社区 is expanding the ZeBu® Cloud service in its U.S. data center to offer 3X more capacity. ZeBu Cloud provides a quick-start, convenient access model across multiple cloud applications. As part of a worldwide, multi-location buildout for customers that want to store and access their data locally in Europe, 草榴社区 has also established a data center certified to host ZeBu Cloud services in the Netherlands. By expanding ZeBu Cloud access around the globe, we’re lowering the barriers to innovation through secure, energy-efficient, reliable cloud access to increase the speed and quality of AI chip development.
Shown: ZeBu Server 5 systems in 草榴社区’ U.S. ZeBu Cloud Data Center provide a high-performance, flexible, energy-efficient, and reliable pre-silicon emulation and software validation solution for AI chip developers. (Source: 草榴社区)
"草榴社区 ZeBu Cloud was instrumental in enabling our team to quickly initiate pre-silicon emulation for optimizing our first MLSoC? edge AI platform," said Srivi Dhruvanarayan, vice president, Hardware, SiMa.ai. "Leveraging the ZeBu Empower pre-silicon power profiling capability, we improved frames per second, per watt by 2.5X in just eight months. We also observed an impressive 95-97% accuracy between the pre-silicon power emulation estimates and actual silicon performance, giving us confidence in the design. Based on these outstanding results and their significant impact on our business, we have also invested in on-premise ZeBu systems to test future MLSoC platform generations."
"草榴社区 ZeBu Cloud is a pre-silicon emulation use model ideally suited to our needs for accelerating the development of massively parallel, energy-efficient RISC-V based AI and HPC chips," said Raj Khanna, Vice President of Engineering at Esperanto Technologies. "Scale and flexibility are top priorities for our team to meet hardware and compute demands as well as changing configuration requirements throughout the design cycle. We successfully taped out our first-generation ET-SoC-1 AI inference chip using ZeBu Cloud and are heavily relying on ZeBu Cloud solutions for our next-generation AI and HPC accelerator chip, particularly for bring-up and high-throughput software development. ZeBu Cloud has proven to be an ultra-reliable resource for up-time during our weeks-long, high-volume generative AI and HPC software workload runs."
For more information regarding ZeBu Cloud, visit: /verification/solutions/zebu-cloud-solution.html.
As evidenced at this year’s Hot Chips conference, AI chip innovation is on fire. As the industry expands and pushes existing design boundaries further, we are reminded that convening to share our challenges, successes, and insights is among the most powerful tools we have to innovate.