
Designing Energy-Efficient AI Accelerators for Data Centers and the Intelligent Edge

William Ruby

Jul 26, 2023 / 4 min read

Artificial intelligence (AI) accelerators are deployed in data centers and at the edge to overcome the limitations of conventional processors by rapidly processing petabytes of information. Even as Moore’s law slows, AI accelerators continue to efficiently enable key applications that many of us increasingly rely on, from ChatGPT and advanced driver assistance systems (ADAS) to smart edge devices such as cameras and sensors.

Although AI accelerators are typically 100x to 1,000x more efficient than general-purpose systems, the computational resources needed to generate best-in-class AI models continue to grow exponentially. Moreover, training a single deep-learning model such as ChatGPT’s GPT-3 creates approximately 500 metric tons of carbon emissions, the equivalent of over a million miles driven by an average gasoline-powered vehicle! To help reduce global carbon emissions, the U.S. Department of Energy (DoE) is targeting a 1,000x improvement in semiconductor energy efficiency.
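
As a quick sanity check, the vehicle-miles comparison above can be reproduced in a few lines of Python. Both constants are assumptions for illustration (the roughly 500-metric-ton training figure and a commonly cited EPA-style estimate of about 404 g of CO2 per mile for an average gasoline passenger vehicle):

```python
# Back-of-the-envelope check of the training-emissions comparison above.
TRAINING_EMISSIONS_TONS = 500      # metric tons of CO2e (assumed figure)
GRAMS_CO2_PER_MILE = 404           # average gasoline passenger vehicle (assumed)

equivalent_miles = TRAINING_EMISSIONS_TONS * 1_000_000 / GRAMS_CO2_PER_MILE
print(f"~{equivalent_miles:,.0f} miles")   # roughly 1.2 million miles
```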

Achieving optimal performance-per-watt—whether for AI training in the data center or inference at the edge—is understandably a top priority for the semiconductor industry. In addition to minimizing environmental impact, reducing energy consumption lowers operating costs, maximizes performance within limited power budgets, and helps mitigate thermal challenges. Read on to learn how chip designers—including edge AI chip developer SiMa.ai—are leveraging end-to-end power analysis solutions to build a new generation of more energy-efficient AI accelerators.


Optimizing Power for Billion-Plus Gate Designs

An end-to-end approach to energy efficiency for AI accelerators must start at the architectural and micro-architectural levels during the earliest stages of the design flow and conclude at signoff. That’s why AI chip designers rely on architectural exploration platforms to map and evaluate power, performance, and area (PPA) tradeoffs for specific training or inference applications while proactively identifying critical vectors for downstream analysis.
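
To make that exploration step concrete, here is a minimal sketch of how candidate micro-architectures might be screened for PPA tradeoffs by keeping only Pareto-optimal points. The candidate names and all numbers are invented for illustration and do not represent any real tool's output:

```python
# Toy PPA screening: keep candidates that no other candidate beats
# (or matches) on all three of power, latency, and area at once.
candidates = {
    # name: (power_w, latency_ms, area_mm2) -- all values hypothetical
    "8x8_tiles":   (12.0, 4.0, 60.0),
    "16x16_tiles": (28.0, 1.5, 140.0),
    "8x8_lowvolt": (9.0, 5.5, 60.0),
    "16x8_tiles":  (30.0, 1.6, 150.0),  # dominated by 16x16_tiles
}

def dominates(a, b):
    """True if point a is at least as good as b everywhere and not equal."""
    return all(x <= y for x, y in zip(a, b)) and a != b

pareto = [name for name, metrics in candidates.items()
          if not any(dominates(other, metrics)
                     for n, other in candidates.items() if n != name)]
print(pareto)  # ['8x8_tiles', '16x16_tiles', '8x8_lowvolt']
```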


As AI hardware typically consists of large arrays with thousands of tiles (processing elements), billion-plus-gate designs require multi-domain hardware and software power verification to minimize energy consumption and leakage. However, analyzing crucial power blocks and time windows requires advanced emulation systems to run billions of cycles and rapidly deliver multiple—and accurate—iterations. Only after completing this step can register transfer level (RTL) power analysis and physical implementation tools effectively optimize dynamic (gate switching) and static (leakage) power dissipation.
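
Both dissipation components follow the standard first-order CMOS power model: dynamic power scales with switching activity, capacitance, the square of the supply voltage, and clock frequency, while static power is the product of supply voltage and leakage current. A minimal sketch with purely illustrative numbers:

```python
# First-order CMOS power model: P_dyn = a * C * V^2 * f, P_stat = V * I_leak.
def dynamic_power_w(activity, cap_farads, vdd_volts, freq_hz):
    """Average switching (dynamic) power."""
    return activity * cap_farads * vdd_volts ** 2 * freq_hz

def static_power_w(vdd_volts, leakage_amps):
    """Leakage (static) power, independent of switching."""
    return vdd_volts * leakage_amps

# Hypothetical accelerator: 1B gates, 5 fF switched cap per gate, 0.7 V,
# 1 GHz clock, 15% average toggle activity, 50 nA leakage per gate.
gates = 1e9
p_dyn = dynamic_power_w(0.15, gates * 5e-15, 0.7, 1e9)
p_stat = static_power_w(0.7, gates * 50e-9)
print(f"dynamic ~{p_dyn:.0f} W, static ~{p_stat:.0f} W")  # ~368 W, ~35 W
```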

To consistently deliver accurate results, RTL power analysis tools for AI chip design should include the following capabilities:

  • Timing-driven fast synthesis: Internal power calculation errors are often caused by fanout-based fast synthesis tools that fail to properly size cells according to timing constraints. Like downstream place-and-route tools, the fast synthesis embedded in RTL power analysis tools must be timing driven.
  • Physically aware fast synthesis: RTL power analysis tools should be “physically aware” and capable of obtaining precise net capacitance values by executing first-pass placement of the cells in the design, as well as global routing. Unlike a fanout-based approach, physically aware capacitance estimation results in a unique and accurate value for each net (a toy contrast follows this list).
  • Signoff-quality power computation engine: Traditional RTL power analysis tools using word-level logic inferencing for fast synthesis can only apply heuristic—and therefore inaccurate—methods for glitch power computation. To accurately calculate glitch power (which can consume up to 40% of a chip’s total power) and reduce it across highly replicated tiles, RTL power analysis tools must have a signoff-quality power analysis engine, a netlist-level design representation, and an integrated timing engine.
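
As a toy contrast between the two capacitance-estimation styles above, consider two nets with identical fanout but very different placements. A fanout-based estimator assigns both the same capacitance, while a placement-aware estimator derives a unique value per net. All constants here are invented for illustration:

```python
PIN_CAP_F = 1.5e-15        # assumed average input-pin capacitance
CAP_PER_UM_F = 0.2e-15     # assumed wire capacitance per micron

def fanout_based_cap(fanout):
    # Same canned wirelength guess for every net with this fanout.
    guessed_length_um = 8.0 * fanout
    return fanout * PIN_CAP_F + guessed_length_um * CAP_PER_UM_F

def placement_aware_cap(fanout, routed_length_um):
    # Unique value per net from its own placement/global-routing estimate.
    return fanout * PIN_CAP_F + routed_length_um * CAP_PER_UM_F

short_net = placement_aware_cap(4, routed_length_um=12.0)
long_net = placement_aware_cap(4, routed_length_um=450.0)
print(f"placement-aware: {short_net:.2e} F vs {long_net:.2e} F; "
      f"fanout-based gives both {fanout_based_cap(4):.2e} F")
```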


After completing RTL power analysis and reduction, physical implementation (synthesis and place and route) tools can be used to further optimize PPA. To ensure reliability, scalability, and a frictionless user experience, these implementation tools should include a single, integrated data model architecture, interleaved engines, and a unified shell. Just as importantly, implementation tools should be capable of accurately modeling advanced node effects and glitch power to accelerate engineering change orders (ECOs) and final design closure.

Exceeding Energy-Efficiency and Performance Goals

Synopsys offers a comprehensive end-to-end power solution to help AI chip designers cost-effectively meet or exceed ambitious performance and energy-efficiency goals while accelerating time to market. Used at the very beginning of the design flow, Synopsys Platform Architect provides AI chip designers with SystemC transaction-level modeling (TLM) tools and efficient methods to rapidly model, analyze, and optimize complex silicon architectures. Synopsys ZeBu Empower, a fast power profiler, is used for the next stage of the AI chip design process: analyzing and debugging energy consumption—based on hundreds of millions of cycles—for real software workloads.
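
Conceptually, profiling a long workload for its worst power windows is a sliding-window search over a per-cycle power trace. The sketch below only illustrates that idea; it is not ZeBu Empower's actual algorithm, input format, or output:

```python
# Find the highest average-power window in a (toy) per-cycle power trace.
def worst_window(trace_w, window):
    """Return (start_cycle, avg_power_w) of the worst window."""
    best_start, best_sum = 0, sum(trace_w[:window])
    current = best_sum
    for i in range(1, len(trace_w) - window + 1):
        current += trace_w[i + window - 1] - trace_w[i - 1]
        if current > best_sum:
            best_start, best_sum = i, current
    return best_start, best_sum / window

trace = [1.0, 1.2, 3.8, 4.1, 3.9, 1.1, 0.9, 1.0]  # watts/cycle, made up
start, avg = worst_window(trace, window=3)
print(f"worst 3-cycle window starts at cycle {start}, avg {avg:.2f} W")
```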

Leading semiconductor companies have significantly reduced power draw with Synopsys ZeBu Empower, including SiMa.ai, a Silicon Valley-based AI chip startup that designs high-performance, low-energy AI chips for the intelligent edge. Specifically, the company realized a 2.5x frames-per-second (FPS) per watt improvement for its SiMa.ai Low Power MLSoC. During a presentation at the SNUG Silicon Valley 2023 conference this spring, Sounil Biswas, director of silicon engineering at SiMa.ai, noted that subsequent silicon validation demonstrated excellent correlation between Synopsys ZeBu Empower data and board measurements.

To complement ZeBu Empower and enable RTL design for low power, we offer Synopsys PrimePower RTL, an RTL power analysis and reduction tool that consistently achieves accurate results (within ±15% of post-route implementation) by pairing timing-driven, physically aware synthesis capabilities with an integrated computation engine. Synopsys PrimePower RTL also provides step-by-step guidance to help AI chip designers further minimize glitching and reduce overall power consumption.
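
A claim like "within ±15% of post-route" is typically validated by correlating early RTL estimates against reference numbers block by block. A hypothetical check, with all wattages invented:

```python
# Compare RTL power estimates to post-route reference power per block.
rtl_estimates_w = {"tile_array": 310.0, "noc": 42.0, "mem_ctrl": 55.0}
post_route_w = {"tile_array": 348.0, "noc": 39.0, "mem_ctrl": 61.0}

for block, estimate in rtl_estimates_w.items():
    reference = post_route_w[block]
    error_pct = 100.0 * (estimate - reference) / reference
    status = "ok" if abs(error_pct) <= 15.0 else "RECHECK"
    print(f"{block:10s} {estimate:6.1f} W vs {reference:6.1f} W "
          f"-> {error_pct:+5.1f}% {status}")
```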

Additional PPA optimization is achieved with Synopsys Fusion Compiler, a comprehensive and integrated RTL-to-GDSII implementation system. After passing this milestone, the AI chip design is analyzed with Synopsys PrimePower, the golden power signoff solution. Certified by leading foundries worldwide down to 3nm processes, Synopsys PrimePower delivers fast runtime performance with distributed processing, achieving high accuracy at signoff within a few percent of SPICE and silicon measurements.

Designing Differentiated Silicon for Edge AI Inference

AI accelerators enable many popular applications to quickly analyze massive amounts of information and accurately infer results in milliseconds. At the same time, achieving optimal performance-per-watt remains a top priority for chip designers. This is especially true at the edge, where performance is often limited by minimal power envelopes and smaller die sizes.

However, these constraints create new opportunities for semiconductor companies to design differentiated silicon by precisely calibrating PPA to match the specific requirements of low-latency, high-bandwidth applications. For example, autonomous navigation demands a computational response latency limit of 20μs, while voice and video assistants must understand spoken keywords in less than 10μs and hand gestures in a few hundred milliseconds. To successfully implement PPA tradeoffs, chip designers should adopt a holistic approach to power optimization by leveraging an end-to-end solution that spans early architectural exploration to golden power signoff.
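
These tradeoffs ultimately reduce to a latency budget: the pipeline's per-stage latencies must sum to less than the application's hard limit. Here is a hypothetical budget check against the 20 μs navigation limit cited above (stage names and durations are illustrative assumptions):

```python
# Check a toy edge-inference pipeline against a 20 us latency budget.
BUDGET_S = 20e-6

stage_latency_s = {
    "sensor_dma": 3e-6,
    "pre_process": 2e-6,
    "accelerator_inference": 11e-6,
    "post_process": 2e-6,
}

total_s = sum(stage_latency_s.values())
slack_s = BUDGET_S - total_s
verdict = "meets" if slack_s >= 0 else "misses"
print(f"total {total_s * 1e6:.1f} us, slack {slack_s * 1e6:+.1f} us "
      f"({verdict} the 20 us budget)")
```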

You can learn more about Synopsys energy-efficient SoC solutions here.
