Synopsys

2022: The Year of Cloud-Based Chip Design & Verification

Sandeep Mehndiratta

Mar 22, 2022 / 7 min read

What if you could no longer stream movies on Netflix, settle a bill from an online purchase via PayPal, or do your job remotely? That’s a world without cloud computing.

The concept of cloud computing is well-entrenched across most industries and enterprises, with IT professionals, developers, and C-level management all agreeing on the wisdom of relying on public cloud services providers to host and maintain even the most critical data and computing resources. Indeed, our everyday lives would be vastly different if the likes of Amazon, Google, and Microsoft didn’t provide services we depend on through some variation of a cloud model. The on-demand nature of cloud computing provides companies of all types and sizes with increased flexibility and economic efficiency.

But, despite being a foundational element in enabling widespread, better performing, and more secure cloud solutions, the electronics and semiconductor industries have been notably slow in adopting the model for themselves. Gnawing concerns about IP security, the performance demands of ultra-specialized applications, and a general reluctance to cede control of critical infrastructure have stood in the way of a broader embrace of cloud computing by designers of leading-edge chips and electronic systems.

Given the big-picture trends that will require bigger and better chips, mindsets are starting to change. New innovations in silicon design are key to powering the communications migration from 4G to 5G to 6G, crunching big data in drug and bioscience research, modeling the environment to better understand climate change, and exploring new ways to generate energy or produce water. All of these advanced, compute-hungry applications rely on an unprecedented level of silicon and system complexity that, in turn, needs powerful compute backbones to design.

Synopsys plays an integral role in facilitating the development of high-performance computing (HPC) SoCs for cloud vendors, so we have a front-line view of what these designers require from their EDA tools to produce chips for HPC workloads. We have made significant progress in optimizing some key applications to run efficiently in the cloud, and while there have been encouraging steps and some notable successes in the adoption of hosted chip design, a perfect storm of drivers may well accelerate the trend toward a SaaS model for EDA tools in the coming year.


What’s Driving the Adoption of Cloud for Chip Design?

The success of the cloud model in other industries has not been lost on the traditionally conservative semiconductor sector. Seeing Fortune 500 companies and renowned research institutions trust their most important sales, HR, finance, engineering, and other critical operational information to the cloud is now convincing chip design decision makers that it can work for them, too. In fact, the granular telemetry, monitoring, and control that cloud providers offer over their resources often surpasses the security mechanisms that corporations have deployed in their on-prem data centers. So, while security is always a concern that no one wants to diminish, it’s becoming less of a top-tier issue.

Use cases requiring HPC in scientific research, medical research, finance, and energy prove that when flexible access to the most powerful resources is needed, it isn’t always necessary, or even optimal, to have all the horsepower on-prem. Many of the most compute-intensive applications can be run just as efficiently at safe and secure data centers, easing the administrative overhead and providing increased flexibility for design teams. And we’ve seen notable success among our own customers with cloud-optimized tools for digital implementation, library characterization, sign-off, custom layout, and physical verification. That experience has shown us that the flexibility and elasticity of cloud-based computing works on many levels.

A convergence of key factors is driving adoption of the cloud for chip design:

  • Systemic complexity of hyper-convergent integration along with scale complexity of Moore’s law demand hyper-convergent design flows, which, in turn, require exponentially more compute and EDA resources
  • Cloud service providers have scaled HPC-optimized infrastructure availability, affordability, and capacity to handle these workloads
  • AI, which is being used in more design flows and design tools, has a natural multiplicative effect on the first two scenarios

There’s been a significant shift in semiconductor companies’ confidence and trust in a shared or managed model for computing and managing the resources needed. Improvements in security posture, identity management, and other infrastructure enhancements have motivated engineering teams and executives to cross the chasm and adopt a more flexible and cost-effective approach to supporting their engineering efforts.

And this being the semiconductor industry, a primary driver is scale and performance. With chips and systems getting larger and more complex, access to more computing resources is an almost insatiable need. Setting up and managing farm after farm of servers in-house is impractical, if not impossible, for some, especially if the need for such resources is cyclical – which is the case in most chip design-verification processes. Tapping into extra horsepower only when needed gives additional flexibility and nimbleness to even the largest companies, not to mention a more economically efficient approach for fast-track start-ups.

The cloud provides any size company and any application with the flexibility to scale design and verification capabilities as needed in a secure environment. In the case of chip design, teams get access to the most advanced compute and storage resources, reduce their own system maintenance costs (or even eliminate them altogether), and enjoy more flexible use-based models that support the burst usage periods common in some phases of the chip design process. Synopsys contributes a robust suite of security tools to mitigate risk in cloud-based environments.


Enabling More Collaboration and Flexibility, Plus Greater Efficiency

No matter what industry you are in, the last two years have changed the traditional work model, perhaps permanently. Today’s work-from-anywhere reality is actually an extension of something tech companies in general, and electronics and chip firms in particular, have been comfortable with for some time. Collaboration has always been a cornerstone of success in electronic design, and the top companies long ago embraced a dispersed workforce model, allowing them to tap into centers of excellence and skill sets on a global basis and keep projects running around the clock.

Adopting third-party managed cloud-centric models only strengthens that approach, providing in some cases a more optimized infrastructure to allow teams to share information, work collaboratively, and have access to shared resources they need at the moment, supporting a flexible peak usage model. This is especially true in managing hardware/software design simultaneously, which is a must in today’s system development world.

In most cases, the cloud-based model will work best for chip design in areas where the underlying need for compute resources is a bottleneck. This could be in large-scale verification projects, for example, where brute force computation takes precedence over human design expertise and skills. In addition, compute-intensive tasks—such as power estimation, noise analysis, and formal verification—can be broken up into smaller parts across massively distributed compute and storage resources enabled by a cloud approach. This white paper explores the efficiency benefits of a distributed computing model for design verification, for example. On top of taking advantage of flexible access to powerful computing, design collaboration tasks can be optimized for cloud implementation, such as large regression test sharing or shared libraries.
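As a concrete illustration of that break-it-into-pieces idea, the minimal sketch below shards a regression suite across parallel workers, much as a cloud scheduler would spread jobs across machines. The test function and the every-seventh-test failure pattern are hypothetical stand-ins, not any real EDA flow:

```python
from concurrent.futures import ThreadPoolExecutor

def run_regression_test(test_id: int) -> tuple[int, bool]:
    # Hypothetical stand-in for one verification job; a real flow would
    # launch a simulation here. Every 7th test "fails" purely so the
    # example has something to report.
    return test_id, test_id % 7 != 3

def run_suite(test_ids, max_workers: int = 4):
    """Shard the suite across workers and collect pass/fail results.
    In a cloud deployment, each shard would land on a separate machine."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = dict(pool.map(run_regression_test, test_ids))
    failures = [t for t, passed in results.items() if not passed]
    return results, failures
```

Because each test is independent, the suite's wall-clock time scales down with the number of workers, which is exactly the property that makes large regressions a natural first cloud workload.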

Adapting EDA tools to the cloud is similar to the transition when EDA tools were optimized for new multithread and multi-processing architectures. Cloud-enabled distributed computing and distributed storage are based on similar underlying principles, and we will see more EDA tools optimized for this use model.

Making the Model Work for Users and Businesses

Ensuring the highest levels of security and performance is a top priority when considering the cloud, but the model for chip design also has to make sense both financially and from a user experience standpoint.

Take data transfer speed, for example. Cloud storage must support accelerated transfer, moving data in and out of the cloud seamlessly without extra effort or technology on the user’s side. The onus is on the tool supplier to optimize its solutions to leverage fast transfer options and ensure an optimal user experience.
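One common way to hide transfer latency is multipart transfer: split a large artifact into chunks and move the chunks concurrently. The sketch below simulates that pattern against an in-memory "remote store"; the chunk size and the dictionary store are illustrative assumptions, not any particular cloud API:

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4  # bytes per chunk; unrealistically small, for illustration

def upload(data: bytes, store: dict, max_workers: int = 4) -> None:
    """Upload chunks concurrently, multipart-style, into a toy store
    keyed by byte offset. A real client would PUT each part over HTTP."""
    chunks = [(i, data[i:i + CHUNK_SIZE])
              for i in range(0, len(data), CHUNK_SIZE)]
    def put(chunk):
        offset, payload = chunk
        store[offset] = payload
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(put, chunks))

def download(store: dict) -> bytes:
    """Reassemble the chunks in byte-offset order."""
    return b"".join(store[offset] for offset in sorted(store))
```

Because the parts carry their own offsets, they can arrive in any order and still reassemble correctly, which is what lets the transfer run wide in parallel.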

While system developers and chip designers are used to working with a collaborative model, some nuances for a cloud implementation need to be considered. EDA tools specifically need to be optimized for this type of use. The user experience is critical, and there must be no difference between an in-house hosted solution and one running in the cloud.

From a business point of view, more flexible pricing models that provide peak-need access are becoming a reality in an industry traditionally accustomed to longer-term licensing agreements. Monthly, even hourly, schemes are now possible, and desired, for certain applications.

Cloud-Based Chip Design Becomes a Reality

With much progress being made on the security, performance, business model, and user experience of how chip design can be supported in the cloud model, we expect to see broader adoption for this approach in the coming year.

The combination of large computational demands and the relentless push for shorter design and verification cycles is driving the need for new methods, not just in the tools and design methodologies themselves, but also in how development teams access and manage the resources required to leverage their potential. Cloud-based methods will be a viable, if not preferred, option as we continue to push forward on innovation in chip and system design.
