In a year where we witnessed the historic signing of the CHIPS Act, came to terms with lingering pandemic influences on the workplace, and weathered chip supply disruptions, one constant has been the meteoric rise in cloud adoption and migration. Cloud computing continues to be a key enabler of many life-changing technologies, including artificial intelligence (AI), high-performance computing (HPC), and the responsible building of the metaverse. If anything, more organizations and governments across the world became comfortable with running their workloads on the cloud in 2022.
To overcome systemic complexity and performance constraints, moving to the cloud is no longer considered optional and has proven to be vital for business agility, especially in a challenging economic environment. According to Gartner, there’s no sign of slowing down: global public cloud spending is projected to reach nearly $600 billion in 2023, up from $490.3 billion in 2022. Consequently, a growing number of OEMs and Tier 1s that build HPC chips have recognized the scalability and elasticity advantages of utilizing the cloud for chip design needs and workloads.
Synopsys is at the forefront of understanding chip designers’ needs from an electronic design automation (EDA) perspective as they design and verify the advanced chips driving HPC workloads that are increasingly moving to public cloud infrastructure. We have played a key role in the development of systems-on-chips (SoCs) for cloud vendors and are proud of what we accomplished in the past year with the introduction of the industry’s first broad-scale cloud software-as-a-service (SaaS) solution, Synopsys Cloud.
Drawing on our experience helping many customers leverage the cloud for scale and flexibility, and as an enabler of advanced chip designs, read on to learn about the key cloud trends we see taking shape in 2023 and why cloud technology will have a greater impact going forward.
Bursting workloads to the cloud is a deployment technique that lets companies shift rapidly from traditional datacenters to the cloud whenever demand for computing resources spikes. While this practice avoids service disruption, reduces cost, and addresses short-term needs (think a slipping project schedule or a complex EDA workload that needs immediate capacity to run successfully), comprehensive peak-load management has been a prolonged battle.
Current bursting approaches and methodologies do not address this challenge adequately. While a number of customers have worked out tentative workarounds, only about one-third of the problem has been solved today, with challenges persisting in data management, license availability, and cloud costs. The industry needs a holistic, end-to-end solution for today’s hybrid cloud environment, in which teams can burst from on-prem to cloud seamlessly, obtain the results they need, and continue deploying, especially as HPC requirements expand. Balancing efficiency and cost optimization will become more relevant. Going into 2023, we see this domain becoming a larger focus, with a good chance that a foolproof solution will finally see the light of day.
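To make the bursting idea concrete, here is a minimal toy sketch of the scheduling decision at its heart: fill available on-prem slots first, and overflow to elastic cloud capacity. All names, job sizes, and capacities are illustrative assumptions, not any vendor’s scheduler or API.

```python
# Toy sketch of hybrid "burst" scheduling: jobs consume on-prem cores while
# capacity remains; overflow jobs burst to the cloud. Purely illustrative.

from dataclasses import dataclass


@dataclass
class Job:
    name: str
    cores: int  # cores the job needs (assumed known up front)


def schedule(jobs, onprem_free_cores):
    """Place each job on-prem while capacity remains, else burst it to cloud."""
    placement = {}
    for job in jobs:
        if job.cores <= onprem_free_cores:
            onprem_free_cores -= job.cores
            placement[job.name] = "onprem"
        else:
            placement[job.name] = "cloud"  # burst: elastic capacity on demand
    return placement


if __name__ == "__main__":
    jobs = [Job("synthesis", 32), Job("sta", 16), Job("regression", 128)]
    # With 64 free on-prem cores, synthesis and sta fit (32 + 16 <= 64);
    # the 128-core regression bursts to the cloud.
    print(schedule(jobs, onprem_free_cores=64))
```

The hard parts the article points to, such as data management, license availability, and cost, live outside this simple placement loop, which is why an end-to-end solution is needed rather than a scheduler alone.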
The concept of “EDA on the cloud” will change the way teams manage, schedule, and design EDA projects. We envision that design teams will go through a learning cycle of how to optimize and control their cloud spend — similar to how other industries went on their cloud journey in the past five to eight years. Based on what we at Synopsys have seen in the last nine months alone, customers have never been keener to move their EDA workloads to the cloud.
The industry’s progression through a “crawl, walk, run” mindset is a testament to the value companies are seeing in advanced compute and flexible storage infrastructure that help them do their jobs faster. A key driver is that companies run out of on-prem capacity and then discover that augmenting it is an arduous process: while not necessarily more expensive in dollars, it is costly in the time involved.
With time to market becoming a critical criterion, customers realize that immediate, short-term needs can be met much faster on the cloud than by building out on-prem capacity. That said, standard on-prem practices such as annual budget planning and CAPEX cycles will continue in a planned manner. Ultimately, user experience and supporting analytics will determine the pace of cloud adoption for EDA.
We anticipate that large businesses that invest heavily in modern hyperscale datacenters will expand their cloud usage significantly: not just from a burst-mode perspective, but also to raise capacity levels and get access to the latest hardware. For instance, some of our customers prefer to use the cloud as an extension of their existing datacenter strategy instead of upgrading their datacenters (which can require four- or five-year cycles) or opting for cloud purely for peak capacity needs.
As chips and systems increase in size and complexity, access to more computing resources is an almost insatiable need. Tapping into extra horsepower gives the largest and smallest of companies an economically efficient way to use the cloud as an expansion of their capacity, not just for peak needs.
For Synopsys, this presents an opportunity to develop a comprehensive toolset that can scale for added capacity and help customers transition seamlessly between on-prem and cloud use models. Understandably, different customers have different strategies for managing their cloud journey, and no two companies will choose the same model. While this topic has been under discussion this year, we see it being examined in more depth in 2023.
It’s been a hard year for the economy, and while we don’t expect supply chain challenges to be put to rest completely, we envision them to be less of a bottleneck. Rather, they will motivate companies to invest more in cloud computing and capacity.
In the EDA world, one of the biggest advantages the cloud provides for chip design is access to virtually unlimited, advanced computing resources that can deliver the capacity today’s designers need. Use of cost-advantaged spot capacity will grow, and customers will look for tools that help them optimize cost while maximizing the value of their tools and extracting more information from their models.
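The spot-capacity trade-off mentioned above can be sketched with a back-of-the-envelope calculation: spot instances are heavily discounted but can be interrupted, so some work gets re-run. Every number here (the rates, the discount, the interruption overhead, the job size) is a made-up assumption for illustration, not real cloud pricing.

```python
# Back-of-the-envelope comparison of on-demand vs. spot cost for a batch of
# EDA regression jobs. All rates and overheads are assumed, illustrative values.

ON_DEMAND_RATE = 1.00          # $/core-hour (assumed)
SPOT_RATE = 0.30               # $/core-hour (assumed ~70% discount)
INTERRUPTION_OVERHEAD = 0.15   # assumed fraction of spot work redone after preemption


def batch_cost(core_hours, use_spot):
    """Estimated cost of a regression batch under the assumptions above."""
    if use_spot:
        # Interrupted jobs are re-run, inflating the total core-hours consumed.
        return core_hours * (1 + INTERRUPTION_OVERHEAD) * SPOT_RATE
    return core_hours * ON_DEMAND_RATE


if __name__ == "__main__":
    hours = 10_000  # core-hours for a nightly regression (assumed)
    print(f"on-demand: ${batch_cost(hours, use_spot=False):,.2f}")
    print(f"spot:      ${batch_cost(hours, use_spot=True):,.2f}")
```

Even with re-run overhead folded in, the spot path comes out well ahead under these assumptions, which is why tooling that manages interruptions and tracks spend becomes valuable.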
At Synopsys, we believe our Synopsys Cloud SaaS solution, with its true pay-per-use model, will fundamentally shift the way chip design projects are done in the future. Customers who already have cloud resources through a bring-your-own-cloud (BYOC) model can also take advantage of Synopsys Cloud and benefit from elasticity and quicker time to market, delivering on the promise of chip design and verification on the cloud.
An age-old dilemma, verification has long been the most time-consuming and resource-intensive component of chip development. As teams run more designs, no team can ever truly have enough verification capacity; teams need to perform comprehensive verification across a variety of tasks to reach coverage closure faster. The biggest driver of verification effort is coverage closure with constrained-random verification. Industry surveys indicate that roughly 35% of verification time is spent on coverage closure, while another 35% is spent on debug (which may have some creative applications on the cloud but largely relies on coverage data).
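A toy model helps show why coverage closure eats so much time with constrained-random stimulus: common scenarios get hit almost immediately, while the last few rare coverage bins can take orders of magnitude more random trials. The opcode names, weights, and bin structure below are illustrative assumptions, not any real design or testbench.

```python
# Toy model of constrained-random stimulus vs. functional coverage closure:
# count how many random stimuli it takes to hit every coverage bin at least
# once when some bins are rare. Purely illustrative.

import random


def random_opcode(rng):
    # "Constraint": opcodes are drawn non-uniformly, mimicking designs where
    # corner cases (here, the rare TRAP) occur far less often than common ops.
    return rng.choices(["ADD", "SUB", "MUL", "DIV", "TRAP"],
                       weights=[40, 30, 20, 9, 1])[0]


def trials_to_close_coverage(seed=0):
    """Random stimuli needed until every opcode bin is covered at least once."""
    rng = random.Random(seed)
    covered, trials = set(), 0
    while len(covered) < 5:  # 5 coverage bins, one per opcode
        covered.add(random_opcode(rng))
        trials += 1
    return trials


if __name__ == "__main__":
    # The rare TRAP bin dominates the trial count for most seeds.
    print(trials_to_close_coverage())
```

Scaling this effect up to thousands of crosses and bins is what makes elastic cloud compute so attractive for regression farms: the long tail of rare bins is an embarrassingly parallel search.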
Today, companies run verification with high utilization rates, but the desire for higher-quality results and faster time to results is driving them to use the cloud for both burst and sustained capacity. These requirements come with their own challenges and risks that quality-controlled verification techniques can mitigate earlier in the design process.
Additionally, with advanced processor technology and cloud providers optimizing HPC infrastructure for applications, building a robust verification process becomes a growing challenge, translating to more capacity requirements on the cloud. For example, recent advancements such as Amazon Web Services’ Arm-based Graviton2 and Graviton3 processors and Microsoft’s new Azure virtual machines with Ampere Altra Arm-based processors showcase how service providers are bringing their own microprocessors to the cloud and betting on datacenter infrastructure becoming more energy- and cost-efficient.
As the implementation of the CHIPS Act crystallizes in 2023 and governments, foundries, and vendors around the world continue to increase their semiconductor investments and build new fabs, we expect to see a burst of startups entering the playing field. While capacity needs and design complexity will only grow, it will be important for companies to unlock the full flexibility and power of their cloud model in a way that’s affordable and scalable. Looking ahead, we are confident that cloud computing will stand the test of time and that cloud migration will become a key business investment irrespective of economic conditions.