
AI Startups are Using the Cloud to Accelerate Chip Design and Time-to-Market

厂测苍辞辫蝉测蝉 Editorial Staff

Nov 18, 2024 / 4 min read

It’s no surprise that AI technologies — and the chips that enable them — are in massive demand. The processor and accelerator market for AI applications is expected to grow by more than 263% over a five-year span.

What may come as a surprise, however, is the number of new businesses looking to capitalize on this growth. Despite significant barriers to entry, new AI chip startups continue to enter the market.

Because these new entrants typically don’t have the infrastructure or resources of traditional semiconductor companies, many are using non-traditional approaches to innovate and compete. And much of it comes down to speed.

There’s no better way to level the playing field than beating the competition to market, and many AI chip startups are using cloud-based tools to accelerate their design cycles.

Here are three examples. 

Rain AI develops novel AI accelerator chip from scratch

A Series A company backed by prominent AI pioneers, Rain AI set out to develop the world’s most energy-efficient AI hardware. Their novel AI accelerator necessitated full System-on-Chip (SoC) architecture exploration based on real-world workloads. It required tight integration between digital, analog, and RISC-V components. And it needed to be designed and developed from scratch.

“Rain AI is on a mission to redefine compute for AI workloads and is designing AI accelerator chips for record balance between speed, power, area, accuracy, and cost,” says JD Allegrucci, Head of Hardware Engineering at Rain AI. “Designing these complex architectures requires best-in-class EDA tools and IP.”

With a goal of turning concept into hardware within a year, Rain AI also needed speed and first-pass silicon success. The company achieved both with the help of 厂测苍辞辫蝉测蝉 Cloud, which combines SaaS-based EDA tools and IP with pre-optimized infrastructure from leading cloud providers.

“We were able to set up the EDA environment within days and start the architecture exploration — for AI workloads, design, verification, and backend flow — within a few weeks,” says Nawab Ali, Principal Engineer at Rain AI. “It allowed us to focus on our design rather than setting up and managing the environment.”

Fast infrastructure setup and architectural exploration, coupled with increased engineering productivity, enabled Rain AI to meet their fast-track development goals.

“厂测苍辞辫蝉测蝉 Cloud provided unique capabilities to handle iterative and complex design cycles with unlimited license scalability to accelerate our entire project and deliver high-quality results within schedule,” Allegrucci says.


Mentium overcomes infrastructure, licensing, and budget constraints

With a critical development milestone looming, AI startup Mentium needed to scale their compute and storage resources and overcome EDA licensing and budget limitations.

“We were at risk of missing a tapeout,” says Mirko Prezioso, Cofounder and CEO of Mentium, which is developing edge-focused, AI-driven co-processors for space, robotics, and security applications.

Without time or CapEx for more infrastructure and additional licenses, the company turned to 草榴社区 Cloud and Microsoft Azure.

“We were amazed to see the capabilities of 厂测苍辞辫蝉测蝉 Cloud,” Prezioso says. “We were able to set up the entire CAD/IT environment quickly and scale EDA licenses on a per-minute basis for each job.”

Released from the constraints of onsite infrastructure, seat-based licensing, and the upfront spending typically required for both, Mentium was able to stand up a production environment in days and get its design schedule back on track in weeks.

“We were able to deliver our first tapeout on time and worked on pulling in the schedule for the next tapeout,” Prezioso reports. “With an intuitive UI/UX coupled with on-demand compute, storage, and true pay-per-use access, 厂测苍辞辫蝉测蝉 Cloud allows us to harness the full potential of the cloud for EDA workloads to complete designs faster and with improved quality.”


TetraMem aligns global development team and AI chip verification processes

Seeking to develop a novel AI accelerator with cutting-edge in-memory computing (IMC) technology, TetraMem faced three distinct challenges. First, the accelerator’s complex design required smooth integration and verification of analog in-memory computing and the digital RISC-V processor. Second, the startup’s development team was spread across multiple continents. And third, the company had aggressive time-to-market targets.

EDA tools and preconfigured design flows from 厂测苍辞辫蝉测蝉 Cloud and optimized infrastructure from Microsoft Azure helped overcome all three.

“We were able to achieve a very fast infrastructure setup on the 厂测苍辞辫蝉测蝉 Cloud EDA environment within days,” says Wenbo Yin, VP of IC Design at TetraMem. “The vast selection of EDA tools and IP available on the cloud enabled us to start the design, verification, and backend flow very quickly.”

In addition to enabling the rapid deployment of a full production environment, the cloud-based solution dramatically improved the collaboration and efficiency of TetraMem’s globally distributed design team.

“厂测苍辞辫蝉测蝉 Cloud has played a pivotal role in helping us accomplish our mission by providing seamless access to a highly secure and complete SoC design environment,” says Dr. Glenn (Ning) Ge, CEO and Cofounder of TetraMem. “With scalable access to EDA software, preconfigured end-to-end flows, and infrastructure resources, we can tapeout quickly and efficiently.”

