
The Future of High-Performance Computing: Key Predictions for 2022

Synopsys Editorial Staff

Jan 19, 2022 / 7 min read

Last year, we wrote extensively about how the COVID-19 pandemic was affecting the high-performance computing (HPC) and data center industries in real time. While everyone learned to master the art of working and studying remotely, the demand for more computing power and less latency pushed the growth trajectory for the HPC market even further than experts would have predicted back in early 2020.

As the dust has settled and we’ve all gotten used to the “new normal,” we’ve become accustomed to new ways of work, methods of learning, and approaches to social interaction in general. This overall flexibility and advanced approach to data processing and data sharing are going to continue into next year, making everyone more productive, information more accessible, and collaboration more seamless. As a result, we will continue to see the HPC market evolve and expand, with more industries requiring more interconnected silicon architectures as well as high-speed networking.

Read on for our Synopsys experts’ top predictions on which new markets HPC will enter this year, the increasing importance of security, and the evolving architectures that will support HPC applications.

[Image: 3D illustration of a server room in a data center]

HPC Will Move into New Markets in 2022

Last year, we saw HPC being used to create vaccines that protect against COVID-19. We’ll continue to see it being used in medical research and monitoring, but HPC will also extend into even newer markets in 2022.


Scott Durrant, strategic marketing manager, Synopsys Solutions Group

“We’re seeing an increase in the number of catastrophic climate events in the United States and throughout the world, and it’s becoming more important to be able to forecast those events to protect people from them. That’s an application that is going to have a lot of focus in the HPC space in the coming year,” Scott Durrant, strategic marketing manager, Synopsys Solutions Group, said. “In addition to that, we’re going to see a lot more use of HPC for consumer-oriented applications, driven by the availability of HPC in the cloud. Historically, high-performance data centers have been isolated and only available to research organizations, the government, and companies that have very large budgets. We’ll also start to see the development of virtual worlds, or the ‘metaverse’ as it has recently come to be called, both for recreation, such as gaming (augmented reality and virtual reality), and for simulations such as digital twins.”


Ruben Molina, director of product marketing, Silicon Realization Group

“You can make the case that every couple of years, what used to be thought of as HPC becomes mainstream,” Ruben Molina, director of product marketing, Silicon Realization Group, said. “I predict that HPC at the edge is going to be more the rule than the exception. The industrial sector is going to utilize HPC for applications in robotics, vision systems, and preventative maintenance and monitoring, such as predicting failures on assembly lines — essentially, all industrial areas that need computing power right where the devices are deployed in order to reduce downtime.”
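
To make the predictive-maintenance idea concrete, here is a minimal sketch (in Python, with hypothetical sensor values and thresholds) of the kind of check an edge device might run locally: keep a rolling window of recent readings and flag values that drift far from the recent baseline.

```python
from collections import deque
from statistics import mean, stdev

# Toy predictive-maintenance check meant to run on an edge device:
# keep a rolling window of recent sensor readings and flag values
# that drift far from the recent baseline (a possible early failure sign).
WINDOW = 50          # number of recent samples to keep (hypothetical)
THRESHOLD_SIGMA = 3  # how many standard deviations counts as anomalous

window = deque(maxlen=WINDOW)

def check_reading(value: float) -> bool:
    """Return True if `value` looks anomalous relative to recent history."""
    anomalous = False
    if len(window) >= 10:  # need some history before judging
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(value - mu) > THRESHOLD_SIGMA * sigma:
            anomalous = True
    window.append(value)
    return anomalous

# Example: a stream of vibration readings with one spike at the end.
readings = [1.0 + 0.01 * (i % 5) for i in range(60)] + [2.5]
alerts = [i for i, v in enumerate(readings) if check_reading(v)]
print("anomalous samples:", alerts)  # expected: only the final spike
```

A real deployment would swap the rolling z-score for a trained model and stream alerts upward, but the structure stays the same: the decision is made next to the machine, so only exceptions need to leave the factory floor.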


Susheel Tadikonda, vice president of engineering, System Design Group

“The HPC market is expanding with new types of work, adding artificial intelligence (AI) and data analytics to traditional simulation and modeling. The COVID-19 pandemic has emphasized the need for flexible and scalable HPC solutions in the cloud,” Susheel Tadikonda, vice president of engineering, System Design Group, said. “This, along with the increasing need in various industry verticals (life sciences, automotive, finance, gaming, manufacturing, aerospace, etc.) for faster data processing with higher levels of accuracy, will be a major factor driving the growth of HPC adoption in the coming years. Technologies such as AI, edge computing, 5G, and Wi-Fi 6 will broaden the capabilities of HPC, leading to new chip/system architectures that will deliver high processing and analytical capabilities to various sectors.”

Increasing HPC Security Will Become Vital for New Designs

The amount of data processed this year will increase exponentially, and so will the value and sensitivity of that data. Making security an essential consideration (rather than an afterthought) when designing HPC components will be one of the top design challenges that engineers face this year and every year moving forward.

“HPC systems contain highly customized hardware and software stacks that are tuned for performance optimization, power efficiency, and interoperability. Designing and securing such systems, with their own use modes and distinctive components/attributes, makes them different from other types of general-purpose computing systems,” said Tadikonda. “Security threats are not just limited to network/storage data compromise, but also include side-channel attacks like data-pattern inference from power states, emissions, and processor wait times. We will see a lot more innovation around memory and storage technology, intelligent interconnects, silicon-enabled security, and cloud security to efficiently manage massive data volumes. Security verification/validation will represent one of the most critical parts of security assurance, spanning the architecture, design, and post-silicon phases of the system lifecycle.”
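
Tadikonda’s mention of timing-based side channels can be illustrated in software terms. The sketch below (not tied to any specific product or flow) contrasts an early-exit comparison, whose run time leaks how many leading bytes of a secret match, with Python’s constant-time hmac.compare_digest; the token values are hypothetical.

```python
import hmac

SECRET = b"correct-token-value"  # hypothetical secret held by the system

def naive_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: run time depends on how many leading bytes
    match, which an attacker can measure and exploit byte by byte."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """compare_digest examines every byte regardless of mismatches,
    so timing reveals nothing about where the first difference is."""
    return hmac.compare_digest(a, b)

print(naive_equal(SECRET, b"correct-token-XXXXX"))          # False, slower reject
print(naive_equal(SECRET, b"XXXXXXX-token-value"))          # False, faster reject
print(constant_time_equal(SECRET, b"correct-token-value"))  # True
```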

“There’s a significant increase in the importance of securing information, protecting the confidentiality and integrity of data, and providing access controls to data. We’ve seen over this past year the sorts of problems that can occur from ransomware and other cyberattacks,” said Durrant. “There will be an increasing number of attacks as the value of the data in the infrastructure grows, so providing security from the hardware upward through all levels of the stack to protect that information is going to be more and more important.”

“The zero-trust framework will also see wider adoption. This means that people coming in and wanting access to data need to validate who they are and prove that they are authorized to access the data. We expect that will ramp up even more over the next year or so. In fact, we’re already seeing the underpinnings of some of the necessary hardware,” Scott Knowlton, director, strategy & solutions, Synopsys Solutions Group, said. “Additionally, we’ll see embedded roots of trust in each of the elements within an infrastructure. This allows them to authenticate one another and ensure, before data is shared with another device, that the device is authorized to utilize and process that data.”
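
As a rough illustration of the root-of-trust idea Knowlton describes, the sketch below shows a simplified challenge-response exchange: a device proves possession of a key provisioned at manufacturing time before any data is shared. The device ID, key handling, and HMAC-based flow are invented stand-ins for a real attestation protocol (which would typically use asymmetric keys and certificate chains).

```python
import hmac, hashlib, os

# Key provisioned into the device's hardware root of trust at manufacturing
# time and registered with the infrastructure (a simplified stand-in for a
# real attestation key pair and certificate chain).
DEVICE_KEY = os.urandom(32)
REGISTRY = {"sensor-042": DEVICE_KEY}  # verifier's record of known devices

def device_respond(key: bytes, challenge: bytes) -> bytes:
    """Device side: prove possession of the provisioned key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verifier_check(device_id: str, challenge: bytes, response: bytes) -> bool:
    """Infrastructure side: only share data if the response verifies."""
    expected = hmac.new(REGISTRY[device_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Zero-trust style exchange: fresh challenge per request, verify before sharing.
challenge = os.urandom(16)
response = device_respond(DEVICE_KEY, challenge)
if verifier_check("sensor-042", challenge, response):
    print("device authenticated; OK to share data")
else:
    print("authentication failed; withhold data")
```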

“The more digitization that happens across many markets, the more opportunities there are for security risks,” said Molina. “Because more high-performance computation is moving farther from the data center, there will be an increasing number of opportunities for attacks that can’t be completely mitigated with software patches. This is going to put a lot of pressure on design teams to rush out hardware to solve these problems, which will result in accelerated hardware design cycles. Increasing designer productivity to keep pace with time-to-market demands is going to be a critical need.”

The Processor Architecture Necessary to Support HPC Applications Will Become More Varied

As the amount of data increases, it’s not just security that needs to be considered. Storage infrastructure will have to grow, as will the compute capability to process that data. New architectures, including 3DIC and die-to-die connectivity, are needed to meet the latest requirements.

“The HPC architectural landscape is going through a seismic shift, and the driving factors for this change are evolving AI workloads, flexible computing (CPU, GPU, FPGA, DPU, etc.), cost, memory, and IO throughput. Progress at a microarchitectural level includes faster interconnects, higher computing densities, scalable storage, greater efficiencies in infrastructure, eco-friendliness, space management, and improved security,” said Tadikonda. “From a system perspective, next-generation HPC architectures will see an explosion of disaggregated architectures (decoupling memory from processors and accelerators) and heterogeneous systems, where different specialized processing architectures (FPGA, GPU, CPU, etc.) are integrated in a single node, allowing flexible switching between modules at a fine-grained scale. A key recipe for achieving this kind of integrated system is the use of ‘chiplets.’ Such complex systems pose a big verification challenge, especially for IP/node-level verification in the system context, dynamic hardware-software orchestration, and workload-based performance and power analysis. This will require a push for novel hardware-software verification approaches.”
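
One simplified way to picture the fine-grained switching Tadikonda mentions is a routing table that sends each kernel to the processing element best suited to it. The device names and routing choices below are invented for illustration; real heterogeneous schedulers rely on profiling, cost models, and runtime feedback.

```python
# Toy picture of a heterogeneous node: route each kernel to the processing
# element that suits it best, falling back to the CPU for anything unknown.
ROUTING = {
    "dense_matmul": "GPU",       # throughput-bound, maps well to a GPU
    "bit_manipulation": "FPGA",  # custom datapath pays off
    "control_logic": "CPU",      # branchy, latency-sensitive
}

def dispatch(kernel: str) -> str:
    """Pick a processing element for a kernel, defaulting to the CPU."""
    return ROUTING.get(kernel, "CPU")

workload = ["control_logic", "dense_matmul", "bit_manipulation", "fft"]
for kernel in workload:
    print(f"{kernel:>17} -> {dispatch(kernel)}")
```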

“One of the challenges that system managers face today is that moving data around takes a lot of power and time (both of which are in limited supply). Moving the processing closer to the data to reduce the amount of data movement that takes place is a trend that we’ll see accelerate in 2022,” said Durrant. “Along with that is the need to continue scaling resources. One of the mechanisms that I think we’ll see really advance in the coming year is the use of advanced packaging and die-to-die interfaces to support higher-performance devices, that is, scaling the processing capabilities within a device through the use of multiple die.”
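
A back-of-the-envelope model makes Durrant’s data-movement point visible: if each node reduces its own shard of data locally and ships only a partial result, the bytes crossing the network shrink by orders of magnitude while the answer stays the same. The node counts and data sizes below are purely illustrative.

```python
# Toy comparison: ship raw data to a central node vs. reduce it in place.
# Sizes are illustrative only.
NODES = 8
SAMPLES_PER_NODE = 100_000
BYTES_PER_SAMPLE = 8  # one float64 per sample

# Each node holds its own shard of the data (simulated here in one process).
shards = [[float(i % 100) for i in range(SAMPLES_PER_NODE)] for _ in range(NODES)]

# Option A: move all raw data to one place, then compute the global mean.
bytes_moved_raw = NODES * SAMPLES_PER_NODE * BYTES_PER_SAMPLE
global_mean_raw = sum(sum(s) for s in shards) / (NODES * SAMPLES_PER_NODE)

# Option B: compute a partial (sum, count) next to the data, move only that.
partials = [(sum(s), len(s)) for s in shards]        # done on each node
bytes_moved_partials = NODES * 2 * BYTES_PER_SAMPLE  # two numbers per node
global_mean_near_data = sum(t for t, _ in partials) / sum(c for _, c in partials)

print(f"raw transfer:     {bytes_moved_raw / 1e6:.1f} MB")
print(f"partial transfer: {bytes_moved_partials} bytes")
print(f"same answer: {abs(global_mean_raw - global_mean_near_data) < 1e-9}")
```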

“In addition to reducing latency by moving data closer to the processing elements, multi-die integration also allows compute power to scale by combining multiple die in a single package without the cost of bleeding-edge process technologies,” said Molina. “To make this happen, designers need the ability to floorplan, route, and analyze the timing and power of multiple chips inside the package. Another method for scaling compute power is the customization of compute architectures for specific tasks. Companies are already starting to do this for network processors and graphics applications, but it takes a lot of upfront architectural exploration to get it right in the RTL, and that is putting a lot of focus on tools that can enable these tradeoffs early in the design cycle.”
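
Molina’s point about upfront architectural exploration can be sketched as a toy design-space sweep: enumerate candidate multi-die configurations, apply a crude throughput and power model, and keep the best option within a power budget. Every coefficient below is invented for illustration; a real flow would rely on calibrated models and signoff-quality tools.

```python
from itertools import product

# Toy design-space exploration over multi-die configurations.
# All coefficients are invented; a real flow would use calibrated models.
die_counts = [1, 2, 4]        # compute dies in the package
freqs_ghz  = [1.0, 1.5, 2.0]  # operating frequency
POWER_BUDGET_W = 150

def estimate(dies: int, freq: float) -> tuple[float, float]:
    throughput = dies * freq * 0.9 ** (dies - 1)  # diminishing returns from die-to-die links
    power = 10 + dies * (10 + 15 * freq ** 2)     # base + per-die dynamic power
    return throughput, power

candidates = []
for dies, freq in product(die_counts, freqs_ghz):
    tput, power = estimate(dies, freq)
    if power <= POWER_BUDGET_W:
        candidates.append((tput, power, dies, freq))

best = max(candidates)  # highest estimated throughput within the power budget
print(f"best within budget: {best[2]} dies @ {best[3]} GHz "
      f"(~{best[0]:.2f} rel. throughput, ~{best[1]:.0f} W)")
```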

“We’re also seeing disaggregation of the architectures. Architectures like 3DIC are becoming key to enabling designers to put different dies and packages together to handle specific computational paths,” said Knowlton. “So now they can design packages using 3DIC and die-to-die connectivity, and then extend that from the component level into machines where we are seeing disaggregation of the memory systems. This gives us unique opportunities for different types of designs and architectures to handle specific workflow tasks.”

Stay on the lookout for additional predictions posts that will outline our Synopsys experts’ 2022 insights for more key application areas.
