Arm Newsroom Blog

Approaching Exascale: Arm-based Supercomputers Lead the HPC Charge

Simon McIntosh-Smith, Professor of High-Performance Computing at the University of Bristol, discusses what the Arm-based Isambard supercomputer—and its peers—could help us achieve in the years ahead
By Simon McIntosh-Smith, Professor in High Performance Computing, University of Bristol

2020 was a monumental year for supercomputers, even by the standards of the huge progress that has been made in high-performance computing (HPC) since 1990.

Highlights such as the coming online of the Fugaku supercomputer in Japan, as well as the urgency and scale of the effort to understand and respond to Covid-19 using HPC, demonstrated just how much might be achieved in coming years.

And as Head of the High-Performance Computing research group at the University of Bristol, I watched as our Arm-based Isambard supercomputer came into its own in 2020.

The Marvell ThunderX2 Arm-based system went live in 2018 with 10,500 Arm cores, and we’ve been astounded by how successful the system has been. Researchers, forecasters and development companies have successfully harnessed the power of so many coordinated processors in areas from aviation engineering to analyzing the spread of Covid-19 through society.

Now, with the recently launched Isambard 2, we have 72 Fujitsu A64FX processors (the same processors used in RIKEN’s Fugaku) and, at last count, more than 430 registered users doing real science on the Isambard system. The Met Office uses the system for weather and climate modelling, researchers from multiple disciplines book millions of core hours for their individual projects, and developers are carrying out software tests.

From peta to exa

A petaflop, one thousand trillion floating-point operations per second, is incredibly fast computing. The more than 400 petaflops achieved by the Fugaku supercomputer this year is a major milestone on the roadmap towards an even faster landmark in HPC: the exascale era.

Work towards an exaflop (equivalent to 1,000 petaflops) supercomputer is well underway, and we could see a system in the USA achieving this level of processing speed as early as the end of 2021 or in 2022.
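
For readers who like to see the arithmetic spelled out, here is a tiny, purely illustrative sketch of the scales involved; the 400-petaflop figure is simply the round number quoted above, not an official benchmark result.

```c
#include <stdio.h>

/* Illustrative only: peta = 1e15, exa = 1e18 floating-point operations
   per second, so an exaflop machine is 1,000 times faster than a
   petaflop machine. */
int main(void) {
    const double petaflop = 1e15;            /* 1 PFLOP/s */
    const double exaflop  = 1e18;            /* 1 EFLOP/s */
    const double fugaku   = 400 * petaflop;  /* the round ~400 PFLOP/s cited above */

    printf("1 exaflop = %.0f petaflops\n", exaflop / petaflop);  /* 1000 */
    printf("~400 PFLOP/s is %.0f%% of an exaflop\n",
           100.0 * fugaku / exaflop);                            /* 40%  */
    return 0;
}
```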

Building the next generation of supercomputers is something of an international competition, with authorities looking to apply ever more powerful computing to matters as diverse as national security (the modelling of nuclear weapons, for example) and industrial competitiveness, by enabling new innovations to be tested. As such, similar plans towards exascale are afoot in China, Japan and Europe.

Progress in capability and impact isn’t only a question of rising speed. While not at this level of performance, the success of the Isambard project is testament to how usable supercomputer systems now are. This is a result of the groundwork laid down by the Mont-Blanc project and, more recently, the work Cray (now part of Hewlett Packard Enterprise) has done to produce a complete, robust software stack for Arm in HPC.

The ThunderX2 CPUs in Isambard demonstrated how performance-competitive Arm processors could be, but the Fujitsu A64FX CPUs in Isambard 2 are a real step up. To complement these new processors, Isambard 2 includes the latest version of HPE’s Cray software environment, which includes the Cray compiler, the Arm compiler and GNU tools, and, we hope, the Fujitsu compiler in the future too.

Our users are also able to take advantage of features included within the Arm processors. The Scalable Vector Extension (SVE) offers tremendous opportunities for optimizing code to achieve the highest performance.
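
To give a flavour of what SVE programming looks like, here is a minimal sketch (my own illustrative example, not code from the Isambard project) of a daxpy kernel written with the Arm C Language Extensions for SVE. The loop is vector-length agnostic, so the same code runs on any SVE implementation, from 128-bit vectors up to the A64FX’s 512-bit vectors; it would be built with an SVE-capable compiler using a flag such as -march=armv8-a+sve.

```c
#include <arm_sve.h>
#include <stdint.h>

/* y[i] = a * x[i] + y[i], written with SVE ACLE intrinsics.
   svcntd() reports how many doubles fit in one SVE vector on the machine
   the code runs on, and the predicate from svwhilelt_b64 masks off the
   final partial iteration, so no scalar clean-up loop is needed. */
void daxpy_sve(double a, const double *x, double *y, int64_t n) {
    svfloat64_t va = svdup_n_f64(a);
    for (int64_t i = 0; i < n; i += svcntd()) {
        svbool_t    pg = svwhilelt_b64(i, n);   /* active lanes for this step */
        svfloat64_t vx = svld1_f64(pg, &x[i]);  /* predicated loads           */
        svfloat64_t vy = svld1_f64(pg, &y[i]);
        vy = svmla_f64_x(pg, vy, vx, va);       /* vy = vy + vx * a           */
        svst1_f64(pg, &y[i], vy);               /* predicated store           */
    }
}
```

In practice, many users never write intrinsics at all: compilers such as the Cray, Arm and GNU compilers mentioned above can generate SVE code automatically from plain loops.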

HPC is a very broad discipline, spanning everything from modelling the metamaterials that could be used to store hydrogen in vehicles to simulating how the Covid-19 virus behaves at a molecular level. What exascale processing offers is a way to manage more complexity and add more layers of science, and there are many areas where this leap in speed could unlock greater impact.

It’s safe to say that exascale processing will change the course of the next 30 years, or at least until we’re able to reach zettascale: 1000 exaflops.

The extra of exascale

Big projects across Europe have a well-defined path for working out which parts of their modelling and simulation activity it will be possible to ‘turn on’ when we have exascale supercomputers. Weather and climate modelling are among the main areas able to exploit this next generation.

At the moment, when we forecast the weather or the impact of climate change, we can divide the Earth up into a grid as small as 10km by 10km to map weather and impact. With greater computing power, this monitoring and modelling can go to a much finer resolution, say 1km by 1km, providing a huge leap in accuracy and creating incredibly high-resolution simulations of the world around us.
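
To see why that jump in resolution is so demanding, here is a rough back-of-the-envelope sketch; the numbers are my own illustrative estimates, not the Met Office’s actual model configuration.

```c
#include <stdio.h>

/* Rough scaling argument for refining a global weather grid.
   Illustrative only: real models use more sophisticated meshes,
   vertical levels and physics. */
int main(void) {
    const double earth_surface_km2 = 5.1e8;  /* ~510 million square km */

    double cols_10km = earth_surface_km2 / (10.0 * 10.0); /* ~5 million columns   */
    double cols_1km  = earth_surface_km2 / (1.0 * 1.0);   /* ~510 million columns */

    /* Finer horizontal grids also force shorter time steps,
       adding very roughly another factor of 10 to the cost. */
    double horizontal_factor = cols_1km / cols_10km;      /* ~100x */
    double timestep_factor   = 10.0;                      /* ~10x  */

    printf("Grid columns at 10 km: %.1e\n", cols_10km);
    printf("Grid columns at  1 km: %.1e\n", cols_1km);
    printf("Rough overall cost increase: %.0fx\n",
           horizontal_factor * timestep_factor);          /* ~1000x */
    return 0;
}
```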

Without the modelling available using supercomputers, our understanding of the changing climate would be far less than it is today. With exascale computing, it would be possible to model the effect of an individual cloud or the climate in a single valley.

This level of simulation and modelling opens up new capabilities to save lives and prevent millions in damage through accurate and specific findings. For climate modelling, it produces more concrete, localized findings that can be used to encourage action, by giving greater certainty about the impact a changing climate would have, as well as the impact of our own actions.

The ASiMoV project involving the University of Bristol and Rolls-Royce can currently create simulations for part of a jet engine, for example using computational fluid dynamics to model the flow of gases across the fan blades within an engine. We estimate that to simulate an entire engine – monitoring stresses and strains, friction, electromagnetics and combustion, all to sub-millimeter accuracy – is an exascale problem.

That level of simulation will enable novel materials and approaches to be accurately tested, not just in aviation, but in other areas of engineering, and has the potential to generate far more efficient and greener engines.

New model drugs

This year, supercomputers around the world have been harnessed in the fight against Covid-19. Far faster than has been possible in the past, we were able to capture the structure of the virus, with supercomputer modelling used to explain how the virus is able to latch onto cells in the body and inject its genetic material.

One reason scientists have been able to explore the possibility of a vaccine so quickly is that they can test therapies, drugs and antivirals in a molecular model, providing a huge head start on drug discovery and patient testing. With the help of HPC, no virus has ever been so thoroughly understood.

In the future, HPC will enable us to understand a virus at an atomic level, and that understanding could be used to automatically design an optimal drug for a given task. Once you have a molecular picture of a virus, an algorithm coupled with experts in the given area of medicine might be able to design the correct drug from first principles, rather than searching through a library of potential drugs.

There is a lot more science to do on this but we are moving towards rational drug design, rather than today’s drug discovery. This could radically reduce the time required to create medication for viruses, down to as little as a few months.

Super efficiency

As we push the processing speed and power of supercomputers on to tackle these challenges, we face a major challenge of our own: how all of this is powered. What is remarkable about Fugaku is how efficient it is, thanks to the Arm architecture. There are some very specialized systems that are more energy-efficient, but no general-purpose supercomputer at that level of performance operates anywhere near so efficiently.

However, at 20-30 MW of power, we need to ensure that we continue to create processing that can be powered sustainably and operate as efficiently as possible.
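
To put those figures in perspective, here is a rough calculation using the round numbers above; official benchmark results differ, so treat this purely as an indication of scale.

```c
#include <stdio.h>

/* Rough energy-efficiency arithmetic using round numbers only. */
int main(void) {
    const double flops = 400e15;  /* ~400 PFLOP/s */
    const double watts = 30e6;    /* ~30 MW       */

    /* Roughly 13 gigaflops per watt at these round numbers. */
    printf("~%.0f gigaflops per watt\n", flops / watts / 1e9);

    /* An exaflop machine held to a similar power budget would need to be
       several times more efficient again. */
    printf("An exaflop within 30 MW needs ~%.0f gigaflops per watt\n",
           1e18 / watts / 1e9);
    return 0;
}
```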

As we move into the era of exascale, how we power such complex simulation and modelling remains a huge question. The nature of that progress is also open to debate. If we reach 1000 exaflops, we will have reached the zettascale of computing speed. Whether we can get there with today’s technology, when we may be so close to the end of Moore’s Law, is not clear. At the same time, necessity is the mother of invention, so what seems insurmountable today may simply provide the motivation to overcome those barriers.

Looking to the future

As Arm turns 30, it’s hard not to wonder what the next 30 years will bring. By 2050, we may have far more complete models of how the human body works, leading to rational drug design and shorter human trials. We could also see personalized medicine and vaccinations based on an individual’s genome, made possible by supercomputer modelling.

It is possible that by 2050, jet engines will be entirely designed—and achieve certification—in virtual simulations, with the first physical working engine going straight into production. In the exascale era, our ability to monitor and forecast the earth’s climate will be 1000 times more powerful.

The chips we use today contain around 100 Arm cores; by 2050 they could contain thousands of cores, and successors to SVE will have been invented. Then there are innovations such as optical interconnects, where light replaces electrons as the method for sending messages. The potential for radical gains in the processing power of the chip remains.

What will enable these developments, or when they will be possible, may not be clear. What is clear is that researchers and companies are innovating to make them happen, while supercomputers continue to work at the heart of the scientific breakthroughs that could enable humanity to overcome some of its greatest challenges.

