What Is Exascale Computing?

These warehouse-sized supercomputers may lead the way to more personalized medicine and sustainable climate action.

Written by Brooke Becher
Published on Mar. 26, 2024

Exascale computing is a type of high-performance computing in which supercomputers perform one exaflop, or one quintillion operations, per second. In terms of processing power, exascale computers are the most powerful machines in the world.

What Is Exascale Computing?

Exascale computing is the world’s fastest computational standard, in which massive high-performance machines analyze enormous data volumes at a rate of one quintillion operations per second.

“If every person on the planet performed one calculation per second, it would still take over four years to accomplish what an exascale computer can do in a single second,” Lucas Ochoa, an ex-Google artificial intelligence engineer and former Microsoft product designer, told Built In.

Using advanced modeling and simulations, these hefty hardware systems crunch vast data volumes to produce predictive analysis for national security, precision medicine and climate research, unlocking “new competition in the realms of science and computer engineering,” Ochoa said.

 

What Is Exascale Computing?

Exascale computing refers to computing systems capable of performing one quintillion calculations per second, known as an exaflop. That’s 18 zeros, or a billion billion calculations. These systems run on highly parallel software algorithms and require robust hardware architecture, made up of thousands of CPUs, GPUs and nodes, that is so massive it must be housed in warehouse-sized buildings.
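
For a sense of that scale, here is a quick back-of-the-envelope check in Python, following the comparison Ochoa makes below; the world-population figure is an assumption used purely for illustration:

```python
# Back-of-the-envelope scale check for one exaflop (10**18 operations per second).
EXAFLOP_OPS_PER_SECOND = 10**18      # one quintillion operations per second
WORLD_POPULATION = 8 * 10**9         # assumed ~8 billion people, one calculation each per second

seconds_of_human_effort = EXAFLOP_OPS_PER_SECOND / WORLD_POPULATION
years_of_human_effort = seconds_of_human_effort / (60 * 60 * 24 * 365)

print(f"{seconds_of_human_effort:,.0f} seconds of all-hands human calculation")
print(f"= about {years_of_human_effort:.1f} years to match one exascale second")
# Roughly four years, in line with the comparison quoted above.
```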

Take, for example, Frontier, the world’s first supercomputer to break the exascale barrier. Operating out of Tennessee’s Oak Ridge National Laboratory, the 7,300-square-foot system debuted in 2022 and is made up of 74 cabinets holding more than 9,400 nodes, each pairing one CPU with four GPUs, for roughly 9,500 CPUs and 38,000 GPUs in all. It has a peak performance of 1.6 exaflops.

This extraordinary processing power allows exascale computers to analyze massive amounts of data at rapid speeds, simulating real-world phenomena in unprecedented detail. As the Lawrence Livermore National Laboratory puts it, one exascale machine could outperform the combined effort of today’s top one hundred petascale supercomputers.

Related Reading: 17 High-Performance Computing Applications and Examples

 

What Is Exascale Computing Used For?

Researchers use exascale computing to break ground in everything from nanoscience and national security to precision medicine and climate research.

Typically, these multimillion-dollar machines are owned by government entities or large conglomerates, so scientists must submit grant proposals to gain access before they can use the technology to unlock some of the universe’s biggest mysteries. They want to know how stars explode, how various drug molecules interact with the body, how fuel is injected inside an engine and how to evaluate the performance and safety of nuclear weapons without nuclear explosive testing, said George Amvrosiadis, an associate research professor of electrical and computer engineering at Carnegie Mellon University.

Amvrosiadis, who is also a member of the Parallel Data Lab, told Built In that exascale computing “allows us to execute complex simulations and analyses that would be too costly, dangerous or even impossible to conduct in real life.”

Indeed, exascale computing delivers a gateway to previously unattainable knowledge through better, faster science at scale: calculating ways to treat incurable diseases, produce clean, carbon-negative energy and develop solutions to climate change.

 

How Does Exascale Computing Work?

It’s easiest to think of an exascale computer as a bunch of smaller networked computers working as one. Each of these smaller computers, known as nodes, shares access to rapid data storage and comes equipped with its own set of processors that solve complex problems by breaking them down into smaller parts. So when a task is submitted to an exascale system, it’s broken up into smaller pieces that are then distributed among thousands, or even millions, of processors working concurrently, a process known as parallel computing.
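
As a toy illustration of that divide-and-distribute pattern, here is a minimal Python sketch that splits one computation across eight local worker processes, standing in for the millions of cores a real exascale system would use:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker independently handles one piece of the problem."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 8  # a real exascale system spreads work over millions of cores

    # Break the task into smaller pieces, one per worker.
    chunk_size = len(data) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # Distribute the pieces and combine the partial results.
    with Pool(n_workers) as pool:
        total = sum(pool.map(partial_sum, chunks))

    print(total)  # same answer as sum(x * x for x in data), computed in parallel
```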

For exascale computing to work, each processor, either a CPU or a GPU, is assigned a specific job to complete. The processors then communicate with one another over a high-speed interconnect, ensuring that they’re all working toward the same goal. To prevent disruptions, exascale computers have built-in fail-safe mechanisms that handle errors and keep the system running even if some parts fail.
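
One common fail-safe technique in high-performance computing is checkpoint/restart: periodically saving progress so a failure costs only the work done since the last save, not the whole job. Here is a minimal single-process sketch of the idea; the file name and workload are hypothetical stand-ins:

```python
import os
import pickle

CHECKPOINT = "state.pkl"  # hypothetical checkpoint file for this sketch

def save_checkpoint(step, state):
    # Periodically persist progress so a crash doesn't erase completed work.
    with open(CHECKPOINT, "wb") as f:
        pickle.dump((step, state), f)

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return 0, 0.0  # start fresh: step 0, running total 0.0

step, total = load_checkpoint()
for step in range(step, 1_000):
    total += step ** 0.5          # stand-in for one unit of real computation
    if step % 100 == 0:
        save_checkpoint(step + 1, total)

# If the process dies mid-run, rerunning it resumes from the last checkpoint
# instead of starting over, the same idea behind checkpoint/restart in HPC.
print(total)
```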

Parallel processing enables intricate, large-scale systems to balance their computational power for efficient performance and optimal energy usage.

Related Reading: Is the Human Brain a Computer?

 

Advantages of Exascale Computing

Computational Power

By definition, exascale computers are built with unprecedented power. Frontier, the machine that first broke the exascale barrier, was at its 2022 debut more powerful than the next seven supercomputers combined, with a peak performance of 1.6 exaflops and a theoretical reach of 2 exaflops. These machines will reign supreme until the next generation of zettascale technology takes over, potentially within a decade.

 

Performance

Exascale computing systems are built for scale, not just efficiency. Using parallelism, exascale computers split one task across thousands or millions of processing units that concurrently work on different parts of a problem. This method optimizes the use of computational resources for faster task execution.

Ochoa, who is also the CEO and founder of the AI startup Automat, said that exascale performance capacities enable the construction of AI models that are about four-and-a-half times faster and eight times larger. This capacity “allows for training on more extensive data sets, enhancing predictability and accelerating the time-to-discovery process,” he added.

 

Speed

Since Frontier’s debut in 2022, exascale computers have set a new pace for computational speed: one exaflop, a thousand times the performance of a one-petaflop petascale system. Put another way, an exascale machine can perform a quintillion calculations in the time it takes a person to complete one.

 

Scientific Discovery

With the ability to create better simulations and predictive analysis at unimaginable speeds and scale, exascale computers facilitate groundbreaking discoveries by providing valuable insights that are otherwise impossible to achieve. By modeling natural processes and unobservable phenomena, scientists and researchers get one step closer to cracking the codes of fundamental principles that shape the world around us.

 

Complexity

Exascale computing empowers researchers to address some of society’s biggest challenges, such as climate change, disease prevention, renewable energy optimization, and urban planning. It provides the computational resources needed to model, simulate and optimize complex systems.

Related Reading: What Is Grid Computing?

 

Disadvantages of Exascale Computing

Power Consumption

Due to their grand scale and high-performance parts, exascale systems require an enormous amount of power to operate, so much so that they feature built-in liquid-cooling systems. The world’s fastest supercomputer consumes roughly 20 megawatts per exaflop, 22.7 megawatts in total. That’s enough electricity to power a small American town or drive a jet engine.
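
To see what that draw means in practice, here is a rough Python estimate of the annual electricity bill; the industrial power rate is an assumed figure for illustration only:

```python
# Rough annual energy estimate for a 22.7-megawatt system.
# The $0.06/kWh industrial rate is an assumption, not a published figure.
power_mw = 22.7
hours_per_year = 24 * 365
price_per_kwh = 0.06

energy_mwh = power_mw * hours_per_year       # about 198,852 MWh per year
cost = energy_mwh * 1_000 * price_per_kwh    # MWh -> kWh, then dollars

print(f"{energy_mwh:,.0f} MWh/year, about ${cost / 1e6:,.1f}M in electricity alone")
# Roughly $12M per year, before hardware, staffing or cooling overhead.
```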

 

Data Movement

Exascale systems consist of many moving parts working independently in parallel, but as one. In the interest of peak performance, minimizing data movement and streamlining communication between networked nodes and processors is crucial. Given their sheer size, these systems won’t work without high data transfer rates, low latency and ample bandwidth.

“Exascale systems will deploy so many components that it’s improbable for the entire system to operate without issues at any given time,” Ochoa said. “The hardware, system software and applications must be capable of handling both minor malfunctions and major failures.”

 

High Cost

Building and operating exascale computing facilities comes with a hefty price tag, which is why they are largely owned by government entities or conglomerates. On top of ongoing operational costs, they demand specialized hardware, infrastructure and energy, and the initial investment alone disqualifies most organizations. It took roughly $600 million to physically build Frontier, and running the machine costs an additional $100 million per year. Frontier is just one of three exascale projects announced by the Department of Energy, collectively budgeted at $1.8 billion.

 

Security 

To a cybercriminal, each node of a networked system is a window inside. So while the thousands of interconnected nodes that make up an exascale computer increase its computational performance, they also create a sizable attack surface, increasing vulnerability to security threats such as cyberattacks, data breaches and unauthorized access.

 

A brief overview of the era of exascale computing. | Video: Lawrence Livermore National Laboratory

Exascale Computing vs. Quantum Computing

Exascale Computing

Exascale computing is essentially classical computing at scale. It uses the same binary logic that runs our smartphones, laptops and PCs, built out with a maximal amount of hardware.

“Our everyday computers cannot solve large-scale scientific problems that involve immense amounts of data or must be solved before a tight deadline,” Omer Subasi, a computer scientist at Pacific Northwest National Laboratory, told Built In. “This is why we need exascale supercomputers.”

Classical computation uses binary logic, where units of information are represented as “bits” that can take one of two states: 1 (true) or 0 (false).

 

Quantum Computing

Quantum computing is a fundamentally different style of computation that leverages quantum mechanics and is capable of simulating atomic and subatomic phenomena. Quantum computers use “qubits,” which can exist in a superposition, a weighted blend of both states at once. So while traditional methods have to carry out a single path per result, quantum computers use a larger workspace to explore multiple paths simultaneously.
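
A short NumPy sketch makes the contrast concrete by simulating a single qubit placed into an equal superposition. (This is classical code simulating quantum math, not a quantum computer.)

```python
import numpy as np

# A classical bit is 0 or 1. A qubit is a normalized 2-vector of amplitudes,
# so it can sit anywhere on a continuum between the two basis states.
zero = np.array([1, 0], dtype=complex)  # the |0> state

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ zero                          # (|0> + |1>) / sqrt(2)

probabilities = np.abs(psi) ** 2        # Born rule: |amplitude|^2
print(probabilities)                    # [0.5 0.5], a 50/50 measurement outcome

# Simulating n qubits classically takes a 2**n-length state vector: the cost
# doubles with each qubit, which is the "larger workspace" described above.
```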

Quantum computers are expected to surpass any conventional binary-code machine in complexity and speed for certain classes of problems. They also differ from traditional supercomputers in that they tackle different types of questions, specific to fields like chemistry, materials science and physics, Subasi explained.
 

Frequently Asked Questions

What is exascale computing used for?

Exascale computing processes massive volumes of data at hyper speed to create predictive analysis and better simulations that are used to facilitate scientific breakthroughs and solve some of society’s most pressing issues. These insights can be applied to create more accurate disease modeling for precision medicine, bolster national security, discover energy production alternatives and deter climate change.

Does an exascale computer exist?

Yes. The world’s first exascale computer, Frontier, debuted in 2022. Operating out of Tennessee’s Oak Ridge National Laboratory, the supercomputer has a peak performance of 1.6 exaflops with a theoretical reach of 2 exaflops.

How fast is exascale computing?

Exascale computing exceeds one exaflop, meaning these supercomputers perform one quintillion (or one billion billion) calculations per second.
