What Is a Supercomputer and How Does It Work?

Today’s highest-performing machines can perform anywhere from one quadrillion to more than one quintillion calculations per second.

Written by Brooke Becher
Updated by Brennan Whitfield | Aug 22, 2024

A supercomputer is a high-performance computing system: a powerful, highly accurate machine known for processing massive sets of data and complex calculations at rapid speeds.

What makes a supercomputer “super” is its ability to interlink multiple processors within one system. This allows it to split up a task and distribute it in parts, then execute the parts of the task concurrently, in a method known as parallel processing.

Supercomputer Definition

Supercomputers are high-performance computing systems that solve complex computations. They split tasks into multiple parts and work on them in parallel, as if many computers were acting as one collective machine.

Originally developed for nuclear weapon design and code-cracking, supercomputers are used today by scientists and engineers to run simulations that help predict climate change, forecast the weather, explore cosmological evolution and discover new chemical compounds for pharmaceuticals.

 

How Do Supercomputers Work?

Unlike our everyday devices, supercomputers can perform multiple operations at once in parallel thanks to a multitude of built-in processors.

How it works: An operation is split into smaller parts, and each piece is sent to a CPU to solve. These multi-core processors sit within a node, alongside a memory block. Working together, these individual units — as many as tens of thousands of them — communicate through inter-node channels called interconnects to enable concurrent computation. Interconnects also interact with I/O systems, which manage disk storage and networking.
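To make that concrete, here is a minimal sketch in Python. It is a single-machine analogy that uses ordinary CPU cores rather than code from any actual supercomputer (which would spread the chunks across thousands of nodes using MPI-style libraries), but the idea is the same: split the work, solve the pieces at the same time, then recombine the results.

```python
# A single-machine analogy for parallel processing: split one big sum into
# chunks, hand each chunk to its own CPU core, then recombine the results.
from multiprocessing import Pool

def partial_sum(bounds):
    """Solve one piece of the task: sum the squares in [start, stop)."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:            # each chunk runs on its own core
        pieces = pool.map(partial_sum, chunks)
    print(sum(pieces))                     # recombine the partial results
```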

How’s that different from regular old computers? Picture this: On your home computer, once you strike the ‘return’ key on a search engine query, that information is input into the computer’s system, stored, then processed to produce an output value. In other words, one task is solved at a time. This process works great for everyday applications, such as sending a text message or mapping a route via GPS. But for more data-intensive projects, like calculating a missile’s ballistic trajectory or cryptanalysis, researchers rely on more sophisticated systems that can execute many tasks at once.

“You have to use parallel computing to really take advantage of the power of the supercomputer,” Caitlin Joann Ross, a research and development engineer at Kitware who studied extreme-scale systems during her residency at Argonne Leadership Computing Facility, told Built In. “There are certain computations that might take weeks or months to run on your laptop, but if you can parallelize it efficiently to run on a supercomputer, it might only take a day.”
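One textbook way to reason about that kind of speedup is Amdahl’s law, which caps the gain by the fraction of the work that cannot be parallelized. (Ross did not cite this formula; it is included here as a standard rule of thumb.)

```python
# Amdahl's law: overall speedup is limited by the serial fraction of a program,
# no matter how many processors work on the parallel part.
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# A job that is 99 percent parallelizable runs about 91 times faster on 1,000
# cores, which is how a weeks-long laptop run can shrink to less than a day.
print(amdahl_speedup(0.99, 1_000))   # ~90.99
```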

 

What Are Supercomputers Used For?

Supercomputing’s chief contribution to science has been its ability to simulate reality. This capability helps humans make better performance predictions and design better products in fields from manufacturing and oil to pharmaceuticals and the military. Jack Dongarra, a Turing Award recipient and emeritus professor at the University of Tennessee, likened that ability to having a crystal ball.

“Say I want to understand what happens when two galaxies collide,” Dongarra said. “I can’t really do that experiment. I can’t take two galaxies and collide them — so I have to build a model and run it on a computer.”

Back in the day, when testing new car models, companies would literally crash them into a wall to better understand how they withstand certain thresholds of impact — an expensive and time-consuming trial, he noted.

“Today, we don’t do that very often,” Dongarra said. “[Now] we build a computer model with all the physics [calculations] and crash it into a simulated wall to understand where the weak points are.”

This concept carries over into various use cases and fields of study enlisting the help of high-performance computing.

Weather Forecasting and Climate Research

When supercomputers are fed numerical modeling data — gathered via satellites, buoys, radar and weather balloons — field experts become better informed about how atmospheric conditions affect us, and better equipped to advise the public on weather-related topics, like whether you should bring a jacket and what to do in the event of a thunderstorm.

Derecho, a petascale supercomputer, is being used to explore the effects of solar geoengineering, a method that would theoretically cool the planet by redirecting sunrays, and how releasing aerosols influences rainfall patterns.

Genomic Sequencing

Genomic sequencing — a type of molecular modeling — is a tactic scientists use to get a closer look at a virus’ DNA sequence. This helps them diagnose diseases, develop tailor-made treatments and track viral mutations. Originally, sequencing the first human genome took a team of researchers 13 years to complete. But with the help of supercomputers, complete DNA sequencing is now a matter of hours. Most recently, researchers at Stanford University scored the Guinness World Records title for fastest genomic sequencing technique, using a “mega-machine” method that runs a single patient’s genome across all 48 flow cells simultaneously.

Aviation Engineering

Supercomputing systems in aviation have been used to detect solar flares, predict turbulence and approximate aeroelasticity (how aerodynamic loads affect a plane) to build better aircraft. In fact, the world’s fastest supercomputer to date, Frontier, has been recruited by GE Aerospace to test open fan engine architecture designed for the next generation of commercial aircraft, which could help reduce carbon-dioxide emissions by more than 20 percent.

Space Exploration

Supercomputers can take the massive amounts of data collected by a variety of sensor-laden devices — satellites, probes, robots and telescopes — and use it to simulate outer space conditions earthside. These machines can create artificial environments that match patches of the universe and, with advanced generative algorithms, even reproduce them.

Over at NASA, a petascale supercomputer named Aitken is the latest addition to the Ames Research Center, where it is used to create high-resolution simulations in preparation for upcoming Artemis missions, which aim to establish a long-term human presence on the moon. A better understanding of how aerodynamic loads will affect the launch vehicle, mobile launcher, tower structure and flame trench reduces risk and creates safer conditions.

Nuclear Fusion Research

Two of the world’s highest-performing supercomputers — Frontier and Summit — will be creating simulations to predict energy loss and optimize performance in plasma. The project’s objective, led by scientists at General Atomics, the Oak Ridge National Laboratory and the University of California, San Diego, is to help develop next-generation technology for fusion energy reactors. Emulating energy generation processes of the sun, nuclear fusion is a candidate in the search for abundant, long-term energy resources free of carbon emissions and radioactive waste.

Related Reading: High-Performance Computing Applications and Examples to Know

 

How Fast Is a Supercomputer?

Today’s highest-performing supercomputers can complete simulations that would take a personal computer 500 years, according to the Partnership for Advanced Computing in Europe.

To put that in perspective: a person can solve an equation with pen and paper in about one second. In that same timespan, today’s fastest supercomputer can execute a quintillion calculations. That’s a one followed by eighteen zeros.

Soumya Bhattacharya, system administrator at OU Supercomputing Center for Education and Research, explained it like this: “Imagine one-quintillion people standing back-to-face and adding two numbers all at the same time, each second,” he said. “This line would be so long that it could make a million round trips from our earth to the sun.”

A supercomputer’s performance is measured in floating-point operations per second (FLOPS), a unit that indicates how many arithmetic operations a machine can complete in a given timeframe.
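As a rough illustration of what those units mean in practice (the laptop figure below is an assumed ballpark, not a measurement):

```python
# Back-of-the-envelope FLOPS arithmetic with illustrative numbers.
laptop_flops = 100e9      # assume roughly 100 gigaFLOPS for a modern laptop
frontier_flops = 1.1e18   # Frontier's sustained 1.1 exaFLOPS

work = 1e18  # a workload of one quintillion floating-point operations

print(f"Laptop:   {work / laptop_flops / 3600:,.0f} hours")  # ~2,778 hours (about 116 days)
print(f"Frontier: {work / frontier_flops:.2f} seconds")      # under one second
```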

 

Fastest Supercomputers in the World

The following supercomputers are ranked by Top500, a project co-founded by Dongarra that ranks the fastest non-distributed computer systems based on their ability to solve a large, dense system of linear equations built from a random matrix. It uses the LINPACK benchmark, which measures the rate of floating-point operations a system sustains while solving that problem.
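A toy version of that measurement can be run on any computer with NumPy. This is only an illustration of the idea behind LINPACK, not the actual HPL benchmark code, which performs the same kind of dense solve at enormous scale across every node of the machine.

```python
# A toy echo of the LINPACK benchmark: time a dense solve of Ax = b for a
# random matrix, then convert the elapsed time into a FLOP rate.
import time
import numpy as np

n = 4000
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)        # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3           # standard operation count for factoring an n x n matrix
print(f"~{flops / elapsed / 1e9:.1f} gigaFLOPS on this machine")
```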

1. Frontier

Operating out of Oak Ridge National Laboratory in Tennessee, Frontier is the world’s first recorded supercomputer to break the exascale barrier, sustaining computational power of 1.1 exaFLOPS. In other words, it can perform more than a quintillion calculations per second. Built out of 74 HPE Cray EX supercomputing cabinets — which weigh nearly 8,000 pounds each — it debuted more powerful than the next seven supercomputers combined. According to the laboratory, it would take the entire planet’s population more than four years to solve what Frontier can solve in one second.

2. Fugaku

Fugaku debuted at 416 petaFLOPS — a performance that won it the world title for two consecutive years — and, following a software upgrade in 2020, has since peaked at 442 petaFLOPS. It’s built with Fujitsu A64FX microprocessors spread across 158,976 nodes. The petascale computer is named after an alternative name for Mount Fuji, and is located at the Riken Center for Computational Science in Kobe, Japan.

3. Lumi

A consortium of 10 European countries banded together to bring about Lumi, Europe’s fastest supercomputer. This 1,600-square-foot, 165-ton machine has a sustained computing power of 375 petaFLOPS, with peak performance at 550 petaFLOPS — a capacity comparable to 1.5 million laptops. It’s also one of the most energy efficient models to date. Located at CSC’s data center in Kajaani, Finland, Lumi is kept cool by natural climate conditions. It also runs entirely on carbon-free, hydro-electric energy while producing 20 percent of the surrounding district’s heating from its waste heat.

4. Leonardo

Leonardo is a petascale supercomputer hosted by the CINECA data center in Bologna, Italy. The 2,000-square-foot system is split into three modules — the booster, the data-centric, and the front-end and service modules — which run on Atos BullSequana XH2000 hardware with more than 13,800 Nvidia Ampere GPUs. At peak performance, processing speeds hit 250 petaFLOPS.

5. Summit

Summit was the world’s fastest computer when it debuted in 2018, and holds a current top speed of 200 petaFLOPS. The United States Department of Energy sponsored the project, built by IBM, with a $325 million contract. Applied to AI, materials science and genomics research, the 9,300-square-foot machine has been used to simulate earthquakes and extreme weather conditions and to predict the lifespan of neutrinos. Like Frontier, Summit is hosted by the Oak Ridge National Laboratory in Tennessee.

Related Reading: Will Exascale Computing Change Everything?

 

Supercomputers vs. General-Purpose Computers

Processing power is the main difference that separates supercomputers from your average, everyday laptop. This can be credited to the multiple CPUs built into their architecture, which outnumber the sole CPU found in a general-purpose computer by tens of thousands.

In terms of speed, the typical performance of an everyday device — measured between one gigaFLOPS and tens of teraFLOPS, or roughly one billion to tens of trillions of computations per second — pales in comparison to today’s 100-petascale machines, capable of solving 100 quadrillion computations per second.

The other big difference is size. A laptop slips easily into a tote bag, but scalable supercomputing machines weigh tons and occupy thousands of square feet. They generate so much heat — which, in some cases, is repurposed to heat local towns — that they require a built-in cooling system to properly function.

 

Supercomputers vs. Quantum Computers

While supercomputers use classical computing hardware and binary bits, quantum computers use quantum hardware, principles of quantum mechanics and quantum bits (qubits) in order to process calculations. Qubits can store information as a 0, 1 or both simultaneously, allowing quantum computers to sometimes operate faster than supercomputers and solve problems that can be too complex for supercomputers, like molecular simulations or optimization problems.
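For a rough feel of that difference, the math of a single qubit can be simulated in ordinary code. The snippet below is classical NumPy mimicking quantum behavior, not quantum hardware.

```python
# Classical simulation of one qubit in an equal superposition of 0 and 1.
# Measuring it collapses the state to a single outcome chosen at random.
import numpy as np

state = np.array([1.0, 1.0]) / np.sqrt(2)    # amplitudes for the 0 and 1 states
probs = np.abs(state) ** 2                   # Born rule: each outcome has probability 0.5
measurements = np.random.choice([0, 1], size=10, p=probs)
print(probs, measurements)                   # roughly a 50/50 mix of 0s and 1s
```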

Supercomputer hardware is also deterministic (given the same input or problem, it produces the same output or solution), whereas quantum computer hardware is probabilistic (it accounts for randomness and produces approximate, probability-weighted solutions).

Currently, supercomputers can be found across government and industrial settings, while quantum computing technology is still developing and largely used in research settings. 

 

Supercomputers and Artificial Intelligence

Supercomputers can train various AI models at quicker speeds while processing larger, more detailed data sets.

AI can also lighten a supercomputer’s workload, as it uses lower-precision calculations that are then cross-checked for accuracy. And because AI relies heavily on algorithms that learn from examples, over time the data effectively does the programming.

Paired together, AI and supercomputers have boosted the number of calculations per second achievable by today’s fastest supercomputer by roughly a factor of six.

This pairing has even produced an entirely new standard for measuring performance, known as the HPL-MxP benchmark, which balances traditional hardware-based metrics with mixed-precision, algorithm-driven computation.
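The low-precision-plus-correction idea can be sketched in a few lines of Python. This is a simplified stand-in for the mixed-precision approach that HPL-MxP rewards, not the benchmark itself: do the expensive factorization in fast, low precision, then polish the answer with a few high-precision correction steps.

```python
# Mixed-precision iterative refinement in miniature: factor the matrix once in
# fast float32, then refine the solution using float64 residuals.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # a well-conditioned test matrix
b = rng.standard_normal(n)

lu, piv = lu_factor(A.astype(np.float32))         # heavy work done in low precision
x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)

for _ in range(5):
    r = b - A @ x                                  # residual, computed in double precision
    x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)

print(np.linalg.norm(b - A @ x))                   # error shrinks toward double precision
```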

Dongarra thinks supercomputers will shape the future of AI, though exactly how that will happen isn’t entirely foreseeable.

“To some extent, the computers that are being developed today will be used for applications that need artificial intelligence, deep learning and neural network computations,” Dongarra said. “It’s going to be a tool that aids scientists in understanding and solving some of the most challenging problems we have.”

 

Watch as countries compete for the top spot in supercomputing over the past 80 years. | Video: galactika!

History of Supercomputers

While “super computing” was coined by the now-defunct newspaper New York World in 1929 to describe large IBM tabulators at Columbia University that could read 150 punched cards per minute, the world’s first supercomputer — the CDC 6600 — didn’t arrive on the scene until 1964.

Even though computers of this era were built with only one processor, the CDC 6600 managed to outperform the leading machine of the time — IBM’s 7030 Stretch — roughly threefold, which is exactly what made it so “super.” Designed by Seymour Cray, the CDC 6600 was capable of completing three million calculations per second. Built with 400,000 transistors, more than 100 miles of hand wiring and a Freon cooling system, it was about the size of four file cabinets and sold for about $8 million — roughly $78 million today.

Cray’s vector supercomputers would dominate until the 1990s, when a new concept — massively parallel processing — took over. These systems ushered in the modern era of supercomputing, in which thousands of processors work on a problem in unison.

Beginning with Fujitsu’s Numerical Wind Tunnel in 1994, the global spotlight shifted from American labs to those in Japan. This model accelerated processing speeds by increasing the number of processors from a standard of eight units to 167. Just two years later, Hitachi pushed this figure into the thousands when it built its SR2201, with a total of 2,048 processors.

By the early 21st century, it had become the norm to design supercomputers with tens of thousands of CPUs, eventually pushing performance into the petascale. Increasing core counts, faster interconnects, greater memory capacity and better power efficiency, along with the incorporation of GPUs, artificial intelligence and edge computing, are some of the latest efforts defining supercomputing today.

With Frontier’s debut in May 2022, we have entered the era of exascale supercomputing, producing machines capable of computing one quintillion calculations per second.

Related Reading: Parallel Processing Examples and Applications to Know

 

Future of Supercomputing

Your current smartphone is as fast as a supercomputer was in 1994 — one that had 1,000 processors and did nuclear simulations. With such rapid acceleration, it’s natural to wonder what comes next.

Just around the corner, two exascale supercomputing systems, Aurora and El Capitan, are slated for installation at United States laboratories in 2024, where they will be used to create neural maps and research ways to accelerate industry.

“There are limitations on what we can do today on a supercomputer,” Mike Papka, division director of the Argonne Leadership Computing Facility, told Built In. “Right now, we can do simulations of the evolution of the universe. But with Aurora, we’ll be able to do that in a more realistic manner, with more physics and more chemistry added to them. We’re starting to do things like try to understand how different drugs interact with each other and, say, some form of cancer. We’ll be able to do that on an even larger scale with Aurora.”

The deployment of Europe’s first exascale supercomputer, named Jupiter, is also scheduled for 2024; it will focus on climate change, sustainable energy solutions and how best to combat pandemics.

Current supercomputing trends indicate a continuation of AI’s strong hold on science and tech innovation. Argonne National Laboratory associate laboratory director Rick Stevens sees AI-based inferencing techniques, so-called “surrogate machine learning models,” replacing simulations altogether, Inside HPC reports.

The influence of machine learning and deep learning can be seen in the growing interest in building systems with GPU-heavy architectures, which specialize in massively parallel processing.

“GPU-heavy machine learning and artificial intelligence calculations are gaining popularity so much so that there are supercomputers dedicated for GPU-based computation only,” Bhattacharya said.

“Traditionally, communication and computation technologies were at the heart of the supercomputer and its advancement,” he explained. “However, as individual computers become more power hungry, datacenter designers have had to shift their focus on the adequate and sustained cooling of these machines.”

Quantum computation is also rapidly advancing, Bhattacharya said, and could potentially team with supercomputers to take on unresolved societal quandaries together sooner than we think. While each excels in its own right, quantum computers guide the way to understanding life on the quantum scale, capable of modeling the state of an atom or a molecule.

Now that the top speeds of today’s machines have breached the exascale, the race to the zettascale has begun. Based on the trends of the last 30 years, Bhattacharya predicts the record will be bested within a decade.

Frequently Asked Questions

What is a supercomputer?

Supercomputers are powerful, high-performing computer systems that can process lots of data and solve complex calculations at fast speeds, thanks to their ability to split tasks into multiple parts and work on them in parallel.

What are supercomputers used for?

Supercomputers are commonly used for making predictions with advanced modeling and simulations. This can be applied to climate research, weather forecasting, genomic sequencing, space exploration, aviation engineering and more.

How are supercomputers different from normal computers?

Normal computers carry out one task at a time, while supercomputers can execute many tasks at once. Additionally, supercomputers are much faster, bigger and have more processing power than everyday computers used by consumers.

What is the fastest supercomputer in the world?

Frontier is the fastest supercomputer in the world as of 2024, exceeding a quintillion calculations per second. The supercomputer, built by Hewlett Packard Enterprise, is based at the Oak Ridge National Laboratory in Tennessee, United States.

How many supercomputers are there?

The exact number of supercomputers that exist in the world is unknown, though at least 500 supercomputers are known to be in operation according to the TOP500 project.

An earlier version of this story was written by Mike Thomas and published in 2019. Hal Koss contributed reporting to this story.
