UPDATED BY
Rose Velazquez | Jul 08, 2022

Walking among the rows of supercomputer cabinets in Argonne National Laboratory’s Leadership Computing Facility is kind of like wandering through a high-tech version of The Shining’s Overlook Maze — minus the ax-wielding madman.

What Is a Supercomputer?

Supercomputers divide problems or tasks into multiple parts that are worked on simultaneously by thousands of processors, making them dramatically faster than the everyday laptop or desktop computer.

The facility, located about 25 miles from Chicago, is home to supercomputing resources housed in a 25,000-square-foot data center with low ceilings and a white tile floor. With all of that equipment whirring away, it is not a quiet place. Nearest the computers, visitors must speak-shout in order to be heard above a constant loud hum.

With support from the U.S. Department of Energy, Argonne’s work bolsters groundbreaking research.

This kind of state-of-the-art technology is the backbone of “changing the way scientists explore the evolution of our universe, biological systems, weather forecasting and even renewable energy.”

 

Theta is one of two supercomputers at the Argonne National Laboratory. | Photo: Argonne National Laboratory

What Is a Supercomputer?

Supercomputers have for years employed a technique called “massively parallel processing,” whereby problems are split into parts and worked on simultaneously by thousands of processors as opposed to the one-at-a-time “serial” method of, say, your regular old MacBook Air. Here’s another good analogy, this one from Explainthatstuff.com:

It’s like arriving at a checkout with a cart full of items, but then splitting your items up between several different friends. Each friend can go through a separate checkout with a few of the items and pay separately. Once you’ve all paid, you can get together again, load up the cart, and leave. The more items there are and the more friends you have, the faster it gets to do things by parallel processing — at least, in theory.
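To make that analogy concrete, here is a minimal sketch in Python. It is purely illustrative (real supercomputers are typically programmed with frameworks like MPI across thousands of nodes); it simply splits one big summation into chunks that several worker processes handle at the same time:

```python
# A toy version of parallel processing: split one big job into chunks
# and let several worker processes handle them simultaneously.
# Real supercomputers use frameworks like MPI across many nodes;
# this sketch only uses the cores of a single machine.
from multiprocessing import Pool

def partial_sum(bounds):
    """Add up the integers in [start, stop) -- one 'friend' at one checkout."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # make sure the last chunk reaches n

    with Pool(processes=workers) as pool:
        results = pool.map(partial_sum, chunks)  # chunks run in parallel

    print(sum(results))  # same answer as sum(range(n)), computed in pieces
```

Each worker plays the role of one friend at a separate checkout, and the partial results are combined at the end, just as the cart gets reloaded.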

“You have to use parallel computing to really take advantage of the power of the supercomputer,” said Caitlin Joann Ross, who did a six-month residency at Argonne while she was a doctoral candidate at Rensselaer Polytechnic Institute. “You have to understand how data needs to be exchanged between processes in order to do it in an efficient way, so there are a lot of different little challenges that make it a lot of fun to work with. Although there are days when it can certainly be frustrating.”

“Debugging” issues, she told Built In in 2019, are the chief cause of that frustration. Calculations that might run smoothly using four processors, for instance, could break down if a fifth is added. 

“If you’ve got everything running perfectly,” Ross said, “then whatever it is that you’re running is running a lot faster than it might on a computer with fewer processors or a single processor. There are certain computations that might take weeks or months to run on your laptop, but if you can parallelize it efficiently to run on a supercomputer, it might take a day.”
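How much that parallelization can actually help is often estimated with Amdahl's law, which caps the possible speedup by whatever fraction of the work still has to run serially. A rough sketch, with made-up fractions chosen only for illustration:

```python
def amdahl_speedup(parallel_fraction, processors):
    """Upper bound on speedup when only part of a job can be parallelized (Amdahl's law)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# Illustrative numbers only: if 95% of a job parallelizes perfectly,
# even 100,000 processors give at most ~20x speedup, because the
# remaining serial 5% dominates the runtime.
for p in (4, 100, 1_000, 100_000):
    print(p, round(amdahl_speedup(0.95, p), 1))
```

That ceiling is one reason efficient data exchange between processes, the challenge Ross describes, matters so much.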

Another area of Ross’ work involved simulating supercomputers themselves — more specifically, the networks that connect their compute nodes. Data from applications that run on actual supercomputers is fed into a simulator, which allows network behavior to be studied without taking the whole system offline. One phenomenon studied this way is “communications interference.”

“In real life, different users will submit jobs to the supercomputer, which will do some type of scheduling to determine when those jobs run,” Ross said. “There will typically be multiple different jobs running on the supercomputer at the same time. They use different compute nodes, but they share the network resources. So the communication from someone else’s job may slow down your job, based on the way data is routed through the network. With our simulations, we can explore these types of situations and test out things such as other routing protocols that could help improve the performance of the network.”
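The real simulators Ross describes model routing and network topology in detail. The toy sketch below only captures the basic idea of interference, using a single shared link and invented message sizes, to show how another job's traffic can stretch out your own job's communication time:

```python
# Toy model of "communications interference": two jobs share one network link.
# All numbers are invented, and the model is deliberately crude. Real tools
# simulate routing, topology and congestion in far more detail.

def drain_time(messages, link_bytes_per_sec=1e9):
    """Time for a single shared link to push through a list of message sizes (bytes)."""
    return sum(size / link_bytes_per_sec for size in messages)

job_a = [50_000_000] * 20      # 20 messages of 50 MB from "our" job
job_b = [100_000_000] * 10     # traffic from someone else's job on the same link

print(f"Job A's traffic alone:      {drain_time(job_a):.2f} s")
print(f"Both jobs' traffic sharing: {drain_time(job_a + job_b):.2f} s")
# Job A's messages now compete with job B's for the same link, so job A's
# communication phase can take noticeably longer than when it runs alone.
```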

Read More: 14 High Performance Computing Applications & Examples

 

Israeli neuroscientist Henry Markram talks about building a model of the human brain. | Video: TED

What Do Supercomputers Do?

Supercomputing’s chief contribution to science has been its ever-improving ability to simulate reality. This capability helps humans make better performance predictions and design better products in fields ranging from manufacturing and oil to pharmaceuticals and the military. Jack Dongarra, one of the world’s foremost supercomputing experts, likened that ability to having a crystal ball.

“Say I want to understand what happens when two galaxies collide,” Dongarra said. “I can’t really do that experiment. I can’t take two galaxies and collide them. So I have to build a model and run it on a computer. Or in the old days, when they designed a car, they would take that car and crash it into a wall to see how well it stood up to the impact. Well, that’s pretty expensive and time consuming. Today, we don’t do that very often; we build a computer model with all the physics [calculations] and crash it into a simulated wall to understand where the weak points are.”

What Are Supercomputers Used For?

Supercomputers are used to simulate various potential outcomes very quickly. These blazing-fast computers are used by government organizations and corporations for everything from finding new oil repositories to developing new life-saving medicines.

Companies, especially, see the monetary value in supercomputing simulations, whether they’re manufacturing cars, drilling for oil or discovering new drugs. High performance computing’s life sciences and finance applications are pushing the market toward a projected value of more than $50 billion by 2028.  

“Industry gets it. They are investing in high performance computers to be more competitive and to gain an edge on their competition. And they feel that money is well spent. They are investing in these things to help drive their products and innovation, their bottom line, their productivity and their profitability,” Dongarra, who spent an early portion of his career at Argonne, said.

But it’s bigger than just ROI. 

“Traditional commercial enterprise can see return on investment calculations of, ‘It saved us this amount of physical testing costs,’ or, ‘We were able to get to market quicker and therefore gain extra income,’” Andrew Jones, a UK-based high performance computing consultant, said. “But a basic ROI calculation for HPC is not necessarily where the value comes from. If you ask an oil company, it doesn’t come down to being able to find oil 30 percent cheaper. It comes down to being able to find oil or not.”

Companies that use supercomputing to make big-picture improvements and increase efficiency have an edge on their competitors. 

“And the same is true for a lot of the science,” Jones added. “You’re not necessarily looking for a return on investment in a specific sense, you’re looking for general capability — whether our researchers are able to do science that is internationally competitive or not.”

And on the nuclear weapons testing front, supercomputers have proven a huge boon to things that go boom. Sophisticated simulations have eliminated the need for real-world testing: “They don’t develop something, go out into the desert, drill a hole and see if it works,” Dongarra said of a practice that stopped decades ago. “They simulate that [weapon] design on a supercomputer.”

In a major upgrade, the Air Force Research Lab — one of five U.S. Department of Defense supercomputing centers — installed four sharable supercomputers on which the entire U.S. military can conduct classified research.

Supercomputers also impact artificial intelligence. They turbo-charge machine learning processes to produce quicker results from more data, as in climate science research.

“To be engaged in supercomputing is to believe in the power of the algorithm to distill valuable, meaningful information from the repeated implementation of procedural logic,” Scott Fulton III wrote for ZDNet. “At the foundation of supercomputing are two ideals: one that professes that today’s machine will eventually reach a new and extraordinarily valuable solution, followed by a second and more subtle notion that today’s machine is a prototype for tomorrow’s.”

While Dongarra thinks supercomputers will shape the future of AI, exactly how that will happen isn’t entirely foreseeable.

“To some extent, the computers that are being developed today will be used for applications that need artificial intelligence, deep learning and neuro-networking computations,” Dongarra said. “It’s going to be a tool that aids scientists in understanding and solving some of the most challenging problems we have.”

 

How Fast Is a Supercomputer?

Because faster computers allow researchers to more quickly gain greater insight into whatever they’re working on, there’s an ever-mounting need — or at least a strong desire — for speed. Dongarra called it “a never-ending quest,” and sustained exascale capability is the pinnacle of that quest so far, though it is just the latest of many milestones.

The Department of Energy labeled exascale computing “the next milestone in the development of supercomputers” — speedier than the most powerful supercomputers to date.

Scores more supercomputers with sometimes epic-sounding names (Titan, Excalibur) operate in 31 other countries around the world. Manufactured by 36 different vendors, they’re driven by 23 generations of processors and serve a variety of industries as well as government functions ranging from scientific research to national defense.

Those stats are from the website TOP500.org. Co-founded by Dongarra, it has kept tabs on all things supercomputing since 1993 and uses his LINPACK benchmark, which times how fast a computer solves a large, dense system of linear equations, to rank performance.
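The official benchmark follows strict rules, but to get a ballpark feel for the same kind of measurement on an ordinary machine, you can time a large matrix multiplication and divide the operation count by the elapsed time. This sketch assumes NumPy is installed and is in no way the real LINPACK/HPL benchmark:

```python
# Rough, unofficial FLOPS estimate: time a dense matrix multiply.
# The real LINPACK/HPL benchmark solves a dense linear system under strict rules;
# this is only a ballpark illustration of the idea.
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3            # multiply-adds in an n x n matrix product
print(f"~{flops / elapsed / 1e9:.1f} GFLOPS")  # a typical laptop: a few to tens of GFLOPS
```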

10 Fastest Supercomputers in the World (Source: TOP500)

  1. Frontier (U.S.)
  2. Fugaku (Japan)
  3. LUMI (Finland)
  4. Summit (U.S.)
  5. Sierra (U.S.)
  6. Sunway TaihuLight (China)
  7. Perlmutter (U.S.)
  8. Selene (U.S.)
  9. Tianhe-2A (China)
  10. Adastra (France)

“The race between countries is partly real and partly artificial,” Jones said. “So, for example, if you are the director of a U.S. national lab and you’re trying to secure funding for your next HPC machine, it’s a very good argument to say that, ’Well, China’s got one that’s ten times bigger, so we need to catch up.’” 

Government officials also enjoy a bit of supercomputing swagger, talking up their gargantuan processing power as the key to societal improvement — and, of course, evidence of their country’s total awesomeness.

“It’s basic economic competitiveness,” Jones said. “If you drop so far off that your nation is no longer economically competitive with other comparably sized nations, then that leads to a whole load of other political and security issues to deal with.”

 

The extremely powerful and fast Aurora supercomputer began installation at Argonne National Laboratory in 2022. | Photo: Argonne National Laboratory

Understanding Exascale Computing Speed

The world’s first exascale supercomputer, Frontier, made its debut at Tennessee’s Oak Ridge National Laboratory — another U.S. Department of Energy partner — in May 2022. The speediest of all supercomputers, Frontier earned the distinction of “the first true exascale machine.”

Frontier is among three exascale supercomputers receiving a chunk of a $1.8 billion investment from the U.S. Department of Energy. These supercomputers are capable of performing a billion billion (aka quintillion) calculations per second, putting them in a position to carry out minor computational miracles.

Installation of another of those three exascale supercomputers, dubbed Aurora, also began at Argonne in May 2022. In preparation for its arrival, the computing facility undertook a major expansion.

Measured as 10¹⁸ FLOPS (which stands for floating point operations per second), an exascale system is roughly six billion times faster than its long-ago predecessor, the groundbreaking Cray-1 from 1976. Put in more tangible terms courtesy of Design News, “A person adding 1+1+1 into a hand calculator once per second, without time off to eat or sleep, would need 31.7 trillion years to do what Aurora will do in one second.”
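As a back-of-the-envelope check on that comparison, assuming the commonly cited Cray-1 peak of roughly 160 megaflops:

```python
# Rough check of the exascale-versus-Cray-1 comparison.
# 160 megaflops is the commonly cited Cray-1 peak figure; treat it as an approximation.
cray_1_peak_flops = 160e6   # floating point operations per second
exaflop = 1e18

print(exaflop / cray_1_peak_flops)  # 6250000000.0, i.e. about six billion times faster
```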

“There are limitations on what we can do today on a supercomputer,” Mike Papka, division director of the Leadership Computing Facility, said after giving a tour of the space. “With Aurora, we can take those to the next level. Right now, we can do simulations of the evolution of the universe. But with Aurora, we’ll be able to do that in a more realistic manner, with more physics and more chemistry added to them. We’re starting to do things like try to understand how different drugs interact with each other and, say, some form of cancer. We can do that on a small scale now. We’ll be able to do that on an even larger scale with Aurora.” 

At a March 2019 press conference announcing Aurora’s installation, Argonne associate laboratory director Rick Stevens explained that the system will handle high performance computing applications as well as analysis of streaming data that’s generated by accelerators, detectors, telescopes and other research equipment.

When the “father of supercomputing,” Seymour Cray, first began building his revolutionary machines in the 1960s, such a rippling display of computational muscle was incomprehensible. More than a half century later, it’s slowly becoming the norm — and will someday seem as quaint as an Atari 2600 does now.

Further Reading: Quantum Computing: Everything You Need to Know

 

The Future of Supercomputing

Your current smartphone is as fast as a supercomputer was in 1994 — one that had 1,000 processors and ran nuclear simulations. (Is there an app for that?) It stands to reason, then, that the smartphone you have in a quarter-century could theoretically be on the level of Aurora. The point is, this stuff is speedy — and it’s only getting speedier.

“When I started in computing, we were doing megaflops — 10⁶ operations. So things change. There are changes in architecture, changes in software and applications that have to move along with that. Going to the next level is a natural progression,” Dongarra said.


A TOP500.org story paints a picture of things to come in which simulations take a back seat.

“Machine learning, in particular, could come to dominate most computing domains, including HPC (and even data analytics) over the next decade-and-a-half,” author Michael Feldman wrote. “While today it’s mostly used as an auxiliary step in traditional scientific computing – for both pre-processing and post-processing simulations, in some cases, like drug discovery, it could conceivably replace simulations altogether.”

Whatever form supercomputers take, Argonne’s Papka said they’ll become increasingly powerful and transformative, affecting everything from the pedestrian to the profound — from the design of more efficient electric car batteries to, just maybe, the eradication of long-battled diseases like cancer. Or so he hopes.  

“The betterment of mankind,” Papka said, “is a noble goal to have.”
