High-performance computing, or HPC, may sound niche, but it influences basically everything. Even soda cans.
Yes, cans are low-tech, but they’re high stakes. People use them constantly — globally, we use about 6,700 per second — which means the slightest design flaw wastes tons of aluminum and millions of dollars.
To ensure ideal construction, every element of modern cans has been “super-optimized,” according to Robert Combier, director of product marketing at Rescale.
What Is High-Performance Computing?
Such optimization usually involves high-performance computing systems, or networked clusters of computing cores. HPC can, in extreme cases, involve supercomputers — the highest-performance computers of all — but most HPC projects don’t require that much power. They simply require more power and speed than a lone desktop can provide.
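To make the idea concrete, here is a minimal, illustrative Python sketch (not any particular vendor's software) of the basic HPC pattern: splitting an embarrassingly parallel workload, like scoring many candidate designs, across every available core. Real clusters do the same thing across thousands of networked cores with job schedulers and libraries such as MPI; the toy simulate function below is just a stand-in for an expensive solver.

```python
# Minimal sketch of the core HPC pattern: farm independent tasks out to many cores.
# The "simulate" function is a placeholder for a real, expensive computation.
import math
from multiprocessing import Pool

def simulate(design_id: int) -> float:
    """Stand-in for an expensive simulation of one candidate design."""
    return sum(math.sin(i * design_id) for i in range(100_000))

if __name__ == "__main__":
    candidate_designs = range(64)
    with Pool() as pool:  # one worker process per CPU core by default
        scores = pool.map(simulate, candidate_designs)
    best = max(range(len(scores)), key=lambda i: scores[i])
    print(f"best design: {best} (score {scores[best]:.2f})")
```

Swap the placeholder for a real solver, and a laptop's handful of cores for a cluster's thousands, and you have the rough shape of most of the projects below.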
We’ve rounded up a handful of ways high-performance computing tools have been put to use across five industries.
Top Industries Using High-Performance Computing (HPC)
- Healthcare
- Engineering
- Aerospace
- Urban planning
- Finance and business
Healthcare and High-Performance Computing
Medicine and computing are as intimately intertwined as DNA’s double helix. Computers already store confidential patient information, track vital signs and analyze drug efficacy. The rise of HPC has allowed medical professionals to digitize even more complex processes, too, like genome sequencing and drug testing.
FASTER HEALTHCARE
Location: Houston, Texas
Healthcare workloads are complex and data-heavy, and HPC can scale to manage them. Hewlett Packard Enterprise offers a range of hardware and software products for HPC deployment and performance, as well as AI-integrated solutions and consulting services. The company’s HPC technology allows professionals to process data in near-real time and receive insights for diagnoses, clinical trials or immediate intervention.
CANCER TREATMENT
Location: Austin, Texas
Researchers at the University of Texas at Austin use HPC to refine the art of cancer treatment. In a breakthrough 2017 project, researchers scanned petabytes of data for correlations between a cancer patient’s genome and the composition of their tumors. This set the stage for further HPC-powered cancer research at the university, which now includes projects to characterize and treat prostate, blood-related, liver and skin cancers.
“[HPC] has been vital to our analysis of cancer genomics data, both for providing the necessary computational power and the security needed for handling sensitive patient genomic datasets,” said Vishy Iyer, a professor of molecular bioscience and one of the early project leaders.
The university’s high-powered cluster now helps with every facet of the battle against cancer: drug development, diagnosis, even drafting personalized treatment plans.
GENOMIC SEQUENCING
Location: San Diego, California
Before we delve into what this institute does, some context: sequencing the first human genome took 13 years. That job finished back in 2003, and since then, the process has been streamlined beyond recognition. Rady Children’s Institute’s team can now sequence a newborn’s genome in less than a day using something called DRAGEN. An end-to-end HPC technology tool, it set the Guinness World Record for genomic sequencing in 2018 by completing its task in a mere 19.5 hours — 6.5 hours faster than the previous record.
Engineering and High-Performance Computing
Engineering is all about boosting a machine’s real-world performance, but testing prototypes is expensive (and occasionally dangerous). To work around this, engineers often test new designs in massive computer simulations. Just like the real world, these simulated worlds have gravity, heat, wind and a sprinkling of chaos. Unlike the real world, they run on HPC systems. Thus far, simulations have been used to test the functionality of airplane parts, streamline racing bike frames and much more.
LIGHTER AIRCRAFT
Location: Arlington, Virginia
Before they founded Rescale, Joris Poort and Adam McKenzie worked at Boeing. Specifically, they worked on engineering the 787, a commercial jet known for its fuel efficiency. Bulkiness isn’t fuel-efficient, so Poort and McKenzie focused on reducing the jet’s weight.
This required running massive optimization functions — essentially, it was an HPC project. At the time, though, HPC wasn’t a mainstream concept, so Poort and McKenzie cobbled together multi-core processing power on the fly.
“They took… leftover computer resources that weren’t being used by the regular engineering department to run super massive optimization routines on the weekend and at night,” Combier said. “It's kind of like using the HOV lane when no one else is using it.”
It worked. By shaving more than 200 pounds off the 787, they saved Boeing more than $200 million, Combier said. Their improvisation, they realized, was actually a major logistical feat — and the seed of a business venture, Rescale, which was founded in 2011.
STREAMLINED BIKES
Location: Waterloo, Wisconsin
Trek Bicycles uses Rescale’s HPC platform to optimize the aerodynamics of its bikes. Recently, for instance, the company ran a simulation through Rescale’s interface to explore how bikes performed in drafting formations. (Drafting is a cycling technique where cyclists form a single-file line, and the person at the front “pulls,” or takes the brunt of the wind resistance.) Using two terabytes of on-demand computing power from Rescale, Trek quickly evaluated bike performance from multiple angles.
FUEL CONSERVATION
Location: Washington, D.C.
Launched in 2010, the Department of Energy’s SuperTruck initiative focused on the invention of truck add-ons that could boost fuel efficiency. That meant studying what held trucks back. Which parts of their design generated the most preventable drag?
Using the supercomputer at Lawrence Livermore National Laboratory, researchers found that trucks should wear skirts. Different from the human fashion staple in both function and form, these are panels that attach to a long-haul truck’s bottom flanks, cutting aerodynamic drag by filling the gap between the front and back wheels. (Skirts can also fill gaps between trucks and trailers.) These devices can save up to $5,000 of fuel per truck per year, by one estimate.
AUTONOMOUS VEHICLE TECHNOLOGY
Location: Redmond, Washington
How do self-driving vehicles know when to stop at a sign or avoid hazards? These decisions come from complex machine learning algorithms, some of which are supported by HPC. Microsoft’s Azure HPC technology has helped carry out perception validation and decision-making for autonomous vehicles, as well as run multiphysics and crash-test simulations. Because Azure can run in the cloud or a hybrid cloud, it also eases the strain of heavy data workloads compared to purely on-premises setups, helping developers validate systems faster and, ultimately, reduce the chance of accidents.
Aerospace and High-Performance Computing
Outer space is full of unknowns. Is it inhabited by aliens? If so, are they friendly? (Probably. And probably not.) Is a meteorite poised to collide with the Earth — and hit you, personally? More seriously: Where did the universe come from, and what’s the weather like on the sun? Those questions have major implications for us Earthlings, but it takes a lot of resources and technological savvy to gather the data that’s necessary to find answers. That’s where models rooted in HPC come in handy. They make the most of information gleaned by probes and satellites.
SPACE RESEARCH
Location: Berkeley, California
This University of California at Berkeley project dissects the current state of the universe in an effort to decipher its origins. Did it start with the Big Bang? What was the Big Bang like, besides big?
POLARBEAR researchers look for clues in the universe’s Cosmic Microwave Background, or CMB. This means amassing a lot of data — “nearly one gigabyte of data every day that must be analyzed in real time,” according to Brian Keating, a lead on the project. “This is an intensive process that requires dozens of sophisticated tests to assure the quality of the data.”
To manage the workload, researchers rely on the Gordon supercomputer at the University of California San Diego. That might sound bureaucratic, but to put it in a more exciting way, HPC could unlock the mysteries of the universe.
SOLAR FLARE DETECTION
Location: Washington, D.C.
Caused by storms on the sun’s surface, solar flares occur 93 million miles away. Even so, they send streams of charged particles into the Earth’s atmosphere, which can break up radio communications and disrupt GPS navigation if preventative measures aren’t taken.
Here’s one example: Researchers at NASA’s Frontier Development Lab taught a deep-learning algorithm to predict flares based on photos of the sun snapped from an orbiting observatory — a process that required HPC. The algorithm can now infer solar weather more accurately than previous models.
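As a rough illustration only (the article does not describe NASA’s actual model, and everything below is a placeholder), this sketch shows the general kind of small image classifier such a project might start from: a convolutional network that labels solar images as flare-producing or quiet. Training something like this on years of high-resolution observatory imagery is what pushes the job into HPC territory.

```python
# Illustrative sketch only: a tiny convolutional classifier for labeling solar
# images as "flare" or "quiet." Architecture, data and labels are synthetic
# placeholders, not NASA's Frontier Development Lab model.
import torch
import torch.nn as nn

class TinyFlareNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # single-channel solar image
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 2)                # two classes: flare / quiet

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Synthetic stand-in data: 32 fake 64x64 solar images with random labels.
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 2, (32,))

model = TinyFlareNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # training at real scale and resolution is what demands HPC
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```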
SIMULIA
Location: Vélizy-Villacoublay, France
Manufacturing updated aircraft already demands enormous amounts of time, money and resources, and as airline industry demands keep climbing, the pressure on engineers only grows. HPC can take some of that weight off their shoulders.
Simulia, an HPC-powered simulation software suite developed by Dassault Systèmes, uses computational fluid dynamics to closely simulate the conditions of aircraft flight. With the software, engineers can observe flight conditions and adjust construction accordingly, without spending time and resources on test aircraft or flights. Its applications range from commercial aircraft to defense and spacecraft.
Urban Planning and High-Performance Computing
Smart people are book smart and street smart. A smart city is data smart. Major metropolises across the globe have begun collecting sensor data on weather, traffic patterns and noise levels, all of which allow officials to make data-driven decisions about everything from when to issue smog warnings to how often trains should run. It also lets them quantify longer-term issues like climate change and infrastructure decay.
Because smart city sensor networks collect so much data, they need HPC to parse it all.
SMOG LEVEL FORECASTING
Location: Iowa City, Iowa
The idea for this 2018 project came from a University of Iowa graduate student named Pablo Saide, who grew up in Santiago, Chile. Encircled by mountains and practically windless, the city is famous (or infamous) for its smog, but officials share smog warnings only 24 hours in advance. Saide saw firsthand how this hamstrung public health efforts. People in his hometown struggled with asthma, cancer and other smog-exacerbated conditions.
The city needed a better system, and Saide collaborated with U of I engineers to create one: a model built on the university’s HPC cluster, capable of forecasting smog incidents 48 hours in advance. The research team drew on data tracking smog plume movement, weather forecasts and air quality, culled from monitoring stations across Santiago.
THE ARRAY OF THINGS PROJECT
Location: Chicago, Illinois
Devices like Tesla cars, smart thermostats and smart light bulbs are well-known components of the Internet of Things — a network of everyday objects that can share and receive digital data. Hundreds of Chicago lampposts are part of this same web.
The city’s Array of Things project, launched in 2016, has installed a network of versatile sensors at the top of outdoor light posts. (The ultimate goal: 500 sensors citywide.) These sensors are not intended for tracking individuals, per the privacy policy, but for tracking macro issues of climate and infrastructure. Depending on how they’re programmed, these sensors can collect data on temperature, light, barometric pressure, traffic and carbon monoxide levels.
Which poses a computing challenge.
“The amount of data we want to analyze would swamp any network,” said Pete Beckman, one of the sensors’ lead designers. “[It] can’t be sent back to the data center for processing, it has to be processed right there in a small, parallel supercomputer.”
In other words, the networked sensors all process their own data in concert — a high-tech scheme in which edge computing and HPC merge.
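As a hypothetical sketch of that pattern (not code from the Array of Things itself), the snippet below reduces bursts of raw sensor readings in parallel on the node and emits only small summaries, which is the part worth sending over the network.

```python
# Hypothetical edge-computing sketch: crunch raw sensor bursts locally, in
# parallel, and transmit only compact summaries upstream.
import random
import statistics
from concurrent.futures import ProcessPoolExecutor

def summarize(readings: list[float]) -> dict:
    """Reduce a burst of raw samples to a tiny payload worth transmitting."""
    return {"mean": statistics.mean(readings), "max": max(readings), "n": len(readings)}

if __name__ == "__main__":
    # Fake raw bursts from four on-node sensors (e.g., temperature, CO, noise, light).
    sensor_bursts = [[random.gauss(20, 5) for _ in range(10_000)] for _ in range(4)]
    with ProcessPoolExecutor() as pool:  # the parallel work never leaves the node
        summaries = list(pool.map(summarize, sensor_bursts))
    print(summaries)  # only this small payload goes to the data center
```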
CONSTRUCTION
Location: Tianjin, China
The National Supercomputing Center lab at the National University of Defense Technology in Tianjin houses supercomputer Tianhe-1A, one of the 100 most powerful in the world. A jack of all trades, it routinely handles more than 1,000 jobs per day. One of its more interesting ongoing tasks, though, is simulating and optimizing construction projects. It helps identify ideal building materials and manage how they’re transported to the construction site; the computer also ensures the crews use the power grid efficiently.
The supercomputer’s planning prowess has eco-friendly and budget-friendly implications: “[Big] data-based modeling of a subway project can reduce construction costs by 10 to 20 percent,” said Meng Xiangfei, head of the Center’s applications department.
Finance & Business and High-Performance Computing
HPC systems are essentially normal computers on steroids. They’re massively powerful — some supercomputers work more than a million times faster than a desktop — and all that power doesn’t just allow engineers and researchers to tackle complex problems. It’s also lucrative. In a cryptocurrency context, HPC systems essentially print money. The larger world of commerce isn’t so different; HPC systems give businesses a commercial edge when it comes to product development and day-to-day agility.
BITCOIN
Location: The web!
It would take months to fully explain the internal workings of bitcoin, the original and most famous cryptocurrency. In an HPC context, though, the key thing is that bitcoin must be “mined.” In a digital context, mining is less about pickaxes than about solving complex computational math problems on the bitcoin network. Every time a computer solves one, an algorithm mints fresh bitcoin and deposits it in the computer owner’s account.
Like gold mining, bitcoin mining has always required a bit of luck. The problems were just that hard, and they’ve gotten harder since the bitcoin network debuted in 2009. At this point, they’re above a personal computer’s pay grade. So miners have started pooling their hardware, rigging individual desktops into improvised, multi-core HPC setups.
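For the curious, here is a minimal Python sketch of the proof-of-work idea behind that mining: hash a block of data with an incrementing nonce until the SHA-256 digest falls below a difficulty target. Real bitcoin mining double-hashes binary block headers at an astronomically higher difficulty, which is exactly why pooled, parallel hardware pays off; treat the parameters below as toy values.

```python
# Toy proof-of-work "miner": find a nonce whose SHA-256 hash falls below a target.
# Difficulty here is trivial compared to the real bitcoin network.
import hashlib

def mine(block_data: str, difficulty_bits: int = 18):
    target = 2 ** (256 - difficulty_bits)  # smaller target means a harder search
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest           # success: this nonce "mines" the block
        nonce += 1

nonce, digest = mine("example block data")
print(f"found nonce {nonce}, hash {digest}")
```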
SKYSCRAPER PLANNING
Location: Guelph, Canada
An engineering consulting firm, RWDI has worked on famous buildings including New York’s Freedom Tower and the world’s tallest skyscraper: Dubai’s Burj Khalifa. The firm’s work involves energy and water modeling, structural soundness checks and other technological assessments — computing-intensive tasks for which they’d long relied on an in-house HPC system.
A wind-engineering project in the Middle East, however, required more than a million core-hours and was poised to push their system beyond capacity. So the company collaborated with Rescale to build a hybrid HPC system, partially on-premises and partially in the cloud. It was a savvy business decision: It helped RWDI nail a lucrative project, and the flexible hybrid structure meant that future projects with different computing demands could easily be accommodated.
PRODUCT DEVELOPMENT
Location: Tokyo, Japan
Rescale client AGC is a titan of the glass industry, which is no small feat. As Combier pointed out, glass is everywhere — from phone screens to sliding doors — and improperly designed glass can cause severe injuries. Combier knows that firsthand. When he was young, his cousin ran into a glass door and cracked it. A “huge chunk of glass came down like a guillotine, and cut her really badly,” he said.
AGC’s engineering team runs a constant stream of simulations to prevent incidents like this. The company’s business model relies on what manager Atsuto Hashimoto calls “simulation-driven product development.” Rescale’s HPC system makes these models possible, allowing engineers to create innovative new glasswork for buildings and cars that won’t shatter under pressure.
VIRTUAL REALITY
Location: Stuttgart, Germany
There’s big talk of the metaverse entering the mainstream, and getting there will take some heavy-duty computing power. One example of HPC powering virtual reality is the CAVE. A cave automatic virtual environment, or CAVE, is a cube-shaped room covered with projection screens that create an immersive VR environment. The High-Performance Computing Center at Stuttgart uses HPC technology to build simulated environments so users can collaboratively interact with data or simply play around with ideas. The center’s CAVE is frequently used for data visualization in fluid dynamics, structural mechanics, architecture modeling and media arts, offering a peek into how people might work within a VR world.