Updated by Brennan Whitfield | Oct 21, 2022

Take all the help you can get. If parallel processing has a central tenet, that might be it. Some of the super-complex computations asked of today’s hardware are so demanding that the compute burden must be borne by multiple processors, effectively splitting up or “parallelizing” whatever task is being performed. The result? Slashed latencies and turbocharged completion times.

What Is Parallel Processing?

Parallel processing, or parallel computing, is the practice of speeding up a computational task by dividing it into smaller jobs that run across multiple processors at once. Applications for parallel processing include computational astrophysics, geoprocessing, financial risk management, video color correction and medical imaging.
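To make the core idea concrete, here's a minimal, illustrative Python sketch (not drawn from any of the systems below) that splits one CPU-bound counting task into four smaller jobs and runs them on separate processor cores:

```python
# A minimal sketch of parallel processing: one big job is divided into
# smaller chunks, and worker processes handle the chunks simultaneously.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-heavy)."""
    lo, hi = bounds
    return sum(
        all(n % d for d in range(2, int(n ** 0.5) + 1))
        for n in range(max(lo, 2), hi)
    )

if __name__ == "__main__":
    # Split the range 0-100,000 into four jobs, one per worker process.
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(f"primes below 100,000: {total}")
```

Because the chunks are independent, adding cores cuts the wall-clock time; the supercomputers below scale the same trick to many thousands of processors.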

Believe it or not, the circuit your computer uses to render fancy graphics for video games and 3D animations is built from the same root architecture as the circuits that make accurate climate pattern prediction possible. Wild, huh? And the parallel infrastructure of graphics processing units (GPUs) continues to power the most powerful computers.

“If you look at the workhorses for the scientific community today, the new computers, like [IBM supercomputer] Summit, and also the next generation, like Aurora, they’re largely based on this [infrastructure] model now,” said Wen-mei Hwu, a professor of electrical and computer engineering at the University of Illinois Urbana-Champaign, who is also considered a godfather of parallel computing.

Here are just a few ways parallel computing is helping improve results and solve the previously unsolvable.

 

Parallel Processing in Aerospace and Energy

When you tap the Weather Channel app on your phone to check the day’s forecast, thank parallel processing. Not because your phone is running multiple applications, but because maps of climate and weather patterns require the serious computational heft of parallel processing.

Parallel computing is the backbone of other scientific studies, too, including astrophysics simulations, seismic surveying, quantum chromodynamics and more. Take a closer look at a couple of applications.

 

Location: Chicago, Illinois 

To understand the holistic impact of climate and climate change over time, one team from the University of Chicago Computation Institute is turning to parallel processing. Its framework, the parallel System for Integrating Impact Models and Sectors (pSIMS), distributes data across multiple supercomputers, clusters and cloud computing technologies to create simultaneous models of environments like forests and oceans.

The pSIMS code has already been used to simulate models for global food system resilience by a U.S. and U.K. joint task force, and is on track to scale its simulation capabilities up to exascale by 2023.

 

Location: Evanston, Illinois

Astronomy moves slowly. It can take millions of years for stars to collide, galaxies to merge or black holes to swallow astronomical objects — which is why astrophysicists must turn to computer simulations to study these kinds of processes. And such complex models demand massive compute power.

A 2019 breakthrough in the study of black holes, for example, happened courtesy of a parallel supercomputer. Researchers at Northwestern University, the University of Amsterdam and the University of Oxford solved a four-decade-old mystery, proving that the innermost region of matter that orbits, and eventually collapses into, a black hole aligns with that black hole. That’s key to helping scientists better understand how this still-mysterious phenomenon behaves.

“These details around the black hole may seem small, but they enormously impact what happens in the galaxy as a whole,” said researcher Alexander Tchekhovskoy of Northwestern University. “They control how fast the black holes spin and, as a result, what effect black holes have on their entire galaxies.”

 

Location: Houston, Texas

One of the oil industry’s biggest players lives in suburban Houston and goes by the name Bubba. Bubba happens to be a supercomputer (among the fastest on the planet) owned by geoprocessing company DownUnder GeoSolutions.

Seismic data processing has long helped provide a clearer picture of underground strata, an obvious must for industries like oil and gas. Supercomputing, though, is practically the norm in the energy field nowadays — especially as algorithms process massive amounts of data to help drillers navigate difficult formations, like salt domes.

Bubba’s backbone is formed by thousands of Intel Xeon Phi processors cooled in chilled oil baths, a technique that allows for extremely high-performance parallel processing. The hope is that by selling parallel power access to third-party companies, fewer energy outfits will feel compelled to build their own, less efficient systems.

 

Parallel Processing in Business and Finance

Even though parallel computing is often the domain of academic and government research institutions, the business and finance spheres have definitely taken notice. 

“The banking industry, investment industry traders, cryptocurrency — those are the big communities that are using a lot of GPUs for making money,” Hwu said.

 

Location: Cupertino, California 

Some of the most impressive parallel processing capabilities aren’t exclusive to supercomputer centers — one powerful example may even be sitting in your pocket or on your desk. Apple is no stranger to using multi-core processors to power its hardware, and it pushes to increase the speed of its smartphones and computers with each new iteration. The A15 Bionic chip in the iPhone 14 houses a 6-core CPU, a 5-core GPU and a 16-core Neural Engine. Thanks to this scale of parallel processing (in a remarkably tiny space), devices like these boast some of the fastest performance of any smartphone.

 

Location: Palo Alto, California

Due to the complexity of blockchain verification, mining crypto and executing smart contracts demand an abundance of computing power. To avoid congestion, companies like Aptos are turning to parallel processing to speed up these operations. Block-STM, the company’s multithreaded parallel execution engine, manages and automatically adapts to the workload of smart contract conflicts. By putting multiple CPU cores to work on several database operations at a time, Block-STM validates over 160,000 transactions per second on Aptos’ blockchain.
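The real Block-STM algorithm tracks fine-grained read and write sets and schedules re-execution incrementally, but its optimistic core can be sketched in a few lines: run every transaction in parallel against a snapshot, commit the ones whose reads stayed valid and retry the rest. The toy Python below is an illustration of that general idea, not Aptos’ actual code:

```python
# A toy sketch of optimistic parallel transaction execution, loosely in the
# spirit of Block-STM. All names and structures here are illustrative.
from concurrent.futures import ProcessPoolExecutor

def execute(args):
    """Run one transfer against a snapshot; report what it read and wrote."""
    snapshot, (sender, receiver, amount) = args
    reads = {sender, receiver}
    writes = {sender: snapshot[sender] - amount,
              receiver: snapshot[receiver] + amount}
    return reads, writes

def run_block(state, txns):
    pending = list(txns)
    while pending:
        snapshot = dict(state)
        with ProcessPoolExecutor() as pool:        # execute optimistically
            results = list(pool.map(execute, [(snapshot, t) for t in pending]))
        committed_keys, retry = set(), []
        for txn, (reads, writes) in zip(pending, results):
            if reads & committed_keys:             # reads went stale: conflict
                retry.append(txn)                  # re-run in the next round
            else:
                state.update(writes)               # commit this transaction
                committed_keys |= writes.keys()
        pending = retry
    return state

if __name__ == "__main__":
    accounts = {"alice": 100, "bob": 50, "carol": 75}
    block = [("alice", "bob", 10), ("carol", "alice", 5), ("bob", "carol", 20)]
    print(run_block(accounts, block))   # same result as running serially
```

When conflicts are rare, nearly everything commits on the first pass, which is why blockchains with mostly independent transactions see such large speedups.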

 

Location: San Francisco, California

Nearly every major aspect of today’s banking, from credit scoring to risk modeling to fraud detection, is powered by GPU-accelerated fintech. In a way, the departure from traditional CPU-powered analysis was inevitable. GPU offloading came of age around 2008, just as lawmakers ushered in several rounds of post-crash financial legislation. “It’s not uncommon now to find a bank with tens of thousands of Tesla GPUs,” Xcelerit co-founder Hicham Lahlou told The Next Platform in 2017. “And this wouldn’t have been the case without that mandatory push from regulation.”

One early adopter was JPMorgan Chase, which announced in 2011 that its switch from CPU-only to GPU-CPU hybrid processing had improved risk calculations at its data centers by 40 percent and netted 80 percent cost savings. Wells Fargo is also using NVIDIA’s GPUs for processes as varied as accelerating AI models for liquidity risk and virtualizing its desktop infrastructure.
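Risk workloads parallelize so well because each Monte Carlo scenario is independent of the others. Here's a hedged, illustrative Python sketch of the pattern, with worker processes standing in for the thousands of GPU cores banks actually use; the drift, volatility and portfolio figures are made-up assumptions:

```python
# An illustrative parallel Monte Carlo value-at-risk (VaR) estimate.
import random
from multiprocessing import Pool

def simulate_losses(args):
    """Simulate one worker's share of independent one-day loss scenarios."""
    seed, n_paths = args
    rng = random.Random(seed)                      # per-worker random stream
    mu, sigma, value = 0.0002, 0.01, 1_000_000     # assumed drift, vol, $
    return [-value * rng.gauss(mu, sigma) for _ in range(n_paths)]

if __name__ == "__main__":
    jobs = [(seed, 50_000) for seed in range(8)]   # 8 workers x 50,000 paths
    with Pool() as pool:
        losses = sorted(loss for chunk in pool.map(simulate_losses, jobs)
                        for loss in chunk)
    var_99 = losses[int(0.99 * len(losses))]       # 99th-percentile loss
    print(f"one-day 99% VaR: ${var_99:,.0f}")
```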

 

Parallel Processing in Entertainment

Parallel computing also has roots in the entertainment world — no surprise given that GPUs were first designed for heavy graphics loads. It’s also a boon to industries that rely on computational fluid dynamics, a mechanical analysis that has big commercial applications in gaming, sports and film. 

 

Location: San Francisco, California

How do graphically intensive games like Subnautica and Escape From Tarkov run so smoothly? It starts in production, with help from the Unity game engine. Unity’s software supports multithreaded programming and parallel processing, meaning it can take on the creation of highly detailed environments and models without slowing down the overall experience. In development, rendering and algorithmic work can be distributed across different cores simultaneously, making for more realistic physics, whether it’s waves rolling across water or a sprint over terrain.
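The pattern underneath is splitting per-frame work into independent jobs. Unity does this in C# with its own job system; the Python toy below only illustrates the general shape of the idea, stepping a batch of projectile positions in parallel chunks:

```python
# A toy illustration of per-frame work split into parallel jobs: advance
# thousands of independent projectiles by one physics step.
from concurrent.futures import ProcessPoolExecutor

GRAVITY, DT = -9.81, 1 / 60   # m/s^2 and one frame at 60 fps

def step_chunk(chunk):
    """Advance one chunk of (height, vertical velocity) pairs by one frame."""
    return [(y + vy * DT, vy + GRAVITY * DT) for y, vy in chunk]

if __name__ == "__main__":
    particles = [(100.0, float(v % 20)) for v in range(10_000)]
    chunks = [particles[i::4] for i in range(4)]   # 4 independent jobs
    with ProcessPoolExecutor(max_workers=4) as pool:
        stepped = [p for chunk in pool.map(step_chunk, chunks) for p in chunk]
    print(len(stepped), "particles stepped; first:", stepped[0])
```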

 

Location: Canonsburg, Pennsylvania

Famous for setting record race times from the Pikes Peak International Hill Climb to Bilster Berg, the Volkswagen I.D. R sports racecar owes much of its winning speed to the parallel computing used in its development.

Vehicle engineers relied on Ansys Fluent, a fluid simulation software, in at least two key facets: running a virtual simulation of the course, and finding the ideal balance of low weight and aerodynamic drag loss for the battery cooling system.

Such cooling is one of many computational fluid dynamics (CFD) simulations users can run on Ansys software, which supports parallel GPU acceleration. It’s one of the more headline-worthy examples of how high-powered parallel processing has become a go-to for all manner of CFD research, in everything from numerical weather prediction to combustion engine optimization.
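Design studies like the I.D. R’s cooling system parallelize naturally because each candidate configuration can be simulated independently. The sketch below is only a stand-in for that workflow: the evaluate_design function is an invented toy tradeoff, not Ansys Fluent’s physics:

```python
# An illustrative parallel design sweep: every candidate vent size is
# "simulated" independently, so the whole sweep spreads across all cores.
import math
from concurrent.futures import ProcessPoolExecutor

def evaluate_design(vent_area_cm2):
    """Toy tradeoff: bigger vents cool better but cost aerodynamic drag."""
    cooling = 1 - math.exp(-vent_area_cm2 / 150)   # diminishing returns
    drag_penalty = 0.002 * vent_area_cm2           # linear drag cost
    return vent_area_cm2, cooling - drag_penalty   # net score

if __name__ == "__main__":
    candidates = range(50, 501, 25)                # vent sizes to evaluate
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(evaluate_design, candidates))
    best_area, best_score = max(results, key=lambda r: r[1])
    print(f"best vent area: {best_area} cm^2 (score {best_score:.3f})")
```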

 

Location: Fully Remote

If you saw the bright ’80s-themed scenes of Thor: Love and Thunder or the choreographed antics of Bullet Train, you also saw the handiwork of parallel processing. Both were colored using Blackmagic Design’s DaVinci Resolve Studio, one of a handful of Hollywood-standard post-production suites that incorporate GPU-accelerated tools. “The high-quality rendering based on what they call the ray-tracing technique are all using some of these processors now,” Hwu said. Color correction and 3D animation both commonly use GPU parallel processing, he added.
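Color grading suits parallel hardware because every pixel can be transformed independently. This toy Python sketch lifts the shadows on horizontal strips of a single grayscale frame in parallel; production suites do the equivalent across thousands of GPU cores and full-color footage:

```python
# An illustrative parallel color grade: each worker processes its own
# strip of the frame, since no pixel depends on any other.
from multiprocessing import Pool

def grade_strip(strip):
    """Apply a simple lift/gain curve to one strip of grayscale pixels."""
    return [[min(255, int(16 + v * 0.94)) for v in row] for row in strip]

if __name__ == "__main__":
    frame = [[(x * y) % 256 for x in range(1280)] for y in range(720)]
    strips = [frame[i:i + 180] for i in range(0, 720, 180)]   # 4 strips
    with Pool(processes=4) as pool:
        graded = [row for strip in pool.map(grade_strip, strips)
                  for row in strip]
    print(len(graded), "rows graded; sample pixel:", graded[1][1])
```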

 

Parallel Processing in Medicine and Drug Discovery

Emergent technology is reshaping the medical landscape in countless ways, from virtual reality that ameliorates macular degeneration to developments in bioprinting tissue and organs. Parallel processing has for years made its presence felt in this arena, but it’s poised to fuel even more breakthroughs. Here’s how.

 

Location: Santa Clara, California

One of the first industries to see significant change from parallel processing, particularly the GPU-for-general-computing revolution, was medical imaging. Today, these capabilities have branched out to enable accelerated processing for healthcare technology like genomics, biopharma and smart hospital operations — with parallel pioneer NVIDIA at the forefront.

With the company’s AI-enabled platform, dubbed Clara, healthcare professionals can use deep learning frameworks to create 3D models or to automate patient monitoring tasks. Among the institutions that have already signed on are Activ Surgical, Ohio State University and the National Institutes of Health.

 

Location: Fully Remote

If you think of parallel processing as a nesting doll, one of the innermost figures could be a life-saving drug. Parallel programming is an ideal architecture for running molecular dynamics simulations, which have proven highly useful in drug discovery.

Medical research company Acellera has developed multiple programs that harness the powerful offloading infrastructure of GPUs: the simulation code ACEMD and the Python package HTMD. They’ve been used to perform simulations on some of the world’s most powerful computers, including a Titan run that helped scientists better understand how neurotransmitters carry signals between our neurons. With the technology also applicable to genetic disease research, Acellera has partnered with the likes of Janssen and Pfizer on pharma research.
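Molecular dynamics maps well onto parallel hardware because its expensive step, computing interactions between pairs of atoms, decomposes into independent chunks. The minimal Python sketch below splits a Lennard-Jones energy sum across processes; it illustrates the decomposition, not how ACEMD actually works on GPUs:

```python
# An illustrative parallel pairwise-energy computation: chunks of atom
# pairs are summed independently, then combined.
import itertools
from multiprocessing import Pool

# Fixed toy atom positions (a real code would read these from a structure).
ATOMS = [(float(i), float(i % 7), float(i % 13)) for i in range(200)]

def chunk_energy(pairs):
    """Sum the Lennard-Jones potential over one chunk of atom pairs."""
    total = 0.0
    for i, j in pairs:
        (x1, y1, z1), (x2, y2, z2) = ATOMS[i], ATOMS[j]
        r2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
        inv6 = 1.0 / r2 ** 3
        total += 4.0 * (inv6 ** 2 - inv6)   # epsilon = sigma = 1 units
    return total

if __name__ == "__main__":
    pairs = list(itertools.combinations(range(len(ATOMS)), 2))
    chunks = [pairs[k::8] for k in range(8)]       # 8 interleaved chunks
    with Pool(processes=8) as pool:
        energy = sum(pool.map(chunk_energy, chunks))
    print(f"total potential energy: {energy:.3f}")
```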

 

Location: Oak Ridge, Tennessee 

Beyond image rendering and pharmaceutical research, parallel processing’s big data analytics power holds great promise for public health. In one project, Summit, the IBM supercomputer at Oak Ridge National Laboratory, is being used to predict the likelihood and trajectory of mental illness in children. Based on health questionnaire answers, Summit trains AI models that apply clusters of research data to human behavior and quickly identify at-risk patients. With parallel processing technology, the researchers hope to promptly treat, and ultimately alleviate, up to 50 percent of the mental illness that can follow children into adulthood.
