“No one has more data on themselves than Mike Snyder,” legal scholar Dov Greenbaum told me.
Snyder is the director of the genomics and personalized medicine department at Stanford University School of Medicine. He’s also one of the researchers behind a widely cited 2012 study that examined the molecular information — down to the level of individual genomes — of one individual over a 14-month period. That individual was Snyder himself.
Most medical research relies on groups of subjects, but Snyder and his colleagues went a different route: By collecting a whole lot of longitudinal data from just one person, they showed an early proof of concept for the type of personalized medicine that until recently would have sounded like science fiction.
But what about all the Snyder-related data that came out of the study? According to Greenbaum, it’s the most robust collection of digital information on a single human being we have.
If Mike Snyder (the person) is made up of physical information, and if a hard drive in Snyder’s lab contains a digital version of that information, could we reasonably argue that the hard drive contains a copy of Mike Snyder?
“It’d be interesting to talk to Mike and see whether he thinks he has enough data where he could create, you know, a shadow Mike,” Greenbaum mused. “Something that closely represents who he is.”
By “shadow Mike,” we mean a digital twin, or a digital representation of a real-world entity.
“Digital twins” first emerged in the context of manufacturing and product lifecycle management about 20 years ago. Since then, the concept has spread to other spaces, like healthcare, construction and urban planning. (In fact, one U.K. organization is apparently building a digital twin of, um, the entire nation of Great Britain.)
And as our computing power catches up with our imaginations, more and more physical entities may end up with digital counterparts — including you and me.
What Is a Digital Twin?
Where Did Digital Twins Come From?
Back when Michael Grieves, now chief scientist for advanced manufacturing at Florida Institute of Technology, was a doctoral student in the late ’90s, he didn’t think a paper about strategies for moving physical work into virtual spaces would make it past the dissertation committee.
“If you do something novel in a doctoral program, your chances of getting through decrease pretty dramatically,” he said.
So he wrote a dissertation on business communications instead. But that didn’t stop him from considering new ways to meld the physical world with the digital.
At the time, engineers and manufacturers were taking the first steps from physical models and 2D blueprints into 3D computer-aided design (CAD) models. Meanwhile, computer processing power was poised to grow exponentially. Grieves, who’d worked on ILLIAC IV, an early supercomputer, suspected that, if those 3D models could take in more information, they’d provide unexpected value. Namely, work that used to only happen in the physical world could move into the virtual one, and possibilities that used to stay trapped in our heads could play out in digital spaces.
“Only when I got it all right did I have to go out and move around expensive atoms.”
“There was this idea that I could create the product virtually, test it virtually, manufacture it virtually and support it virtually, and only when I got it all right did I have to go out and move around expensive atoms,” Grieves said.
When Grieves debuted his vision at a Society of Manufacturing Engineers conference in Troy, Michigan, in 2002, he presented it under the umbrella of “product lifecycle management.” It wasn’t until seven years later, when Grieves’ colleague, NASA Principal Technologist John Vickers, included the concept in an organizational roadmap report, that it received a snappy name: digital twin.
Consulting at NASA helped Grieves crystallize the value proposition for digital twins, he said: If the products you make get flung far into space, you can’t rely on physical proximity to understand and improve them. Add to that the costliness of aerospace manufacturing, and there’s good reason to do as much work as possible in the digital realm.
What Are Digital Twins Good For?
For manufacturers and their clients, the two main benefits of digital twins are cost savings and increased safety.
NASA, for instance, is currently constructing digital twins of rocket parts made through additive manufacturing, or 3D printing. Eventually, Vickers said, those twins will make the testing and qualification process easier and more cost-effective than running experiments in the physical world — as Grieves put it, “Atoms are expensive.”
But before that happens, NASA must qualify the twins themselves through hundreds (maybe thousands) of expensive experiments, Vickers said. He guessed that, right now, about 75 percent of experimental data comes from physical tests and 25 percent from the twins — that means the whole approach is more costly than what they were doing before. But once that ratio “flips,” it could lead to huge savings.
That process basically reflects what Grieves and Vickers outlined in a 2017 paper arguing for digital twins as predictors of “undesirable emergent behavior in complex systems” — in other words, those bad, unexpected things that happen when a system has a lot of components.
Digital twins, they wrote, are useful at all stages of a product’s lifecycle. During planning stages, digital twins help designers and engineers model the product’s form, simulate its function and assess its manufacturability. During production, digital twins allow manufacturers to simulate the product’s assembly before expending any physical resources. During operation, digital twins allow for closer monitoring, and, as data from multiple physical products is collected and analyzed, operators gain a clearer understanding of what data signals predict system failures.
The paper named the Challenger and Columbia disasters as examples of catastrophes that arose from unexpected failures. With digital twins running simulations of countless potentialities before their physical counterparts even exist, systems engineers could save lives, as well as time and money.
But wait — if a digital twin is a virtual replica of a physical entity, how can a digital twin exist before its physical twin does?
That question, Grieves said, dogs him to this day.
“It’s a battle I’ve been fighting with a lot of people,” he said. “It really doesn’t require that I have a physical entity to have a digital twin.”
Rather, it’s the intent to create a physical counterpart that separates a digital twin from a simulation or other model, according to Grieves. In other words: A digital twin is a virtual model of something that exists — or that may soon exist — in the physical world.
What Makes a Digital Twin?
Vickers said that, prior to 2010, a Google search for “digital twin” wouldn’t turn up much. Grieves cites a 2015 article in The Economist as the jumping-off point for more widespread interest. Whatever the catalyst, the digital twin evolved from an academic proposal to an AI strategy with the potential to remake systems engineering.
Vickers doesn’t mind the term’s exploding popularity. (“Now, if I got a nickel every time somebody used the term, well,” he added.) But plenty of tech buzzwords have become notorious for overuse, and even misuse. Is “digital twin” doomed to suffer the same fate?
“Any time you have a technology that becomes powerful and sexy, there is the potential for abuse of the term,” Michael Krigsman, an industry analyst and host of the podcast CXOTalk, told me. “‘Digital twin’ doesn’t have quite the same level of potential to devolve into meaninglessness. But will it get adopted and co-opted in the service of marketing for marketing’s sake, without substance? Yes, that will probably happen.”
It happened with “artificial intelligence” itself, after all, Krigsman said. What used to largely refer to machine learning morphed into foggy marketing lingo for all sorts of enterprise software. The same can be said of “digital transformation” — what once meant something specific is now a tough concept to nail down.
If Krigsman is correct, we’ll see the term “digital twin” get misapplied as it spreads to new industries and verticals. But we’re also already seeing the concept pop up under entirely different names.
“Any time you have a technology that becomes powerful and sexy, there is the potential for abuse of the term.”
Karan Talati, for instance, uses “digital thread.”
Talati’s startup First Resonance doesn’t make virtual replicas of 3D objects, but it does make software that lets manufacturers turn their factory floors from collections of disparate machines into systems that talk to each other. One of its products collects and analyzes data coming off machines; the other traces parts from inventory through production and operation. These let users tweak workflows based on real-time data from the factory floor, as well as pass on part-specific data — like test pass-fails or electrical output values — to downstream customers. The more connected sensors, devices and machines users have, the more data they can leverage.
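Part-level traceability of this kind can be pictured as a simple data structure: each part carries a record of the stages it has passed through and accumulates test results along the way, and that record is what gets handed to downstream customers. Here is a minimal sketch in Python — the stage names, fields and test names are invented for illustration, not First Resonance’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestResult:
    name: str      # e.g. "electrical_output"
    value: float
    passed: bool

@dataclass
class Part:
    serial: str
    stage: str = "inventory"  # inventory -> production -> operation
    history: list = field(default_factory=list)  # stages already completed
    tests: list = field(default_factory=list)    # accumulated test results

    def advance(self, stage: str):
        """Move the part to its next lifecycle stage, recording where it has been."""
        self.history.append(self.stage)
        self.stage = stage

    def record_test(self, name: str, value: float, passed: bool):
        self.tests.append(TestResult(name, value, passed))

    def report(self) -> dict:
        """The part-specific data a downstream customer would receive."""
        return {
            "serial": self.serial,
            "stage": self.stage,
            "all_tests_passed": all(t.passed for t in self.tests),
            "tests": [(t.name, t.value, t.passed) for t in self.tests],
        }

part = Part(serial="PN-0042")
part.advance("production")
part.record_test("electrical_output", 11.8, passed=True)
part.advance("operation")
```

The point of the structure is that provenance travels with the part: by the time `part` reaches operation, its report still contains every test it passed back in production.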
Like Grieves, Talati, who formerly worked at SpaceX, is committed to eroding the barriers between hardware and software, the physical and the digital. But digitization is a spectrum, he said, and his company’s customers run the gamut in their approaches.
“To these big companies like the GEs or Siemens’ of the world — not a shot at them — even digitizing the equipment you have on the floor, that’s digital thread,” he said. “And on the other hand, you have this amazing digital cyberspace mesh, which is like a projected physical-digital world where you’re actually getting geometries and runtimes and equipment efficiency and utilization and interactions with individuals and partners.”
There’s an important difference between collecting data from connected devices and building a Grieves-ian digital twin. But in an industry Talati described as “relatively slow and risk averse,” you’re going to need the first to get to the second, and there are a few important hurdles to clear along the way.
First, there’s the heterogeneous data flying off machines from different manufacturers. Currently, hardware companies use different software protocols, so any centralized data platform is tasked with ingesting and making sense of data that doesn’t quite match up.
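A common workaround for that mismatch is an adapter layer: each vendor’s payload is translated into one shared schema before anything downstream tries to analyze it. A hedged sketch, with the vendor names, payload shapes and field names all invented for illustration:

```python
# Two hypothetical vendors report the same physical reading under
# different field names and units; per-vendor adapters normalize both.
def from_vendor_a(payload: dict) -> dict:
    return {
        "machine_id": payload["id"],
        "temperature_c": payload["temp_c"],
        "timestamp": payload["ts"],
    }

def from_vendor_b(payload: dict) -> dict:
    return {
        "machine_id": payload["machine"],
        "temperature_c": (payload["temp_f"] - 32) * 5 / 9,  # Fahrenheit -> Celsius
        "timestamp": payload["time"],
    }

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def ingest(vendor: str, payload: dict) -> dict:
    """Normalize any known vendor's payload into the shared schema."""
    return ADAPTERS[vendor](payload)

readings = [
    ingest("vendor_a", {"id": "mill-1", "temp_c": 41.5, "ts": 1700000000}),
    ingest("vendor_b", {"machine": "lathe-2", "temp_f": 104.0, "time": 1700000060}),
]
```

Once both machines speak the shared schema, a single analytics pipeline can treat them interchangeably — which is exactly what proprietary, vendor-specific protocols currently prevent.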
Then, there’s the tendency of some hardware companies to withhold that machine data entirely. Without access to data, software providers like First Resonance (and any digital twin environment) are hung out to dry.
“What’s next is really connectivity.”
Luckily, the industry is inching in the direction of standardized software protocols and connectivity, Talati said. The free-and-open protocols of Linux won out over closed-and-proprietary Windows in the realm of systems computing, he noted — and the same could happen for connected hardware. And ultimately, equipment manufacturers can only refine their hardware specs so much before those specs become commoditized and they need a new selling point.
“So what’s next, right?” Talati asked. “What’s next is really connectivity.”
That’s good news, because the kind of predictive analytics some digital-twin proponents tout is only possible with staggering amounts of data flowing back and forth between the physical objects and their digital doubles. Right now, much of what gets called predictive analytics is actually just great, data-based pattern recognition and guesswork, Grieves told me. (It’s the difference between inductive and deductive reasoning.)
But as manufacturers become more open to data sharing and savvier about data collection, digital twins — in all their predictive glory — will take shape in the wild. Then, wasteful, waterfall-style development processes may fade away in favor of faster feedback and constant iteration.
“It requires manufacturing and automation and mechanical and thermal engineers and data scientists and designers — even UI and UX designers — to come together to instrument the right machines, create the right automation project and start synthesizing all that data,” Talati concluded. “It’s not going to happen overnight, but it will eventually happen.”
As for Vickers, he has “a grand vision” for digital twins. He predicted they’ll earn a place at the “top of business taxonomies” and continue spreading into new sectors.
“Wouldn’t it be nice,” he mused, “If your general practitioner had a digital twin — all your medical data, all your genomic data — from the time you were born?”
Digital Twins in Healthcare
Dov Greenbaum is director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies. In 2020 he authored a paper about Expanded Access programs, which give seriously ill patients the chance to try investigational drugs outside of standard clinical trials. Why, Greenbaum wondered, aren’t there more attempts to collect clinical data from those efforts?
But he quickly ran into a problem. Generating useful data about drug effectiveness requires a control group — some patients in the Expanded Access program would have to receive a placebo.
“It seems sort of callous, evil, mean, to say, ‘Well, sure, we want to include you, we want to include this Expanded-Access data, but we still want to provide half of you with the placebo,’” he said.
What if, Greenbaum wondered, researchers could give the placebo to a digital twin, instead?
“You collect as much data as you can to represent the real thing. And then you can perturb and test the digital thing without ruining your investment in the real thing,” Greenbaum said, echoing Grieves’ explanation almost to a T. “If I could create a fake person that I could test whatever I want against, I mean, why not?”
Why not, indeed. Greenbaum is far from the only healthcare thinker drawing on the concept of digital twins — in fact, the San Francisco-based startup Unlearn is doing almost exactly what he proposed. There are traditional, 3D digital twins — like renderings of individual human hearts — as well as 2D data collections that help predict health outcomes for individual people.
“If I could create a fake person that I could test whatever I want against, I mean, why not?”
The up-and-coming field of precision medicine, or high-definition medicine, calls for the type of granular, personal data that makes approaches like this possible. Advances in DNA sequencing, physiological and environmental monitoring, advanced imaging and behavioral tracking — combined with advances in AI — open up a new world of possibilities for collecting and analyzing patient data and using that data to make treatment decisions.
Ali Torkamani and his colleagues at Scripps Translational Science Institute published a survey of high-definition medicine in 2017, including predictions for future innovations. Digital twins made the list.
Unlike Grieves’ and Vickers’ digital twins, which are virtual representations of real-world objects, these twins would be real people, or collections of people, with similar data points to the patient at hand. By comparing individual patients to their sets of digital twins, physicians could form a clearer understanding of what that patient’s data — like a genomic profile, for example — actually means.
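One way to picture that matching step: treat each patient as a vector of measurements and pull the most similar records from a cohort as their “twins,” then look at what happened to those twins. A toy nearest-neighbor sketch in Python — the features, numbers and outcomes are invented, and a real system would be far more sophisticated:

```python
import math

# Hypothetical cohort records: (age, resting heart rate, LDL cholesterol) -> outcome
cohort = [
    ((54, 72, 160), "heart_attack"),
    ((56, 70, 155), "heart_attack"),
    ((30, 60, 90),  "healthy"),
    ((33, 62, 95),  "healthy"),
    ((58, 75, 170), "heart_attack"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def digital_twins(patient, cohort, k=3):
    """Return the k cohort records most similar to this patient."""
    return sorted(cohort, key=lambda rec: distance(patient, rec[0]))[:k]

patient = (55, 71, 158)
twins = digital_twins(patient, cohort)
outcomes = [outcome for _, outcome in twins]
```

For this toy patient, all three nearest twins suffered heart attacks — the kind of signal a physician might weigh when interpreting the patient’s own profile. (In practice the features would be normalized first, since raw age and cholesterol sit on very different scales.)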
In his paper “Digital Twins in Healthcare,” Delft University of Technology professor Koen Bruynseels imagined something closer to the engineering paradigm: “There must be a regular interaction between the model and the biological person, via sensors or measurements,” he told me. “You have not just a model, but a model that’s tuned into that particular person at that particular point in time.”
It’s fair to say that interpretations of “digital twin” in healthcare spaces already vary widely.
And that’s OK. Modeling biological systems is much harder than modeling mechanical ones, Torkamani said. What matters is whether the digital representations — say, of patients’ hearts — can help explain why one patient suffered a heart attack and another didn’t. In other words, it’s about the accuracy of the model, not its verisimilitude.
“It’s a digital twin when it does what you want it to do,” Greenbaum said.
What Will Change When Human-Based Digital Twins Arrive?
In five or 10 years, Torkamani guessed, digital twins will be a powerful tool for predictive medicine, which assesses a patient’s risk based on their own longitudinal data, as well as data about patients with similar profiles.
But, like Talati, Torkamani described his industry as “a pretty conservative space” with plenty standing in the way of digital twin adoption. In the U.S., insurers aren’t inclined to spend money on preventative interventions. Furthermore, in the absence of nationalized healthcare, each hospital system has its own electronic health records system, so data integration is a huge challenge.
“Things we currently consider to be carved in stone — like what’s healthy and what’s disease, and what’s therapy and what are enhancements — you need to start thinking about it in a different way. Now, the measuring stick is different.”
Over time, smart people will iron out those challenges. But in the meantime, according to Bruynseels, patients and providers alike must consider how precision medicine and digital twins will change the way we define important healthcare concepts.
First, “normal” will likely shift from a general concept to an individualized one. (A “normal” temperature, for example, is 98.6 degrees Fahrenheit. Mine is about 96.)
Second, the difference between “therapy” and “enhancement” will be much more difficult to ascertain. (The more we measure about ourselves, Bruynseels said, the more we will want to improve those measurements, whether or not we really need to.)
“Nature doesn’t come with cut-in-stone categories,” he said. “[Defining categories] is what we do, and this is a perfect instrument to do that. You can start classifying people in a much more fine-grained way. You can start rating yourself against much more fine-grained data. And that means the things we currently consider to be carved in stone — like what’s healthy and what’s disease, and what’s therapy and what are enhancements — you need to start thinking about it in a different way. Now, the measuring stick is different.”
Is My Digital Twin ... Me?
Digital twins come with plenty of caveats and considerations — data ownership, predictive medicine and cybersecurity, among them.
But thankfully, digital twins rivaling their physical counterparts aren’t among those concerns. Each expert I spoke with assured me that, no matter how close a facsimile my digital twin became, she wouldn’t be me. Someday she might attend a telehealth appointment on my behalf, but she won’t impersonate me on work calls or send emails to my boyfriend.
(For the record, digital twins in manufacturing spaces are getting better and better at passing modified Turing Tests, Grieves said, so my worry here isn’t entirely unfounded.)
In a way, Harvard University astronomer Avi Loeb asked a similar question when he surmised that if human DNA could be digitized and shot out into the galaxy, it could one day be found and reanimated.
Would the resulting beings be humans, Greenbaum asked? Or are we more than the sum of the information we contain?
He likes to believe we are. Our one-to-one digital twins, should they ever exist, would almost certainly be missing something, though he finds it tough to put his finger on what.
“It’s that certain je ne sais quoi,” he said. “That thing that makes us. Whatever that is.”