In the history of AI, two remarkable moments stand out: the success of AlphaGo and the advent of ChatGPT. These two milestones serve as instructive events for analyzing how far we are from truly useful quantum computing.
AlphaGo’s 2016 triumph over world champion Go player Lee Sedol marked a pivotal point in AI development. More than a victory in a complex board game, it demonstrated that AI could generate superior results in one specific, albeit complex task. It gave us a taste of truly useful, task-specific AI applications, such as the protein folding tool AlphaFold, which was introduced two years later.
The ChatGPT moment was different. Launched in 2022, ChatGPT showcased AI’s capability to understand context and generate coherent responses, making it immediately useful for a wide range of applications. Whereas AlphaGo was tightly focused on a single task, ChatGPT has broad applicability.
AI took many years to develop. The technology’s birth is commonly traced to a conference at Dartmouth College in 1956, 60 years before AlphaGo. How many years will we have to wait for quantum computing’s AlphaGo and ChatGPT moments? Quantum computing is itself an exciting technology, one that promises to solve many problems that are intractable for classical computers.
What Is Quantum Advantage?
The term quantum advantage, also called quantum supremacy, refers to a theoretical point at which a quantum computer can accurately solve a computational problem that no classical computer could solve in any reasonable amount of time. Achieving this milestone will broadly indicate that quantum computing has surpassed its classical counterpart.
Where Is Quantum Computing Today?
Perhaps first we should ask how far we’ve come in recent years. Most users access quantum computers through a public cloud, such as those offered by IBM, Amazon, or Microsoft. When comparing vendors, the number of qubits (quantum bits) is an important specification. Five years ago, the largest publicly accessible quantum computer had 20 qubits, and only two or three vendors made their computers accessible.
Today, however, we’ve made significant progress. The largest accessible quantum computers have hundreds of qubits, and more than a dozen vendors offer public access. In the last five years, several quantum computing companies went public, and VC firms invested approximately $5 billion in quantum computing companies. Quantum computing is progressing much faster than AI did in its early days.
A significant milestone was achieved in 2019, when Google published what became known as the “quantum supremacy” experiment. Its Sycamore computer completed a task in 200 seconds that, according to Google, would have taken the most powerful classical computer 10,000 years. Did that qualify as the quantum AlphaGo moment? Probably not. Google’s demonstration used an algorithm specifically designed to showcase the capabilities of its computer but had no practical computational use.
When Will Quantum Computing Break Through?
What would be the AlphaGo moment in quantum computing? It would likely be a breakthrough where quantum computers solve a complex yet specific problem well beyond the capabilities of classical computers.
Based on these criteria, it seems that we’re getting close. The energy company Aramco recently disclosed that it’s putting a quantum algorithm into production, moving it from a “sandbox” test environment where most quantum computing projects operate. The algorithm helps decode subsurface imaging signals, a form of so-called “ultrasound for the earth” used to uncover minerals.
Likewise, Deloitte Consulting recently reported that a quantum machine learning algorithm, which used a technique called quantum reservoir computing, produced superior results to classical machine learning algorithms that operated on the same data. IBM and UC Berkeley published recent experiments on the 127-qubit IBM Quantum Eagle processor that demonstrated accurate results in complex physical simulations, surpassing classical approximation methods in certain scenarios. Quantinuum reported early signs of quantum advantage for Monte Carlo simulations.
While these events likely don’t qualify as a true quantum advantage, they’re harbingers of things to come. Classical computers can simulate no more than about 50 qubits, and with 100+ qubit computers becoming widely available, reaching provable quantum advantage in a useful algorithm seems to be a matter of when, not if.
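The roughly 50-qubit simulation limit follows from simple arithmetic: a full simulation must store one complex amplitude for every basis state, and the number of basis states doubles with each added qubit. A minimal sketch of that memory scaling (the function name is illustrative, and it assumes 16 bytes per amplitude, i.e., double-precision complex values):

```python
# Memory needed to hold the full state vector of an n-qubit system.
# Each of the 2**n amplitudes is a complex number (16 bytes at double precision).
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

print(statevector_bytes(30) / 1e9)   # ~17.2 GB: feasible on a large workstation
print(statevector_bytes(50) / 1e15)  # ~18 PB: beyond any single machine
```

Clever techniques such as tensor networks push this boundary somewhat, but the exponential wall remains, which is why 100+ qubit machines cannot be fully simulated classically.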
Where Will Quantum Computing Make Its Impact?
It is unclear which field will first benefit from such a quantum advantage. Some vendors are very bullish about the value of quantum computing in machine learning and AI, while others focus on materials and pharmaceutical research or financial and supply chain optimization. Microsoft published a thorough assessment of which applications it expects to reach quantum advantage first, along with the quantum resources required. This gives interested customers the ability to estimate how soon they could use quantum computers to their advantage.
For instance, a financial services company may experiment with quantum computers to optimize an asset portfolio composed of 10 assets. Such a firm might decide, however, that quantum computing would be truly useful only once a portfolio of 500 assets can be efficiently optimized. The resource estimation work provides a roadmap for when this will be possible.
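The jump from 10 to 500 assets is steeper than it looks. In a simplified include/exclude formulation (an illustrative assumption, not any firm’s actual model), the number of candidate portfolios grows exponentially with the number of assets:

```python
# In a binary formulation, each asset is either included or excluded,
# so n assets yield 2**n candidate portfolios to evaluate.
def candidate_portfolios(n_assets: int) -> int:
    return 2 ** n_assets

print(candidate_portfolios(10))   # 1024: trivially searchable by brute force
print(candidate_portfolios(500))  # a 151-digit number, far beyond exhaustive search
```

Real portfolio optimizers use far smarter methods than brute force, but the exponential search space illustrates why larger instances become hard and why quantum approaches are being explored.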
What Will Be Quantum Computing’s ChatGPT Moment?
The quantum ChatGPT moment, on the other hand, is farther away. It might be a general-purpose quantum computer that can solve many problems beyond those that classical machines can simulate. It would not be purpose-built for one specific problem but could solve several classes of problems. For instance, a quantum computer useful for general-purpose optimization might be applied to production schedules, package delivery routes, container loading, stock portfolios, or the placement of EV charging stations for optimal coverage.
The primary challenge in getting to that moment is scaling up quantum systems while minimizing errors. This problem is two-dimensional. The first dimension is increasing the number of qubits beyond the classical simulation limit; the second is creating conditions that allow long calculations to run without accumulating excessive errors or losing coherence, the fragile quantum property on which computation depends. Indeed, quantum error correction is a key focus of multiple industry and academic groups, all sharing the view that performing sustained calculations is critical to reaching the true usability of quantum systems.
To provide context, classical computers have negligible error rates; perhaps one operation in a trillion is flawed. Today’s quantum computers fare far worse: even a 1 percent error rate (an error in one of every hundred operations) is considered good. But if a computer errs once per hundred operations, an algorithm that requires just a hundred operations will fail in most runs. Thus, for quantum computers to be useful, error rates must be dramatically lowered so that longer and more complex calculations can be performed with confidence.
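The arithmetic behind that claim: if each operation fails independently with probability p, the chance that a circuit of d operations completes with no error at all is (1 − p)^d. A small sketch (the function name is illustrative):

```python
# Probability that a d-operation circuit runs error-free,
# assuming independent errors at per-operation rate p.
def success_probability(p: float, d: int) -> float:
    return (1.0 - p) ** d

print(success_probability(0.01, 100))    # ~0.37: most runs contain at least one error
print(success_probability(1e-12, 100))  # ~1.0: a classical-like error rate
```

At a 1 percent error rate, only about a third of hundred-operation runs finish cleanly, which is why error correction, not just more qubits, is the gating factor.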
The Future Is Quantum
Although current quantum computers have yet to demonstrate practical applications that significantly outperform classical ones, the pace of innovation and investment in quantum computing suggests that a major breakthrough might not be far off. Collaborations among academia, industry, and government entities are propelling the field forward, raising the possibility that we may witness the quantum computing equivalent of the AlphaGo or ChatGPT moment within the next decade.