Can AI Predict the Quantum Universe?

AI promises to revolutionize the way we do science, which raises a central technological question of our time: Can classical AI understand all natural phenomena, or are some fundamentally beyond its reach? Many proponents of artificial intelligence argue that any pattern that can be generated or found in nature can be efficiently discovered and modeled by a classical learning algorithm, implying that AI is a universal and sufficient tool for science.

The word “classical” is important here to contrast with quantum computation. Nature is quantum mechanical, and the insights of Shor’s algorithm [1] along with quantum error correction [2,3,4] teach us that there are quantum systems, at least ones that have been heavily engineered, that can have trajectories that are fundamentally unpredictable1 by any classical algorithm, including AI. This opens the possibility that there are complex quantum phenomena occurring naturally in our universe where classical AI is insufficient, and we need a quantum computer in order to model them.

This essay uses the perspective of computational complexity to unpack this nuanced question. We begin with quantum sampling, arguing that despite clear quantum supremacy, it does not represent a real hurdle for AI to predict quantum phenomena. We then shift to the major unsolved problems in quantum physics and quantum chemistry, examining how quantum computers could empower AI in these domains. Finally, we’ll consider the possibility of truly complex quantum signals in nature, where quantum computers might prove essential for prediction itself.


Quantum Sampling

In 2019 Google demonstrated quantum supremacy on a digital quantum device [5], and in 2024 their latest chip performed in minutes a task that our best classical computers would need an estimated 10^25 years to complete [6]. The task is to prepare a highly entangled many-body quantum state and to sample from the corresponding distribution over classical configurations. Quantum supremacy on such sampling problems is on firm ground, with results in complexity theory backing up the experimental claims [7].2 Moreover, the classical hardness of quantum sampling appears to be generic in quantum physics: a wide range of quantum systems will generate highly entangled many-body states where sampling becomes classically hard.
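As a toy illustration of the task, here is a minimal numpy sketch (not Google's actual circuit family, which uses a specific hardware gate set) that prepares a small random-circuit state and samples bitstrings from it. At this size the sampling is classically easy; the point is that the statevector, and hence the cost, grows exponentially with the number of qubits.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_1q(psi, u, q, n):
    """Apply a single-qubit unitary u to qubit q of an n-qubit state."""
    psi = psi.reshape([2] * n)
    psi = np.tensordot(u, psi, axes=([1], [q]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cz(psi, q1, q2, n):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    psi = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1  # flip the sign where both qubits are 1
    return psi.reshape(-1)

def random_circuit_state(n_qubits, depth):
    """Alternate layers of random single-qubit gates and CZ gates on |0...0>."""
    psi = np.zeros(2 ** n_qubits, dtype=complex)
    psi[0] = 1.0
    for _ in range(depth):
        for q in range(n_qubits):
            g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
            u, _ = np.linalg.qr(g)  # Haar-random 2x2 unitary via QR
            psi = apply_1q(psi, u, q, n_qubits)
        for q in range(n_qubits - 1):
            psi = apply_cz(psi, q, q + 1, n_qubits)
    return psi

n = 8
psi = random_circuit_state(n, depth=6)
probs = np.abs(psi) ** 2
samples = rng.choice(2 ** n, size=5, p=probs)  # the classically hard step at large n
print([format(int(s), f"0{n}b") for s in samples])
```

The circuit structure (random single-qubit gates plus nearest-neighbor CZs) is a generic stand-in; any sufficiently entangling ensemble behaves similarly for this purpose.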

However, quantum sampling alone does not refute the universality of classical AI. The output of quantum sampling often appears completely featureless, and it cannot be verified by any classical or quantum algorithm, or by any process in our universe for that matter. For example, running the exact same sampling task a second time will produce a list of configurations that appears unrelated to the original. For a phenomenon to be subject to scientific prediction, there must be an experiment that can confirm or deny the prediction. So if quantum sampling has no features that can be experimentally verified, there is nothing to predict, and no pattern for the AI to discover and model.

Quantum Chemistry and Condensed Matter Physics

There are many unsolved problems in quantum chemistry and condensed matter physics that are inaccessible to our best classical simulation algorithms and supercomputers. These occur, for example, in the strongly correlated regime of electronic structure in quantum chemistry, and around low-temperature phase transitions of condensed matter systems. We do not understand the electronic structure of FeMoco, the molecule responsible for nitrogen fixation in the nitrogenase enzyme, nor do we understand the phase diagram of the 2D Fermi-Hubbard model and whether or not it exhibits superconductivity.

It is possible there are no fundamental barriers for a sufficiently advanced AI to solve these problems. Researchers in the field have achieved major breakthroughs using neural networks to predict complex biological structures like protein folding. One could imagine similar specialized AI models that predict the electronic structure of molecules, or that predict quantum phases of matter. Perhaps the main reason it is currently out of reach is a lack of sufficient training data. Here lies a compelling opportunity for quantum computing: The only feasible way to generate an abundance of accurate training data may be to use a quantum computer, since physical experiments are too difficult, too unreliable, and too slow.

How should we view these problems in physics and chemistry from the perspective of computational complexity? Physicists and chemists often consider systems with a fixed number of parameters, or even single instances. Although computing physical quantities may be extremely challenging, single instances cannot form computationally hard problems, since ultimately only a constant amount of resources is required to solve them. Systems with a fixed number of parameters often behave similarly, since physical quantities tend to depend smoothly on the parameters, which allows for extrapolation and learning [8]. Here we can recognize a familiar motif from machine learning: ab initio prediction is challenging, but prediction becomes efficient after learning from data. Quantum computers are useful for generating training data, but AI is then able to learn from this data and perform efficient inference.
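To make the motif concrete, here is a small sketch in the spirit of [8]. Exact diagonalization of a tiny transverse-field Ising chain stands in for quantum-generated training data; a cheap classical surrogate (a polynomial fit) then predicts the ground energy at a new parameter value. The model, system size, and polynomial degree are illustrative choices, not anything taken from the literature.

```python
import numpy as np

def ising_ground_energy(n, h):
    """Exact ground energy of a small transverse-field Ising chain,
    H = -sum_i Z_i Z_{i+1} - h * sum_i X_i, by dense diagonalization."""
    Z = np.diag([1.0, -1.0])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    I = np.eye(2)
    def kron_at(op, i, j=None, op2=None):
        mats = [I] * n
        mats[i] = op
        if j is not None:
            mats[j] = op2
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out
    H = sum(-kron_at(Z, i, i + 1, Z) for i in range(n - 1))
    H = H + sum(-h * kron_at(X, i) for i in range(n))
    return np.linalg.eigvalsh(H)[0]

n = 6
train_h = np.linspace(0.0, 2.0, 15)           # "quantum-generated" training data
train_E = np.array([ising_ground_energy(n, h) for h in train_h])
coeffs = np.polyfit(train_h, train_E, deg=6)   # cheap classical surrogate model

test_h = 1.33
prediction = np.polyval(coeffs, test_h)
exact = ising_ground_energy(n, test_h)
print(prediction, exact)  # the surrogate tracks the exact value closely
```

The fit succeeds precisely because the ground energy depends smoothly on the field h; a signal without such smoothness would defeat this strategy, which is the subject of the next section.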

Truly Complex Quantum Signals

While AI might be able to learn many of the patterns of physics and chemistry from quantum-generated data, there remains a deeper possibility: the quantum universe may produce patterns that AI cannot compress and understand. If there are quantum systems that display signals that are truly classically complex, then predicting the pattern will require a quantum computer at inference time, not only during training.

We’ll now envision how such a signal could arise. Imagine a family of quantum systems of arbitrary size N, and at each size N there is a number of independent parameters that is polynomial in N, for example the coefficients of a Hamiltonian or the rotation angles of a quantum circuit. Suppose the system has some physical feature whose signal we would like to compute as a function of the parameters, and this signal has the following properties:

  • (Signal) There is a quantum algorithm that efficiently computes the signal. In particular, the signal cannot be exponentially small in N.
  • (Verifiable) The signal is verifiable, at least by an ideal quantum computer. For example, the task could be to compute an expectation value.
  • (Typically complex) When the parameters are chosen randomly, the signal is computationally hard to classically compute in the average case.

If these properties hold, then it’s possible that no machine learning model using a polynomial amount of classical compute can perform the task, even with the help of training data.

The requirement of verifiability by a quantum process ensures that the signal being computed is a robust phenomenon where there is some “fact of the matter”, and a prediction can be confirmed or denied by nature. For example, this holds for any task where the output is the expectation value of some observable. The average-case hardness ensures that hard instances really exist and can be easily generated, rather than only existing in some abstract worst-case that cannot be instantiated.

There is a connection between the verifiability of a computation and its utility to us. Suppose we use a computer to help us design a high-temperature superconductor. If our designed material indeed works as a high-temperature superconductor when fabricated, this forms a verification of the predictions made by our computer. Utility implies verifiability; equivalently, unverifiable computations cannot be useful. However, since nature is quantum, a computation need not be classically verifiable in order to be useful, but only quantumly verifiable. In our high-temperature superconductor example, nature has verified our computer by performing a quantum process.

Making progress

John Preskill’s “entanglement frontier” seeks to understand the collective behavior of many interacting quantum particles [10]. In order to shed light on the fundamental limits of classical AI and the utility of quantum computers in this regime, we must understand whether the exponential Hilbert space of quantum theory remains mostly hidden, or whether it reveals itself in observable phenomena. The search for classically complex signals forms an exciting research program for making progress. Google recently performed the first demonstration of a classically complex signal on a quantum device: the out-of-time-order correlators3 of random quantum circuits [9]. We can seek to find more such examples, first in abstract models, and then in the real world, to understand how abundant they are in nature.

  1. Under widely accepted cryptographic assumptions. ↩︎
  2. Classical computers cannot perform quantum sampling unless the polynomial hierarchy collapses. ↩︎
  3. More precisely, the higher moments of the out-of-time-order correlator. ↩︎

References

[1] Shor, Peter W. “Algorithms for quantum computation: discrete logarithms and factoring.” Proceedings 35th Annual Symposium on Foundations of Computer Science. IEEE, 1994.

[2] Shor, Peter W. “Scheme for reducing decoherence in quantum computer memory.” Physical Review A 52.4 (1995): R2493.

[3] Shor, Peter W. “Fault-tolerant quantum computation.” Proceedings of 37th conference on foundations of computer science. IEEE, 1996.

[4] Kitaev, A. Yu. “Fault-tolerant quantum computation by anyons.” Annals of Physics 303.1 (2003): 2-30.

[5] Arute, Frank, et al. “Quantum supremacy using a programmable superconducting processor.” Nature 574.7779 (2019): 505-510.

[6] Morvan, Alexis, et al. “Phase transitions in random circuit sampling.” Nature 634.8033 (2024): 328-333.

[7] Aaronson, Scott, and Alex Arkhipov. “The computational complexity of linear optics.” Proceedings of the forty-third annual ACM symposium on Theory of computing. 2011.

[8] Huang, Hsin-Yuan, et al. “Provably efficient machine learning for quantum many-body problems.” Science 377.6613 (2022): eabk3333.

[9] Abanin, Dmitry A., et al. “Constructive interference at the edge of quantum ergodic dynamics.” arXiv preprint arXiv:2506.10191 (2025).

[10] Preskill, John. “Quantum computing and the entanglement frontier.” arXiv preprint arXiv:1203.5813 (2012).

Quantum Algorithms: A Call To Action

Quantum computing finds itself in a peculiar situation. On the technological side, after billions of dollars and decades of research, working quantum computers are nearing fruition. But still, the number one question asked about quantum computers is the same as it was two decades ago: What are they good for? The honest answer reveals an elephant in the room: We don’t fully know yet. For theorists like me, this is an opportunity, a call to action.

Technological momentum

Suppose we do not have quantum computers in a few decades’ time. What will be the reason? It’s unlikely that we’ll encounter some insurmountable engineering obstacle. The theoretical basis of quantum error correction is solid, and several platforms are approaching or below the error-correction threshold (Harvard, Yale, Google). Experimentalists believe today’s technology can scale to 100 logical qubits and 10^6 gates—the megaquop era. If mankind spends $100 billion over the next few decades, it’s likely we could build a quantum computer.

A more concerning reason that quantum computing might fail is that there is not enough incentive to justify such a large investment in R&D and infrastructure. Let’s make a comparison to nuclear fusion. Like quantum hardware, fusion presents challenging science and engineering problems. However, if a nuclear fusion lab were to succeed in its mission of building a fusion reactor, the application would be self-evident. This is not the case for quantum computing—it is a sledgehammer looking for nails to hit.

Nevertheless, industry investment in quantum computing is currently accelerating. To maintain the momentum, it is critical to match investment growth and hardware progress with algorithmic capabilities. The time to discover quantum algorithms is now.

Empowered theorists

Theory research is forward-looking and predictive. Theorists such as Geoffrey Hinton laid the foundations of the current AI revolution. But decades later, with an abundance of computing hardware, AI has become much more of an empirical field. I look forward to the day that quantum hardware reaches a state of abundance, but that day is not yet here.

Today, quantum computing is an area where theorists have extraordinary leverage. A few pages of mathematics by Peter Shor inspired thousands of researchers, engineers and investors to join the field. Perhaps another few pages by someone reading this blog will establish a future of world-altering impact for the industry. There are not many places where mathematics has such potential for influence. An entire community of experimentalists, engineers, and businesses are looking to the theorists for ideas.

The Challenge

Traditionally, it is thought that the ideal quantum algorithm would exhibit three features. First, it should be provably correct, giving a guarantee that executing the quantum circuit reliably will achieve the intended outcome. Second, the underlying problem should be classically hard—the output of the quantum algorithm should be computationally hard to replicate with a classical algorithm. Third, it should be useful, with the potential to solve a problem of interest in the real world. Shor’s algorithm comes close to meeting all of these criteria. However, demanding all three in an absolute fashion may be unnecessary and perhaps even counterproductive to progress.

Provable correctness is important, since today we cannot yet empirically test quantum algorithms on hardware at scale. But what degree of evidence should we require for classical hardness? Rigorous proof of classical hardness is currently unattainable without resolving major open problems like P vs NP, but there are softer forms of proof, such as reductions to well-studied classical hardness assumptions.

I argue that we should replace the ideal of provable hardness with a more pragmatic approach: The quantum algorithm should outperform the best known classical algorithm that produces the same output by a super-quadratic speedup.1 Emphasizing provable classical hardness might inadvertently impede the discovery of new quantum algorithms, since a truly novel quantum algorithm could potentially introduce a new classical hardness assumption that differs fundamentally from established ones. The back-and-forth process of proposing and breaking new assumptions is a productive direction that helps us triangulate where quantum advantage lies.
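A back-of-envelope calculation shows why the speedup must be super-quadratic (see footnote 1). Assuming, purely for illustration, that each error-corrected quantum gate is a factor of 10^9 slower than a classical operation, a quadratic speedup only breaks even on problems already requiring decades of classical compute:

```python
# Back-of-envelope break-even point for a quadratic (Grover-style) speedup.
# The `overhead` factor -- how much slower an error-corrected quantum gate is
# than a classical operation -- is an illustrative assumption, not a measurement.
overhead = 1e9

def breakeven_problem_size(overhead):
    # Classical cost ~ T operations; quantum cost ~ overhead * sqrt(T) gates.
    # The two curves cross at T = overhead**2.
    return overhead ** 2

T = breakeven_problem_size(overhead)
years = T / 1e9 / (3600 * 24 * 365)  # a classical machine at ~1e9 ops/sec
print(f"break-even at T = {T:.0e} operations (~{years:.0f} years of classical work)")
```

Any smaller problem is finished classically before the quantum machine catches up, which is why a merely quadratic speedup is such a fragile basis for practical advantage.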

It may also be unproductive to aim directly at solving existing real-world problems with quantum algorithms. Fundamental computational tasks with quantum advantage are special and we have very few examples, yet they necessarily provide the basis for any eventual quantum application. We should search for more of these fundamental tasks and match them to applications later.

That said, it is important to distinguish between quantum algorithms that could one day provide the basis for a practically relevant computation, and those that will not. In the real world, computations are not useful unless they are verifiable or at least repeatable. For instance, consider a quantum simulation algorithm that computes a physical observable. If two different quantum computers run the simulation and get the same answer, one can be confident that this answer is correct and that it makes a robust prediction about the world. Some problems such as factoring are naturally easy to verify classically, but we can set the bar even lower: The output of a useful quantum algorithm should at least be repeatable by another quantum computer.
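The repeatability criterion can be made statistical. In this toy numpy sketch (the state and observable are invented for illustration), two independently seeded “devices” estimate the same expectation value from measurement shots, and we check that they agree within sampling error:

```python
import numpy as np

def estimate_expectation(state_probs, values, shots, rng):
    """Shot-based estimate of <O> for an observable diagonal in the measurement
    basis: sample outcomes from state_probs and average their eigenvalues."""
    outcomes = rng.choice(len(values), size=shots, p=state_probs)
    return values[outcomes].mean()

# Toy 2-qubit state measured in the Z basis; observable O = Z on qubit 0.
probs = np.array([0.5, 0.1, 0.3, 0.1])  # P(00), P(01), P(10), P(11)
z0 = np.array([1.0, 1.0, -1.0, -1.0])   # eigenvalue of Z_0 for each outcome
shots = 100_000

# Two independent "devices" are modeled as independent random seeds.
est_a = estimate_expectation(probs, z0, shots, np.random.default_rng(1))
est_b = estimate_expectation(probs, z0, shots, np.random.default_rng(2))

sigma = np.sqrt(1.0 / shots)  # standard error bound for an observable with |O| <= 1
agree = abs(est_a - est_b) < 5 * sigma
print(est_a, est_b, agree)
```

A featureless sampling task offers no analogue of this check: there is no number that two devices can be asked to reproduce.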

There is a subtle fourth requirement of paramount importance that is often overlooked, captured by the following litmus test: If given a quantum computer tomorrow, could you implement your quantum algorithm? In order to do so, you need not only a quantum algorithm but also a distribution over its inputs on which to run it. Classical hardness must then be judged in the average case over this distribution of inputs, rather than in the worst case.

I’ll end this section with a specific caution regarding quantum algorithms whose output is the expectation value of an observable. A common reason these proposals fail to be classically hard is that the expectation value exponentially concentrates over the distribution of inputs. When this happens, a trivial classical algorithm can replicate the quantum result by simply outputting the concentrated (typical) value for every input. To avoid this, we must seek ensembles of quantum circuits whose expectation values exhibit meaningful variation and sensitivity to different inputs.
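The concentration phenomenon is easy to see numerically. In the sketch below, Haar-random states stand in for the outputs of deep random circuits (an idealization), and the spread of the single-qubit expectation value of Z collapses as the qubit count grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_z_expectation(n_qubits):
    """<Z_0> in a Haar-random state, a proxy for a deep random circuit output."""
    dim = 2 ** n_qubits
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)  # normalized Gaussian vector = Haar-random state
    probs = np.abs(psi) ** 2
    # Z_0 has eigenvalue +1 on the first half of the basis, -1 on the second.
    return probs[: dim // 2].sum() - probs[dim // 2 :].sum()

stds = {}
for n in (4, 8, 12):
    vals = np.array([haar_z_expectation(n) for _ in range(200)])
    stds[n] = vals.std()
    print(n, stds[n])  # spread shrinks roughly like 2**(-n/2)
```

A classical algorithm that always outputs 0 matches these expectation values to exponentially small error, so an ensemble like this carries no classically hard signal; the challenge is to find ensembles whose expectation values vary meaningfully.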

We can crystallize these priorities into the following challenge:

The Challenge
Find a quantum algorithm and a distribution over its inputs with the following features:
— (Provable correctness.) The quantum algorithm is provably correct.
— (Classical hardness.) The quantum algorithm outperforms the best known classical algorithm that performs the same task by a super-quadratic speedup, in the average-case over the distribution of inputs.
— (Potential utility.) The output is verifiable, or at least repeatable.

Examples and non-examples

Table I.

| Category | Classically verifiable | Quantumly repeatable | Potentially useful | Provable classical hardness | Examples |
| --- | --- | --- | --- | --- | --- |
| Search problem | Yes | Yes | Yes | No | Shor ‘99; Regev’s reduction: CLZ22, YZ24, Jor+24; Planted inference: Has20, SOKB24 |
| Compute a value | No | Yes | Yes | No | Condensed matter physics? Quantum chemistry? |
| Proof of quantumness | Yes, with key | Yes, with respect to key | No | Yes, under crypto assumptions | BCMVV21 |
| Sampling | No | No | No | Almost, under complexity assumptions | BJS10, AA11, Google ‘20 |

We can categorize quantum algorithms by the form of their output. First, there are quantum algorithms for search problems, which produce a bitstring satisfying some constraints. This could be the prime factors of a number, a planted feature in some dataset, or the solution to an optimization problem. Next, there are quantum algorithms that compute a value to some precision, for example the expectation value of some physical observable. Then there are proofs of quantumness, which involve a verifier who generates a test using some hidden key, and the key can be used to verify the output. Finally, there are quantum algorithms which sample from some distribution.

Hamiltonian simulation is perhaps the most widely heralded source of quantum utility. Physics and chemistry contain many quantities that Nature computes effortlessly, yet remain beyond the reach of even our best classical simulations. Quantum computation is capable of simulating Nature directly, giving us strong reason to believe that quantum algorithms can compute classically-hard quantities.

There are already many examples where a quantum computer could help us answer an unsolved scientific question, like determining the phase diagram of the Hubbard model or the ground energy of FeMoco. These undoubtedly have scientific value. However, they are isolated examples, whereas we would like evidence that the pool of quantum-solvable questions is inexhaustible. Can we take inspiration from strongly correlated physics to write down a concrete ensemble of Hamiltonian simulation instances where there is a classically-hard observable? This would gather evidence for the sustained, broad utility of quantum simulation, and would also help us understand where and how quantum advantage arises.

Over in the computer science community, there has been a lot of work on oracle separations such as welded trees and forrelation, which should give us confidence in the abilities of quantum computers. Can we instantiate these oracles in a way that pragmatically remains classically hard? This is necessary in order to pass our earlier litmus test of being ready to run the quantum algorithm tomorrow.

In addition to Hamiltonian simulation, there are several other broad classes of quantum algorithms, including quantum algorithms for linear systems of equations and differential equations, variational quantum algorithms for machine learning, and quantum algorithms for optimization. These frameworks sometimes come with proofs of BQP-completeness.

The issue with these broad frameworks is that they often do not specify a distribution over inputs. Can we find novel ensembles of inputs to these frameworks that exhibit super-quadratic speedups? BQP-completeness shows that one has translated the notion of quantum computation into a different language, which allows one to embed an existing quantum algorithm such as Shor’s algorithm into the framework. But in order to discover a new quantum algorithm, one must find an ensemble of BQP computations that does not arise from Shor’s algorithm.

Table I claims that sampling tasks alone are not useful since they are not even quantumly repeatable. One may wonder if sampling tasks could be useful in some way. After all, classical Monte Carlo sampling algorithms are widely used in practice. However, applications of sampling typically use samples to extract meaningful information or specific features of the underlying distribution. For example, Monte Carlo sampling can be used to evaluate integrals in Bayesian inference and statistical physics. In contrast, samples obtained from random quantum circuits lack any discernible features. If a collection of quantum algorithms generated samples containing meaningful signals from which one could extract classically hard-to-compute values, those algorithms would effectively transition into the compute a value category.
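The contrast is worth making concrete with the classic Monte Carlo example: the samples themselves are throwaway, and the deliverable is an extracted number, here the value of an integral (estimating pi):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate pi by Monte Carlo integration: the fraction of uniform points in
# the unit square that land inside the quarter disk is pi/4. The individual
# samples carry no value on their own; only the extracted number does.
n_samples = 1_000_000
xy = rng.uniform(size=(n_samples, 2))
inside = (xy ** 2).sum(axis=1) < 1.0
pi_estimate = 4.0 * inside.mean()
print(pi_estimate)  # close to 3.14159...
```

Samples from a random quantum circuit admit no analogous reduction to a meaningful number, which is precisely why Table I marks sampling as not potentially useful.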

Table I also claims that proofs of quantumness are not useful. This is not completely true—one potential application is generating certifiable randomness. However, such applications are generally cryptographic rather than computational in nature. Specifically, proofs of quantumness cannot help us solve problems or answer questions whose solutions we do not already know.

Finally, there are several exciting directions proposing applications of quantum technologies in sensing and metrology, communication, learning with quantum memory, and streaming. These are very interesting, and I hope that mankind’s second century of quantum mechanics brings forth all flavors of capabilities. However, the technological momentum is mostly focused on building quantum computers for the purpose of computational advantage, and so this is where breakthroughs will have the greatest immediate impact.

Don’t be too afraid

At the annual QIP conference, only a handful of papers out of hundreds each year attempt to advance new quantum algorithms. Given the stakes, why is this number so low? One common explanation is that quantum algorithm research is simply too difficult. Nevertheless, we have seen substantial progress in quantum algorithms in recent years. After a notable lack of end-to-end proposals with the potential for utility between the years 2000 and 2020, Table I exhibits several breakthroughs from the past five years.

In between blind optimism and resigned pessimism, embracing a mission-driven mindset can propel our field forward. We should allow ourselves to adopt a more exploratory, scrappier approach: We can hunt for quantum advantages in yet-unstudied problems or subtle signals in the third decimal place. The bar for meaningful progress is lower than it might seem, and even incremental advances are valuable. Don’t be too afraid!

  1. Quadratic speedups are widespread but will not form the basis of practical quantum advantage due to the overheads associated with quantum error-correction. ↩︎