Beyond NISQ: The Megaquop Machine

On December 11, I gave a keynote address at the Q2B 2024 Conference in Silicon Valley. This is a transcript of my remarks. The slides I presented are here. The video of the talk is here.

NISQ and beyond

I’m honored to be back at Q2B for the 8th year in a row.

The Q2B conference theme is “The Roadmap to Quantum Value,” so I’ll begin by showing a slide from last year’s talk. As best we currently understand, the path to economic impact is the road through fault-tolerant quantum computing. And that poses a daunting challenge for our field and for the quantum industry.

We are in the NISQ era. And NISQ technology already has noteworthy scientific value. But as of now there is no proposed application of NISQ computing with commercial value for which quantum advantage has been demonstrated when compared to the best classical hardware running the best algorithms for solving the same problems. Furthermore, currently there are no persuasive theoretical arguments indicating that commercially viable applications will be found that do not use quantum error-correcting codes and fault-tolerant quantum computing.

NISQ, meaning Noisy Intermediate-Scale Quantum, is a deliberately vague term. By design, it has no precise quantitative meaning, but it is intended to convey an idea: We now have quantum machines such that brute force simulation of what the quantum machine does is well beyond the reach of our most powerful existing conventional computers. But these machines are not error-corrected, and noise severely limits their computational power.

In the future we can envision FASQ* machines, Fault-Tolerant Application-Scale Quantum computers that can run a wide variety of useful applications, but that is still a rather distant goal. What term captures the path along the road from NISQ to FASQ? Various terms retaining the ISQ format of NISQ have been proposed [here, here, here], but I would prefer to leave ISQ behind as we move forward, so I’ll speak instead of a megaquop or gigaquop machine, and so on, meaning one capable of executing a million or a billion quantum operations, with the understanding that “mega” means not precisely a million but somewhere in the vicinity of a million.

Naively, a megaquop machine would have an error rate per logical gate of order 10^{-6}, which we don’t expect to achieve anytime soon without using error correction and fault-tolerant operation. Or maybe the logical error rate could be somewhat larger, as we expect to be able to boost the simulable circuit volume using various error mitigation techniques in the megaquop era just as we do in the NISQ era. Importantly, the megaquop machine would be capable of achieving some tasks beyond the reach of classical, NISQ, or analog quantum devices, for example by executing circuits with of order 100 logical qubits and circuit depth of order 10,000.
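
To spell out the arithmetic behind that estimate, here is a back-of-envelope sketch of my own; all numbers are merely illustrative.

```python
# Back-of-envelope megaquop arithmetic (illustrative numbers only):
# ~100 logical qubits times depth ~10,000 gives ~10^6 logical
# operations, so faithful execution needs a logical error rate
# per operation of order 10^-6.

logical_qubits = 100
depth = 10_000

total_ops = logical_qubits * depth        # ~1e6 quantum operations ("quops")
target_error_per_op = 1 / total_ops       # ~1e-6 per logical gate

print(f"total logical ops ~ {total_ops:.0e}")
print(f"required error per op ~ {target_error_per_op:.0e}")
```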

What resources are needed to operate it? That depends on many things, but a rough guess is that tens of thousands of high-quality physical qubits could suffice. When will we have it? I don’t know, but if it happens in just a few years a likely modality is Rydberg atoms in optical tweezers, assuming they continue to advance in both scale and performance.

What will we do with it? I don’t know, but as a scientist I expect we can learn valuable lessons by simulating the dynamics of many-qubit systems on megaquop machines. Will there be applications that are commercially viable as well as scientifically instructive? That I can’t promise you.

The road to fault tolerance

To proceed along the road to fault tolerance, what must we achieve? We would like to see many successive rounds of accurate error syndrome measurement such that when the syndromes are decoded the error rate per measurement cycle drops sharply as the code increases in size. Furthermore, we want to decode rapidly, as will be needed to execute universal gates on protected quantum information. Indeed, we will want the logical gates to have much higher fidelity than physical gates, and for the logical gate fidelities to improve sharply as codes increase in size. We want to do all this at an acceptable overhead cost in both the number of physical qubits and the number of physical gates. And speed matters — the time on the wall clock for executing a logical gate should be as short as possible.

A snapshot of the state of the art comes from the Google Quantum AI team. Their recently introduced Willow superconducting processor has improved transmon lifetimes, measurement errors, and leakage correction compared to its predecessor Sycamore. With it they can perform millions of rounds of surface-code error syndrome measurement with good stability, each round lasting about a microsecond. Most notably, they find that the logical error rate per measurement round improves by a factor of 2 (a factor they call Lambda) when the code distance increases from 3 to 5 and again from 5 to 7, indicating that further improvements should be achievable by scaling the device further. They performed accurate real-time decoding for the distance 3 and 5 codes. To further explore the performance of the device they also studied the repetition code, which corrects only bit flips, out to a much larger code distance. As the hardware continues to advance we hope to see larger values of Lambda for the surface code, larger codes achieving much lower error rates, and eventually not just quantum memory but also logical two-qubit gates with much improved fidelity compared to the fidelity of physical gates.
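
To illustrate what the factor Lambda means for scaling, here is a toy extrapolation of my own; the base error rate and the value of Lambda below are assumptions for illustration, not Google’s measured fit.

```python
# Toy extrapolation of surface-code performance (assumed numbers):
# if growing the code distance d by 2 suppresses the logical error
# per syndrome-measurement round by a factor Lambda, then
#     eps(d) ~ eps(d0) / Lambda**((d - d0) / 2).

lam = 2.0        # suppression factor per distance-2 step (assumed)
eps_d3 = 3e-3    # logical error per round at distance 3 (assumed)

for d in (3, 5, 7, 9, 11):
    eps = eps_d3 / lam ** ((d - 3) / 2)
    print(f"distance {d:2d}: logical error per round ~ {eps:.1e}")
```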

Last year I expressed concern about the potential vulnerability of superconducting quantum processors to ionizing radiation such as cosmic ray muons. In these events, errors occur in many qubits at once, too many errors for the error-correcting code to fend off. I speculated that we might want to operate a superconducting processor deep underground to suppress the muon flux, or to use less efficient codes that protect against such error bursts.

The good news is that the Google team has demonstrated that so-called gap engineering of the qubits can reduce the frequency of such error bursts by orders of magnitude. In their studies of the repetition code they found that, in the gap-engineered Willow processor, error bursts occurred about once per hour, as opposed to once every ten seconds in their earlier hardware.  Whether suppression of error bursts via gap engineering will suffice for running deep quantum circuits in the future is not certain, but this progress is encouraging. And by the way, the origin of the error bursts seen every hour or so is not yet clearly understood, which reminds us that not only in superconducting processors but in other modalities as well we are likely to encounter mysterious and highly deleterious rare events that will need to be understood and mitigated.

Real-time decoding

Fast real-time decoding of error syndromes is important because when performing universal error-corrected computation we must frequently measure encoded blocks and then perform subsequent operations conditioned on the measurement outcomes. If it takes too long to decode the measurement outcomes, that will slow down the logical clock speed. That may be a more serious problem for superconducting circuits than for other hardware modalities where gates can be orders of magnitude slower.

For distance 5, Google achieves a latency, meaning the time from when data from the final round of syndrome measurement is received by the decoder until the decoder returns its result, of about 63 microseconds on average. In addition, it takes about another 10 microseconds for the data to be transmitted via Ethernet from the measurement device to the decoding workstation. That’s not bad, but considering that each round of syndrome measurement takes only a microsecond, faster would be preferable, and the decoding task becomes harder as the code grows in size.
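
A toy model makes the throughput concern concrete (my own sketch; the decoder time below is an assumption for illustration): whenever decoding one round takes longer than generating one, the backlog grows linearly with the number of rounds.

```python
# Decoding-backlog sketch (illustrative numbers): syndrome rounds
# arrive every t_round; the decoder handles one round in t_decode.
# Whenever t_decode exceeds t_round, undecoded rounds pile up, and a
# logical operation waiting on a decoded outcome stalls accordingly.

t_round = 1e-6      # ~1 microsecond per syndrome round
t_decode = 1.2e-6   # assumed decoder throughput (for illustration)

rounds = 1_000_000
backlog = max(0.0, (t_decode - t_round) * rounds)
print(f"extra wait after {rounds:.0e} rounds: {backlog:.2f} s")
```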

Riverlane and Rigetti have demonstrated in small experiments that the decoding latency can be reduced by running the decoding algorithm on FPGAs rather than CPUs, and by integrating the decoder into the control stack to reduce communication time. Adopting such methods may become increasingly important as we scale further. Google DeepMind has shown that a decoder trained by reinforcement learning can achieve a lower logical error rate than a decoder constructed by humans, but it’s unclear whether that will work at scale because the cost of training rises steeply with code distance. Also, the Harvard / QuEra team has emphasized that performing correlated decoding across multiple code blocks can reduce the depth of fault-tolerant constructions, but this also increases the complexity of decoding, raising concern about whether such a scheme will be scalable.

Trading simplicity for performance

The Google processors use transmon qubits, as do superconducting processors from IBM and various other companies and research groups. Transmons are the simplest superconducting qubits and their quality has improved steadily; we can expect further improvement with advances in materials and fabrication. But a logical qubit with very low error rate surely will be a complicated object due to the hefty overhead cost of quantum error correction. Perhaps it is worthwhile to fashion a more complicated physical qubit if the resulting gain in performance might actually simplify the operation of a fault-tolerant quantum computer in the megaquop regime or well beyond. Several versions of this strategy are being pursued.

One approach uses cat qubits, in which the encoded 0 and 1 are coherent states of a microwave resonator, well separated in phase space, such that the noise afflicting the qubit is highly biased. Bit flips are exponentially suppressed as the mean photon number of the resonator increases, while the error rate for phase flips induced by loss from the resonator increases only linearly with the photon number. This year the AWS team built a repetition code to correct phase errors for cat qubits that are passively protected against bit flips, and showed that increasing the distance of the repetition code from 3 to 5 slightly improves the logical error rate. (See also here.)
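
The noise bias can be summarized schematically as follows; this is a cartoon of the scaling only, not a fit to the AWS data, and the constants are placeholders.

```python
# Schematic cat-qubit noise bias (placeholder constants): bit flips
# are suppressed exponentially in the mean photon number n, while
# phase flips grow only linearly in n.

import math

c = 2.0        # exponential suppression constant (placeholder)
kappa = 1e-3   # phase-error scale per cycle (placeholder)

for n in (1, 2, 4, 8):
    p_bit = math.exp(-c * n)   # exponentially suppressed bit flips
    p_phase = kappa * n        # linearly growing phase flips
    print(f"n = {n}: p_bit ~ {p_bit:.1e}, p_phase ~ {p_phase:.1e}")
```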

Another helpful insight is that error correction can be more effective if we know when and where the errors occur in a quantum circuit. We can apply this idea using a dual-rail encoding of the qubits. With two microwave resonators, for example, we can encode a qubit by placing a single photon in either the first resonator (the 10 state) or the second resonator (the 01 state). The dominant error is loss of a photon, causing either the 01 or 10 state to decay to 00. One can check whether the state is 00, detecting whether the error occurred without disturbing a coherent superposition of 01 and 10. In a device built by the Yale / QCI team, loss errors are detected over 99% of the time, and undetected errors are relatively rare. Similar results were reported by the AWS team, encoding a dual-rail qubit in a pair of transmons instead of resonators.
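
Here is a minimal classical caricature of why such error detection helps (my own sketch; it mimics only the flagging of photon loss, not the quantum dynamics).

```python
# Dual-rail erasure flagging, caricatured classically (my own sketch):
# the qubit lives in the one-photon states "10" and "01" of two modes;
# photon loss sends either state to "00", which an "is it 00?" check
# flags as an erasure without distinguishing "10" from "01".

import random

def transmit(state: str, p_loss: float) -> str:
    """Lose the photon with probability p_loss."""
    return "00" if random.random() < p_loss else state

def is_erasure(state: str) -> bool:
    """Detect leakage to 00 without learning which mode was occupied."""
    return state == "00"

random.seed(0)
flags = sum(is_erasure(transmit("10", 0.05)) for _ in range(10_000))
print(f"flagged erasures: {flags} / 10000 at p_loss = 0.05")
```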

Another idea is encoding a finite-dimensional quantum system in a state of a resonator that is highly squeezed in two complementary quadratures, a so-called GKP encoding. This year the Yale group used this scheme to encode 3-dimensional and 4-dimensional systems with a decay rate 1.8 times smaller than the rate of photon loss from the resonator. (See also here.)

A fluxonium qubit is more complicated than a transmon in that it requires a large inductance which is achieved with an array of Josephson junctions, but it has the advantage of larger anharmonicity, which has enabled two-qubit gates with better than three 9s of fidelity, as the MIT team has shown.

Whether this trading of simplicity for performance in superconducting qubits will ultimately be advantageous for scaling to large systems is still unclear. But it’s appropriate to explore such alternatives which might pay off in the long run.

Error correction with atomic qubits

We have also seen progress on error correction this year with atomic qubits, both in ion traps and optical tweezer arrays. In these platforms qubits are movable, making it possible to apply two-qubit gates to any pair of qubits in the device. This opens the opportunity to use more efficient coding schemes, and in fact logical circuits are now being executed on these platforms. The Harvard / MIT / QuEra team sampled circuits with 48 logical qubits on a 280-qubit device — that big news broke during last year’s Q2B conference. Atom Computing and Microsoft ran an algorithm with 28 logical qubits on a 256-qubit device. Quantinuum and Microsoft prepared entangled states of 12 logical qubits on a 56-qubit device.

However, so far in these devices it has not been possible to perform more than a few rounds of error syndrome measurement, and the results rely on error detection and postselection. That is, circuit runs are discarded when errors are detected, a scheme that won’t scale to large circuits. Efforts to address these drawbacks are in progress. Another concern is that the atomic movement slows the logical cycle time. If all-to-all coupling enabled by atomic movement is to be used in much deeper circuits, it will be important to speed up the movement quite a lot.

Toward the megaquop machine

How can we reach the megaquop regime? More efficient quantum codes like those recently discovered by the IBM team might help. These require geometrically nonlocal connectivity and are therefore better suited for Rydberg optical tweezer arrays than superconducting processors, at least for now. Error mitigation strategies tailored for logical circuits, like those pursued by Qedma, might help by boosting the circuit volume that can be simulated beyond what one would naively expect based on the logical error rate. Recent advances from the Google team, which reduce the overhead cost of logical gates, might also be helpful.

What about applications? Impactful applications to chemistry typically require rather deep circuits so are likely to be out of reach for a while yet, but applications to materials science provide a more tempting target in the near term. Taking advantage of symmetries and various circuit optimizations like the ones Phasecraft has achieved, we might start seeing informative results in the megaquop regime or only slightly beyond.

As a scientist, I’m intrigued by what we might conceivably learn about quantum dynamics far from equilibrium by doing simulations on megaquop machines, particularly in two dimensions. But when seeking quantum advantage in that arena we should bear in mind that classical methods for such simulations are also advancing impressively, including in the past year (for example, here and here).

To summarize, advances in hardware, control, algorithms, error correction, error mitigation, etc. are bringing us closer to megaquop machines, raising a compelling question for our community: What are the potential uses for these machines? Progress will require innovation at all levels of the stack.  The capabilities of early fault-tolerant quantum processors will guide application development, and our vision of potential applications will guide technological progress. Advances in both basic science and systems engineering are needed. These are still the early days of quantum computing technology, but our experience with megaquop machines will guide the way to gigaquops, teraquops, and beyond and hence to widely impactful quantum value that benefits the world.

I thank Dorit Aharonov, Sergio Boixo, Earl Campbell, Roland Farrell, Ashley Montanaro, Mike Newman, Will Oliver, Chris Pattison, Rob Schoelkopf, and Qian Xu for helpful comments.

*The acronym FASQ was suggested to me by Andrew Landahl.

The megaquop machine (image generated by ChatGPT).

Quantum Error Correction with Molecules

In the previous blog post (titled “On the Coattails of Quantum Supremacy”) we started with Google and ended up with molecules! I also mentioned a recent paper by John Preskill, Jake Covey, and myself (see also this videoed talk) where we assume that, somewhere in the (near?) future, experimentalists will be able to construct quantum superpositions of several orientations of molecules or other rigid bodies. Next, I’d like to cover a few more details on how to construct error-correcting codes for anything from classical bits in your phone to those future quantum computers, molecular or otherwise.

Classical error correction: the basics

Error correction is concerned with the design of an encoding that allows for protection against noise. Let’s say we want to protect one classical bit, which is in either “0” or “1”. If the bit is, say, in “0”, and the environment (say, the strong magnetic field from a magnet you forgot was lying next to your hard drive) flipped it to “1” without our knowledge, an error would result (e.g., making your phone think you swiped right!)

Now let’s encode our single logical bit into three physical bits, whose 2^3=8 possible states are represented by the eight corners of the cube below. Let’s encode the logical bit as “0” —> 000 and “1” —> 111, corresponding to the corners of the cube marked by the black and white ball, respectively. For our (local) noise model, we assume that flips of only one of the three physical bits are more likely to occur than flips of two or three at the same time.

Error correction is, like many Hollywood movies, an origin story. If, say, the first bit flips in our above code, the 000 state is mapped to 100, and 111 is mapped to 011. Since we have assumed that the most likely error is a flip of one of the bits, upon observing 100 we know it must have come from the clean 000, and 011 from 111. Thus, in either case of the logical bit being “0” or “1”, we can recover the information by simply observing which state the majority of the bits are in. The same thing happens when the second or third bit flips. In all three cases, the logical “0” state is mapped to one of its three neighboring points (above, in blue) while the logical “1” is mapped to its own three points, which, crucially, are distinct from the neighbors of “0”. The set of points \{000,100,010,001\} that are closer to 000 than to 111 is called a Voronoi tile.
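
In code, the whole scheme is a few lines; this is the standard textbook construction, shown here for concreteness.

```python
# Three-bit repetition code with majority-vote decoding.

def encode(bit: int) -> list[int]:
    return [bit] * 3                 # "0" -> 000, "1" -> 111

def decode(bits: list[int]) -> int:
    return int(sum(bits) >= 2)       # majority vote

codeword = encode(0)                 # [0, 0, 0]
codeword[0] ^= 1                     # a single bit flip: 000 -> 100
print(decode(codeword))              # majority vote still recovers 0
```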

Now, let’s adapt these ideas to molecules. Consider the rotational states of a dumbbell molecule consisting of two different atoms. (Let’s assume that we have frozen this molecule to the point that the vibration of the inter-atomic bond is limited, essentially creating a fixed distance between the two atoms.) This molecule can orient itself in any direction, and each such orientation can be represented as a point \mathbf{v} on the surface of a sphere. Now let us encode a classical bit using the north and south poles of this sphere (represented in the picture below as a black and a white ball, respectively). The north pole of the sphere corresponds to the molecule being parallel to the z-axis, while the south pole corresponds to the molecule being anti-parallel.

This time, the noise consists of small shifts in the molecule’s orientation. Clearly, if such shifts are small, the molecule just wiggles a bit around the z-axis. Such wiggles still allow us to infer that the molecule is (mostly) parallel and anti-parallel to the axis, as long as they do not rotate the molecule all the way past the equator. Upon such correctable rotations, the logical “0” state — the north pole — is mapped to a point in the northern hemisphere, while logical “1” — the south pole — is mapped to a point in the southern hemisphere. The northern hemisphere forms a Voronoi tile of the logical “0” state (blue in the picture), which, along with the corresponding tile of the logical “1” state (the southern hemisphere), tiles the entire sphere.
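
Decoding this encoding amounts to asking which hemisphere, i.e., which Voronoi tile, the orientation landed in; here is a small illustration of my own.

```python
# Hemisphere (Voronoi-tile) decoding for the north/south-pole code:
# an orientation is a unit vector v, and the tile of each pole is a
# hemisphere, so decoding is just the sign of the z-component.

import numpy as np

def decode_pole(v: np.ndarray) -> str:
    """Return the logical bit whose hemisphere contains v."""
    return "0" if v[2] > 0 else "1"

wiggled = np.array([0.3, -0.2, 0.93])   # small wiggle about the north pole
wiggled /= np.linalg.norm(wiggled)
print(decode_pole(wiggled))             # still decodes to "0"
```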

Quantum error correction

To upgrade these ideas to the quantum realm, recall that this time we have to protect superpositions. This means that, in addition to shifting our quantum logical state to other states as before, noise can also affect the terms in the superposition itself. Namely, if, say, the superposition is equal — with an amplitude of +1/\sqrt{2} in “0” and +1/\sqrt{2} in “1” — noise can change the relative sign of the superposition and map one of the amplitudes to -1/\sqrt{2}. We didn’t have to worry about such sign errors before, because our classical information would always be the definite state of “0” or “1”. Now, there are two effects of noise to worry about, so our task has become twice as hard!

Not to worry though. In order to protect against both sources of noise, all we need to do is effectively stagger the above constructions. Now we will need to design a logical “0” state which is itself a superposition of different points, with each point separated from all of the points that are superimposed to make the logical “1” state.

Diatomic molecules: For the diatomic molecule example, consider superpositions of all four corners of two antipodal tetrahedra for the two respective logical states.

(Figure: the four corners of two antipodal tetrahedra on the sphere of molecular orientations.)

The logical “0” state for the quantum code is now itself a quantum superposition of orientations of our diatomic molecule corresponding to the four black points on the sphere to the left (the sphere to the right is a top-down view). Similarly, the logical “1” quantum state is a superposition of all orientations corresponding to the white points.

Each orientation (black or white point) present in our logical states rotates under fluctuations in the orientation of the molecule. However, the entire set of orientations for, say, logical “0” — the tetrahedron — rotates rigidly under such rotations. Therefore, the region from which we can successfully recover after rotations is fully determined by the Voronoi tile of any one of the corners of the tetrahedron. (Above, we plot the tile for the point at the north pole.) This cell is clearly smaller than the one for the classical north-south-pole encoding we used before. However, the tetrahedral code now provides some protection against phase errors — the other type of noise that we need to worry about if we are to protect quantum information. This is an example of the trade-off we must make in order to protect against both types of noise; a licensed quantum mechanic has to live with such trade-offs every day.
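
The nearest-vertex (Voronoi) decoding just described can be sketched directly; this illustrates the classical tiling picture only, not the full quantum recovery map.

```python
# Nearest-vertex decoding for the tetrahedral code: black points are
# the vertices of one tetrahedron, white points are their antipodes,
# and a noisy orientation decodes to the color of the nearest vertex.

import numpy as np

black = np.array([[1, 1, 1], [1, -1, -1],
                  [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
white = -black                        # the antipodal tetrahedron

def decode_tetrahedral(v: np.ndarray) -> str:
    points = np.vstack([black, white])
    nearest = int(np.argmax(points @ v))   # max inner product = nearest on sphere
    return "0" if nearest < 4 else "1"

noisy = black[0] + np.array([0.02, -0.06, 0.04])   # small orientation shift
noisy /= np.linalg.norm(noisy)
print(decode_tetrahedral(noisy))      # -> "0": still in the black tile
```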

Oscillators: Another example of a quantum encoding is the GKP encoding in the phase space of the harmonic oscillator. Here, we have at our disposal the entire two-dimensional plane indexing different values of position and momentum. In this case, we can use a checkerboard approach, superimposing all points at the centers of the black squares for the logical “0” state, and similarly all points at the centers of the white squares for the logical “1”. The region depicting correctable momentum and position shifts is then the Voronoi cell of the point at the origin: if a shift takes our central black point to somewhere inside the blue square, we know (most likely) where that point came from! In solid state circles, the blue square is none other than the primitive or unit cell of the lattice consisting of points making up both of the logical states.
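
The same Voronoi-cell logic can be sketched for the checkerboard. This is a schematic of the classical geometry only (real GKP decoding acts on quantum states), and the lattice spacing below is just a conventional choice.

```python
# Checkerboard (GKP-style) shift correction, schematically: code points
# sit on a square lattice of spacing a in phase space, and a measured
# shift is corrected by rounding to the nearest lattice point.

import numpy as np

a = np.sqrt(np.pi)   # lattice spacing (a conventional GKP-style choice)

def correct_shift(q: float, p: float) -> tuple[float, float]:
    """Round a phase-space point to the nearest lattice point."""
    return a * round(q / a), a * round(p / a)

print(correct_shift(0.3, -0.4))   # -> (0.0, 0.0): a small shift is undone
```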

Asymmetric molecules (a.k.a. rigid rotors): Now let’s briefly return to molecules. Above, we considered diatomic molecules that had a symmetry axis, i.e., that were left unchanged under rotations about the axis that connects the two atoms. There are of course more general molecules out there, including ones that are completely asymmetric under any possible (proper) 3D rotation (see figure below for an example).

(Figure: an example of a completely asymmetric molecule and its orientations.)

BONUS: There is a subtle mistake relating to the geometry of the rotation group in the labeling of this figure. Let me know if you can find it in the comments!

All of the orientations of the asymmetric molecule, and more generally a rigid body, can no longer be parameterized by the sphere. They can be parameterized by the 3D rotation group \mathsf{SO}(3): each orientation of an asymmetric molecule is labeled by the 3D rotation necessary to obtain said orientation from a reference state. Such rotations, and in turn the orientations themselves, are parameterized by an axis \mathbf{v} (around which to rotate) and an angle \omega (by which one rotates). The rotation group \mathsf{SO}(3) luckily can still be viewed by humans on a sheet of paper. Namely, \mathsf{SO}(3) can be thought of as a ball of radius \pi with opposite points identified. The direction of each vector \omega\mathbf{v} lying inside the ball corresponds to the axis of rotation, while the length corresponds to the angle. This may take some time to digest, but it’s not crucial to the story.
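
The axis-angle picture, including the identification of opposite boundary points, is easy to verify numerically; here is a small check of my own using SciPy.

```python
# SO(3) as a ball of radius pi with antipodal boundary points glued:
# rotating by pi about +v and by pi about -v are the same rotation.

import numpy as np
from scipy.spatial.transform import Rotation

v = np.array([0.0, 0.0, 1.0])            # rotation axis
r1 = Rotation.from_rotvec(np.pi * v)     # rotation vector on the boundary
r2 = Rotation.from_rotvec(-np.pi * v)    # the antipodal boundary point

print(np.allclose(r1.as_matrix(), r2.as_matrix()))   # True: same rotation
```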

So far we’ve looked at codes defined on cubes of bits, spheres, and phase-space lattices. Turns out that even \mathsf{SO}(3) can house similar encodings! In other words, \mathsf{SO}(3) can also be cut up into different Voronoi tiles, which in turn can be staggered to create logical “0” and “1” states consisting of different molecular orientations. There are many ways to pick such states, corresponding to various subgroups of \mathsf{SO}(3). Below, we sketch two sets of black/white points, along with the Voronoi tile corresponding to the rotations that are corrected by each encoding.

Voronoi tiles of the black point at the center of the ball representing the 3D rotation group, for two different molecular codes. This tile, together with the Voronoi cells corresponding to the other points, makes up the entire ball. 3D printing all of these tiles would make for cool puzzles!

In closing…

Achieving supremacy was a big first step towards making quantum computing a practical and universal tool. However, the largest obstacles still await, namely handling superposition-poisoning noise coming from the ever-curious environment. As quantum technologies advance, other possible routes for error correction open up: encoding qubits in harmonic oscillators and molecules, alongside the “traditional” approach of using arrays of physical qubits. Oscillator and molecular qubits possess their own mechanisms for error correction, and could prove useful (granted that the large high-energy space required for the procedures to work can be accessed and controlled). Even though molecular qubits are not yet mature enough to be used in quantum computers, we have at least outlined a blueprint for how some of the required pieces can be built. We are by no means done, however: besides the engineering barrier, we still need to work out how to run robust computations on these exotic spaces.

Author’s note: I’d like to acknowledge Jose Gonzalez for helping me immensely with the writing of this post, as well as for drawing the comic panels in the previous post. The figures above were made possible by Mathematica 12.

On the Coattails of Quantum Supremacy

Most readers have by now heard that Google has “achieved” quantum “supremacy”. Notice the only word not in quotes is “quantum”, because unlike previous proposals that have also made some waves, quantumness is mostly not under review here. (Well, neither really are the other two words, but that story has already been covered quite eloquently by John, Scott, and Toby.) The Google team has managed to engineer a device that, although noisy, can do the right thing a large-enough fraction of the time for people to be able to “quantify its quantumness”.

However, the Google device, while less so than previous incarnations, is still noisy. Future devices like it will continue to be noisy. Noise is what makes quantum computers so darn difficult to build; it is what destroys the fragile quantum superpositions that we are trying so hard to protect (remember, unlike a classical computer, we are not protecting things we actually observe, but their superposition).

Protecting quantum information is like taking your home-schooled date (who has lived their entire life in a bunker) to the prom for the first time. It is a fun and necessary part of a healthy relationship to spend time in public, but the price you pay is the possibility that your date will hit it off with someone else. This will leave you abandoned, dancing alone to Taylor Swift’s “You Belong With Me” while crying into your (spiked?) punch.

When the environment corrupts your quantum date.

The high school sweetheart/would-be dance partner in the above provocative example is the quantum superposition — the resource we need for a working quantum computer. You want it all to yourself, but your adversary — the environment — wants it too. No matter how much you try to protect it, you’ll have to observe it eventually (after all, you want to know the answer to your computation). And when you do (take your date out onto the crowded dance floor), you run the risk of the environment collapsing the information before you do, leaving you with nothing.

Protecting quantum information is also like (modern!) medicine. The fussy patient is the quantum information, stored in delicate superposition, while quantumists are the doctors aiming to prevent the patient from getting sick (or “corrupted”). If our patient incurs, say, “quasiparticle poisoning”, we first diagnose the patient’s syndromes, and, based on this diagnosis, apply procedures like “lattice surgery” and “state injection” to help our patient successfully recover.

The medical analogy to QEC, noticed first by Daniel Litinski. All terms are actually used in papers. Cartoon by Jose Gonzalez.

Error correction with qubits

Error correction sounds hard, and it should! Not to fear: plenty of very smart people have thought hard about this problem, and have come up with a plan — to redundantly encode the quantum superposition in a way that allows protection from errors caused by noise. Such quantum error-correction is an expansion of the techniques we currently use to protect classical bits in your phone and computer, but now the aim is to protect, not the definitive bit states 0 or 1, but their quantum superpositions. Things are even harder now, as the protection machinery has to do its magic without disturbing the superposition itself (after all, we want our quantum calculation to run to its conclusion and hack your bank).

For example, consider a qubit — the fundamental quantum unit represented by two shelves (which, e.g., could be the ground and excited states of an atom, the absence or presence of a photon in a box, or the zeroth and first quanta of a really cold LC circuit). This qubit can be in any quantum superposition of the two shelves, described by 2 probability amplitudes, one corresponding to each shelf. Observing this qubit will collapse its state onto either one of the shelves, changing the values of the 2 amplitudes. Since the resource we use for our computation is precisely this superposition, we definitely do not want to observe this qubit during our computation. However, we are not the only ones looking: the environment (other people at the prom: the trapping potential of our atom, the jiggling atoms of our metal box, nearby circuit elements) is also observing this system, thereby potentially manipulating the stored quantum state without our knowledge and ruining our computation.

Now consider 50 such qubits. Such a space allows for a superposition with 2^{50} different amplitudes (instead of just 2^1 for the case of a single qubit). We are once again plagued by noise coming from the environment. But what if we now, less ambitiously, want to store only one qubit’s worth of information in this 50-qubit system? Now there is room to play with! A clever choice of how to do this (a.k.a. the encoding) helps protect from the bad environment. 

The entire prospect of building a bona-fide quantum computer rests on this extra overhead or quantum redundancy of using a larger system to encode a smaller one. It sounds daunting at first: if we need 50 physical qubits for each robust logical qubit, then we’d need “I-love-you-3000” physical qubits for 60 logical ones? Yes, this is a fact we all have to live with. But granted we can scale up our devices to that many qubits, there is no fundamental obstacle that prevents us from then using error correction to make next-level computers.

To what extent do we need to protect our quantum superposition from the environment? It would be too ambitious to protect it from a meteor shower. Or a power outage (although that would be quite useful here in California). So what then can we protect against?

Our working answer is local noise — noise that affects only a few qubits that are located near each other in the device. We can never be truly certain if this type of noise is all that our quantum computers will encounter. However, our belief that this is the noise we should focus on is grounded in solid physical principles — that nature respects locality, that affecting things far away from you is harder than making an impact nearby. (So far Google has not reported otherwise, although much more work needs to be done to verify this intuition.)

The harmonic oscillator

In what other ways can we embed our two-shelf qubit into a larger space? Instead of scaling up using many physical qubits, we can utilize a fact that we have so far swept under the rug: in any physical system, our two shelves are already part of an entire bookcase! Atoms have more than one excited state, there can be more than one photon in a box, and there can be more than one quantum in a cold LC circuit. Why don’t we use some of that higher-energy space for our redundant encoding?

The noise in our bookcase will certainly be different, since the structure of the space, and therefore the notion of locality, is different. How to cope with this? The good news is that such a space — the space of the harmonic oscillator — also has a(t least one) natural notion of locality!

Whatever the incarnation, the oscillator has associated with it a position and momentum (different jargon for these quantities may be used, depending on the context, but you can just think of a child on a swing, just quantized). Anyone who knows the joke about Heisenberg getting pulled over will know that these two quantities cannot be set simultaneously.

Cartoon by Jose Gonzalez.

Nevertheless, local errors can be thought of as small shifts in position or momentum, while nonlocal errors are ones that suddenly shift our bewildered swinging quantized child from one side of the swing to the other.

Armed with a local noise model, we can extend our know-how from multi-qubit land to the oscillator. One of the first such oscillator codes was developed by Gottesman, Kitaev, and Preskill (GKP). Proposed in 2001, GKP encodings posed a difficult engineering challenge: some believed that GKP states could never be realized, that they “did not exist”. In the past few years however, GKP states have been realized nearly simultaneously in two experimental platforms. (Food for thought for the non-believers!)

Parallel to GKP codes, another promising oscillator encoding using cat states is also being developed. This encoding has historically been far easier to create experimentally. It is so far the only experimental procedure achieving the break-even point, at which the actively protected logical information has the same lifetime as the system’s best unprotected degree of freedom.

Can we mix and match all of these different systems? Why yes! While Google is currently trying to build the surface code out of qubits, using oscillators (instead of qubits) for the surface code and encoding said oscillators either in GKP (see related IBM post) [1,2,3] or cat [4,5] codes is something people are seriously considering. There is even more overhead, but the extra information one gets from the correction procedure might make for a more fault-tolerant machine. With all of these different options being explored, it’s an exciting time to be into quantum!

Molecules?

It turns out there are still other systems we can consider, although because they are sufficiently more “out there” at the moment, I should first say “bear with me!” as I explain. Forget about atoms, photons in a box, and really cold LC circuits. Instead, consider a rigid 3-dimensional object whose center of mass has been pinned in such a way that the object can rotate any way it wants. Now, “quantize” it! In other words, consider the possibility of having quantum superpositions of different orientations of this object. Just like superpositions of a dead and alive cat, or of a photon and no photon, the object can be in a quantum superposition of being oriented up, sideways, and down, for example. Superpositions of all possible orientations then make up our new configuration space (read: playground), and we are lucky that it too inherits many of the properties we know and love from its multi-qubit and oscillator cousins.

Examples of rigid bodies include airplanes (which can roll, pitch and yaw, even while “fixed” on a particular trajectory vector) and robot arms (which can rotate about multiple joints). Given that we’re not quantizing those (yet?), what rigid body should we have in mind as a serious candidate? Well, in parallel to the impressive engineering successes of the multi-qubit and oscillator paradigms, physicists and chemists have made substantial progress in trapping and cooling molecules. If a trapped molecule is cold enough, its vibrational and electronic states can be neglected, and its rotational states form exactly the rigid body we are interested in. Such rotational states, as far as we can tell, are not in the realm of Avengers-style science fiction.

Superpositions of molecular orientations don’t violate the Deutsch proposition.

The idea to use molecules for quantum computing dates all the way back to a 2001 paper by Dave DeMille, but in a recent paper by Jacob Covey, John Preskill, and myself, we propose a framework of how to utilize the large space of molecular orientations to protect against (you guessed it!) a type of local noise. In the second part of the story, called “Quantum Error Correction with Molecules“, I will cover a particular concept that is not only useful for a proper error-correcting code (classical and quantum), but also one that is quite fun to try and understand. The concept is based on a certain kind of tiling, called Voronoi tiles or Thiessen polygons, which can be used to tile anything from your bathroom floor to the space of molecular orientations. Stay tuned!

Putting back the pieces of a broken hologram

It is Monday afternoon and the day seems to be a productive one, if not yet quite memorable. As I revise some notes on my desk, Beni Yoshida walks into my office to remind me that the high-energy physics seminar is about to start. I hesitate, somewhat apprehensive of the near-certain frustration of being lost during the first few minutes of a talk in an unfamiliar field. I normally avoid such a situation, but in my email I find John’s forecast for an accessible talk by Daniel Harlow and a title with three words I can cling onto. “Quantum error correction” has driven my curiosity for the last seven years. The remaining acronyms in the title will become much more familiar in the four months to come.

Most of you are probably familiar with holograms, these shiny flat films representing a 3D object from essentially any desired angle. I find it quite remarkable how all the information of a 3D object can be printed on an essentially 2D film. True, the colors are not represented as faithfully as in a traditional photograph, but it looks as though we have taken a photograph from every possible angle! The speaker’s main message that day seemed even more provocative than the idea of holography itself. Even if the hologram is broken into pieces, and some of these are lost, we may still use the remaining pieces to recover parts of the 3D image or even the full thing given a sufficiently large portion of the hologram. The 3D object is not only recorded in 2D, it is recorded redundantly!

Left to right: Beni Yoshida, Aleksander Kubica, Aidan Chatwin-Davies and Fernando Pastawski discussing holographic codes.

Halfway through Daniel’s exposition, Beni and I exchange a knowing glance. We recognize a familiar pattern from our latest project, a pattern which has gained the moniker of “cleaning lemma” within the quantum information community and which can be thought of as a quantitative analog of reconstructing the 3D image from pieces of the hologram. Daniel makes connections using a language that we are familiar with. Beni and I discuss what we have understood and how to make it more concrete as we stride back through campus. We scribble diagrams on the whiteboard and string words such as tensor, encoder, MERA and negative curvature into our discussion. An image from the web gives us some intuition on the latter. We are onto something. We have a model. It is simple. It is new. It is exciting.

Poincaré projection of a regular pentagon tiling of negatively curved space.

Food has not come our way so we head to my apartment as we enthusiastically continue our discussion. I can only provide two avocados and some leftover pasta but that is not important, we are sharing the joy of insight. We arrange a meeting with Daniel to present our progress. By Wednesday Beni and I introduce the holographic pentagon code at the group meeting. A core for a new project is already there, but we need some help to navigate the high-energy waters. Who better to guide us in such an endeavor than our mentor, John Preskill, who recognized the importance of quantum information in holography as early as 1999 and has repeatedly proven himself a master of both trades.

“I feel that the idea of holography has a strong whiff of entanglement—for we have seen that in a profoundly entangled state the amount of information stored locally in the microscopic degrees of freedom can be far less than we would naively expect. For example, in the case of the quantum error-correcting codes, the encoded information may occupy a small ‘global’ subspace of a much larger Hilbert space. Similarly, the distinct topological phases of a fractional quantum Hall system look alike locally in the bulk, but have distinguishable edge states at the boundary.”
-J. Preskill, 1999

As Beni puts it, the time for using modern quantum information tools in high-energy physics has come. By this he means quantum error correction and maybe tensor networks. First privately, then more openly, we continue to sharpen and shape our project. Through conferences, Skype calls and emails, we further our discussion and progressively shape ideas. Many speculations mature to conjectures and fall victim to counterexamples. Some stand the test of simulations or are even promoted to theorems by virtue of mathematical proofs.

Beni Yoshida presenting our work at a quantum entanglement conference in Puerto Rico.

I publicly present the project for the first time at a select quantum information conference in Australia. Two months later, after a particularly intense writing, revising and editing process, the article is almost complete. As we finalize the text and relabel the figures, Daniel and Beni unveil our work to quantum entanglement experts in Puerto Rico. The talks are a hit and it is time to let all our peers read about it.

You are invited to do so and Beni will even be serving a reader’s guide in an upcoming post.