As an undergraduate, I took Introduction to Algorithms from Ron Rivest. One of the topics he taught was the RSA public-key cryptosystem which he had created with Adi Shamir and Leonard Adleman. At the time, RSA was only about a decade old, yet already one of the banner creations of computer science. Today many of us rely on it routinely for the security of banking transactions. The internet would not be the same without it and its successors (such as elliptic curve cryptography, ECC). However, as you may have heard, quantum computation spells change for cryptography. Today I’ll tell a little of this story and talk about prospects for the future.

What is public-key cryptography (PKC)? The basic notion is due to Ralph Merkle in 1974 and (in a stronger form) to Whitfield Diffie and Martin Hellman in 1976. Their remarkable proposal was that two parties, “Alice” and “Bob”, could cooperate in cryptographic protocols, even if they had never met before. All prior cryptography, from the ancients up through and after the cryptographic adventures of WWII, had relied on the cooperating parties sharing in advance some “secret key” that gave them an edge over any eavesdropper “Eve”.

How can the Merkle-Diffie-Hellman idea be possible? Alice and Bob wish to share a secret, but they have never met or communicated previously, and Eve can read every message they send each other. How, then, can they learn anything from the communication that she does not?

The crux is computational complexity. It is true that Alice and Bob do not learn any more information than Eve does. But, because they get to “frame” the message exchange, they learn the information in a readily-digestible form, whereas Eve learns the information in a form that requires, apparently, exponential time to be converted to useful information. By then the information will be too stale to be useful, or the sun will have gone nova, whichever comes first.
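To make the "framing" trick concrete, here is a toy version of the Diffie-Hellman key exchange in Python. This is a standard classroom illustration, not anything from a real deployment; the modulus here is laughably small by modern standards.

```python
import random

# Public parameters: a prime modulus and a base.  (2**61 - 1 is a
# Mersenne prime; real systems use primes of 2048+ bits.)
p = 2**61 - 1
g = 3

# Alice and Bob each pick a private exponent...
a = random.randrange(2, p - 1)
b = random.randrange(2, p - 1)

# ...and exchange only g^a and g^b -- this is all Eve ever sees.
A = pow(g, a, p)
B = pow(g, b, p)

# Each side can now compute g^(ab) mod p.  Eve, holding only A and
# B, apparently needs to solve a discrete logarithm to catch up.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

Alice and Bob see exactly the same bits Eve does; the only difference is that each of them holds one private exponent, which makes the shared secret easy to compute rather than (apparently) exponentially hard.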

For their elegant cryptosystem carrying out this idea, Rivest, Shamir and Adleman shared the Turing Award in 2002. This is fitting, because Turing was a leader in the very successful cryptanalytic efforts in Great Britain during WWII. It is sad that (among other things) Turing did not live to see, and maybe be part of, these modern developments in cryptography—born in 1912, he should still have been vigorous and active when PKC came on the scene.

It is not an accident that PKC was invented in the first few years after computational complexity was born as a field and it was conjectured that “P is not equal to NP”. Public-key cryptography relies not only on this, the most famous conjecture in computer science, but on even stronger conjectures. Most of these seem very solid—for forty or more years now, mathematicians and computer scientists and others have worked hard at developing algorithms of all sorts, and progress on any of a multitude of fronts would have been a viable threat to the conjectures—with nothing to show for it. That is not enough to make the conjectures true, but it is (just as Laplace calculated odds that the sun would rise tomorrow) a reasonable basis for planning.

Not all conjectures live quietly to old age, however. The security of the RSA cryptosystem relies on the very particular conjecture that it is hard to factor large numbers. This conjecture received a rude, but exciting, surprise in 1994.

But let’s first roll back to 1981. That was when Richard Feynman—one of our favorite figures here at Caltech—first contemplated that a computer taking advantage of quantum mechanical effects might have significant computational advantages over a computer taking advantage only of classical mechanics. Feynman had won the Nobel Prize way back in 1965 for work on completely different topics, but this late-career one-off article is one of his greatest insights. (In case you don’t follow these things, the Nobel Prize is a well-known award that is given for spectacular work in fields that are not eligible for the Turing Award.)

In 1993, Umesh Vazirani and Ethan Bernstein read Feynman’s paper, understood the implications for complexity theory, and kicked off the field of quantum computation. Within a year, Peter Shor had shown that quantum computers can factor numbers efficiently.
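The quantum heavy lifting in Shor's algorithm is order-finding; the surrounding reduction from factoring is entirely classical. Here is a sketch of that reduction with a brute-force stand-in for the quantum step, so it only runs on toy numbers:

```python
import math
import random

def order(a, N):
    # Find the multiplicative order of a mod N by brute force.
    # This is the one step a quantum computer does efficiently
    # (via the quantum Fourier transform); classically it takes
    # time exponential in the number of digits of N.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N):
    # The classical reduction from factoring to order-finding.
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                  # got lucky: a shares a factor with N
        r = order(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            f = math.gcd(pow(a, r // 2, N) - 1, N)
            if 1 < f < N:
                return f

factor = shor_classical_part(15)
assert factor in (3, 5)
```

Swap the brute-force `order` for a quantum order-finder and the whole procedure factors N in polynomial time.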

Well, there goes the RSA cryptosystem! (And other popular ones, especially ECC.) If, of course, you can build quantum computers. Which we still can’t. But my physicist colleagues, both here at the IQIM and around the world, assure me that they’re working on it, “day and night, honest”. Meanwhile most of us, living on the edge, still use RSA and ECC, but we have to prepare for a transition.

Is there a cryptosystem we can use if and when someone does build a quantum computer? The first machine might be built in secrecy, so we have to be ready to transition to new methods whenever the lab work starts looking too promising.

Fortunately, yes, there are several promising candidates for what is fashionably called “post-quantum cryptography”. I’ll give a little list here. In all of these methods there is a parameter n that measures the size of the messages or other information being manipulated. In order that a cryptosystem merit the title “post-quantum”, I’ll require that it maintain the following two properties as n grows:

(a) The processing time required by the honest parties (Alice and Bob in our little story) is no more than n^c for some finite constant c. The honest parties only need to use a classical computer, such as those we use today.

(b) The processing time required by the attacker (Eve) is no less than 2^(n^ε) for some positive constant ε, even if she has access to a full-strength n-qubit quantum computer.

Ultimately we care about the values of c and ε and also other parameters, but today let’s keep it simple. Here is a list of some candidates, with a few notes on each.
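To make the asymmetry between (a) and (b) concrete, here is a quick numerical sketch. The particular constants (c = 3, ε = 1) are my own illustrative choices, not values from any real system, and for small n the constants can matter; the point is the asymptotic gap.

```python
# Illustrative constants: c = 3 for the honest parties' polynomial
# bound, eps = 1 for the attacker's exponential lower bound.
c, eps = 3, 1
for n in (16, 32, 64, 128):
    honest = n ** c             # grows polynomially: feasible for Alice and Bob
    attacker = 2 ** (n ** eps)  # grows exponentially: hopeless for Eve
    print(f"n={n}: honest {honest:.1e}, attacker {attacker:.1e}")
```

Already at n = 64 the attacker's bound exceeds the honest parties' by a factor of more than 10^13, and the gap only widens from there.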

(1) The McEliece cryptosystem (and variants), due to Caltech’s very own Bob McEliece in 1978. This is the first “code-based” cryptosystem: it relies on the hardness of decoding random linear error-correcting codes. Interestingly, a weakened version of the method was compromised in the late ’90s by an algorithm of Sendrier. But the full-strength cryptosystem is still secure against all known attacks.
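For the flavor of the thing, here is a toy McEliece built on the little [7,4] Hamming code, a standard classroom illustration. With sizes this tiny Eve can of course just try all seven error positions; real parameters use much longer codes with many deliberate errors.

```python
import numpy as np

# Generator and parity-check matrices of the [7,4] Hamming code
# (corrects any single bit error); all arithmetic is over GF(2).
G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
H = np.array([[0,1,1,1,1,0,0],
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]])

def gf2_inv(M):
    """Invert a square matrix over GF(2) by Gaussian elimination."""
    n = len(M)
    A = np.concatenate([M % 2, np.eye(n, dtype=int)], axis=1)
    for i in range(n):
        pivot = next(r for r in range(i, n) if A[r, i])  # raises if singular
        A[[i, pivot]] = A[[pivot, i]]
        for r in range(n):
            if r != i and A[r, i]:
                A[r] ^= A[i]
    return A[:, n:]

rng = np.random.default_rng(0)

# Key generation: hide G behind a random invertible scrambler S and
# a random column permutation.  The public matrix looks random, so
# an attacker faces (apparently hard) generic decoding.
while True:
    S = rng.integers(0, 2, (4, 4))
    try:
        S_inv = gf2_inv(S)
        break
    except StopIteration:
        continue
perm = rng.permutation(7)
pub = (S @ G % 2)[:, perm]

# Encryption: encode with the public matrix, then add a deliberate
# single-bit error.
m = np.array([1, 0, 1, 1])
e = np.zeros(7, dtype=int); e[rng.integers(7)] = 1
c = (m @ pub + e) % 2

# Decryption (knowing S and perm): undo the permutation, correct
# the error via the Hamming syndrome, then unscramble.
c0 = c[np.argsort(perm)]
syndrome = tuple(H @ c0 % 2)
if any(syndrome):
    bad = [tuple(H[:, j]) for j in range(7)].index(syndrome)
    c0[bad] ^= 1
m_rec = c0[:4] @ S_inv % 2
assert (m_rec == m).all()
```

The legitimate receiver's advantage is structural: she knows how to peel back the disguise to reach a code with a fast decoder, while the attacker sees only a random-looking generator matrix.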

(2) Lattice cryptosystems: these come in various flavors, due to works by Ajtai and Dwork, Regev and others. They rely on the hardness of basis reduction and related problems in random—and even adversarially chosen!—lattices. In some ways these cryptosystems are the most likely replacements for RSA/ECC. However, we have a nontrivial quantum algorithm, due to Kuperberg, that is fairly good at solving a problem that is, in turn, uncomfortably similar to cracking these cryptosystems. I’m glossing over key distinctions—all I want to say, here, is that it will take some time before we can gain conviction whether these distinctions are sufficient to ensure security.
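To give the flavor of the lattice family, here is a toy one-bit encryption scheme in the spirit of Regev's LWE-based system. The parameters are my own illustrative choices and are nowhere near secure.

```python
import random

# Toy parameters: modulus q, dimension n, number of public samples.
q, n, n_samples = 97, 8, 20

secret = [random.randrange(q) for _ in range(n)]

def sample():
    # A public sample: a random vector a and a *noisy* inner
    # product with the secret.  The small noise is what makes
    # recovering the secret (apparently) hard.
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])
    b = (sum(x * s for x, s in zip(a, secret)) + e) % q
    return a, b

pk = [sample() for _ in range(n_samples)]

def encrypt(bit):
    # Sum a random subset of the public samples, then shift by q/2
    # if the bit is 1.  Without the secret the sum looks random.
    subset = [p for p in pk if random.random() < 0.5]
    a_sum = [sum(a[i] for a, _ in subset) % q for i in range(n)]
    b_sum = (sum(b for _, b in subset) + bit * (q // 2)) % q
    return a_sum, b_sum

def decrypt(ct):
    a_sum, b_sum = ct
    # The accumulated noise stays small, so rounding recovers the bit.
    v = (b_sum - sum(x * s for x, s in zip(a_sum, secret))) % q
    return 0 if min(v, q - v) < q // 4 else 1

assert decrypt(encrypt(0)) == 0 and decrypt(encrypt(1)) == 1
```

The secret holder can cancel the inner product exactly and is left with only a small residue to round away; everyone else must separate signal from noise, which is where the lattice problems come in.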

(3) Multivariate quadratic (MQ) cryptosystems such as HFEv-, based on a method introduced by Patarin. The security of these systems relies on the hardness of solving systems of polynomial (but nonlinear) equations. This is a problem on which, so far, we know of absolutely no quantum-over-classical edge, which makes it a great research topic.
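The public-key operation in an MQ system is nothing more than evaluating a list of quadratic polynomials; the presumed-hard direction is inverting that map. A toy over GF(2), with brute-force inversion that only works at these tiny sizes:

```python
import random
from itertools import product

n_vars, n_polys, q = 6, 6, 2   # work over GF(2) for simplicity

def random_quadratic():
    # A random quadratic polynomial: coefficients for every
    # monomial x_i * x_j (i <= j), plus linear terms and a constant.
    quad = {(i, j): random.randrange(q)
            for i in range(n_vars) for j in range(i, n_vars)}
    lin = [random.randrange(q) for _ in range(n_vars)]
    return quad, lin, random.randrange(q)

def evaluate(poly, x):
    quad, lin, const = poly
    total = const + sum(l * xi for l, xi in zip(lin, x))
    total += sum(c * x[i] * x[j] for (i, j), c in quad.items())
    return total % q

system = [random_quadratic() for _ in range(n_polys)]
x = [random.randrange(q) for _ in range(n_vars)]
y = [evaluate(p, x) for p in system]    # the easy direction

# The hard direction: given `system` and `y`, find a preimage.
# Exhaustive search over GF(2)^n is feasible only at toy sizes.
solutions = [list(c) for c in product(range(q), repeat=n_vars)
             if [evaluate(p, list(c)) for p in system] == y]
assert x in solutions
```

In a real MQ scheme the polynomials are not truly random: they hide a trapdoor structure (as in HFEv-) that lets the key holder invert the map efficiently, while to everyone else the system looks like a generic, hard instance.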

(4) Since I started thinking about this subject I came up with another candidate. Cryptanalysts haven’t had much chance to look at it yet, so it hasn’t yet acquired any credibility; time will tell. Also, it is slow, but this may improve if a certain open question in coding theory can be solved. It is an MQ cryptosystem and also code-based, and it seems to be more secure than McEliece’s system against the main kinds of attack.

So the situation is encouraging, in that we have diverse options for the future. What this list cannot reveal, though, is how much trust we should place in the security of these proposals. Computer science today has no techniques to prove in absolute terms the security of any PKC. Even the best security proofs, such as for lattice methods, say merely that “cracking system A is at least as hard as solving some other problem B” where B is, a priori, more general or more adversarial. So trust in a cryptosystem is, like trust in people, mainly a function of the intensity and duration of our acquaintance; except that in the case of cryptosystems, we measure this acquaintance by how many people, and how smart and how dedicated and over how many years, have tried to attack the system. Come to think of it, trust in cryptosystems is simply trust in people.

And trust in post-quantum cryptosystems is trust in the people who study quantum algorithms. There are rather few of that species, and they have not been at work for very long. So we don’t understand quantum algorithms very well. Which means that we cannot yet put much trust in the post-quantum security of any cryptosystem.

I want to wrap up but I cannot resist filling you in a little on Alice and Bob. Remember Alice and Bob?—all of our efforts are dedicated to protecting their communications. Who are they? What have they to hide? I don’t know them, but fortunately, I can refer you to an unauthorized but definitive biography.
