How should a visitor to Zürich spend her weekend?

Launch this question at a Swiss lunchtable, and you split diners into two camps. To take advantage of Zürich, some say, visit Geneva, Lucerne, or another spot outside Zürich. Other locals suggest museums, the lake, and the 19^{th}-century ETH building.

ETH, short for a German name I’ve never pronounced, is the polytechnic from which Einstein graduated. The polytechnic houses a quantum-information (QI) theory group that’s pioneering ideas I’ve blogged about: single-shot information, epsilonification, and small-scale thermodynamics. While visiting the group this August, I triggered an avalanche of tourism advice. Caught between two camps, I chose Option Three: Contemplate polar codes.

Polar codes compress information into the smallest space possible. Imagine you write a message (say, a Zürich travel guide) and want to encode it in the fewest possible symbols (so it fits in my camera bag). The longer the message, the fewer code symbols you need per message symbol: the more punch each code letter can pack. As the message grows, the ratio of the code's length to the message's length decreases. The lowest possible ratio is a number, represented by *H*, called the *Shannon entropy*.

So established Claude E. Shannon in 1948. But Shannon didn’t know how to code at efficiency *H*. Not for 51 years did anyone figure out how.
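For readers who like to see the bound in action, here is a minimal Python sketch (my illustration, not from the original post) that computes the Shannon entropy of a message's empirical character distribution, in bits per character. Assuming each character is drawn i.i.d. from that distribution, no lossless code can average fewer bits per character.

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Shannon entropy H, in bits per character, of the message's
    empirical character distribution (assumed i.i.d.)."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

guide = "take the tram to grossmuenster and climb the tower"
H = shannon_entropy(guide)
print(f"H = {H:.2f} bits per character")
```

No ideal compressor can beat `H` on average, so a guide of `n` characters needs at least about `n * H` bits.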

I learned how, just before that weekend. ETH student David Sutter walked me through polar codes as though down Zürich’s *Bahnhofstrasse*.

Say you’re encoding *n* copies of a random variable. When I say, “random variable,” think, “character in the travel guide.” Just as each character is one of 26 letters, each variable has one of many possible values.

Suppose the variables are *independent and identically distributed*. Even if you know some variables’ values, you can’t guess others’. Cryptoquote players might object that we can infer unknown from known letters. For example, a three-letter word that begins with “th” likely ends with “e.” But our message lacks patterns.

Think of the variables as diners at my lunchtable. Asking how to fill a weekend in Zürich—splitting the diners—I resembled the *polarizer*.

The polarizer is a mathematical object that sounds like an Arnold Schwarzenegger film and acts on the variables. Just as some diners pointed me outside Zürich, the polarizer gives some variables one property. Just as other diners pointed me to spots within Zürich, the polarizer gives some variables another property. Just as I pointed myself at polar codes, the polarizer gives some variables a third property.

These properties involve *entropy*. Entropy quantifies uncertainty about a variable’s value: about which of the 26 letters a character represents. Even if you know the early variables’ values, you can’t guess the later variables’. But we can guess some polarized variables’ values. Call the first polarized variable *u₁*, the second *u₂*, etc. If we can guess the value of some *uᵢ*, that *uᵢ* has *low entropy*. If we can’t guess the value, *uᵢ* has *high entropy*. The Nicole-esque variables have entropies like the earnings of *Terminator Salvation*: noteworthy but not chart-topping.
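One polarizing step can be sketched concretely. The standard polar transform maps two i.i.d. bits (*X₁*, *X₂*) to (*U₁*, *U₂*) = (*X₁* ⊕ *X₂*, *X₂*). Because the map is invertible, total entropy is conserved, so if one output's entropy rises, the other's must fall. A short Python sketch (my illustration, not from the post) for a Bernoulli source:

```python
from math import log2

def h(p: float) -> float:
    """Binary entropy, in bits, of a coin that lands heads with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.1                        # each source bit equals 1 with probability 0.1
H = h(p)                       # entropy of one unpolarized bit
q = 2 * p * (1 - p)            # P(U1 = 1), where U1 = X1 XOR X2
H_u1 = h(q)                    # "bad" branch: entropy rises
H_u2_given_u1 = 2 * H - H_u1   # "good" branch: entropy falls,
                               # since the invertible map preserves H(X1) + H(X2)

print(H_u2_given_u1, H, H_u1)  # increasing order: good branch < source < bad branch
```

Repeating this step recursively on each branch drives the entropies toward the extremes 0 and 1, which is the polarization the post describes.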

To recap: We want to squeeze a message into the tiniest space possible. Even if we know early variables’ values, we can’t infer later variables’. Applying the polarizer, we split the variables into low-, high-, and middling-entropy flocks. We can guess the value of each low-entropy *uᵢ*, if we know the foregoing *uₕ*’s.

Almost finished!

In your camera-size travel guide, transcribe the high-entropy *uᵢ*’s. These *uᵢ*’s suggest the values of the low-entropy *uᵢ*’s. When you want to decode the guide, guess the low-entropy *uᵢ*’s. Then reverse the polarizer to reconstruct much of the original text.

The longer the original travel guide, the fewer errors you make while decoding, and the smaller the ratio of the encoded guide’s length to the original guide’s length. That ratio shrinks, as the guide’s length grows, to *H*. You’ve compressed a message maximally efficiently. As the Swiss say: *Glückwünsche.*
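How quickly does polarization set in as the guide grows? For the binary erasure channel the one-step polar recursion on erasure probabilities happens to be exact, so a few lines of Python (a sketch under that assumption, not the post's own construction) show nearly every synthetic variable becoming nearly certain or nearly random:

```python
def polarize(eps: float, n: int) -> list:
    """Erasure probabilities of the 2**n synthetic channels obtained by
    n polarization steps on a binary erasure channel BEC(eps).
    For the BEC the one-step recursion is exact: the 'bad' branch
    erases with probability 2e - e**2, the 'good' branch with e**2."""
    probs = [eps]
    for _ in range(n):
        probs = [p for e in probs for p in (2 * e - e * e, e * e)]
    return probs

eps = 0.3
for n in (1, 5, 10):
    probs = polarize(eps, n)
    extreme = sum(1 for p in probs if p < 0.01 or p > 0.99) / len(probs)
    print(f"n={n:2d}: fraction of (nearly) polarized channels = {extreme:.2f}")
# As n grows, almost every channel becomes nearly perfect or nearly useless;
# the useless fraction tends toward eps, mirroring the compression rate H.
```

The average erasure probability stays fixed at `eps` at every step, while the individual values flee to the extremes: that flight is polarization.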

How does compression relate to QI? Quantum states form messages. Polar codes, ETH scientists have shown, compress quantum messages maximally efficiently. Researchers are exploring decoding strategies and relationships among (quantum) polar codes. With their help, Shannon-coded travel guides might fit not only in my camera bag, but also on the tip of my water bottle.

Should you need a Zürich travel guide, I recommend Grossmünster Church. Not only does the name fulfill your daily dose of umlauts. Not only did Ulrich Zwingli channel the Protestant Reformation into Switzerland there. Climbing a church tower affords a panorama of Zürich. After oohing over the hills and ahhing over the lake, you can shift your gaze toward ETH. The worldview being built there bewitches as much as the vista from any tower.

*With gratitude to ETH’s QI-theory group (particularly to Renato Renner) for its hospitality. And for its travel advice. With gratitude to David Sutter for his explanations and patience.*
