Holography and the MERA

The AdS/MERA correspondence has been making the rounds of the blogosphere with nice posts by Scott Aaronson and Sean Carroll, so let’s take a look at the topic here at Quantum Frontiers.

The question of how to formulate a quantum theory of gravity is a long-standing open problem in theoretical physics. Somewhat recently, an idea that has gained a lot of traction (and that Spiros has blogged about before) is emergence. This is the idea that space and time may emerge from some more fine-grained quantum objects and their interactions. If we could understand how classical spacetime emerges from an underlying quantum system, then it’s not too much of a stretch to hope that this understanding would give us insight into the full quantum nature of spacetime.

One type of emergence is exhibited in holography, which is the idea that certain (D+1)-dimensional systems with gravity are exactly equivalent to D-dimensional quantum theories without gravity. (Note that we’re calling time a dimension here. For example, you would say that on a day-to-day basis we experience D = 4 dimensions.) In this case, that extra +1 dimension and the concomitant gravitational dynamics are emergent phenomena.

A nice aspect of holography is that it is explicitly realized by the AdS/CFT correspondence. This correspondence proposes that a particular class of spacetimes—ones that asymptotically look like anti-de Sitter space, or AdS—are equivalent to states of a particular type of quantum system—a conformal field theory, or CFT. A convenient visualization is to draw the AdS spacetime as a cylinder, where time marches forward as you move up the cylinder and different slices of the cylinder correspond to snapshots of space at different instants of time. Conveniently, in this picture you can think of the corresponding CFT as living on the boundary of the cylinder, which, you should note, has one less dimension than the “bulk” inside the cylinder.


Even within this nice picture of holography that we get from the AdS/CFT correspondence, there remains the question of how exactly CFT (boundary) quantities map onto quantities in the AdS bulk. This is where tools from quantum information theory called tensor networks have recently shown a lot of promise.

A tensor network is a way to efficiently represent certain states of a quantum system. Moreover, tensor networks have nice graphical representations, which look something like this:

[Image: a MERA tensor network]

Beni discussed one type of tensor network in his post on holographic codes. In this post, let’s discuss the tensor network shown above, which is known as the Multiscale Entanglement Renormalization Ansatz, or MERA.

The MERA was initially developed by Guifre Vidal and Glen Evenbly as an efficient approximation to the ground state of a CFT. Roughly speaking, in the picture of a MERA above, one starts with a simple state at the centre, and as you move outward through the network, the MERA tells you how to build up a CFT state which lives on the legs at the boundary. The MERA caught the eye of Brian Swingle, who noticed that it looks an awful lot like a discretization of a slice of the AdS cylinder shown above. As such, it wasn’t a preposterously big leap to suggest a possible “AdS/MERA correspondence.” Namely, perhaps it’s more than a simple coincidence that a MERA both encodes a CFT state and resembles a slice of AdS. Perhaps the MERA gives us the tools that are required to construct a map between the boundary and the bulk!
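
To make the layer-by-layer picture concrete, here is a deliberately toy sketch in Python/NumPy (my own illustration, not code from the paper, and not a faithful MERA: the disentangler tensors and the variational optimization are omitted, and the tensors are random). It only shows the structural point above: starting from a simple tensor at the centre, each layer of isometries doubles the number of boundary legs while preserving the state's norm.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # local (bond) dimension

def random_isometry(d_out, d_in):
    # The first d_in columns of a Haar-random unitary form an isometry w with w^dagger w = identity.
    z = rng.normal(size=(d_out, d_out)) + 1j * rng.normal(size=(d_out, d_out))
    q, _ = np.linalg.qr(z)
    return q[:, :d_in]                    # shape (d_out, d_in)

# "Top" tensor: a normalized state on a single site at the centre of the network.
state = rng.normal(size=d) + 1j * rng.normal(size=d)
state /= np.linalg.norm(state)

n_sites = 1
for layer in range(3):
    w = random_isometry(d * d, d)         # maps one site to two sites
    W_layer = w
    for _ in range(n_sites - 1):          # act on every site in parallel
        W_layer = np.kron(W_layer, w)
    state = W_layer @ state               # the boundary state grows outward
    n_sites *= 2
    print(f"layer {layer + 1}: {n_sites} boundary sites, "
          f"norm = {np.linalg.norm(state):.6f}")   # isometries preserve the norm
```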

So, how seriously should one take the possibility of an AdS/MERA correspondence? That’s the question that my colleagues and I addressed in a recent paper. Essentially, there are several properties that a consistent holographic theory should satisfy in both the bulk and the boundary. We asked whether these properties are still simultaneously satisfied in a correspondence where the bulk and boundary are related by a MERA.

What we found was that you invariably run into inconsistencies between bulk and boundary physics, at least in the simplest construals of what an AdS/MERA correspondence might be. This doesn’t mean that there is no hope for an AdS/MERA correspondence. Rather, it says that the simplest approach will not work. For a good correspondence, you would need to augment the MERA with some additional structure, or perhaps consider different tensor networks altogether. For instance, the holographic code features a tensor network which hints at a possible bulk/boundary correspondence, and the consistency conditions that we proposed are a good list of checks for Beni and company as they work out the extent to which the code can describe holographic CFTs. Indeed, a good way to summarize how our work fits into the picture of quantum gravity alongside holography and tensor networks is by saying that it’s nice to have good signposts on the road when you don’t have a map.

Mingling stat mech with quantum info in Maryland

I felt like a yoyo.

I was standing in a hallway at the University of Maryland. On one side stood quantum-information theorists. On the other side stood statistical-mechanics scientists.* The groups eyed each other, like Jets and Sharks in West Side Story, except without fighting or dancing.

This March, the groups were generous enough to host me for a visit. I parked first at QuICS, the Joint Center for Quantum Information and Computer Science. Established in October 2014, QuICS had moved into renovated offices the previous month. QuICSland boasts bright colors, sprawling armchairs, and the scent of novelty. So recently had QuICS arrived that the restroom had not acquired toilet paper (as I learned later than I’d have preferred).

Interaction space

Photo credit: QuICS

From QuICS, I yoyo-ed to the chemistry building, where Chris Jarzynski’s group studies fluctuation relations. Fluctuation relations, introduced elsewhere on this blog, describe out-of-equilibrium systems. A system is out of equilibrium if large-scale properties of it change. Many systems operate out of equilibrium—boiling soup, combustion engines, hurricanes, and living creatures, for instance. Physicists want to describe nonequilibrium processes but have trouble: Living creatures are complicated. Hence the buzz about fluctuation relations.

My first Friday in Maryland, I presented a seminar about quantum voting for QuICS. The next Tuesday, I was to present about one-shot information theory for stat-mech enthusiasts. Each week, the stat-mech crowd invites its speaker to lunch. Chris Jarzynski recommended I invite QuICS. Hence the Jets-and-Sharks tableau.

“Have you interacted before?” I asked the hallway.

“No,” said a voice. QuICS hadn’t existed till last fall, and some QuICSers hadn’t had offices till the previous month.**

Silence.

“We’re QuICS,” volunteered Stephen Jordan, a quantum-computation theorist, “the Joint Center for Quantum Information and Computer Science.”

So began the mingling. It continued at lunch, which we shared at three circular tables we’d dragged into a chain. The mingling continued during the seminar, as QuICSers sat with chemists, materials scientists, and control theorists. The mingling continued the next day, when QuICSer Alexey Gorshkov joined my discussion with the Jarzynski group. Back and forth we yoyo-ed, between buildings and topics.

“Mingled,” said Yigit Subasi. Yigit, a postdoc of Chris’s, specialized in quantum physics as a PhD student. I’d asked how he thinks about quantum fluctuation relations. Since Chris and colleagues ignited fluctuation-relation research, theorems have proliferated like vines in a jungle. Everyone and his aunty seems to have invented a fluctuation theorem. I canvassed Marylanders for bushwhacking tips.

Imagine, said Yigit, a system whose state you know. Imagine a gas, whose temperature you’ve measured, at equilibrium in a box. Or imagine a trapped ion. Begin with a state about which you have information.

Imagine performing work on the system “violently.” Compress the gas quickly, so the particles roil. Shine light on the ion. The system will leave equilibrium. “The information,” said Yigit, “gets mingled.”

Imagine halting the compression. Imagine switching off the light. Combine your information about the initial state with assumptions and physical laws.*** Manipulate equations in the right way, and the information might “unmingle.” You might capture properties of the violence in a fluctuation relation.


With Zhiyue Lu and Andrew Maven Smith of Chris Jarzynski’s group (left) and with QuICSers (right)

I’m grateful to have exchanged information in Maryland, to have yoyo-ed between groups. We have work to perform together. I have transformations to undergo.**** Let the unmingling begin.

With gratitude to Alexey Gorshkov and QuICS, and to Chris Jarzynski and the University of Maryland Department of Chemistry, for their hospitality, conversation, and camaraderie.

*Statistical mechanics is the study of systems that contain vast numbers of particles, like the air we breathe and white dwarf stars. I harp on about statistical mechanics often.

**Before QuICS’s birth, a future QuICSer had collaborated with a postdoc of Chris’s on combining quantum information with fluctuation relations.

***Yes, physical laws are assumptions. But they’re glorified assumptions.

****Hopefully nonviolent transformations.

Generally speaking

My high-school calculus teacher had a mustache like a walrus’s and shoulders like a rower’s. At 8:05 AM, he would demand my class’s questions about our homework. Students would yawn, and someone’s hand would drift into the air.

“I have a general question,” the hand’s owner would begin.

“Only private questions from you,” my teacher would snap. “You’ll be a general someday, but you’re not a colonel, or even a captain, yet.”

Then his eyes would twinkle; his voice would soften; and, after the student asked the question, his answer would epitomize why I’ve chosen a life in which I use calculus more often than laundry detergent.


Many times though I witnessed the “general” trap, I fell into it once. Little wonder: I relish generalization as other people relish hiking or painting or Michelin-worthy relish. When inferring general principles from examples, I abstract away details as though they’re tomato stains. My veneration of generalization led me to quantum information (QI) theory. One abstract theory can model many physical systems: electrons, superconductors, ion traps, etc.

Little wonder that generalizing a QI model swallowed my summer.

QI has shed light on statistical mechanics and thermodynamics, which describe energy, information, and efficiency. Models called resource theories describe small systems’ energies, information, and efficiencies. Resource theories help us calculate a quantum system’s value—what you can and can’t create from a quantum system—if you can manipulate systems in only certain ways.

Suppose you can perform only operations that preserve energy. According to the Second Law of Thermodynamics, systems evolve toward equilibrium. Equilibrium amounts roughly to stasis: Averages of properties like energy remain constant.

Out-of-equilibrium systems have value because you can suck energy from them to power laundry machines. How much energy can you draw, on average, from a system in a constant-temperature environment? Technically: How much “work” can you draw? We denote this average work by ⟨W⟩. According to thermodynamics, ⟨W⟩ equals the change ∆F in the system’s Helmholtz free energy. The Helmholtz free energy is a thermodynamic property similar to the energy stored in a coiled spring.
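
For reference, here are the textbook relations being invoked (a reminder on my part, not new content from the post), with the convention that extracted work counts as positive, for an isothermal process at temperature T:

```latex
F = \langle E \rangle - T S ,
\qquad
\langle W \rangle = F_{\mathrm{initial}} - F_{\mathrm{final}} .
```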


One reason to study thermodynamics?

Suppose you want to calculate more than the average extractable work. How much work will you probably extract during some particular trial? Though statistical physics offers no answer, resource theories do. One answer derived from resource theories resembles ∆F mathematically but involves one-shot information theory, which I’ve discussed elsewhere.

If you average this one-shot extractable work, you recover ⟨W⟩ = ∆F. “Helmholtz” resource theories recapitulate statistical-physics results while offering new insights about single trials.

Helmholtz resource theories sit atop a silver-tasseled pillow in my heart. Why not, I thought, spread the joy to the rest of statistical physics? Why not generalize thermodynamic resource theories?

The average work ⟨W⟩ extractable equals ∆F if heat can leak into your system. If heat and particles can leak, ⟨W⟩ equals the change in your system’s grand potential. The grand potential, like the Helmholtz free energy, is a free energy that resembles the energy in a coiled spring. The grand potential characterizes Bose-Einstein condensates, low-energy quantum systems that may have applications to metrology and quantum computation. If your system responds to a magnetic field, or has mass and occupies a gravitational field, or has other properties, ⟨W⟩ equals the change in another free energy.
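
Schematically, and with the same caveat about sign conventions as above, the grand potential just adds a term for particle exchange with an environment at chemical potential μ:

```latex
\Phi = \langle E \rangle - T S - \mu \langle N \rangle ,
\qquad
\langle W \rangle = \Phi_{\mathrm{initial}} - \Phi_{\mathrm{final}} .
```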

A collaborator and I designed resource theories that describe heat-and-particle exchanges. In our paper “Beyond heat baths: Generalized resource theories for small-scale thermodynamics,” we propose that different thermodynamic resource theories correspond to different interactions, environments, and free energies. I detailed the proposal in “Beyond heat baths II: Framework for generalized thermodynamic resource theories.”

“II” generalizes enough to satisfy my craving for patterns and universals. “II” generalizes enough to merit a hand-slap of a pun from my calculus teacher. We can test abstract theories only by applying them to specific systems. If thermodynamic resource theories describe situations as diverse as heat-and-particle exchanges, magnetic fields, and polymers, some specific system should shed light on resource theories’ accuracy.

If you find such a system, let me know. Much as generalization pleases aesthetically, the detergent is in the details.

The Graphene Effect

Spyridon Michalakis, Eryn Walsh, Benjamin Fackrell, Jackie O'Sullivan

Lunch with Spiros, Eryn, and Jackie at the Athenaeum (left to right).

Sitting and eating lunch in the room where Einstein and many others of turbo-charged, ultra-powered acumen sat and ate lunch excites me. So, I was thrilled when lunch was arranged for the teachers participating in IQIM’s Summer Research Internship at the famed Athenaeum on Caltech’s campus. Spyridon Michalakis (Spiros), Jackie O’Sullivan, Eryn Walsh and I were having lunch when I asked Spiros about one of the renowned “Millennium” problems in mathematical physics I heard he had solved. He told me about his 18-month epic journey (surely an extremely condensed version) to solve a problem pertaining to the quantum Hall effect. Understandably, within this journey lay many trials and tribulations, ranging from feelings of self-loathing and pessimistic resignation to the tragic disappointment of realizing that a victory celebration was much ado about nothing because the solution wasn’t correct. It was an unveiling of one’s true humanity and of the lengths a person can push themselves to find a solution. Three points struck me from this conversation. First, there’s a necessity for a love of the pain that tends to accompany a dogged determination to find a solution. Secondly, the idea that a person’s humanity is exposed, at least to some degree, when accepting a challenge of this caliber and then refusing to accept failure, with an almost supernatural steadfastness toward a solution. Lastly, the quantum Hall effect. The first two on the list are ideas I often ponder as a teacher and student, and they probably lend themselves to more of a philosophical discussion, which I do find very interesting but which will not be the focus of this post.

The Yeh research group, which I gratefully have been allowed to join the last three summers, researches (among other things) different applications of graphene, encompassing the growth of graphene, high-efficiency graphene solar cells, graphene component fabrication, and strain engineering of graphene, where, coincidentally for the latter, the quantum Hall effect takes center stage. The quantum Hall effect now had my attention, and I felt it necessary to learn something, anything, about this recently recurring topic. The quantum Hall effect is something I had put very little thought into, and if you are like I was, you’ve heard about it but surely couldn’t explain even the basics to someone. I now know something on the subject and, hopefully, after reading this post you too will know something about the very basics of both the classical and the quantum Hall effect, and maybe experience a spark of interest regarding graphene’s fascinating ability to display the quantum Hall effect in a magnetic-field-free environment.

Let’s start at the beginning with the Hall effect. Edwin Herbert Hall discovered the appropriately named effect in 1879. The Hall element in the diagram is a flat piece of conducting metal with a longitudinal current running through it. When a magnetic field is introduced normal to the Hall element, the charge carriers moving through the Hall element experience a Lorentz force. If we think of the current as being conventional (the direction positive charges would flow), then the electrons (negative charge carriers) are traveling in the opposite direction of the green arrow shown in the diagram. Referring to the diagram and using the right-hand rule, you can conclude that electrons build up at the long bottom edge of the Hall element running parallel to the longitudinal current, with an opposing positively charged edge at the long top edge of the Hall element. This separation of charge produces a transverse potential difference, labeled on the diagram as the Hall voltage (VH). Once the electric force (acting towards the positively charged edge, perpendicular to both current and magnetic field) from the charge buildup balances the Lorentz force (opposing the electric force), the result is a negative charge carrier with a straight-line trajectory in the opposite direction of the green arrow. Essentially, the Hall conductance is the longitudinal current divided by the Hall voltage.
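
As a quick numerical illustration (my own ballpark numbers, not from the post): balancing the electric and Lorentz forces gives a Hall voltage V_H = IB/(nqt) for a strip of thickness t and carrier density n, and the Hall conductance is the longitudinal current divided by that voltage.

```python
# Classical Hall effect, ballpark numbers for a thin copper strip (illustrative only).
I = 1.0           # longitudinal current, A
B = 1.0           # magnetic field normal to the strip, T
t = 1e-4          # strip thickness, m (0.1 mm)
n = 8.5e28        # carrier density of copper, electrons per m^3
q = 1.602e-19     # elementary charge, C

V_H = I * B / (n * q * t)   # Hall voltage from balancing electric and Lorentz forces
G_hall = I / V_H            # Hall conductance = longitudinal current / Hall voltage
print(f"V_H ~ {V_H:.2e} V, Hall conductance ~ {G_hall:.2e} S")
```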

Now, let’s take a look at the quantum Hall effect. On February 5th, 1980, Klaus von Klitzing was investigating the Hall effect, in particular the Hall conductance of a two-dimensional electron gas (2DEG) at very low temperatures, around 4 Kelvin (about −452 degrees Fahrenheit). von Klitzing found that when a magnetic field is applied normal to the 2DEG, and the Hall conductance is graphed as a function of magnetic field strength, a staircase-looking graph emerges. The discovery, which earned von Klitzing the Nobel Prize in 1985, was as unexpected as it is intriguing. For each step in the staircase, the value of the conductance was an integer multiple of e²/h, where e is the elementary charge and h is Planck’s constant. Since conductance is the reciprocal of resistance, we can view this data as resistances of h/(ie²). When i (the integer labeling each plateau) equals one, h/(ie²) is approximately 26,000 ohms and serves as a superior standard of electrical resistance used worldwide to maintain and compare the unit of resistance.
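
The plateau values follow from the fundamental constants alone; this snippet (mine, just a sanity check of the numbers quoted above) prints the first few quantized resistances h/(ie²).

```python
# Quantized Hall resistance plateaus, h / (i * e^2).
h = 6.62607015e-34     # Planck's constant, J*s (exact in the 2019 SI)
e = 1.602176634e-19    # elementary charge, C (exact in the 2019 SI)

R_K = h / e**2         # von Klitzing constant, ~25,812.8 ohms ("approximately 26,000 ohms")
for i in (1, 2, 3, 4):
    print(f"i = {i}: R = {R_K / i:,.1f} ohms")
```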

Before discussing where graphene and the quantum Hall effect cross paths, let’s examine some extraordinary characteristics of graphene. Graphene is truly an amazing material for many reasons. We’ll look at size and scale things up a bit for fun. Graphene is one carbon atom thick: 0.345 nanometers (0.000000000345 meters). Envision a one-square-centimeter graphene sheet, which is now regularly grown. Imagine, somehow, we could thicken the monolayer graphene sheet to the thickness of a piece of printer paper (0.1 mm) while appropriately scaling up the area it covers. The graphene sheet that originally covered only one square centimeter would now cover an area of about 2900 meters by 2900 meters, or roughly 1.8 miles by 1.8 miles. A paper-thin sheet covering roughly 3 square miles. The Royal Swedish Academy of Sciences at nobelprize.org has an interesting way of scaling the tiny up to everyday experience. They want you to picture a one-square-meter hammock made of graphene suspending a 4 kg cat, which represents the maximum weight such a sheet of graphene could support. The hammock would be nearly invisible, would weigh as much as one of the cat’s whiskers, and, incredibly, would possess the strength to keep the cat suspended. If it were possible to make the exact hammock out of steel, its maximum load would be less than 1/100 the weight of the cat. Graphene is more than 100 times stronger than the strongest steel!
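
A quick back-of-the-envelope check of that scaling (using the numbers above; the mile conversion is mine):

```python
# Blow a 1 cm x 1 cm, one-atom-thick graphene sheet up to printer-paper thickness,
# scaling the edge lengths by the same factor as the thickness.
thickness_graphene = 0.345e-9    # m (one carbon atom)
thickness_paper = 0.1e-3         # m (a sheet of printer paper)
edge = 0.01                      # m (1 cm)

scale = thickness_paper / thickness_graphene      # ~2.9e5
new_edge_m = edge * scale                         # ~2,900 m
new_edge_miles = new_edge_m / 1609.34             # ~1.8 miles
area_sq_miles = new_edge_miles ** 2               # ~3 square miles
print(f"scale ~ {scale:.2e}, edge ~ {new_edge_m:.0f} m ~ {new_edge_miles:.1f} mi, "
      f"area ~ {area_sq_miles:.1f} sq mi")
```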

Graphene sheets possess many fascinating characteristics certainly not limited to mere size and strength. Experiments are being conducted at Caltech to study the electrical properties of graphene when draped over a field of gold nanoparticles, a discipline appropriately termed “strain engineering.” The peaks and valleys that form create strain in the graphene sheet, changing its electrical properties. The greater the curvature of the graphene over the peaks, the greater the strain. The electrons in graphene in regions experiencing strain behave as if they are in a magnetic field despite the fact that they are not. The electrons in regions experiencing the greatest strain behave as they would in extremely strong magnetic fields exceeding 300 tesla. For some perspective, the largest magnetic field ever created has been near 100 tesla, and it only lasted for a few milliseconds. Additionally, graphene sheets under strain experience conductance plateaus very similar to those observed in the quantum Hall effect. This allows for great control of electrical properties by simply deforming the graphene sheet, effectively changing the amount of strain. The pseudo-magnetic field generated at room temperature by mere deformation of graphene is an extremely promising and exotic property that is bound to make graphene a key component in a plethora of future technologies.

Graphene and its incredibly fascinating properties make it very difficult to think of an area of technology where it won’t have a huge impact once incorporated. Caltech is at the forefront in research and development for graphene component fabrication, as well as the many aspects involved in the growth of high quality graphene. This summer I was involved in the latter and contributed a bit in setting up an experiment that will attempt to grow graphene in a unique way. My contribution included the set-up of the stepper motor (pictured to the right) and its controls, so that it would very slowly travel down the tube in an attempt to grow a long strip of graphene. If Caltech scientist David Boyd and graduate student Chen-Chih Hsu are able to grow the long strips of graphene, this will mark yet another landmark achievement for them and Caltech in graphene research, bringing all of us closer to technologies such as flexible electronics, synthetic nerve cells, 500-mile range Tesla cars and batteries that allow us to stream Netflix on smartphones for weeks on end.

Caltech InnoWorks 2014, More Than Just a Summer Camp

“More, we need more!”

Adding more fuel to “Red October”, I presented the final product to my teammates. With a communal nod of approval, we rushed over to the crowd.

“1, 2, 3, GO!”

It was the semi-finals. Teams Heil Hydra! and The Archimedean Hawks ignited their engines and set their vehicles onto the starting line. Nascar? F1? Nope, even better. Homemade steamboat races! Throughout the cheers and yelling, we discovered that more isn’t better. Flames were devouring Team Heil Hydra!’s Red October. Down went the ship. Despite the loss, the kids learned about steam as a source of energy, experimentation, and teamwork. Although it may have been hard to tell the first day, by the end of this fourth day of the camp, all students were visibly excited for another day of the InnoWorks summer program at Caltech.


What is InnoWorks? An engaging summer program aimed at middle school students from disadvantaged backgrounds, InnoWorks offers a free-of-charge opportunity to dig into the worlds of science, technology, engineering, mathematics, and medicine (STEM^2). In his own life, William Hwang (founder of InnoWorks) was blessed with the opportunity to attend several summer camps throughout his childhood, but he had a friend who did not share the same opportunities. Sparked with the desire to start something, Hwang founded the non-profit organization United InnoWorks Academy. With the first program beginning in 2004, the InnoWorks Academy developed these summer programs to help provide underprivileged kids with hands-on experiments, team-building activities, and fast-paced competitive missions. Starting with just 34 students and 17 volunteers in a single chapter, InnoWorks has now grown to more than a dozen university chapters that have hosted over 60 summer programs for 2,200 middle school students, all with the help of more than 1,000 volunteers.

Monday, August 11th, 2014 marked the first day of Caltech’s 3rd annual summer InnoWorks program. Last year, my younger brother participated in the program and had such a great experience that he wanted to become a junior mentor this year. After researching the program and listening to my brother’s past experiences, I was ecstatic to accept this journey as a mentor for Caltech’s InnoWorks program. Allow me to take you on a ride through my team’s and my own experience of InnoWorks.

First Day of Caltech InnoWorks 2014. The first team member I checked in was Elliot. “Are you ready for InnoWorks!?” Perhaps I was a little overly excited. I received a shrug and “What’re we eating for breakfast?” Not the response I was hoping for, but that was going to change. As the rest of my team, which included Frank, Megan, Ethan, and my junior mentor, Elan, arrived, I began peppering them with icebreakers left and right. Soon enough, we dubbed ourselves Heil Hydra! and by the end of the second day, I couldn’t get them to be quiet.

“What are we doing next?”

“Guys, GUYS! Let’s use the green pom-poms as chloroplasts.”

“Hey, the soap actually smells good.”

“Hm. If you add another rubber band, the cup won’t vibrate as much, and it makes a lower sound.”

Sometimes they would have endless questions, which was great! Isn’t that what science is all about?


Most of the days during camp were themed with a specific subject, including biology, chemistry, physics, and engineering. Before each activity, both mentors and junior mentors gave a brief, prepared introduction to the science used during the experimentation. Here’s a quick synopsis of some of the activities and the students’ experiences:

Camera Obscura. After a short explanation of light and how a lens works, we split the room into three groups to build their very own camera obscura, an optical device that projects an image of its surroundings onto a screen (or, in our case, the ground). Using a mirror, a magnifying glass, some PVC piping, and a black tarp, the kids constructed a camera obscura. I was impressed by how many students endured the heat of the black tarp and concrete, all in the name of science.

Build Your Own Instrument. The title says it all. I let my junior mentor, Elan, lead the group in this activity. Tasked with creating an instrument judged on the accurate pitch of three whole-note tones, creativity, efficiency, and performance, the students went straight to work. Children have endless imaginations. Give kids PVC pipes, rubber bands, balloons, cups, and paper clips, and they’ll make everything! Working together, the groups created an instrument (often more than one) to present in front of everyone. Teams were required to explain how their instrument created sound (vibrations) and to attempt to play “Mary Had a Little Lamb” (at which most succeeded). I came across paperclip rain sticks, PVC didgeridoos, test tube pan flutes, red solo cup drums, and even PVC balloon catapults and rubber band ballistas!


Liquid Nitrogen. One of the highlights of the camp was liquid nitrogen! We were very honored to have Glen Evenbly and Olivier Landon-Cardinal, IQIM postdocs, join us. After pouring the liquid into a bowl, Glen showed the kids how nitrogen gas enveloped the area. Liquid nitrogen’s efficiency as a coolant is limited by the fact that it boils immediately upon contact with a warmer object, surrounding the object with nitrogen gas on which the liquid surfs. This effect is known as the Leidenfrost effect, which applies to any liquid in contact with an object significantly hotter than its boiling point. 


However, liquid nitrogen is still extremely cold, and when roses were placed into the bowl with liquid nitrogen, the petals froze right before everyone’s eyes.

Lego Mindstorms. The last activity of the camp was building a Lego robot and programming it to track and follow a black tape trail using its light sensor. Since each of my team members had experience with these Lego kits, they went to work right away. Two of my students worked on building the robot, while the other two retrieved the pieces. After a while, they prompted each other to switch roles.


Programming the robot was a struggle, but manipulating the code and watching the aftermath was all part of the experiment. After many attempts, the group was unable to get the robot to accurately follow the black line (some groups were successful!). However, without any outside help (including from me), Team Heil Hydra! programmed the robot to move and sing (can you guess?) “Mary Had a Little Lamb”. Teamwork for the win! Team spirit bloomed in my group – each day of camp my InnoWorkers agreed on a matching t-shirt color. As a mentor, I could not have been more proud.


I know that I am not only speaking for myself when I say that the InnoWorks family, the students, and the program itself have burrowed their way into my heart. I have watched these students develop teamwork skills, enthusiasm for learning new things, and friendships. I have heard these students say barely a word on their first day, only to find that their chatterboxes wouldn’t stop on the last. To call InnoWorks just a science camp where students come to learn about science would be an understatement. InnoWorks is where students experience, engage in, and conduct science, where they learn not just about science, but also about collaboration, leadership, and innovation.

I must end on this last note: Heil Hydra! 

Editor’s Note: Ms. Rebekah Zhou is majoring in mathematics at CSU Fresno. In her spare time, she enjoys teaching piano and tutoring.

Reading the sub(linear) text

Physicists are not known for finesse. “Even if it cost us our funding,” I’ve heard a physicist declare, “we’d tell you what we think.” Little wonder I irked the porter who directed me toward central Cambridge.

The University of Cambridge consists of colleges as the US consists of states. Each college has a porter’s lodge, where visitors check in and students beg for help after locking their keys in their rooms. And where physicists ask for directions.

Last March, I ducked inside a porter’s lodge that bustled with deliveries. The woman behind the high wooden desk volunteered to help me, but I asked too many questions. By my fifth, her pointing at a map had devolved to jabbing.

Read the subtext, I told myself. Leave.

Or so I would have told myself, if not for that afternoon.

That afternoon, I’d visited Cambridge’s CMS, which merits every letter in “Centre for Mathematical Sciences.” Home to Isaac Newton’s intellectual offspring, the CMS consists of eight soaring, glass-walled, blue-topped pavilions. Their majesty walloped me as I turned off the road toward the gatehouse. So did the congratulatory letter from Queen Elizabeth II that decorated the route to the restroom.


I visited Nilanjana Datta, an affiliated lecturer of Cambridge’s Faculty of Mathematics, and her student, Felix Leditzky. Nilanjana and Felix specialize in entropies and one-shot information theory. Entropies quantify uncertainties and efficiencies. Imagine compressing many copies of a message into the smallest possible number of bits (units of memory). How few bits can you use per copy? That number, we call the optimal compression rate. It shrinks as the number of copies compressed grows. As the number of copies approaches infinity, that compression rate drops toward a number called the message’s Shannon entropy. If the message is quantum, the compression rate approaches the von Neumann entropy.

Good luck squeezing infinitely many copies of a message onto a hard drive. How efficiently can we compress fewer copies? According to one-shot information theory, the answer involves entropies other than Shannon’s and von Neumann’s. In addition to describing data compression, entropies describe the charging of batteries, the concentration of entanglement, the encrypting of messages, and other information-processing tasks.

Speaking of compressing messages: Suppose one-shot information theory posted status updates on Facebook. Suppose that that panel on your Facebook page’s right-hand side showed news weightier than celebrity marriages. The news feed might read, “TRENDING: One-shot information theory: Second-order asymptotics.”

Second-order asymptotics, I learned at the CMS, concerns how the optimal compression rate decays as the number of copies compressed grows. Imagine compressing a billion copies of a quantum message ρ. The number of bits needed about equals a billion times the von Neumann entropy H_vN(ρ). Since a billion is less than infinity, 1,000,000,000 H_vN(ρ) bits won’t suffice. Can we estimate the compression rate more precisely?

The question reminds me of gas stations’ hidden pennies. The last time I passed a station’s billboard, some number like $3.65 caught my eye. Each gallon cost about $3.65, just as each copy of ρ costs about H_vN(ρ) bits. But a 9/10, writ small, followed the $3.65. If I’d budgeted $3.65 per gallon, I couldn’t have filled my tank. If you budget H_vN(ρ) bits per copy of ρ, you can’t compress all your copies.

Suppose some station’s owner hatches a plan to promote business. If you buy one gallon, you pay $3.654. The more you purchase, the more the final digit drops from four. By cataloguing receipts, you calculate how a tank’s cost varies with the number of gallons, n. The cost equals $3.65 × n to a first approximation. To a second approximation, the cost might equal $3.65 × n + a√n, wherein a represents some number of cents. Compute a, and you’ll have computed the gas’s second-order asymptotics.

Nilanjana and Felix computed a’s associated with data compression and other quantum tasks. Second-order asymptotics met information theory when Strassen combined them in nonquantum problems. These problems developed under attention from Hayashi, Han, Polyanskiy, Poor, Verdú, and others. Tomamichel and Hayashi, as well as Li, introduced quantumness.

In the total-cost expression, $3.65 × n depends on n directly, or “linearly.” The second term depends on √n. As the number of gallons grows, so does √n, but √n grows more slowly than n. The second term is called “sublinear.”
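
In symbols (my own schematic paraphrase of this line of work, suppressing logarithmic corrections and technical conditions, not a formula quoted from Nilanjana and Felix): for compressing n copies of ρ with failure probability at most ε, the minimal number of bits behaves like

```latex
m^{*}(n,\varepsilon) \;\approx\; n\, H_{\mathrm{vN}}(\rho)
  \;+\; \sqrt{n\, V(\rho)}\; \Phi^{-1}(1-\varepsilon),
\qquad
V(\rho) \;=\; \operatorname{Tr}\!\left[ \rho \left( -\log\rho - H_{\mathrm{vN}}(\rho) \right)^{2} \right],
```

where Φ⁻¹ is the inverse of the standard Gaussian cumulative distribution function. The coefficient of √n plays the role of the a in the gas-station analogy.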

Which is the word that rose to mind in the porter’s lodge. I told myself, Read the sublinear text.

Little wonder I irked the porter. At least—thanks to quantum information, my mistake, and facial expressions’ contagiousness—she smiled.

 

 

With thanks to Nilanjana Datta and Felix Leditzky for explanations and references; to Nilanjana, Felix, and Cambridge’s Centre for Mathematical Sciences for their hospitality; and to porters everywhere for providing directions.

“Feveral kinds of hairy mouldy fpots”

The book had a sheepskin cover, and mold was growing on the sheepskin. Robert Hooke, a pioneering microbiologist, slid the cover under one of the world’s first microscopes. Mold, he discovered, consists of “nothing elfe but feveral kinds of fmall and varioufly figur’d Mufhroms.” He described the Mufhroms in his treatise Micrographia, a 1665 copy of which I found in “Beautiful Science.” An exhibition at San Marino’s Huntington Library, “Beautiful Science” showcases the physics of rainbows, the stars that enthralled Galileo, and the world visible through microscopes.


Beautiful science of yesterday: An illustration, from Hooke’s Micrographia, of the mold.

“[T]hrough a good Microfcope,” Hooke wrote, the sheepskin’s spots appeared “to be a very pretty fhap’d Vegetative body.”

How like a scientist, to think mold pretty. How like quantum noise, I thought, Hooke’s mold sounds.

Quantum noise hampers systems that transmit and detect light. To phone a friend or send an email—“Happy birthday, Sarah!” or “Quantum Frontiers has released an article”—we encode our message in light. The light traverses a fiber, buried in the ground, then hits a detector. The detector channels the light’s energy into a current, a stream of electrons that flows down a wire. The variations in the current’s strength are translated into Sarah’s birthday wish.

If noise doesn’t corrupt the signal. From encoding “Happy birthday,” the light and electrons might come to encode “Hsappi birthdeay.” Quantum noise arises because light consists of packets of energy, called “photons.” The sender can’t control how many photons hit the detector.

To send the letter H, we send about 10⁸ photons.* Imagine sending fifty H’s. When we send the first, our signal might contain 10⁸ − 153 photons; when we send the second, 10⁸ + 2,083; when we send the third, 10⁸ − 6; and so on. Receiving different numbers of photons, the detector generates different amounts of current. Different amounts of current can translate into different symbols. From H, our message can morph into G.
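
A tiny simulation of that spread (my own illustration; ideal laser light has Poissonian photon statistics, though the post doesn’t specify a noise model):

```python
# Shot noise in one line: if photon arrivals are (approximately) Poissonian,
# counts fluctuate around 10^8 with a spread of about sqrt(10^8) = 10^4 photons.
import numpy as np

rng = np.random.default_rng(1)
counts = rng.poisson(lam=1e8, size=50)     # photon counts for fifty H's
print(counts[:3] - 10**8)                  # deviations of order a few thousand
print(f"spread ~ {counts.std():.0f} photons")
```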

This spring, I studied quantum noise under the guidance of IQIM faculty member Kerry Vahala. I learned to model quantum noise, to quantify it, when to worry about it, and when not. From quantum noise, we branched into Johnson noise (caused by interactions between the wire and its hot environment); amplified-spontaneous-emission, or ASE, noise (caused by photons belched by ions in the fiber); beat noise (ASE noise breeds with the light we sent, spawning new noise); and excess noise (the “miscellaneous” folder in the filing cabinet of noise types).


Beautiful science of today: A microresonator—a tiny pendulum-like device—studied by the Vahala group.

Noise, I learned, has structure. It exhibits patterns. It has personalities. I relished studying those patterns as I relish sending birthday greetings while battling noise. Noise types, I see as a string of pearls unearthed in a junkyard. I see them as “pretty fhap[es]” in Hooke’s treatise. I see them—to pay a greater compliment—as “hairy mouldy fpots.”


*Optical-communications ballpark estimates:

  • Optical power: 1 mW = 10⁻³ J/s
  • Photon frequency: 200 THz = 2 × 10¹⁴ Hz
  • Photon energy: hν = (6.626 × 10⁻³⁴ J·s)(2 × 10¹⁴ Hz) ≈ 10⁻¹⁹ J
  • Bit rate: 1 Gb/s = 10⁹ bits/s
  • Number of bits per H: 10
  • Number of photons per H: (1 photon / 10⁻¹⁹ J)(10⁻³ J/s)(1 s / 10⁹ bits)(10 bits / 1 H) = 10⁸
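
The same arithmetic in a few lines of Python (just a check of the estimates above, with the same rounded values):

```python
# Ballpark photons-per-letter estimate for optical communication.
h = 6.626e-34          # Planck's constant, J*s
nu = 2e14              # photon frequency, Hz (200 THz)
power = 1e-3           # optical power, J/s (1 mW)
bit_rate = 1e9         # bits per second
bits_per_H = 10

photon_energy = h * nu                        # ~1.3e-19 J
photons_per_second = power / photon_energy    # ~7.5e15
photons_per_H = photons_per_second / bit_rate * bits_per_H
print(f"~{photons_per_H:.1e} photons per H")  # ~7.5e7, i.e. about 10^8
```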

 

An excerpt from this post was published today on Verso, the blog of the Huntington Library, Art Collection, and Botanical Gardens.

With thanks to Bassam Helou, Dan Lewis, Matt Stevens, and Kerry Vahala for feedback. With thanks to the Huntington Library (including Catherine Wehrey) and the Vahala group for the Micrographia image and the microresonator image, respectively.