Ken Wilson passed away on June 15 at age 77. He changed how we think about physics.
Renormalization theory, first formulated systematically by Freeman Dyson in 1949, cured the flaws of quantum electrodynamics and turned it into a precise computational tool. But the subject seemed magical and mysterious. Many physicists, Dirac prominently among them, questioned whether renormalization rests on a sound foundation.
Wilson changed that.
The renormalization group concept arose in an extraordinary paper by Gell-Mann and Low in 1954. It was embraced by Soviet physicists like Bogoliubov and Landau, and invoked by Landau to challenge the consistency of quantum electrodynamics. But it was an abstruse and inaccessible topic, as is well illustrated by the baffling discussion at the very end of the two-volume textbook by Bjorken and Drell.
Wilson changed that, too.
Ken Wilson turned renormalization upside down. Dyson and others had worried about the “ultraviolet divergences” occurring in Feynman diagrams. They introduced an artificial cutoff on integrations over the momenta of virtual particles, then tried to show that all the dependence on the cutoff can be eliminated by expressing the results of computations in terms of experimentally accessible quantities. It required great combinatoric agility to show this trick works in electrodynamics. In other theories, notably including general relativity, it doesn’t work.
Wilson adopted an alternative viewpoint. Take the short-distance cutoff seriously, he said, regarding it as part of the physical formulation of the field theory. Now ask what physics looks like at distances much larger than the cutoff. Wilson imagined letting the short-distance cutoff grow, while simultaneously adjusting the theory to preserve its low-energy predictions. This procedure sounds complicated, but Wilson discovered something wonderful — for the purpose of computing low-energy processes the theory becomes remarkably simple, completely characterized by just a few (renormalized) parameters. One recovers Dyson’s results plus much more, while also acquiring a rich and visually arresting physical picture of what is going on.
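The flavor of Wilson's procedure can be seen in the simplest solvable example. In the one-dimensional Ising model, summing over every other spin doubles the lattice spacing (the cutoff) while renormalizing the nearest-neighbor coupling K, with the exact recursion tanh K' = tanh² K. A few lines of Python (a toy illustration of the block-spin idea, not anything from Wilson's papers) display the flow:

```python
import math

def decimate(K):
    """One block-spin step for the 1D Ising chain: integrating out
    every other spin gives the exact recursion tanh(K') = tanh(K)^2."""
    return math.atanh(math.tanh(K) ** 2)

K = 1.0  # dimensionless nearest-neighbor coupling at the original cutoff
couplings = [K]
for _ in range(8):
    K = decimate(K)
    couplings.append(K)

print(couplings)  # K shrinks monotonically under the flow
```

Here K flows to the weak-coupling fixed point, reflecting the absence of a phase transition in one dimension; in two or more dimensions an unstable fixed point at finite K marks the critical point.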
When I started graduate school in 1975, Wilson, not yet 40, was already a legend. Even Sidney Coleman, for me the paragon of razor-sharp intellect, seemed to regard Wilson with awe. (They had been contemporaries at Caltech, both students of Murray Gell-Mann.) It enhanced the legend that Wilson had been notoriously slow to publish. He spent years pondering the foundations of quantum field theory before finally unleashing a torrent of revolutionary papers in the early 70s. Cornell had the wisdom to grant him tenure despite his unusually low productivity during the 60s.
As a student, I spent countless hours struggling through Wilson’s great papers, some of which were quite difficult. One introduced me to the operator product expansion, which became a workhorse of high-energy scattering theory and the foundation of conformal field theory. Another considered all the possible ways that renormalization group fixed points could control the high-energy behavior of the strong interactions. Conspicuously missing from the discussion was what turned out to be the correct idea — asymptotic freedom. Wilson had not overlooked this possibility; instead he “proved” it to be impossible. The proof contains a subtle error. Wilson analyzed charge renormalization invoking both Lorentz covariance and positivity of the Hilbert space metric, forgetting that gauge theories admit no gauge choice with both properties. Even Ken Wilson made mistakes.
Wilson also formulated the strong-coupling expansion of lattice gauge theory, and soon after pioneered the Euclidean Monte Carlo method for computing the quantitative non-perturbative predictions of quantum chromodynamics, which remains today an extremely active and successful program. But of the papers by Wilson I read while in graduate school, the most exciting by far was this one about the renormalization group. Toward the end of the paper Wilson discussed how to formulate the notion of the “continuum limit” of a field theory with a cutoff. Removing the short-distance cutoff is equivalent to taking the limit in which the correlation length (the inverse of the renormalized mass) is infinitely long compared to the cutoff — the continuum limit is a second-order phase transition. Wilson had finally found the right answer to the decades-old question, “What is quantum field theory?” And after reading his paper, I knew the answer, too! This Wilsonian viewpoint led to further deep insights mentioned in the paper, for example that an interacting self-coupled scalar field theory is unlikely to exist (i.e. have a continuum limit) in four spacetime dimensions.
Wilson’s mastery of quantum field theory led him to another crucial insight in the 1970s which has profoundly influenced physics in the decades since — he denigrated elementary scalar fields as unnatural. I learned about this powerful idea from an inspiring 1979 paper not by Wilson, but by Lenny Susskind. That paper includes a telltale acknowledgment: “I would like to thank K. Wilson for explaining the reasons why scalar fields require unnatural adjustments of bare constants.”
Susskind, channeling Wilson, clearly explains a glaring flaw in the standard model of particle physics — ensuring that the Higgs boson mass is much lighter than the Planck (i.e., cutoff) scale requires an exquisitely careful tuning of the theory’s bare parameters. Susskind proposed to banish the Higgs boson in favor of Technicolor, a new strong interaction responsible for breaking the electroweak gauge symmetry, an idea I found compelling at the time. Technicolor fell into disfavor because it turned out to be hard to build fully realistic models, but Wilson’s complaint about elementary scalars continued to drive the quest for new physics beyond the standard model, and in particular bolstered the hope that low-energy supersymmetry (which eases the fine-tuning problem) will be discovered at the Large Hadron Collider. Both dark energy (another fine-tuning problem) and the absence so far of new physics beyond the Higgs boson at the LHC are prompting some soul-searching about whether naturalness is really a reliable criterion for evaluating success in physical theories. Could Wilson have steered us wrong?
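The arithmetic behind the fine-tuning complaint is stark. Schematically, the physical mass-squared is the bare mass-squared minus a loop correction of order the cutoff squared, so the bare parameter must cancel against the cutoff to fantastic precision. A back-of-envelope sketch (my own illustration, with the loop coefficient set to one as an assumption):

```python
# Schematic Higgs naturalness problem: m_phys^2 = m_bare^2 - c * Lambda^2,
# with c an O(1) loop coefficient (taken to be 1 for this illustration).

M_PLANCK = 1.2e19   # cutoff scale in GeV, assumed to be the Planck scale
M_HIGGS = 125.0     # observed Higgs mass in GeV

# Fractional precision to which m_bare^2 must be tuned against Lambda^2
# so that the difference comes out at the electroweak scale:
tuning = (M_HIGGS / M_PLANCK) ** 2
print(f"required tuning: 1 part in {1 / tuning:.1e}")
```

The required cancellation is roughly one part in 10^34, the "cosmic dartboard" precision recalled in a comment below.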
Wilson’s great legacy is that we now regard nearly every quantum field theory as an effective field theory. We don’t demand or expect that the theory will continue working at arbitrarily short distances. At some stage it will break down and be replaced by a more fundamental description. This viewpoint is now so deeply ingrained in how we do physics that today’s students may be surprised to hear it was not always so. More than anyone else, we have Ken Wilson to thank for this indispensable wisdom. Few ideas have changed physics so much.
Pingback: Effective « Asymptotia
We don’t demand or expect that the theory will continue working at arbitrarily short distances. <– Is this also true for QCD?
Or at least the “expect” part. I would think that since QCD is asymptotically free, one could expect it to work at arbitrarily short distances, no?
Yes, mathematically, since QCD is asymptotically free, we expect to be able to reach the continuum limit as the bare coupling approaches zero (though this has never been proven rigorously). Physically, we don’t demand that the theory apply at arbitrarily short distances compared to the confinement scale, as we anticipate that new physics kicks in somewhere, maybe the Planck scale.
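To put a number on the asymptotic-freedom statement: at one loop the QCD coupling shrinks logarithmically at high momentum, which is why the bare coupling can be taken to zero in the continuum limit. A quick sketch (my illustration, assuming five quark flavors and the standard input value alpha_s(M_Z) ≈ 0.118):

```python
import math

def alpha_s(Q, alpha_mz=0.118, mz=91.19, nf=5):
    """One-loop QCD running coupling at momentum scale Q (in GeV).
    The one-loop coefficient b0 = 11 - 2*nf/3 is positive, so the
    coupling decreases as Q grows -- asymptotic freedom."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_mz / (1.0 + b0 * alpha_mz / (2.0 * math.pi) * math.log(Q / mz))

for Q in (10.0, 91.19, 1000.0, 1.0e16):
    print(f"alpha_s({Q:g} GeV) = {alpha_s(Q):.4f}")
```

Run the other way, toward low Q, the coupling grows and perturbation theory fails near the confinement scale, which is where the lattice takes over.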
Pingback: Kenneth Wilson 1936-2013 | Not Even Wrong
Wilson later on (see http://arxiv.org/abs/hep-lat/0412043v2 pages 10-11) described his naturalness argument against elementary scalars as a “blunder”.
” The claim was that it would be unnatural for such particles to have masses small enough to be detectable soon. But this claim makes no sense when one becomes familiar with the history of physics.”
Thanks, Peter. I had not seen that paper before and it’s very interesting!
“Ken Wilson turned renormalization upside down.” It was a bad idea. It has frozen the wrongness of the theory. Dirac was right: our interaction is wrong technically and conceptually; that is why we need to “repair” the corresponding results. I demonstrated it on a toy model here: http://arxiv.org/abs/1110.3702
I am not a Wilsonian. I am myself.
“equivalent to taking the limit in which the correlation length… is infinitely long compared to the correlation length”. Typo, presumably?
Thanks. Fixed it.
Pingback: Kenneth Geddes Wilson (June 8, 1936 – June 15, 2013) was an American theoretical… | Die wunderbare Welt von Isotopp
Ken was my thesis advisor at the time, and so I asked him where he’d written down the naturalness idea. He replied with a smile, “Oh, that was in my paper about all possible theories of the strong interaction, … except the correct one” and pointed me to the relevant paragraph in his 1971 article “Renormalization Group and Strong Interactions” (discussed above) where he mentions the problem of quadratic mass renormalizations for scalar fields.
In my thesis, I recalled his description of setting a 1 GeV scalar mass scale at the Planck scale: Imagine throwing darts at some cosmic dartboard … you’re just not going to be able to hit with 1 part in 10^38 precision.
At Cornell, we’d only learned a week earlier from his wife Alison that he hadn’t been doing well for the past year, and then over the weekend “Ken died last evening. He always liked to do things quietly and without fuss, and that’s how he left us.”
Thanks. I had not remembered that this 1971 paper states the naturalness issue clearly (on page 1840). Wilson remarks: “It is interesting to note that there are no weakly coupled scalar particles in nature; scalar particles are the only kind of free particles whose mass term does not break either an internal or a gauge symmetry.”
Right, for him a dimension 2 operator in 4d automatically meant a ratio of scales squared, hence the (cutoff/scale)^2 above.
Besides his work on the renormalization group, the relation between statistical mechanical systems and quantum field theory, etc., I also mentioned to reporters a few of his other pursuits in computing and networks. After inventing lattice gauge theory in 1974, he didn’t have adequate computing power for the numerical work he wanted. Even with Moore’s law, he estimated it would be too long a wait for adequate single processors, so he pushed for easy ways to use large numbers of processors in parallel (this was the late 70s) for a variety of naturally parallelizable problems. He bought his own Floating Point Systems parallel processor array, and since there weren’t yet parallelizing compilers he would optimize the code directly in assembly language (as described in the article by Shenker and Tobochnik http://prb.aps.org/abstract/PRB/v22/i9/p4462_1 — I remember seeing them sitting in the grass discussing). These ideas anticipated the Beowulf clusters of the mid-to-late 1990s (commodity Linux arrays for research purposes), and now of course massively parallel arrays are commonplace, with extensive software support (MapReduce/Hadoop).
He also promoted, in the late 1970s, the need for physicists to be able to travel from institute to institute without having to learn a new computer operating system at each place (in the pre-Apple/pre-PC days, every new computer had its own custom operating system), and instead to have a uniform interface and network access to the computer at their home institution.
He was on the NSF taskforce that pushed for the implementation of the early NSFNet, and I was told by George Strawn (one of the people on the NSF side shepherding the process) that Ken was the key person who argued for using the TCP/IP (i.e., internet) protocol, rather than the DECnet protocol favored by many of the other physicists, and we know where that has led… (George also told me that the absolutely essential people who moved the NSFNet through the senate and house, respectively, were … Al Gore, Jr and Newt Gingrich)
Cornell is planning a commemorative event, likely to take place in the Oct/Nov timeframe.
p.s. had intended to link to a photo to recall the state of computing resources from that period:
finally got around to writing up the after-dinner talk here: arXiv:1407.1855
Pingback: How Quantum Field Theory Becomes “Effective” | Sean Carroll
Pingback: Kenneth G. Wilson (1936-2013) » Mano Singham
Pingback: Rapidinhas | Not A Science Blog
Frankly, I do not know anything about Wilson’s method. I have two questions. I understand that lattice gauge calculations are numerical. Is Wilson’s method useful for getting approximate analytic answers in the non-perturbative case, i.e., actual numbers one can compare with experiments, rather than just high-energy behavior? In the renormalizable perturbative case, is it easier than the usual Feynman-diagram method? Does it give approximately the same answers? I would appreciate comments. Thanks.
Pingback: Del átomo al Higgs X: La libertad asintótica y la Cromodinámica Cuántica | Una vista circular
Pingback: Linking covariant and canonical LQG: new solutions to the Euclidean Scalar Constraint by Alesci, Thiemann, and Zipfel | quantumtetrahedron
The following article by Ken Wilson is commended to mathematicians especially (a Google search finds it on-line):
Our research in varietal dynamics and transport phenomena borrows extensively from the multiple-scale physics viewpoint that Wilson presents in this article.
Pingback: To become a good teacher, ignore everything you’re told and learn from the masters (part 4 of 4) | Quantum Frontiers
Pingback: The Physics of PageRank | Continuous Deformation
Pingback: Why is the hierarchy problem a problem? | Page 2
Reblogged this on Quaerere Propter Vērum and commented:
Some truth in Wilsonian Physics…
Pingback: Naturalness: dimensionless ratios | Physics Forums
Pingback: Visualizing the Attached EM Field for Free Electron in QFT | Physics Forums
Pingback: What is the basic scheme of quantum field theories? | Page 2 | Physics Forums
Pingback: Kenneth Wilson, RIP - IND2906