The basic idea is this: we can measure things like the charge and mass of particles by making them collide in particle accelerators. But if we do this, something strange and interesting happens: unless we take into account some complicated corrections, the answers will depend on how hard we slam them into each other! The reason, roughly, is that every particle is surrounded by a cloud of virtual particles — and the harder we slam two particles into each other, the deeper into this cloud we see.

That was very vague. To make it precise takes some work. To do it in a limited amount of time, I'm gonna assume you vaguely know what a Lagrangian for a quantum field theory looks like, and that you know how different terms in the Lagrangian correspond to different sorts of particle interactions, which can be drawn as "vertices" of Feynman diagrams.

That sounds pretty technical. Luckily, you won't need to understand this
stuff in any detail — if you've read Feynman's cute little
popular book called *QED: The Strange Theory of Light and Matter*,
you're probably ready for what I'm about to say.

In what follows, you have to keep your eye on the parameters in the theory: I'm gonna keep shuffling them around, so to check that I'm not conning you, you have to make sure there's always the same number of 'em around — sort of like watching a magician playing a shell game. So make sure you see what we're starting with! Our Lagrangian has some numbers in it called "coupling constants", but our theory really has one more parameter: the cutoff scale D.

Now our Lagrangian has some coupling constants in it, but it's hard
to measure these directly. Even though they have names like
"mass", "charge" and so on, these parameters aren't what you
*directly* measure by colliding particles in an accelerator.
In fact, if you try to measure the charge of the electron (say)
by smashing two electrons into each other in an accelerator,
seeing how much they repel each other, and naively using the
obvious formula to determine their charge, the answer you get
will depend on their momenta in the center-of-mass frame — or
in other words, how hard you smashed them into each other.
The same is true for the electron mass and any other coupling
constants there are in the Lagrangian of our theory. They have a "bare"
value — the value that appears in the Lagrangian — and a
"physical" value — the value you measure by doing an experiment
and an obvious naive sort of calculation. The "physical"
values depend on the "bare" values, the cutoff D, and a momentum
scale p.

(Of course, we could cleverly try to use a less naive formula to determine the bare values of the coupling constants from experiment, but let's not do that — let's just use the stupid obvious formula that neglects the funky quantum effects that are making the physical values differ from the bare values! By being deliberately "naive" here, we're actually being very smart — as you'll eventually see.)

There are all sorts of games we can play now. The simplest, oldest game is this. We can measure the physical coupling constants at some momentum scale p, and then figure out which bare coupling constants would give these physical values — assuming some cutoff D. Then we can try to take a limit as D → 0, adjusting the bare coupling constants as we take the limit, in order to keep the predicted physical coupling constants at their experimentally determined values. This "continuum limit", if it exists, will be a theory without any shortest distance scale in it. That's very important if you think spacetime is a continuum!

This game is called "renormalization".
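Here's a little numerical sketch of this game. Everything in it is invented for illustration — the "running law" and the constant K below are toy formulas I made up, not taken from any real theory — but it shows the essential move: as we shrink the cutoff D, we keep re-tuning the bare coupling so the physical coupling at our measurement scale stays at its measured value, and then check that predictions at *other* scales settle down to a limit.

```python
import math

K = 0.1  # toy interaction strength (my choice, purely illustrative)

def physical(c_bare, D, Dprime):
    """Toy 'one-loop' running law (a made-up formula, not any real
    theory): the physical coupling at distance D', given the bare
    coupling at cutoff distance D."""
    return c_bare / (1.0 - K * c_bare * math.log(Dprime / D))

def bare(c_meas, Dprime, D):
    """Tune the bare coupling so the physical coupling at D' comes
    out equal to the measured value (this inverts the formula above)."""
    return c_meas / (1.0 + K * c_meas * math.log(Dprime / D))

# Suppose we measure the physical coupling to be 1.0 at distance
# D' = 1.0, and then shrink the cutoff D toward zero:
for D in [0.1, 0.001, 1e-6, 1e-9]:
    c_bare = bare(1.0, 1.0, D)
    # The bare coupling keeps changing as the cutoff shrinks, but the
    # prediction at a different distance D'' = 2.0 settles down:
    print(f"D = {D:8.0e}   bare = {c_bare:.4f}   "
          f"predicted coupling at D'' = 2.0: {physical(c_bare, D, 2.0):.6f}")
```

In this toy model the bare coupling slowly drifts toward zero as D → 0, while the prediction at D'' = 2.0 stays put — so the continuum limit exists and the game is won.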

Sometimes you win this game — and sometimes you lose. The main thing
to worry about is this: even if certain bare coupling constants
are *zero*, the corresponding physical coupling constants may be
*nonzero*. For example, if you start with a Lagrangian in which
the mass of some particle is zero, you might not have bothered
to include that mass among your bare coupling constants. But its physical
mass (measured at some momentum scale) can still be nonzero. In
this case, we say the particle "acquires a mass through its interactions
with other particles". This sort of thing happens all the time.

What this means is that to succeed in adjusting the bare coupling constants to fit the experimentally observed physical coupling constants, we need to start with a Lagrangian that has enough bare coupling constants to begin with. You can't expect to fit N numbers with fewer than N numbers!

So, if someone hands you a Lagrangian, you may have to stick in some extra terms with some extra bare coupling constants before playing the renormalization game. If you can succeed with only finitely many extra terms, you say your theory is "renormalizable". If you need infinitely many terms, you throw up your hands in despair and say the theory is "nonrenormalizable". A nonrenormalizable Lagrangian is like a hydra-headed monster that keeps needing more extra terms to be added the more you add.

Note: when we try to take the continuum limit, we don't care if the bare coupling constants do something screwy like go to infinity. All we care about is whether the experimental predictions of our theory converge. If even the bare coupling constants converge, we say our theory is "finite". But truly realistic theories usually aren't this nice.

If you want to see how to tell whether a given term in your Lagrangian is renormalizable or not, click here and I'll walk you through an example called the "4-fermion interaction". This stuff is a wee bit more technical... if you don't want things to get technical, just keep reading here!

So, let's recall what we've got. We have a quantum field theory described by a Lagrangian with a bunch of coupling constants in it — let's call them "bare" coupling constants. We can write all these bare coupling constants in a list and think of it as a vector: call it C. But to do calculations with this theory we need one more number, too: we need to ignore effects going on at length scales smaller than some distance D, called the "cutoff".

Now, starting from these numbers, we can compute the "physical" coupling constants at any momentum scale. For example, the measured charge of the electron depends on the momentum with which we collide two electrons. Another way to put it is that the physical coupling constants depend on a distance scale: for example, the measured charge of the electron depends on the distance at which you measure its charge. These two ways of thinking about it are equivalent, since using hbar and c we can freely convert between momentum and inverse distance.

Let's work with distance instead of momentum, and call the distance at which we measure the physical coupling constants D'.

So: if we know the "bare" coupling constants C and the cutoff D, we can compute the "physical" coupling constants C' at any distance scale D'. In short:

C' = f(C,D,D')

Now let's play the "renormalization group" game. In this game, we fix the bare coupling constants and the cutoff, and see how the physical coupling constants C' change as we vary the distance scale D' at which we measure them. It's fun to imagine turning a dial to adjust the distance scale D' and watching the physical coupling constants C' move around like a little dot in n-dimensional space, where n is the number of coupling constants. People draw pictures of this and speak of "running coupling constants" or the "renormalization group flow".

Note that we can play this game whether or not our field theory is renormalizable! In the last section I talked about a different game, called "renormalization". That game was all about letting the cutoff D go to zero. For "renormalizable" theories there's a nice way to do it, while for "nonrenormalizable" ones it's a real mess. But here we aren't letting D go to zero.

So what happens if we start with a nonrenormalizable theory and play
this "renormalization group" game? Our Lagrangian will typically have a
bunch of terms in it: some nasty ones that are making the theory
nonrenormalizable, and some nice ones that would give a renormalizable
theory if we just threw out the nasty ones. Each of these terms is
multiplied by a coupling constant. Now let's look at the
corresponding *physical* coupling constants as we crank up the
distance scale D'.

As we do this, the physical coupling constants in front of the nasty nonrenormalizable terms get smaller and smaller, approaching zero! At large distances, nonrenormalizable interactions become irrelevant!

This is an incredibly important fact, because it may explain why the
quantum field theory that seems to describe our world — the Standard
Model — is renormalizable. There may be all sorts of strange quantum
gravity stuff going on at very short distance scales — perhaps spacetime
is not even a continuum! But if at larger scales we assume that ordinary
quantum field theory on flat spacetime is a reasonably accurate
approximation to what's going on, then this renormalization group stuff
assures us that at still *larger* scales, nonrenormalizable interactions
are going to look very weak.

In fact, this may explain why gravity is so weak! If we treat quantum gravity perturbatively as a quantum field theory on flat spacetime, it's nonrenormalizable. If we assume the gravitational constant is reasonably large near the Planck scale, and we follow the renormalization group flow, we find that it's very small at macroscopic distance scales. In fact, we even get the right order of magnitude. But this isn't surprising: it's really just the magic of dimensional analysis.
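You can see the dimensional-analysis magic with a short calculation, using rounded values of the physical constants. The dimensionless strength of gravity between two protons — the gravitational analogue of the fine-structure constant α ≈ 1/137 — comes out fantastically small, precisely because Newton's constant carries units (length squared, in natural units) while the proton is much bigger than the Planck length:

```python
import math

# Rounded CODATA-style values:
G = 6.674e-11          # Newton's constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34      # reduced Planck constant, J s
C = 2.9979e8           # speed of light, m/s
M_PROTON = 1.6726e-27  # proton mass, kg

# Dimensionless strength of gravity between two protons:
alpha_grav = G * M_PROTON**2 / (HBAR * C)

# The Planck length: the scale at which gravity would be "reasonably large":
l_planck = math.sqrt(HBAR * G / C**3)

print(f"alpha_grav    ~ {alpha_grav:.2e}")    # ~ 6e-39
print(f"Planck length ~ {l_planck:.2e} m")    # ~ 1.6e-35 m
```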

This sort of idea goes back to Kenneth Wilson who won the Nobel prize in physics in 1982, for work he did around 1972 on the renormalization group and critical points in statistical mechanics. His ideas are now important not only in statistical mechanics but also in quantum field theory. For a nice short summary of the "Wilsonian philosophy of renormalization", let me paraphrase Peskin and Schroeder:

In Chapter 10 we took the philosophy that the distance cutoff D should be disposed of by taking the limit D → 0 as quickly as possible. We found that this limit gives well-defined predictions only if the Lagrangian contains no coupling constants with dimensions of length^{d} with d > 0. From this viewpoint, it seemed exceedingly fortunate that quantum electrodynamics, for example, contained no such coupling constants since otherwise this theory would not yield well-defined predictions.

Wilson's analysis takes just the opposite point of view, that any quantum field theory is defined fundamentally with a distance cutoff D that has some physical significance. In statistical mechanical applications, this distance scale is the atomic spacing. In quantum electrodynamics and other quantum field theories appropriate to elementary particle physics, the cutoff would have to be associated with some fundamental graininess of spacetime, perhaps the result of quantum fluctuations in gravity. We discuss some speculations on the nature of this cutoff in the Epilogue. But whatever this scale is, it lies far beyond the reach of present-day experiments. Wilson's arguments show that this circumstance explains the renormalizability of quantum electrodynamics and other quantum field theories of particle interactions. Whatever the Lagrangian of quantum electrodynamics was at the fundamental scale, as long as its couplings are sufficiently weak, it must be described at the energies of our experiments by a renormalizable effective Lagrangian.

In the last section I described the "renormalization group" game. Now I want to explain "ultraviolet and infrared fixed points" of the renormalization group, but first let me summarize what I already said. We have a quantum field theory described by a Lagrangian with a bunch of terms multiplied by numbers called "bare" coupling constants — we call the list of all of them C. We ignore effects going on at length scales smaller than some distance D called the "cutoff". And now we can compute stuff....

In particular, we can compute the so-called "physical" coupling constants C' as measured at any given length scale D'. And we can watch how C' changes as we slowly crank D' up. This is called the "renormalization group flow".

Various things can happen. I already said a bit about this: I said that for nonrenormalizable terms in the Lagrangian, the physical coupling constants shrink as we increase D'.

In fact we can say more: they scale roughly like D' to some negative power. If you're smart, you can even guess what this power is by staring at the term in question and doing some dimensional analysis! Using Planck's constant and the speed of light you can express all units in terms of length. If a particular bare coupling constant c in front of some term in the Lagrangian has dimensions of length to the power d, then the corresponding physical constant c' will scale roughly like D' to the power -d. More precisely:

c'/c ~ (D'/D)^{-d}

In particular, this term will be nonrenormalizable if d is greater than zero. (For a more thorough explanation of this criterion, and also some loopholes, click here.)
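Just to make the scaling law concrete, here are the three cases of c'/c ~ (D'/D)^{-d} plugged into a few lines of code (the specific numbers are arbitrary — only the trend matters):

```python
# Plugging numbers into c'/c ~ (D'/D)^(-d) for the three cases.
D = 1.0  # cutoff distance, arbitrary units
for d, label in [(+1, "nonrenormalizable   (d = +1)"),
                 ( 0, "renormalizable      (d =  0)"),
                 (-1, "superrenormalizable (d = -1)")]:
    # physical-to-bare ratio at increasing measurement distances D':
    ratios = [(Dprime / D) ** (-d) for Dprime in (1.0, 10.0, 100.0)]
    print(label, ratios)
```

For d = +1 the ratio shrinks toward zero as D' grows (the interaction fades away at large distances), for d = 0 it sits still, and for d = -1 it grows.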

Of course, another way to put this is that for nonrenormalizable
theories, the physical coupling constants *grow* as we *decrease*
D'. This is another way to see why nonrenormalizable theories
are "bad" — they involve interactions that get ridiculously
strong at short distance scales. Why is this bad? Well, it's
certainly bad if you're trying to do perturbation theory and
think of the interaction as a small perturbation. It may not
always be bad in any more profound sense, because there *are*
nonrenormalizable theories that are perfectly consistent,
mathematically speaking.

On the other hand, if d is less than zero we say our term
in the Lagrangian is "superrenormalizable". In this case the physical
coupling constant scales roughly like D' to some *positive* power.
In the same sense that nonrenormalizable theories are not nice,
superrenormalizable theories are super-nice.

Finally, for
"renormalizable" theories, the physical coupling constants scale
roughly like D' to the zeroth power — i.e., they're roughly constant.
They are right on the brink between nasty and nice. We actually
have to do a more careful analysis to see if they are nasty or
nice. For example, quantum electrodynamics is renormalizable,
but it turns out to be nasty: at first the charge of the electron
looks almost constant as we decrease D', but it actually grows —
logarithmically at first, but then faster and faster. On the
other hand, lots of nonabelian gauge theories are nice: the
coupling constant slowly *shrinks to zero* as we decrease D'.
We say they are "asymptotically free".

Now, to get ready for my explanation about what all this has to do with 2nd-order phase transitions, let's just introduce some concepts to help us tie all these ideas together. We've seen that sometimes when we keep making D' smaller and smaller, the physical coupling constants C' approach some particular value. I've just talked about the case when they approach zero, but other cases are important too! Whenever this sort of thing happens, we say the limiting value of C' is an "ultraviolet fixed point of the renormalization group". Here "ultraviolet" refers to the fact that we are looking at ever smaller distance scales.

Similarly, if C' approaches some value when D' keeps getting larger, we say that value is an "infrared fixed point".

For example, suppose we have a superrenormalizable or
asymptotically free theory with just one coupling constant.
Then as we keep making D' smaller, the physical coupling
constant approaches zero, so zero is an ultraviolet fixed
point. Of course "zero" here corresponds to a *free*
field theory with no interaction at all. So free theories
are ultraviolet fixed points of superrenormalizable or
asymptotically free theories. Similarly, free theories
are infrared fixed points of nonrenormalizable theories, and
certain renormalizable but nasty theories like quantum
electrodynamics.
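Here's a toy one-coupling flow that exhibits both kinds of fixed point at once. The beta function below is my own illustrative choice (it's shaped like the famous Wilson-Fisher flow, but the numbers mean nothing); the variable t is the logarithm of the distance scale D', so increasing t means zooming out toward the infrared:

```python
# Toy renormalization group flow for one coupling c (illustrative only):
#     dc/dt = eps*c - b*c**2,    t = log(D')
# Fixed points: c = 0 (ultraviolet) and c = eps/b (infrared).
eps, b = 0.5, 1.0

def flow(c0, t_steps=20000, dt=0.001):
    """Integrate the flow toward the infrared with a simple Euler step."""
    c = c0
    for _ in range(t_steps):
        c += dt * (eps * c - b * c * c)
    return c

# Very different starting couplings all flow to the same infrared
# fixed point c = eps/b = 0.5 — a baby version of "universality":
for c0 in [0.01, 0.2, 1.5]:
    print(c0, "->", flow(c0))
```

Run the flow backwards (toward short distances) and these trajectories instead drain into c = 0, the ultraviolet fixed point.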

Actually, *first* of all, what's a *first*-order
phase transition?

The most familiar examples are when ice melts or liquid water boils: we have two phases of matter, and the internal energy changes discontinuously as we go from one phase to another. But look at this phase diagram, which I borrowed from Scott Lanning:

    ^
    | \
    |  \   liquid
    |   \                X (critical point)
  P |    \              /
  R |     \            /
  E |solid \          /
  S |       \        /
  S |        \      /
  U |         \    /
  R |          \  /
  E |           \/         gas
    |           /
    |__________/_______________________________
                   TEMPERATURE -->

We see something interesting: the sharp boundary between liquid and gas phases fizzles out at a point called the "critical point". Above this point there is no real difference between a liquid and gas! This critical point is a "2nd-order phase transition", because while the internal energy doesn't change discontinuously there, its first derivative becomes infinite there.

Right at the critical point, something very cool happens: the system transforms in a simple way under scaling! What does this mean? Well, if you get some water right at the critical point, it looks "opalescent" like a moonstone. If you stare at it carefully, you'll see a bunch of liquid water droplets of all different sizes floating around in steam. However, if you look closely at any of these droplets, you see they are full of bubbles of steam, and if you look closely at the steam, you see it's full of little droplets of liquid! It's like a random fractal: no matter how closely you look, you see the same thing. You can't tell if you're looking at water droplets in steam or bubbles of steam in water, and there is no distinguished length scale... at least until you get down to the scale of atoms, that is.

Building on insights due to Landau, Kadanoff and others, Wilson realized that you could come up with a very precise theory of critical points by taking advantage of this symmetry under change of scale. In particular, this theory lets us understand so-called "critical exponents".

To explain this, let me switch to a simpler example of a critical point. Consider a ferromagnet like a crystal of iron. At temperatures above a certain point called the Curie temperature, the iron will not be magnetized. But as we cool it below the Curie point the spins of certain electrons in the atoms will line up and the iron will become magnetized. If there is an external magnetic field around when we cool the iron below the Curie temperature, the spins will line up with this magnetic field. Suppose the magnetic field points along the z axis - either up or down. Then we have the following phase diagram:

    ^
    |
    |
  M |
  A |          magnetized up
  G |
  N |
  E |
  T |----------TEMPERATURE-->---------X        unmagnetized
  I |                              (critical
  C |                                point)
    |
  F |          magnetized down
  I |
  E |
  L |
  D |

The sharp boundary between the "up" and "down" magnetized phases fizzles out at the Curie temperature. The Curie temperature is a critical point! Right at this critical point the magnet displays symmetry under scaling. If we look at the atoms in the crystal lattice and see which ones are "spin-up" and which ones are "spin-down", at the critical point we see regions of spin up and regions of spin down, but all these regions are speckled with smaller regions of the opposite type, and so on... on down to the length scale set by the crystal lattice itself.

To describe this scaling symmetry a bit more mathematically, let's simplify things a bit and imagine that for each point x in the crystal lattice we have a variable s(x) which equals 1 if that atom is spin-up and -1 if it's spin-down. When the crystal is in thermal equilibrium this variable keeps randomly flipping sign, so we can think of it as a random variable. This means we can talk about its mean, standard deviation and stuff like that.

When the external magnetic field is zero, the mean of s(x) is zero:

<s(x)> = 0

because each atom has a 50-50 chance of being spin-up or spin-down.
This isn't particularly interesting. What's interesting is the
mean of the *product* of s(x) and s(y) for two different points in
the lattice, x and y:

<s(x)s(y)>

This is called a "2-point function". It measures the *correlation* of
spins at different points in the lattice, since it equals 1 if the two
spins always point the same way and 0 if they are completely uncorrelated.

The 2-point function only depends on the distance between x and y. Away from the critical point it decays exponentially with distance (at least when the external magnetic field is zero), and this exponential decay determines a special length scale called the "correlation length":

<s(x)s(y)> ~ exp(-|x-y|/L)
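For the Ising model in one dimension, this 2-point function can be computed exactly — it's a standard textbook result: ⟨s(x)s(y)⟩ = tanh(J/kT)^{|x-y|} at zero external field, which is exactly an exponential decay with correlation length L = -1/ln tanh(J/kT). (One caveat: the 1D chain only reaches its critical point at zero temperature, so it illustrates the off-critical exponential decay, not critical scaling.)

```python
import math

def two_point(r, K):
    """Exact 2-point function of the 1D Ising chain at zero field,
    with K = J/kT and r the separation between the two spins."""
    return math.tanh(K) ** r

K = 1.0
# The decay is exactly exponential, exp(-r/L), with this correlation length:
corr_length = -1.0 / math.log(math.tanh(K))

for r in [1, 2, 5, 10]:
    print(r, two_point(r, K), math.exp(-r / corr_length))
```

The two columns agree exactly, since tanh(K)^r = exp(r ln tanh K) = exp(-r/L): the power of tanh *is* the exponential decay.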

But as we approach the critical point, the correlation length goes
to infinity, and right *at* the critical point, the 2-point function
decays like some power of distance:

<s(x)s(y)> ~ 1/|x-y|^{d}

The number d is an example of what we call a "critical exponent".

A similar thing is true for all the higher "n-point functions", at least if we define them correctly, which I won't bother to do here. They all satisfy nice power laws at the critical point. This is what people mean when they say that a system at a critical point transforms simply under scaling.

Now, I'm oversimplifying something important here, so I'd better explain it. These power laws like

<s(x)s(y)> ~ 1/|x-y|^{d}

are really only approximate! Actually this is obvious, because
the left hand side can't get bigger than 1, while the right hand
side goes to infinity as |x-y| goes to zero. In reality, the
2-point function behaves in a very complicated way when the
distance between our two atoms is very small. It's only when the
distance gets *big* that things simplify and the power law becomes
a better and better approximation.

Does this remind you of anything?

It should: this is where the renormalization group comes in! We can imagine "zooming out" on our crystal, looking at it from ever larger distance scales. As we do, things simplify: we can forget about individual atoms and approximate the situation by a field theory defined in the continuum. In fact, we can try to use one of the field theories that we've been talking about in the previous sections! Remember, quantum field theory in Euclidean space is just the same as statistical mechanics. Quantum field theory needs a cutoff, but we've got one: the distance between atoms in our crystal. So we're all set: we can write down some Lagrangian and start playing the renormalization group game to see what happens as we zoom out.

You may be suspicious here: how are we ever going to guess which Lagrangian corresponds to our original problem involving a crystal of iron? After all, iron is complicated stuff!

Luckily, it's not so bad. At short distance scales, to get a decent approximation to our original problem, we may need to start with a really complicated Lagrangian. However, suppose we do this. Then as we zoom out to large distance scales, the renormalization group game says that the Lagrangian will simplify. For example, we've already seen that nonrenormalizable terms in the Lagrangian become "irrelevant" as we go to large distance scales: the physical coupling constants in front of them go to zero!

More generally, we shouldn't be at all surprised if our physical coupling constants approach an infrared fixed point as we zoom out, letting the distance scale approach infinity. This is exactly what infrared fixed points are all about! Even better, all sorts of theories with different bare coupling constants can approach the same infrared fixed point. We say two different theories, or two different physical systems, are in the same "universality class" if they approach the same infrared fixed point as we crank up the distance scale.

For example, when we're studying what happens at the Curie temperature,
lots of different ferromagnets lie in the same universality class.
Indeed, it turns out that you can study a lot of them using slight
variations of one of the simplest quantum field theories of all: the
φ^{4} theory.

There is a lot more to say, and I'm too tired to say most of it,
but there's one thing I *must* tell you, just to wrap up some loose
ends. Wilson's real triumph was to calculate critical exponents like the
number d in the power law for the 2-point function:

<s(x)s(y)> ~ 1/|x-y|^{d}

How did he do it? Well, Landau already had one way to do this, which
gives just the results you would guess using dimensional analysis. But
that method didn't always give the right answers. To get the right
answers, it helps to realize that n-point functions are closely related
to physical coupling constants. In fact, while I never actually
*defined* the physical coupling constants, they are really just a way of
extracting some information about n-point functions. So if we calculate
the "running of coupling constants" using the renormalization group
game, we can work out the critical exponents.
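To give a taste of what such a calculation produces: one standard output of Wilson's approach is the "epsilon expansion", where you work in 4 - ε dimensions and expand critical exponents in powers of ε. The one-loop result for the correlation-length exponent ν in the O(N) φ^{4} universality class is a textbook formula; here it is evaluated for the Ising class (N = 1) in three dimensions (ε = 1), compared against the precisely known 3D value of about 0.63:

```python
def nu_one_loop(N, eps):
    """Standard one-loop epsilon-expansion result for the
    correlation-length exponent nu in the O(N) phi^4 universality
    class:  nu = 1/2 + (N+2)/(4*(N+8)) * eps + O(eps^2)."""
    return 0.5 + (N + 2) / (4.0 * (N + 8)) * eps

# Landau's mean-field answer (eps = 0) vs. one loop in 3 dimensions:
print(nu_one_loop(1, 0.0))   # 0.5   — the dimensional-analysis guess
print(nu_one_loop(1, 1.0))   # ~0.583 — vs. the precise 3D value ~0.63
```

Already at one loop the answer moves from Landau's 1/2 toward the true value; higher orders in ε close most of the remaining gap.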

To actually do this, it helps to use something called the "Callan-Symanzik equation", but I'm not going to explain this — for this, you should probably read a book on quantum field theory. But don't worry about this too much; be happy if you feel you get my main point here, which is that:

2ND-ORDER PHASE TRANSITIONS CORRESPOND TO INFRARED FIXED POINTS
OF THE RENORMALIZATION GROUP!

For more on the Callan-Symanzik equation, and an application to quantum gravity, try "week139" in my series This Week's Finds.

© 2009 John Baez

baez@math.removethis.ucr.andthis.edu