March 2, 2011

Information Geometry (Part 7)

John Baez

Today, I want to describe how the Fisher information metric is related to relative entropy. I've explained both these concepts separately (click the links for details); now I want to put them together.

But first, let me explain what this whole series of blog posts is about. Information geometry, obviously! But what's that?

Information geometry is the geometry of 'statistical manifolds'. Let me explain that concept twice: first vaguely, and then precisely.

Vaguely speaking, a statistical manifold is a manifold whose points are hypotheses about some situation. For example, suppose you have a coin. You could have various hypotheses about what happens when you flip it. For example: you could hypothesize that the coin will land heads up with probability $x$, where $x$ is any number between 0 and 1. This makes the interval $[0,1]$ into a statistical manifold. Technically this is a manifold with boundary, but that's okay.

Or, you could have various hypotheses about the IQ's of American politicians. For example: you could hypothesize that they're distributed according to a Gaussian probability distribution with mean $x$ and standard deviation $y$. This makes the space of pairs $(x,y)$ into a statistical manifold. Of course we require $y \ge 0$, which gives us a manifold with boundary. We might also want to assume $x \ge 0$, which would give us a manifold with corners, but that's okay too. We're going to be pretty relaxed about what counts as a 'manifold' here.

If we have a manifold whose points are hypotheses about some situation, we say the manifold 'parametrizes' these hypotheses. So, the concept of statistical manifold is fundamental to the subject known as parametric statistics.

Parametric statistics is a huge subject! You could say that information geometry is the application of geometry to this subject.

But now let me go ahead and make the idea of 'statistical manifold' more precise. There's a classical and a quantum version of this idea. I'm working at the Centre for Quantum Technologies, so I'm being paid to be quantum—but today I'm in a classical mood, so I'll only describe the classical version. Let's say a classical statistical manifold is a smooth function $p$ from a manifold $M$ to the space of probability distributions on some measure space $\Omega$.

We should think of $\Omega$ as a space of events. In our first example, it's just $\{H, T\}$: we flip a coin and it lands either heads up or tails up. In our second it's $\mathbb{R}$: we measure the IQ of an American politician and get some real number.

We should think of $M$ as a space of hypotheses. For each point $x \in M$, we have a probability distribution $p_x$ on $\Omega$. This is a hypothesis about the events in question: for example "when I flip the coin, there's a 55% chance that it will land heads up", or "when I measure the IQ of an American politician, the answer will be distributed according to a Gaussian with mean 0 and standard deviation 100."
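If it helps to see this in code, here's a minimal sketch of these two statistical manifolds in Python (using scipy; the names `coin_family` and `iq_family` are just made up for illustration). Each is a function sending a point of $M$ to a probability distribution on $\Omega$:

```python
from scipy.stats import bernoulli, norm

# First example: M = [0,1], Omega = {H, T}.
# The point x is the hypothesis "heads comes up with probability x".
def coin_family(x):
    return bernoulli(x)

# Second example: M = {(x, y) : y >= 0}, Omega = the real line.
# The point (x, y) is the hypothesis "IQs are Gaussian with mean x, std dev y".
def iq_family(x, y):
    return norm(loc=x, scale=y)

p = coin_family(0.55)        # "the coin lands heads up with probability 0.55"
q = iq_family(0.0, 100.0)    # "IQs have mean 0 and standard deviation 100"
print(p.pmf(1), q.pdf(0.0))  # probability of heads; probability density at IQ = 0
```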

Now, suppose someone hands you a classical statistical manifold $(M,p)$. Each point in $M$ is a hypothesis. Apparently some hypotheses are more similar than others. It would be nice to make this precise. So, you might like to define a metric on $M$ that says how 'far apart' two hypotheses are. People know lots of ways to do this; the challenge is to find ways that have clear meanings.

Last time I explained the concept of relative entropy. Suppose we have two probability distributions on $\Omega$, say $p$ and $q$. Then the entropy of $p$ relative to $q$ is the amount of information you gain when you start with the hypothesis $q$ but then discover that you should switch to the new improved hypothesis $p$. It equals:

$$ \int_\Omega \; \frac{p}{q} \; \ln\left(\frac{p}{q}\right) \; q d \omega $$
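When $\Omega$ is a finite set the integral is just a sum, so this is easy to compute. Here's a quick sketch in Python (using numpy; `relative_entropy` is just an illustrative name):

```python
import numpy as np

def relative_entropy(p, q):
    """Entropy of p relative to q: the integral of (p/q) ln(p/q) q dω,
    here just a sum over a finite Ω.  We assume p and q are strictly
    positive arrays of probabilities, so p/q makes sense everywhere."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sum((p / q) * np.log(p / q) * q)

# Information gained on switching from "fair coin" to "55% heads":
p = np.array([0.55, 0.45])
q = np.array([0.50, 0.50])
print(relative_entropy(p, q))   # about 0.005 nats
```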

You could try to use this to define a distance between points $x$ and $y$ in our statistical manifold, like this:

$$ S(x,y) = \int_\Omega \; \frac{p_x}{p_y} \; \ln\left(\frac{p_x}{p_y}\right) \; p_y d \omega $$

This is definitely an important function. Unfortunately, as I explained last time, it doesn't obey the axioms that a distance function should! Worst of all, it doesn't obey the triangle inequality.
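You can watch the triangle inequality fail numerically. Continuing the Python sketch above with three hypotheses about a coin, the direct jump from the first to the third carries more information than the two smaller jumps combined:

```python
a = np.array([0.9, 0.1])   # "90% heads"
b = np.array([0.5, 0.5])   # "fair coin"
c = np.array([0.1, 0.9])   # "10% heads"

# A distance function would satisfy S(a,c) <= S(a,b) + S(b,c), but:
print(relative_entropy(a, c))                           # about 1.76
print(relative_entropy(a, b) + relative_entropy(b, c))  # about 0.88
```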

Can we 'fix' it? Yes, we can! And when we do, we get the Fisher information metric, which is actually a Riemannian metric on $M$. Suppose we put local coordinates on some patch of $M$ containing the point $x$. Then the Fisher information metric is given by:

$$ g_{ij}(x) = \int_\Omega \partial_i (\ln p_x) \; \partial_j (\ln p_x) \; p_x d \omega$$
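For the coin example you can work this out by hand: with $\Omega = \{H,T\}$ and $p_x(H) = x$, the integral is a two-term sum and gives $g(x) = 1/(x(1-x))$. Here's a sketch that checks this numerically, reusing the Python setup above (the finite-difference step `eps` is just a convenient choice):

```python
def fisher_metric_coin(x, eps=1e-6):
    """g(x) = sum over Ω of (∂_x ln p_x)^2 p_x for the coin family,
    with the derivative in x done by a crude finite difference."""
    p       = np.array([x, 1 - x])
    p_shift = np.array([x + eps, 1 - (x + eps)])
    dlogp   = (np.log(p_shift) - np.log(p)) / eps
    return np.sum(dlogp**2 * p)

x = 0.3
print(fisher_metric_coin(x))   # about 4.76...
print(1 / (x * (1 - x)))       # ...matching 1/(x(1-x))
```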

You can think of my whole series of articles so far as an attempt to understand this funny-looking formula. I've shown how to get it from a few different starting-points, most recently back in Part 3. But now let's get it starting from relative entropy!

Fix any point in our statistical manifold and choose local coordinates for which this point is the origin, $0$. The amount of information we gain if we move to some other point $x$ is the relative entropy $S(x,0)$. But what's this like when $x$ is really close to $0$? We can imagine doing a Taylor series expansion of $S(x,0)$ to answer this question.

Surprisingly, to first order the answer is always zero! Mathematically:

$$ \partial_i S(x,0)|_{x = 0} = 0$$

In plain English: if you change your mind slightly, you learn a negligible amount — not an amount proportional to how much you changed your mind.

This must have some profound significance. I wish I knew what. Could it mean that people are reluctant to change their minds except in big jumps?
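Whatever it means, you can at least see it happening numerically: shrink the change of mind by a factor of 10 and the information gained shrinks by a factor of about 100. A quick check with the coin family, reusing `relative_entropy` from above:

```python
x = 0.3
for eps in [0.1, 0.01, 0.001]:
    p = np.array([x + eps, 1 - (x + eps)])
    q = np.array([x, 1 - x])
    print(eps, relative_entropy(p, q))
# The information gained drops like eps^2 (roughly 0.023, 0.00024, 0.0000024):
# the first-order term really does vanish.
```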

Anyway, if you think about it, this fact makes it obvious that $S(x,y)$ can't obey the triangle inequality. $S(x,y)$ could be pretty big. But if we draw a curve from $x$ to $y$, and mark $n$ closely spaced points $x_i$ on this curve, then each $S(x_{i+1}, x_i)$ is zero to first order, so it must be of order $1/n^2$. So if the triangle inequality were true we'd have

$$ S(x,y) \le \sum_i S(x_{i+1},x_i) \le \mathrm{const} \cdot n \cdot \frac{1}{n^2} = \frac{\mathrm{const}}{n}$$

for all $n$. Since the right-hand side goes to zero as $n \to \infty$ while $S(x,y)$ stays fixed, this is a contradiction.

In plain English: if you change your mind in one big jump, the amount of information you gain is more than the sum of the amounts you'd gain if you change your mind in lots of little steps! This seems pretty darn strange, but the paper I mentioned in part 1 helps:

• Gavin E. Crooks, Measuring thermodynamic length.

You'll see he takes a curve and chops it into lots of little pieces as I just did, and explains what's going on.
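Here's a little numerical version of that curve-chopping, again with the coin family and the `relative_entropy` sketch from above: chop the path from one hypothesis to another into $n$ equal steps, and the total information gained along the way shrinks roughly like $1/n$:

```python
def chopped_path_entropy(x_start, x_end, n):
    """Sum of S(x_{i+1}, x_i) along n equal steps between two coin hypotheses."""
    xs = np.linspace(x_start, x_end, n + 1)
    total = 0.0
    for x_old, x_new in zip(xs[:-1], xs[1:]):
        total += relative_entropy(np.array([x_new, 1 - x_new]),
                                  np.array([x_old, 1 - x_old]))
    return total

for n in [1, 10, 100, 1000]:
    print(n, chopped_path_entropy(0.3, 0.7, n))
# One big jump gains about 0.34 nats; many little steps gain far less,
# with the total going to zero roughly like 1/n.
```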

Okay, so what about second order? What's

$$ \partial_i \partial_j S(x,0)|_{x = 0} ?$$

Well, this is the punchline of this blog post: it's the Fisher information metric:

$$ \partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}$$

And since the Fisher information metric is a Riemannian metric, we can then apply the usual recipe and define distances in a way that obeys the triangle inequality. Crooks calls this distance thermodynamic length in the special case that he considers, and he explains its physical meaning.
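Here's a quick numerical sanity check of the punchline for the coin family, before we prove it in general: a finite-difference second derivative of $S(x, x_0)$ at $x = x_0$ lands right on $g(x_0) = 1/(x_0(1-x_0))$. (A sketch, reusing `relative_entropy` from above; the step size `h` is just a convenient choice.)

```python
def S_coin(x, x0):
    """Relative entropy between two coin hypotheses x and x0."""
    return relative_entropy(np.array([x, 1 - x]), np.array([x0, 1 - x0]))

x0, h = 0.3, 1e-4
second_deriv = (S_coin(x0 + h, x0) - 2 * S_coin(x0, x0) + S_coin(x0 - h, x0)) / h**2
print(second_deriv)            # about 4.76...
print(1 / (x0 * (1 - x0)))     # ...the Fisher metric g(x0) at x0 = 0.3
```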

Now let me prove that

$$ \partial_i S(x,0)|_{x = 0} = 0$$

and

$$ \partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}$$

This can be somewhat tedious if you do it by straightforwardly grinding it out—I know, I did it. So let me show you a better way, which requires more conceptual acrobatics but less brute force.

The trick is to work with the universal statistical manifold for the measure space $\Omega$. Namely, we take $M$ to be the space of all probability distributions on $\Omega$! This is typically an infinite-dimensional manifold, but that's okay: we're being relaxed about what counts as a manifold here. In this case we don't need to write $p_x$ for the probability distribution corresponding to the point $x \in M$: a point of $M$ simply is a probability distribution on $\Omega$, so we'll just call it $p$.

If we can prove the formulas for this universal example, they'll automatically follow for every other example, by abstract nonsense. Why? Because any statistical manifold with measure space $\Omega$ is the same as a manifold with a smooth map to the universal statistical manifold! So, geometrical structures on the universal one 'pull back' to give structures on all the rest. The Fisher information metric and the function $S$ can be defined as pullbacks in this way! So, to study them, we can just study the universal example.

(If you're familiar with 'classifying spaces for bundles' or other sorts of 'classifying spaces', all this should seem awfully familiar. It's a standard math trick.)

So, let's prove that

$$ \partial_i S(x,0)|_{x = 0} = 0$$

by proving it in the universal example. Given any probability distribution $q$, and taking a nearby probability distribution $p$, we can write

$$ \frac{p}{q} = 1 + f $$

where $f$ is some small function. We only need to show that $S(p,q)$ is zero to first order in $f$. And this is pretty easy. By definition:

$$ S(p,q) = \int_\Omega \; \frac{p}{q} \, \ln\left(\frac{p}{q}\right) \; q d \omega $$

or in other words,

$$ S(p,q) = \int_\Omega \; (1 + f) \, \ln(1 + f) \; q d \omega $$

We can calculate this to first order in $f$ and show we get zero. But let's actually work it out to second order, since we'll need that later:

$$ \ln (1 + f) = f - \frac{1}{2} f^2 + \cdots $$

so

$$ (1 + f) \, \ln (1+ f) = f + \frac{1}{2} f^2 + \cdots $$

so

$$ \begin{aligned} S(p,q) &= \int_\Omega \; (1 + f) \, \ln(1 + f) \; q d \omega \\ &= \int_\Omega f \, q d \omega + \frac{1}{2} \int_\Omega f^2\, q d \omega + \cdots \end{aligned} $$

Why does this vanish to first order in $f$? It's because $p$ and $q$ are both probability distributions and $p/q = 1 + f$, so

$$ \int_\Omega (1 + f) \, q d\omega = \int_\Omega p d\omega = 1$$

but also

$$ \int_\Omega q d\omega = 1$$

so subtracting we see

$$ \int_\Omega f \, q d\omega = 0$$

So, $S(p,q)$ vanishes to first order in $f$. Voilà!
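If you'd like to see those two integrals in a concrete case, here's a tiny check on a three-point $\Omega$, reusing the Python setup from above (the particular $q$ and perturbation are made up for illustration):

```python
q     = np.array([0.2, 0.3, 0.5])      # a probability distribution on a 3-point Ω
delta = np.array([0.02, 0.03, -0.05])  # a small perturbation with total zero
f     = delta / q                      # so that p/q = 1 + f
p     = q + delta                      # still a probability distribution

print(np.sum(f * q))                   # 0: the first-order term vanishes
print(relative_entropy(p, q))          # about 0.0050, which is close to...
print(0.5 * np.sum(f**2 * q))          # ...the second-order term, exactly 0.005
```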

Next let's prove the more interesting formula:

$$ \partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}$$

which relates relative entropy to the Fisher information metric. Since both sides are symmetric matrices, it suffices to show their diagonal entries agree in any coordinate system:

$$ \partial^2_i S(x,0)|_{x = 0} = g_{ii}$$

Devoted followers of this series of posts will note that I keep using this trick, which takes advantage of the polarization identity.

To prove

$$ \partial^2_i S(x,0)|_{x = 0} = g_{ii}$$

it's enough to consider the universal example. We take the origin to be some probability distribution $q$, and take $x$ to be a nearby probability distribution $p$, obtained by pushing $q$ a tiny bit in the $i$th coordinate direction. As before we write $p/q = 1 + f$. We look at the second-order term in our formula for $S(p,q)$:

$$ \frac{1}{2} \int_\Omega f^2\, q d \omega $$

Using the usual second-order Taylor's formula, which has a $\frac{1}{2}$ built into it, we can say

$$ \partial^2_i S(x,0)|_{x = 0} = \int_\Omega f^2\, q d \omega $$

On the other hand, our formula for the Fisher information metric gives

$$ g_{ii} = \left. \int_\Omega \partial_i \ln p \; \partial_i \ln p \; q d \omega \right|_{p=q} $$

The right hand sides of the last two formulas look awfully similar! And indeed they agree, because we can show that

$$ \left. \partial_i \ln p \right|_{p = q} = f$$

How? Well, we assumed that $p$ is what we get by taking $q$ and pushing it a little bit in the $i$th coordinate direction; we have also written that little change as

$$ p/q = 1 + f$$

for some small function $f$. So,

$$ \partial_i (p/q) = f$$

and thus:

$$ \partial_i p = f q$$

and thus:

$$ \partial_i \ln p \; = \; \frac{\partial_i p}{p} \; = \; \frac{fq}{p}$$

so

$$ \left. \partial_i \ln p \right|_{p=q} = f$$

as desired.

This argument may seem a little hand-wavy and nonrigorous, with words like 'a little bit'. If you're used to taking arguments involving infinitesimal changes and translating them into calculus (or differential geometry), it should make sense. If it doesn't, I apologize. It's easy to make it more rigorous, but only at the cost of more annoying notation, which doesn't seem good in a blog post.
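If you'd rather let a computer do the grinding for a concrete case, here's a symbolic check of both formulas for the coin family (a sketch using sympy): differentiating $S(x, x_0)$ once and twice in $x$ and setting $x = x_0$ gives $0$ and $1/(x_0(1-x_0))$, the Fisher metric we found earlier.

```python
import sympy as sp

x, x0 = sp.symbols('x x0', positive=True)

# Relative entropy of the coin hypothesis x with respect to the hypothesis x0:
S = x * sp.log(x / x0) + (1 - x) * sp.log((1 - x) / (1 - x0))

first  = sp.simplify(sp.diff(S, x).subs(x, x0))       # 0
second = sp.simplify(sp.diff(S, x, 2).subs(x, x0))    # equals 1/(x0*(1 - x0))
print(first, second)
```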

Boring technicalities

If you're actually the kind of person who reads a section called 'boring technicalities', I'll admit to you that my calculations don't make sense if the integrals diverge, or we're dividing by zero in the ratio $p/q$. To avoid these problems, here's what we should do. Fix a $\sigma$-finite measure space $(\Omega, d\omega)$. Then, define the universal statistical manifold to be the space $P(\Omega,d \omega)$ consisting of all probability measures that are equivalent to $d\omega$, in the usual sense of measure theory. By the Radon–Nikodym theorem, we can write any such measure as $q d \omega$ where $q \in L^1(\Omega, d\omega)$. Moreover, given two of these guys, say $p d \omega$ and $q d\omega$, they are absolutely continuous with respect to each other, so we can write

$$ p d \omega = \frac{p}{q} \; q d \omega $$

where the ratio $p/q$ is well-defined almost everywhere and lies in $L^1(\Omega, q d\omega)$. This is enough to guarantee that we're never dividing by zero, and I think it's enough to make sure all my integrals converge.

We do still need to make $P(\Omega,d \omega)$ into some sort of infinite-dimensional manifold, to justify all the derivatives. There are various ways to approach this issue, all of which start from the fact that $L^1(\Omega, d\omega)$ is a Banach space, which is about the nicest sort of infinite-dimensional manifold one could imagine. Sitting in $L^1(\Omega, d\omega)$ is the hyperplane consisting of functions $q$ with

$$ \int_\Omega q d\omega = 1$$

and this is a Banach manifold. To get $P(\Omega,d \omega)$ we need to take a subspace of that hyperplane. If this subspace were open, then $P(\Omega,d \omega)$ would be a Banach manifold in its own right. I haven't checked whether it is, for various reasons.

For one thing, there's a nice theory of 'diffeological spaces', which generalize manifolds. Every Banach manifold is a diffeological space, and every subset of a diffeological space is again a diffeological space. For many purposes we don't need our 'statistical manifolds' to be manifolds: diffeological spaces will do just fine. This is one reason why I'm being pretty relaxed here about what counts as a 'manifold'.

For another, I know that people have worked out a lot of this stuff, so I can just look things up when I need to. And so can you! This book is a good place to start:

• Paolo Gibilisco, Eva Riccomagno, Maria Piera Rogantin and Henry P. Wynn, Algebraic and Geometric Methods in Statistics, Cambridge U. Press, Cambridge, 2009.

I find the chapters by Raymond Streater especially congenial. For the technical issue I'm talking about now it's worth reading section 14.2, "Manifolds modelled by Orlicz spaces", which tackles the problem of constructing a universal statistical manifold in a more sophisticated way than I've just done. And in chapter 15, "The Banach manifold of quantum states", he tackles the quantum version!


You can read a discussion of this article on Azimuth, and make your own comments or ask questions there!

