February 1, 2017

Information Geometry (Part 16)

John Baez

This week I'm giving a talk on biology and information:

While preparing this talk, I discovered a cool fact. I doubt it's new, but I haven't seen it stated quite this way elsewhere. I came up with it while trying to give a precise and general statement of 'Fisher's fundamental theorem of natural selection'. I won't start by explaining that theorem, since my version looks rather different from Fisher's, and I came up with mine precisely because I had trouble understanding his. I'll say a bit more about this at the end.

Here's my version:

The square of the rate at which a population learns information is the variance of its fitness.

This is a nice advertisement for the virtues of diversity: more variance means faster learning. But it requires some explanation!

The setup

Let's start by assuming we have $n$ different kinds of self-replicating entities with populations $P_1, \dots, P_n.$ As usual, these could be all sorts of things. I'll call them replicators of different species.

Let's suppose each population $P_i$ is a function of time that grows at a rate equal to this population times its 'fitness'. I explained the resulting equation back in Part 9, but it's pretty simple:

$$ \displaystyle{ \frac{d}{d t} P_i(t) = f_i(P_1(t), \dots, P_n(t)) \, P_i(t) } $$

Here $f_i$ is a completely arbitrary smooth function of all the populations! We call it the fitness of the ith species.

This equation is important, so we want a short way to write it. I'll often write $f_i(P_1(t), \dots, P_n(t))$ simply as $f_i,$ and $P_i(t)$ simply as $P_i.$ With these abbreviations, which any red-blooded physicist would take for granted, our equation becomes simply this:

$$ \displaystyle{ \frac{dP_i}{d t} = f_i \, P_i } $$
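For example, in the special case where every fitness $f_i$ is a constant, not depending on the populations, each population simply grows or decays exponentially:

$$ \displaystyle{ P_i(t) = P_i(0) \, e^{f_i t} } $$

In general the fitnesses depend on all the populations, so the dynamics can be much more interesting.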

Next, let $p_i(t)$ be the probability that a randomly chosen organism is of the ith species:

$$ \displaystyle{ p_i(t) = \frac{P_i(t)}{\sum_j P_j(t)} } $$

Starting from our equation describing how the populations evolve, we can figure out how these probabilities evolve. The answer is called the replicator equation:

$$ \displaystyle{ \frac{d}{d t} p_i(t) = ( f_i - \langle f \rangle ) \, p_i(t) }$$

Here $\langle f \rangle$ is the average fitness of all the replicators, or mean fitness:

$$ \displaystyle{ \langle f \rangle = \sum_j f_j(P_1(t), \dots, P_n(t)) \, p_j(t) } $$

In what follows I'll abbreviate the replicator equation as follows:

$$ \displaystyle{ \frac{dp_i}{d t} = ( f_i - \langle f \rangle ) \, p_i } $$
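If you're wondering where the replicator equation comes from, it follows from the population equation $\frac{dP_i}{dt} = f_i P_i$ and the definition of $p_i$ by a short calculation with the quotient rule:

$$ \begin{array}{ccl} \displaystyle{ \frac{dp_i}{dt} } &=& \displaystyle{ \frac{d}{dt} \frac{P_i}{\sum_j P_j} } \\ & \\ &=& \displaystyle{ \frac{1}{\sum_j P_j} \frac{dP_i}{dt} \; - \; \frac{P_i}{\left(\sum_j P_j\right)^2} \sum_k \frac{dP_k}{dt} } \\ & \\ &=& \displaystyle{ f_i \, p_i - p_i \sum_k f_k \, p_k } \\ & \\ &=& \displaystyle{ ( f_i - \langle f \rangle ) \, p_i } \end{array} $$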

The result

Okay, now let's figure out how fast the probability distribution

$$ p(t) = \left(p_1(t), \dots, p_n(t)\right) $$

changes with time. For this we need to choose a way to measure the length of the vector

$$ \displaystyle{ \frac{dp}{dt} = \left(\frac{d}{dt} p_1(t), \dots, \frac{d}{dt} p_n(t)\right) } $$

And here information geometry comes to the rescue! We can use the Fisher information metric, which is a Riemannian metric on the space of probability distributions.

I've talked about the Fisher information metric in many ways in this series. The most important fact is that as a probability distribution $p(t)$ changes with time, its speed

$$ \displaystyle{ \left\| \frac{dp}{dt} \right\|} $$

as measured using the Fisher information metric can be seen as the rate at which information is learned. I'll explain that later. Right now I just want a simple formula for the Fisher information metric. Suppose $v$ and $w$ are two tangent vectors to the point $p$ in the space of probability distributions. Then the Fisher information metric is given as follows:

$$ \displaystyle{ \langle v, w \rangle = \sum_i \frac{1}{p_i} \, v_i w_i } $$

Using this we can calculate the speed at which $p(t)$ moves when it obeys the replicator equation. Actually the square of the speed is simpler:

$$ \begin{array}{ccl} \displaystyle{ \left\| \frac{dp}{dt} \right\|^2 } &=& \displaystyle{ \sum_i \frac{1}{p_i} \left( \frac{dp_i}{dt} \right)^2 } \\ & \\ &=& \displaystyle{ \sum_i \frac{1}{p_i} \left( ( f_i - \langle f \rangle ) \, p_i \right)^2 } \\ & \\ &=& \displaystyle{ \sum_i ( f_i - \langle f \rangle )^2 p_i } \end{array} $$

The answer has a nice meaning, too! It's just the variance of the fitness: that is, the square of its standard deviation.

So, if you're willing to buy my claim that the speed $\|dp/dt\|$ is the rate at which our population learns new information, then we've seen that the square of the rate at which a population learns information is the variance of its fitness!
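If you like seeing such things numerically, here is a small Python sketch that illustrates the result. Everything in it, from the fitness function to the initial populations and the step size, is just an arbitrary example I made up for illustration. It integrates the population equation with Euler steps and compares the squared Fisher speed of $p(t)$ with the variance of the fitness at each step:

    import numpy as np

    # A made-up fitness function, just for illustration: each species' fitness
    # is allowed to depend on all the populations.
    def fitness(P):
        return np.array([1.0 - 0.01 * P.sum(),
                         0.5 + 0.002 * P[0],
                         0.8 - 0.005 * P[2]])

    P = np.array([30.0, 20.0, 50.0])   # arbitrary initial populations
    dt = 1e-4                          # small Euler time step

    for step in range(5):
        f = fitness(P)
        p = P / P.sum()                            # probabilities p_i
        mean_f = np.dot(f, p)                      # mean fitness <f>
        dp_dt = (f - mean_f) * p                   # replicator equation
        fisher_speed_sq = np.sum(dp_dt**2 / p)     # ||dp/dt||^2 in the Fisher metric
        variance_f = np.sum((f - mean_f)**2 * p)   # variance of the fitness
        print(fisher_speed_sq, variance_f)         # these two numbers agree
        P = P + dt * f * P                         # Euler step for dP_i/dt = f_i P_i

The two printed numbers agree to machine precision at every step, as the calculation above says they must.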

Fisher's fundamental theorem

Now, how is this related to Fisher's fundamental theorem of natural selection? First of all, what is Fisher's fundamental theorem? Here's what Wikipedia says about it:

It uses some mathematical notation but is not a theorem in the mathematical sense. It states:
"The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time."

Or in more modern terminology:

"The rate of increase in the mean fitness of any organism at any time ascribable to natural selection acting through changes in gene frequencies is exactly equal to its genetic variance in fitness at that time".
Largely as a result of Fisher's feud with the American geneticist Sewall Wright about adaptive landscapes, the theorem was widely misunderstood to mean that the average fitness of a population would always increase, even though models showed this not to be the case. In 1972, George R. Price showed that Fisher's theorem was indeed correct (and that Fisher's proof was also correct, given a typo or two), but did not find it to be of great significance. The sophistication that Price pointed out, and that had made understanding difficult, is that the theorem gives a formula for part of the change in gene frequency, and not for all of it. This is a part that can be said to be due to natural selection.

Price's paper is here:

I don't find it very clear, perhaps because I didn't spend enough time on it. But I think I get the idea.

My result is a theorem in the mathematical sense, though quite an easy one. I assume a population distribution evolves according to the replicator equation and derive an equation whose right-hand side matches that of Fisher's original equation: the variance of the fitness.

But my left-hand side is different: it's the square of the speed of the corresponding probability distribution, where speed is measured using the 'Fisher information metric'. This metric was discovered by the same guy, Ronald Fisher, but I don't think he used it in his work on the fundamental theorem!

Something a bit similar to my statement appears as Theorem 2 of this paper:

and for that theorem he cites:

However, his Theorem 2 really concerns the rate of increase of fitness, like Fisher's fundamental theorem. Moreover, he assumes that the probability distribution $p(t)$ flows along the gradient of a function, and I'm not assuming that. Indeed, my version applies to situations where the probability distribution moves round and round in periodic orbits!

Relative information and the Fisher information metric

The key to generalizing Fisher's fundamental theorem is thus to focus on the speed at which $p(t)$ moves, rather than the increase in fitness. Why do I call this speed the 'rate at which the population learns information'? It's because we're measuring this speed using the Fisher information metric, which is closely connected to relative information, also known as relative entropy or the Kullback–Leibler divergence.

I explained this back in Part 7, but that explanation seems hopelessly technical to me now, so here's a faster one, which I created while preparing my talk.

The information of a probability distribution $q$ relative to a probability distribution $p$ is

$$ \displaystyle{ I(q,p) = \sum_{i =1}^n q_i \ln\left(\frac{q_i}{p_i}\right) } $$

It says how much information you learn if you start with a hypothesis $p$ saying that the probability of the ith situation was $p_i,$ and then update this to a new hypothesis $q.$

Now suppose you have a hypothesis that's changing with time in a smooth way, given by a time-dependent probability distribution $p(t).$ Then a calculation shows that

$$ \displaystyle{ \left.\frac{d}{dt} I(p(t),p(t_0)) \right|_{t = t_0} = 0 } $$

for all times $t_0$. This seems paradoxical at first. I like to jokingly put it this way:

To first order, you're never learning anything.
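Here's the calculation behind that, using the formula for relative information and the fact that the probabilities $p_i(t)$ always sum to 1:

$$ \displaystyle{ \frac{d}{dt} I(p(t),p(t_0)) = \sum_i \frac{dp_i}{dt} \left( \ln\left(\frac{p_i(t)}{p_i(t_0)}\right) + 1 \right) } $$

At $t = t_0$ the logarithms vanish, so this becomes $\sum_i \frac{dp_i}{dt} = \frac{d}{dt} \sum_i p_i(t) = 0.$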

However, as long as the velocity $\frac{dp}{dt}(t_0)$ is nonzero, we have

$$ \displaystyle{ \left.\frac{d^2}{dt^2} I(p(t),p(t_0)) \right|_{t = t_0} > 0 } $$

so we can say

To second order, you're always learning something... unless your opinions are fixed.

This lets us define a 'rate of learning'---that is, a 'speed' at which the probability distribution $p(t)$ moves. And this is precisely the speed given by the Fisher information metric!

In other words:

$$ \displaystyle{ \left\|\frac{dp}{dt}(t_0)\right\|^2 = \left.\frac{d^2}{dt^2} I(p(t),p(t_0)) \right|_{t = t_0} } $$

where the length is given by the Fisher information metric. Indeed, this formula can be used to define the Fisher information metric. From this definition we can easily work out the concrete formula I gave earlier.
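To check this, differentiate $I(p(t),p(t_0))$ twice with respect to $t$:

$$ \displaystyle{ \frac{d^2}{dt^2} I(p(t),p(t_0)) = \sum_i \frac{d^2 p_i}{dt^2} \left( \ln\left(\frac{p_i(t)}{p_i(t_0)}\right) + 1 \right) \; + \; \sum_i \frac{1}{p_i} \left( \frac{dp_i}{dt} \right)^2 } $$

At $t = t_0$ the logarithms vanish and the first sum is $\frac{d^2}{dt^2} \sum_i p_i(t) = 0,$ so only the second sum survives, and that's exactly the formula for $\left\| \frac{dp}{dt} \right\|^2$ given earlier.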

In summary: as a probability distribution moves around, the relative information between the new probability distribution and the original one grows approximately as the square of time, not linearly. So, to talk about a 'rate at which information is learned', we need to use the above formula, involving a second time derivative. This rate is just the speed at which the probability distribution moves, measured using the Fisher information metric. And when we have a probability distribution describing how many replicators are of different species, and it's evolving according to the replicator equation, this speed is also just the variance of the fitness!


For a paper based on this article, see:

You can read a discussion of this article on Azimuth, and make your own comments or ask questions there!


© 2017 John Baez