
It's been a long time since you've seen an installment of the Information Geometry series on this blog. If you recall, this series turned out to be largely about relative entropy and how it changes in evolutionary games. Some of what we said is summarized and carried further here:
But now Blake has a new paper, and I want to talk about that:
• Blake Pollard, Open Markov processes: a compositional perspective on nonequilibrium steady states in biology, Entropy 18 (2016), 140.
I'll focus on just one aspect: the principle of minimum entropy production. This is an exciting yet controversial principle in nonequilibrium thermodynamics. Blake examines it in a situation where we can tell exactly what's happening.
Life exists away from equilibrium. Left isolated, systems will tend toward thermodynamic equilibrium. However, biology is about open systems: physical systems that exchange matter or energy with their surroundings. Open systems can be maintained away from equilibrium by this exchange. This leads to the idea of a nonequilibrium steady state — a state of an open system that doesn't change, but is not in equilibrium.
A simple example is a pan of water sitting on a stove. Heat passes from the flame to the water and then to the air above. If the flame is very low, the water doesn't boil and nothing moves. So, we have a steady state, at least approximately. But this is not an equilibrium, because there is a constant flow of energy through the water.
Of course in reality the water will be slowly evaporating, so we don't really have a steady state. As always, models are approximations. If the water is evaporating slowly enough, it can be useful to approximate the situation with a nonequilibrium steady state.
There is much more to biology than steady states. However, to dip our toe into the chilly waters of nonequilibrium thermodynamics, it is nice to start with steady states. And already here there are puzzles left to solve.
Ilya Prigogine won the Nobel prize for his work on nonequilibrium thermodynamics. One reason is that he had an interesting idea about steady states. He claimed that under certain conditions, a nonequilibrium steady state will minimize entropy production!
There has been a lot of work trying to make the 'principle of minimum entropy production' precise and turn it into a theorem. In this book:
the authors give an argument for the principle of minimum entropy production based on four conditions:
The last condition is obviously the subtlest one; it's sometimes called Onsager reciprocity, and people have spent a lot of time trying to derive it from other conditions.
However, Blake goes in a different direction. He considers a concrete class of open systems, a very large class called 'open Markov processes'. These systems obey the first three conditions listed above, and the 'detailed balanced' open Markov processes also obey the last one. But Blake shows that minimum entropy production holds only approximately — with the approximation being good for steady states that are near equilibrium!
However, he shows that another minimum principle holds exactly, even for steady states that are far from equilibrium. He calls this the 'principle of minimum dissipation'.
We actually discussed the principle of minimum dissipation in an earlier paper:
But one advantage of Blake's new paper is that it presents the results with a minimum of category theory. Of course I love category theory, and I think it's the right way to formalize open systems, but it can be intimidating.
Another good thing about Blake's new paper is that it explicitly compares the principle of minimum entropy production to the principle of minimum dissipation. He shows they agree in a certain limit — namely, the limit where the system is close to equilibrium.
Let me explain this. I won't include the nice example from biology that Blake discusses: a very simple model of membrane transport. For that, read his paper! I'll just give the general results.
An open Markov process consists of a finite set $X$ of states, a subset $B \subseteq X$ of boundary states, and an infinitesimal stochastic operator $H: \mathbb{R}^X \to \mathbb{R}^X,$ meaning a linear operator with
$$ H_{ij} \geq 0 \ \ \text{for all} \ \ i \neq j $$and
$$ \sum_i H_{ij} = 0 \ \ \text{for all} \ \ j $$I'll explain these two conditions in a minute.
For each $i \in X$ we introduce a population $p_i \in [0,\infty).$ We call the resulting function $p : X \to [0,\infty)$ the population distribution. Populations evolve in time according to the open master equation:
$$ \displaystyle{ \frac{dp_i}{dt} = \sum_j H_{ij}p_j} \ \ \text{for all} \ \ i \in X \setminus B $$ $$ p_i(t) = b_i(t) \ \ \text{for all} \ \ i \in B $$So, the populations $p_i$ obey a linear differential equation at states $i$ that are not in the boundary, but they are specified 'by the user' to be chosen functions $b_i$ at the boundary states.
The off-diagonal entries $H_{ij}, \ i \neq j$ are the rates at which population hops from the $j$th to the $i$th state. This lets us understand the definition of an infinitesimal stochastic operator. The first condition:
$$ H_{ij} \geq 0 \ \ \text{for all} \ \ i \neq j $$says that the rate for population to transition from one state to another is nonnegative. The second:
$$ \sum_i H_{ij} = 0 \ \ \text{for all} \ \ j $$says that population is conserved, at least if there are no boundary states. Population can flow in or out at boundary states, since the master equation doesn't hold there.
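If you like to compute, here's a little Python sketch of all this. The 3-state chain, its rates, and the boundary values below are hypothetical numbers of my own, not from Blake's paper; the code checks the two conditions above and then integrates the open master equation with the boundary populations held fixed:

```python
# A hypothetical 3-state open Markov process: states {0, 1, 2},
# boundary B = {0, 2}, interior state 1.  H is infinitesimal stochastic:
# off-diagonal entries are nonnegative and each column sums to zero.
H = [[-2.0,  1.0,  0.0],
     [ 2.0, -2.0,  2.0],
     [ 0.0,  1.0, -2.0]]

# Check the two defining conditions.
for j in range(3):
    assert all(H[i][j] >= 0 for i in range(3) if i != j)
    assert abs(sum(H[i][j] for i in range(3))) < 1e-12

# Open master equation: Euler-integrate dp_i/dt = sum_j H_ij p_j at the
# interior state, while clamping the boundary populations to b.
b = {0: 2.0, 2: 0.5}          # user-specified boundary populations
p = [2.0, 1.0, 0.5]
dt = 0.01
for _ in range(2000):
    flow = [sum(H[i][j] * p[j] for j in range(3)) for i in range(3)]
    p = [b[i] if i in b else p[i] + dt * flow[i] for i in range(3)]

# The interior population settles to a steady state; for this particular H
# it works out to p_1 = b_0 + b_2 = 2.5.
print(p[1])
```

Population flows in and out at states 0 and 2 forever, but the interior population stops changing: a nonequilibrium steady state in miniature.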
A steady state is a solution of the open master equation that does not change with time. A steady state for a closed Markov process (one with no boundary states) is typically called an equilibrium. So, an equilibrium obeys the master equation at all states, while for a steady state this may not be true at the boundary states. Again, the reason is that population can flow in or out at the boundary.
We say an equilibrium $q : X \to [0,\infty)$ of a Markov process is detailed balanced if the rate at which population flows from the $i$th state to the $j$th state is equal to the rate at which it flows from the $j$th state to the $i$th:
$$ H_{ji}q_i = H_{ij}q_j \ \ \text{for all} \ \ i,j \in X $$Suppose we've got an open Markov process that has a detailed balanced equilibrium $q$. Then a nonequilibrium steady state $p$ will minimize a function called the 'dissipation', subject to constraints on its boundary populations. There's a nice formula for the dissipation in terms of $p$ and $q.$
Definition. Given an open Markov process with detailed balanced equilibrium $q$ we define the dissipation for a population distribution $p$ to be
$$ \displaystyle{ D(p) = \frac{1}{2}\sum_{i,j} H_{ij}q_j \left( \frac{p_j}{q_j} - \frac{p_i}{q_i} \right)^2 } $$This formula is a bit tricky, but you'll notice it's quadratic in $p$ and it vanishes when $p = q.$ So, it's pretty nice.
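In code the formula is one line. Here's a Python sketch with a hypothetical detailed balanced 3-state example of my own (the diagonal terms contribute nothing, since the square vanishes when $i = j$):

```python
def dissipation(H, q, p):
    """D(p) = (1/2) * sum_{i,j} H_ij q_j (p_j/q_j - p_i/q_i)^2."""
    n = len(q)
    return 0.5 * sum(H[i][j] * q[j] * (p[j]/q[j] - p[i]/q[i])**2
                     for i in range(n) for j in range(n))

# A detailed balanced example: H_ji q_i = H_ij q_j with q = [1, 2, 1].
H = [[-2.0,  1.0,  0.0],
     [ 2.0, -2.0,  2.0],
     [ 0.0,  1.0, -2.0]]
q = [1.0, 2.0, 1.0]

print(dissipation(H, q, q))                # 0: vanishes at the equilibrium
print(dissipation(H, q, [1.2, 1.8, 1.1]))  # positive away from it
```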
Using this concept we can formulate a principle of minimum dissipation, and prove that nonequilibrium steady states obey this principle:
Definition. We say a population distribution $p: X \to \mathbb{R}$ obeys the principle of minimum dissipation with boundary population $b: X \to \mathbb{R}$ if $p$ minimizes $D(p)$ subject to the constraint that
$$ p_i = b_i \ \ \text{for all} \ \ i \in B $$Theorem 1. A population distribution $p$ is a steady state with $p_i = b_i$ for all boundary states $i$ if and only if $p$ obeys the principle of minimum dissipation with boundary population $b$.
Proof. This follows from Theorem 28 in A compositional framework for Markov processes.
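We can at least illustrate Theorem 1 numerically. In a hypothetical 3-state chain of my own (boundary states 0 and 2, interior state 1), the steady state found by solving the master equation at the interior state coincides with the minimizer of $D$ over the interior population — a sketch of the theorem's content, not its proof:

```python
H = [[-2.0,  1.0,  0.0],
     [ 2.0, -2.0,  2.0],
     [ 0.0,  1.0, -2.0]]
q = [1.0, 2.0, 1.0]               # detailed balanced equilibrium
b0, b2 = 2.0, 0.5                 # prescribed boundary populations

def dissipation(p):
    return 0.5 * sum(H[i][j] * q[j] * (p[j]/q[j] - p[i]/q[i])**2
                     for i in range(3) for j in range(3))

# Steady state: solve 0 = H_10 b0 + H_11 p1 + H_12 b2 for the interior p1.
p1_steady = -(H[1][0] * b0 + H[1][2] * b2) / H[1][1]

# Minimum dissipation: scan D over the interior population p1.
grid = [k * 0.001 for k in range(4000)]
p1_min = min(grid, key=lambda p1: dissipation([b0, p1, b2]))

print(p1_steady, p1_min)   # both close to 2.5
```

Since $D$ is quadratic in $p$, the minimization is a linear problem; the crude grid scan is just to keep the sketch dependency-free.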
How does dissipation compare with entropy production? To answer this, first we must ask: what really is entropy production? And: how does the equilibrium state $q$ show up in the concept of entropy production?
The relative entropy of two population distributions $p,q$ is given by
$$ \displaystyle{ I(p,q) = \sum_i p_i \ln \left( \frac{p_i}{q_i} \right) } $$It is well known that for a closed Markov process with $q$ as a detailed balanced equilibrium, the relative entropy is monotonically decreasing with time. This is due to an annoying sign convention in the definition of relative entropy: while entropy is typically increasing, relative entropy typically decreases. We could fix this by putting a minus sign in the above formula or giving this quantity $I(p,q)$ some other name. A lot of people call it the Kullback–Leibler divergence, but I have taken to calling it relative information. For more, see:
We say 'relative entropy' in the title, but then we explain why 'relative information' is a better name, and use that. More importantly, we explain why $I(p,q)$ has the physical meaning of free energy. Free energy tends to decrease, so everything is okay. For details, see Section 4.
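Here's a quick numerical check that $I(p(t),q)$ really does decrease along the master equation of a closed Markov process — a hypothetical 3-state chain with numbers of my own:

```python
from math import log

H = [[-2.0,  1.0,  0.0],
     [ 2.0, -2.0,  2.0],
     [ 0.0,  1.0, -2.0]]
q = [1.0, 2.0, 1.0]               # detailed balanced equilibrium

def rel_entropy(p, q):
    """I(p, q) = sum_i p_i ln(p_i / q_i)."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q))

# Euler-integrate the closed master equation dp/dt = Hp, recording I(p(t), q).
p = [2.0, 1.0, 1.0]               # same total population as q
dt, vals = 0.01, []
for _ in range(1000):
    vals.append(rel_entropy(p, q))
    flow = [sum(H[i][j] * p[j] for j in range(3)) for i in range(3)]
    p = [p[i] + dt * flow[i] for i in range(3)]

print(vals[0], vals[-1])   # decreases from ln 2 toward 0 as p approaches q
```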
Blake has a nice formula for how fast $I(p,q)$ decreases:
Theorem 2. Consider an open Markov process with $X$ as its set of states and $B$ as the set of boundary states. Suppose $p(t)$ obeys the open master equation and $q$ is a detailed balanced equilibrium. For any boundary state $i \in B,$ let
$$ \displaystyle{ \frac{Dp_i}{Dt} = \frac{dp_i}{dt} - \sum_{j \in X} H_{ij}p_j } $$measure how much $p_i$ fails to obey the master equation. Then we have
$$ \begin{array}{ccl} \displaystyle{ \frac{d}{dt} I(p(t),q) } &=& \displaystyle{ \sum_{i, j \in X} H_{ij} p_j \left( \ln(\frac{p_i}{q_i}) - \frac{p_i q_j}{p_j q_i} \right)} \\ \\ && \; + \; \displaystyle{ \sum_{i \in B} \frac{\partial I}{\partial p_i} \frac{Dp_i}{Dt} } \end{array} $$Moreover, the first term is less than or equal to zero.
Proof. For a self-contained proof, see Information geometry (part 15), which is coming up soon. It will be a special case of the theorems there. █

Blake compares this result to previous work by Schnakenberg:
The negative of Blake's first term is this:
$$ \displaystyle{ K(p) = -\sum_{i, j \in X} H_{ij} p_j \left( \ln(\frac{p_i}{q_i}) - \frac{p_i q_j}{p_j q_i} \right) } $$Under certain circumstances, this equals what Schnakenberg calls the entropy production. But a better name for this quantity might be free energy loss, since for a closed Markov process that's exactly what it is! In this case there are no boundary states, so the theorem above says $K(p)$ is the rate at which relative entropy — or in other words, free energy — decreases.
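For a closed process this interpretation is easy to check numerically: the derivative of $I(p,q)$ along the flow $\dot p = Hp$ equals $-K(p)$. A Python sketch, again with hypothetical numbers of my own:

```python
from math import log

H = [[-2.0,  1.0,  0.0],
     [ 2.0, -2.0,  2.0],
     [ 0.0,  1.0, -2.0]]
q = [1.0, 2.0, 1.0]               # detailed balanced equilibrium
p = [1.2, 1.8, 1.1]

def K(p):
    """Free energy loss: minus the first term in Theorem 2."""
    return -sum(H[i][j] * p[j] * (log(p[i]/q[i]) - p[i]*q[j]/(p[j]*q[i]))
                for i in range(3) for j in range(3))

def I(p):
    return sum(p[i] * log(p[i] / q[i]) for i in range(3))

# Derivative of I along dp/dt = Hp, by a central difference in the flow direction.
flow = [sum(H[i][j] * p[j] for j in range(3)) for i in range(3)]
h = 1e-6
dIdt = (I([p[i] + h * flow[i] for i in range(3)])
        - I([p[i] - h * flow[i] for i in range(3)])) / (2 * h)

print(K(p), -dIdt)   # K is nonnegative, and dI/dt = -K for a closed process
```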
For an open Markov process, things are more complicated. The theorem above shows that free energy can also flow in or out at the boundary, thanks to the second term in the formula.
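That boundary bookkeeping can also be checked numerically at a single instant. In a hypothetical 3-state open Markov process of my own (boundary states 0 and 2 held at constant populations), the direct time derivative of $I$ along the open dynamics equals Theorem 2's nonpositive first term plus its boundary term:

```python
from math import log

H = [[-2.0,  1.0,  0.0],
     [ 2.0, -2.0,  2.0],
     [ 0.0,  1.0, -2.0]]
q = [1.0, 2.0, 1.0]                # detailed balanced equilibrium
B = {0, 2}                         # boundary states, populations held constant
p = [1.2, 1.8, 1.1]

flow = [sum(H[i][j] * p[j] for j in range(3)) for i in range(3)]
dpdt = [0.0 if i in B else flow[i] for i in range(3)]   # open master equation
dIdp = [log(p[i] / q[i]) + 1.0 for i in range(3)]       # partial I / partial p_i

# Direct derivative of I along the open dynamics:
dIdt_direct = sum(dpdt[i] * dIdp[i] for i in range(3))

# Theorem 2's two terms; Dp_i/Dt = dp_i/dt - (Hp)_i at the boundary.
first = sum(H[i][j] * p[j] * (log(p[i]/q[i]) - p[i]*q[j]/(p[j]*q[i]))
            for i in range(3) for j in range(3))
boundary = sum(dIdp[i] * (dpdt[i] - flow[i]) for i in B)

print(dIdt_direct, first + boundary)   # equal, with first <= 0
```

Here the boundary term is positive: free energy is being pumped in at the boundary even as the first term destroys it.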
Anyway, the sensible thing is to compare a principle of 'minimum free energy loss' to the principle of minimum dissipation. The principle of minimum dissipation is true. How about the principle of minimum free energy loss? It turns out to be approximately true near equilibrium.
For this, consider the situation in which $p$ is near to the equilibrium distribution $q$ in the sense that
$$ \displaystyle{ \frac{p_i}{q_i} = 1 + \epsilon_i } $$for some small numbers $\epsilon_i.$ We collect these numbers in a vector called $\epsilon$.
Theorem 3. Consider an open Markov process with $X$ as its set of states and $B$ as the set of boundary states. Suppose $q$ is a detailed balanced equilibrium and let $p$ be arbitrary. Then
$$ K(p) = D(p) + O(\epsilon^2) $$where $K(p)$ is the free energy loss, $D(p)$ is the dissipation, $\epsilon_i$ is defined as above, and by $O(\epsilon^2)$ we mean a sum of terms of order $\epsilon_i^2.$
Proof. First take the free energy loss:
$$ \displaystyle{ K(p) = -\sum_{i, j \in X} H_{ij} p_j \left( \ln(\frac{p_i}{q_i}) - \frac{p_i q_j}{p_j q_i} \right)} $$Expanding the logarithm to first order in $\epsilon,$ we get
$$ \displaystyle{ K(p) = -\sum_{i, j \in X} H_{ij} p_j \left( \frac{p_i}{q_i} - 1 - \frac{p_i q_j}{p_j q_i} \right) + O(\epsilon^2) } $$Since $H$ is infinitesimal stochastic, $\sum_i H_{ij} = 0,$ so the second term in the sum vanishes, leaving
$$ \displaystyle{ K(p) = -\sum_{i, j \in X} H_{ij} p_j \left( \frac{p_i}{q_i} - \frac{p_i q_j}{p_j q_i} \right) \; + O(\epsilon^2) } $$or
$$ \displaystyle{ K(p) = -\sum_{i, j \in X} \left( H_{ij} p_j \frac{p_i}{q_i} - H_{ij} q_j \frac{p_i}{q_i} \right) \; + O(\epsilon^2) } $$Since $q$ is an equilibrium we have $\sum_j H_{ij} q_j = 0,$ so now the last term in the sum vanishes, leaving
$$ \displaystyle{ K(p) = -\sum_{i, j \in X} H_{ij} \frac{p_i p_j}{q_i} \; + O(\epsilon^2) } $$Next, take the dissipation
$$ \displaystyle{ D(p) = \frac{1}{2}\sum_{i,j} H_{ij}q_j \left( \frac{p_j}{q_j} - \frac{p_i}{q_i} \right)^2 } $$and expand the square, getting
$$ \displaystyle{ D(p) = \frac{1}{2}\sum_{i,j} H_{ij}q_j \left( \frac{p_j^2}{q_j^2} - 2\frac{p_i p_j}{q_i q_j} + \frac{p_i^2}{q_i^2} \right) } $$Since $H$ is infinitesimal stochastic, $\sum_i H_{ij} = 0.$ The first term is just this times a function of $j,$ summed over $j,$ so it vanishes, leaving
$$ \displaystyle{ D(p) = \frac{1}{2}\sum_{i,j} H_{ij}q_j \left( -2\frac{p_i p_j}{q_i q_j} + \frac{p_i^2}{q_i^2} \right) } $$Since $q$ is an equilibrium, $\sum_j H_{ij} q_j = 0.$ The last term above is this times a function of $i,$ summed over $i,$ so it vanishes, leaving
$$ \displaystyle{ D(p) = -\sum_{i,j} H_{ij}q_j \frac{p_i p_j}{q_i q_j} = -\sum_{i,j} H_{ij} \frac{p_i p_j}{q_i} } $$This matches what we got for $K(p),$ up to terms of order $O(\epsilon^2).$ █
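Numerically, the agreement and its breakdown are both easy to see: shrink the perturbation and the gap between $K$ and $D$ shrinks much faster than either quantity does. A Python sketch with a hypothetical detailed balanced 3-state chain and a perturbation direction of my own choosing:

```python
from math import log

H = [[-2.0,  1.0,  0.0],
     [ 2.0, -2.0,  2.0],
     [ 0.0,  1.0, -2.0]]
q = [1.0, 2.0, 1.0]              # detailed balanced equilibrium
eps = [0.2, -0.1, 0.1]           # perturbation direction: p_i = q_i (1 + t * eps_i)

def K(p):
    return -sum(H[i][j] * p[j] * (log(p[i]/q[i]) - p[i]*q[j]/(p[j]*q[i]))
                for i in range(3) for j in range(3))

def D(p):
    return 0.5 * sum(H[i][j] * q[j] * (p[j]/q[j] - p[i]/q[i])**2
                     for i in range(3) for j in range(3))

def gap(t):
    p = [q[i] * (1 + t * eps[i]) for i in range(3)]
    return abs(K(p) - D(p))

print(gap(1.0), gap(0.5))   # halving the perturbation shrinks the gap much faster
```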
In short: detailed balanced open Markov processes are governed by the principle of minimum dissipation, not minimum entropy production. Minimum dissipation agrees with minimum entropy production only near equilibrium.
You can read a discussion of this article on Azimuth, and make your own comments or ask questions there!
