SURPRISE: it's called SURPRISAL!
This is a well-known concept in information theory. It's also called 'information content'.
Let's see why. First, let's remember the setup. We have a manifold
$$ \displaystyle{ Q = \{ q \in \mathbb{R}^n : \; q_i > 0, \; \sum_{i=1}^n q_i = 1 \} } $$
whose points \(q\) are nowhere vanishing probability distributions on the set \( \{1, \dots, n\}\). We have a function
$$ f \colon Q \to \mathbb{R} $$
called the Shannon entropy, defined by
$$ \displaystyle{ f(q) = - \sum_{j = 1}^n q_j \ln q_j } $$
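In case it helps to compute with this concretely, here's a minimal Python sketch of the Shannon entropy as defined above; the function name and the sample distributions are just illustrative choices:

```python
import numpy as np

def shannon_entropy(q):
    """Shannon entropy f(q) = -sum_j q_j ln(q_j), using natural logarithms.
    Assumes q is a probability distribution with strictly positive entries."""
    q = np.asarray(q, dtype=float)
    return -np.sum(q * np.log(q))

print(shannon_entropy([0.5, 0.5]))   # ln 2 ≈ 0.693
print(shannon_entropy([0.25] * 4))   # ln 4 ≈ 1.386
```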
For each point \(q \in Q\) we define a cotangent vector \( p \in T^\ast_q Q\) by
$$ p = (df)_q $$
As mentioned last time, this is the analogue of momentum in probability theory. In the second half of this post I'll say more about exactly why. But first let's compute it and see what it actually equals!
Let's start with a naive calculation, acting as if the probabilities \( q_1, \dots, q_n\) were a coordinate system on the manifold \( Q\). We get
$$ \displaystyle{ p_i = \frac{\partial f}{\partial q_i} }$$
so using the definition of the Shannon entropy we have
$$ \begin{array}{ccl} p_i &=& \displaystyle{ -\frac{\partial}{\partial q_i} \sum_{j = 1}^n q_j \ln q_j }\\ \\ &=& \displaystyle{ -\frac{\partial}{\partial q_i} \left( q_i \ln q_i \right) } \\ \\ &=& -\ln(q_i) - 1 \end{array} $$
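If you'd like to sanity-check this naive calculation numerically, here is a small sketch (my own illustration, not part of the argument) that treats \( q_1, \dots, q_n\) as independent coordinates on \( \mathbb{R}^n\), ignoring the constraint that they sum to 1, and compares a finite-difference partial derivative of \( f\) with \( -\ln(q_i) - 1\):

```python
import numpy as np

def f(q):
    return -np.sum(q * np.log(q))   # Shannon entropy, extended to any positive vector

q = np.array([0.1, 0.2, 0.3, 0.4])
i, h = 2, 1e-6

# Perturb q_i alone, as if the q_j were unconstrained coordinates on R^n
e_i = np.zeros(len(q)); e_i[i] = 1.0
numeric = (f(q + h * e_i) - f(q - h * e_i)) / (2 * h)
print(numeric, -np.log(q[i]) - 1)   # both ≈ 0.204
```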
Now, the quantity \( -\ln q_i\) is called the surprisal of the probability distribution at \( i\). Intuitively, it's a measure of how surprised you should be if an event of probability \( q_i\) occurs. For example, if you flip a fair coin and it lands heads up, your surprisal is ln 2. If you flip 100 fair coins and they all land heads up, your surprisal is 100 times ln 2.
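To make the coin examples concrete, here's a tiny illustrative computation (the helper name is mine):

```python
import math

def surprisal(q):
    """Surprisal -ln(q) of an event with probability q."""
    return -math.log(q)

print(surprisal(0.5))          # one fair coin landing heads: ln 2 ≈ 0.693
print(surprisal(0.5 ** 100))   # 100 fair coins all landing heads: 100 ln 2 ≈ 69.3
```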
Of course 'surprise' is a psychological term, not a term from math or physics, so we shouldn't take it too seriously here. We can derive the concept of surprisal from three axioms:

1. The surprisal of an event depends only on its probability \( q \in (0,1]\), via some function \( F(q)\).

2. \( F\) is a continuous, decreasing function of \( q\): less probable events are more surprising.

3. Surprisal is additive for independent events: \( F(q q') = F(q) + F(q')\).
It follows from work on Cauchy's functional equation that \( F\) must be of this form:
$$ F(q) = - \log_b q $$
for some constant \( b > 1\). We shall choose \( b\), the base of our logarithms, to be \( e\). We had a similar freedom of choice in defining the Shannon entropy, and we will use base \( e\) for both to be consistent. If we chose something else, it would change the surprisal and the Shannon entropy by the same constant factor.
So far, so good. But what about the irksome "-1" in our formula?
$$ p_i = -\ln(q_i) - 1 $$
Luckily it turns out we can just get rid of this! The reason is that the probabilities \( q_i\) are not really coordinates on the manifold \( Q\). They're not independent: they must sum to 1. So, when we change them a little, the sum of their changes must vanish. Putting it more technically, the tangent space \( T_q Q\) is not all of \( \mathbb{R}^n\), but just the subspace consisting of vectors whose components sum to zero:
$$ \displaystyle{ T_q Q = \{ v \in \mathbb{R}^n : \; \sum_{j = 1}^n v_j = 0 \} }$$
The cotangent space is the dual of the tangent space. The dual of a subspace
$$ S \subseteq V$$
is the quotient space
$$ V^\ast/\{ \ell \colon V \to \mathbb{R} : \; \forall v \in S \; \, \ell(v) = 0 \} $$
The cotangent space \( T_q^\ast Q\) thus consists of linear functionals \( \ell \colon \mathbb{R}^n \to \mathbb{R}\) modulo those that vanish on vectors \( v\) obeying the equation
$$ \displaystyle{ \sum_{j = 1}^n v_j = 0 } $$
Of course, we can identify the dual of \( \mathbb{R}^n\) with \( \mathbb{R}^n\) in the usual way, using the Euclidean inner product: a vector \( u \in \mathbb{R}^n\) corresponds to the linear functional
$$ \displaystyle{ \ell(v) = \sum_{j = 1}^n u_j v_j } $$
From this, you can see that a linear functional \( \ell\) vanishes on all vectors \( v\) obeying the equation
$$ \displaystyle{ \sum_{j = 1}^n v_j = 0 } $$
if and only if its corresponding vector \( u\) has
$$ u_1 = \cdots = u_n $$
So, we get
$$ T^\ast_q Q \cong \mathbb{R}^n/\{ u \in \mathbb{R}^n : \; u_1 = \cdots = u_n \}$$
In words: we can describe cotangent vectors to \( Q\) as lists of n numbers if we want, but we have to remember that adding the same constant to each number in the list doesn't change the cotangent vector!
This suggests that our naive formula
$$ p_i = -\ln(q_i) - 1 $$
is on the right track, but we're free to drop the constant \( -1\) if we want! And that's true.
To check this rigorously, we need to show
$$ \displaystyle{ p(v) = -\sum_{j=1}^n \ln(q_j) v_j} $$
for all \( v \in T_q Q\). We compute:
$$ \begin{array}{ccl} p(v) &=& df(v) \\ \\ &=& v(f) \\ \\ &=& \displaystyle{ \sum_{j=1}^n v_j \, \frac{\partial f}{\partial q_j} } \\ \\ &=& \displaystyle{ \sum_{j=1}^n v_j (-\ln(q_j) - 1) } \\ \\ &=& \displaystyle{ -\sum_{j=1}^n \ln(q_j) v_j } \end{array} $$
where in the second to last step we used our earlier calculation:
$$ \displaystyle{ \frac{\partial f}{\partial q_i} = -\frac{\partial}{\partial q_i} \sum_{j = 1}^n q_j \ln q_j = -\ln(q_i) - 1 } $$
and in the last step we used
$$ \displaystyle{ \sum_j v_j = 0 } $$
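Here is a short numerical sketch of this whole argument (purely illustrative, assuming only the formulas above): it checks that the directional derivative of the Shannon entropy along a tangent vector \( v\) with \( \sum_j v_j = 0\) equals \( -\sum_j \ln(q_j) v_j\), and that adding the same constant to every surprisal leaves this pairing unchanged:

```python
import numpy as np

def f(q):
    return -np.sum(q * np.log(q))   # Shannon entropy

q = np.array([0.1, 0.2, 0.3, 0.4])
v = np.array([0.03, -0.01, 0.02, -0.04])   # tangent vector: components sum to zero

# Directional derivative df(v) at q, by central finite differences
h = 1e-6
df_v = (f(q + h * v) - f(q - h * v)) / (2 * h)

surprisal = -np.log(q)
print(df_v, surprisal @ v)                    # these agree (≈ 0.0404)
print(surprisal @ v, (surprisal + 7.0) @ v)   # shifting every component by a constant changes nothing, since sum(v) = 0
```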
Now let's take stock of where we are. We can fill in the question marks in the charts from last time, and combine those charts while we're at it.
|   | Classical Mechanics | Thermodynamics | Probability Theory |
|---|---------------------|----------------|--------------------|
| q | position | extensive variables | probabilities |
| p | momentum | intensive variables | surprisals |
| S | action | entropy | Shannon entropy |
What's going on here? In classical mechanics, action is minimized (or at least the system finds a critical point of the action). In thermodynamics, entropy is maximized. In the maximum entropy approach to probability, Shannon entropy is maximized. This leads to a mathematical analogy that's quite precise. For classical mechanics and thermodynamics, I explained it in earlier posts, which may give a more approachable introduction to what I'm doing now: here I'm bringing probability theory into the analogy, with a big emphasis on symplectic and contact geometry.
Let me spell out a bit of the analogy more carefully:
Classical mechanics. In classical mechanics, we have a manifold \( Q\) whose points are positions of a particle. There's an important function on this manifold:
$$ f \colon Q \to \mathbb{R}$$
What's this? It's basically action: \( f(q)\) is the action of the least-action path from the position \( q_0\) at some earlier time \( t_0\) to the position \( q\) at time 0. The Hamilton–Jacobi equations say the particle's momentum \( p\) at time 0 is given by
$$ p = (df)_q $$
Thermodynamics. In thermodynamics, we have a manifold \( Q\) whose points are equilibrium states of a system. The coordinates of a point \( q \in Q\) are extensive variables, such as energy and volume. There's an important function on this manifold: the entropy
$$ f \colon Q \to \mathbb{R}$$
There is a cotangent vector \( p\) at the point \( q\) given by
$$ p = (df)_q $$
The components of this vector are the intensive variables corresponding to the extensive variables.
Probability theory. In probability theory, we have a manifold \( Q\) whose points are nowhere vanishing probability distributions on a finite set. The coordinates of a point \( q \in Q\) are probabilities. There's an important function on this manifold: the Shannon entropy
$$ f \colon Q \to \mathbb{R}$$
There is a cotangent vector \( p\) at the point \( q\) given by
$$ p = (df)_q $$
The components of this vector are the surprisals corresponding to the probabilities.
In all three cases, \( T^\ast Q\) is a symplectic manifold and imposing the constraint \( p = (df)_q\) picks out a Lagrangian submanifold
$$ \Lambda = \{ (q,p) \in T^\ast Q: \; p = (df)_q \} $$
There is also a contact manifold \( T^\ast Q \times \mathbb{R}\), where the extra dimension comes with an extra coordinate \( S\) that means action, entropy or Shannon entropy, depending on the example.
We can then decree that \( S = f(q)\) along with \( p = (df)_q\), and these constraints pick out a Legendrian submanifold
$$ \Sigma = \{(q,p,S) \in T^\ast Q \times \mathbb{R} : \; S = f(q), \; p = (df)_q \} $$
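As a purely illustrative sketch assuming nothing beyond the formulas above, here is how one could produce points \( (q, p, S)\) of \( \Sigma\) in Python, representing the cotangent vector \( p\) by its list of surprisals, remembered only up to adding a constant to every component:

```python
import numpy as np

def point_on_sigma(q):
    """Given a nowhere vanishing probability distribution q, return (q, p, S) with
    p = (df)_q represented by the surprisals and S = f(q), the Shannon entropy."""
    q = np.asarray(q, dtype=float)
    p = -np.log(q)              # surprisals; as a cotangent vector, only defined up to an overall additive constant
    S = -np.sum(q * np.log(q))  # Shannon entropy
    return q, p, S

q, p, S = point_on_sigma([0.1, 0.2, 0.3, 0.4])
print(p)   # [2.303, 1.609, 1.204, 0.916]
print(S)   # ≈ 1.280
```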
There's a lot more to do with these ideas, and I'll continue next time.
You can read a discussion of this article on Azimuth, and make your own comments or ask questions there!