Random Permutations (Part 5)

November 30, 2019


John Baez

To go further in understanding the properties of random permutations, we need to use 'bivariate' generating functions — that is, generating functions involving 2 variables. There's a good introduction to these in Flajolet and Sedgewick's book Analytic Combinatorics, which I'll refer to below.

I will try to explain these in a way that takes advantage of Joyal's work on species. Then I'll use them to tackle this puzzle from Part 3:

Puzzle 5. What is the expected number of cycles in a random permutation of an \(n\)-element set?

There are two main kinds of generating functions in combinatorics, which apply to functors from two different groupoids to \(\textrm{FinSet}\). If \(S\) is the groupoid of finite sets and bijections, a functor \(G \colon S \to \textrm{FinSet}\) has an exponential generating function

\[ \displaystyle{ |G|(z) = \sum_{n \ge 0} |G(n)| \, \frac{z^n}{n!} } \]

while if \(\mathbb{N}\) is the set of natural numbers, viewed as a groupoid with only identity morphisms, a functor \(G \colon \mathbb{N} \to \textrm{FinSet}\) has an ordinary generating function

\[ \displaystyle{ |G|(z) = \sum_{n \ge 0} |G(n)| \, z^n } \]

Why is there an \(n!\) in the denominator in the first case but not the second? It comes from the theory of groupoid cardinality; using this theory we could define generating functions for functors \(G \colon X \to \textrm{FinSet}\) for any groupoid \(X\). But we only need these two cases so I'll resist the temptation to explain such generalities.

I've spent a lot more time thinking about functors \(G \colon S \to \textrm{FinSet}\) and their exponential generating functions than functors \(G \colon \mathbb{N} \to \textrm{FinSet}\) and their ordinary generating functions. The former are quite rich and interesting; the latter seem a bit pathetic in comparison. A functor \(G \colon \mathbb{N} \to \textrm{FinSet}\) is determined up to isomorphism by the cardinality of \(G(n)\) for each \(n\), so it's really just a function from the natural numbers to itself. Still, converting it into a formal power series — its ordinary generating function — is a useful trick for solving all sorts of problems. So we shouldn't scorn the ordinary generating functions. Furthermore, we can restrict any functor \(S \to \textrm{FinSet}\) to a functor \(\mathbb{N} \to \textrm{FinSet}\), or left Kan extend any functor \(\mathbb{N} \to \textrm{FinSet}\) to a functor \(S \to \textrm{FinSet}\), and this sets up some nice relations between exponential and ordinary generating functions.

The fun really starts when we combine the two ideas. Suppose we have a functor

\[ G \colon S \times \mathbb{N} \to \textrm{FinSet} \]

Then we can define its generating function \(|G|\), which is a formal power series in two variables \(z\) and \(u\), by

\[ \displaystyle{ |G|(z,u) = \sum_{n \ge 0} \sum_{k \ge 0} |G(n,k)| \, \frac{z^n}{n!} u^k } \]

This is called a bivariate generating function. All the usual tricks with generating functions work for this kind too. The systematic mathematician in me wants to explain all these tricks, but that would take a while. So with no further ado let me just illustrate them by using them to answer a question:

Puzzle 5. What is the expected number of cycles in a random permutation of an \(n\)-element set?

This is Example III.4 in Analytic Combinatorics, but I'll explain the method from scratch.

We start by making up a functor

\[ G \colon S \times \mathbb{N} \to \textrm{FinSet} \]

that assigns to the pair \((n,k)\) the set of permutations of \(n\) with \(k\) cycles. Note that we're treating \(n\) as a finite set but \(k\) as a natural number.
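Just to make this concrete, here's a minimal brute-force sketch in Python that computes \(|G(n,k)|\) by listing all permutations of \(\{0,\dots,n-1\}\) and counting their cycles. (The helper names cycle_count and G are just my own labels for this illustration.)

```python
from itertools import permutations

def cycle_count(perm):
    """Number of cycles of a permutation given as a tuple p with p[i] = image of i."""
    seen = set()
    cycles = 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return cycles

def G(n, k):
    """|G(n,k)|: the number of permutations of an n-element set with exactly k cycles."""
    return sum(1 for p in permutations(range(n)) if cycle_count(p) == k)

# The 6 permutations of a 3-element set split as 2 + 3 + 1: two 3-cycles,
# three permutations with one transposition and one fixed point, and the identity.
print([G(3, k) for k in range(4)])   # [0, 2, 3, 1]
```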

What is the generating function \(|G|\)? To answer that it's best to make up a functor

\[ G_k \colon S \to \textrm{FinSet} \]

that assigns to each finite set \(n\) the set of permutations of \(n\) with \(k\) cycles. We have

\[ G(n,k) \cong G_k(n) \]

so

\[ \begin{array}{ccl} |G|(z,u) &=& \displaystyle{ \sum_{n \ge 0} \sum_{k \ge 0} |G(n,k)| \, \frac{z^n}{n!} u^k } \\ \\ &=& \displaystyle{ \sum_{k \ge 0} |G_k|(z) \; u^k } \end{array} \]

Thus, to figure out \(|G|\) we just need to know \(|G_k|\) for each \(k\).

To do this, we'll write \(G_k\) as a composite of simpler species. As mentioned in Part 2, we can compose species: a \(G \circ H\) structure on \(X\) is a way of writing \(X\) as a union of disjoint parts, putting a \(G\)-structure on the set of parts, and putting an \(H\)-structure on each part. Furthermore, we have

\[ |G \circ H| = |G| \circ |H| \]

There is a species \(\textrm{Exp}_k\), 'being a \(k\)-element set', defined by setting \(\textrm{Exp}_k(X) = 1\) (a one-element set) if \(X\) has \(k\) elements and \(\textrm{Exp}_k(X) = \emptyset\) otherwise. We have

\[ |\textrm{Exp}_k|(z) = \frac{z^k}{k!} \]

If \(\textrm{Cyc} \colon S \to \textrm{FinSet}\) is the species of cyclic orderings, an \(\textrm{Exp}_k \circ \textrm{Cyc}\)-structure on a finite set is a way to chop it up into \(k\) unlabeled pieces and put a cyclic ordering on each piece. But this is the same as a permutation of that set with \(k\) cycles! Thus

\[ G_k \cong \textrm{Exp}_k \circ \textrm{Cyc} \]

and it follows that

\[ \displaystyle{ |G_k| = \frac{|\textrm{Cyc}|^k}{k!} } \]
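Here's a quick check of this at the level of power series, continuing the Python sketch above: we build a truncation of the exponential generating function of \(\textrm{Cyc}\) from brute-force counts (a cyclic ordering on an \(n\)-element set is the same as a permutation of it with exactly one cycle), raise it to the \(k\)-th power, divide by \(k!\), and compare with \(|G(n,k)|\). The truncation order N and the helper mul are again my own names.

```python
from fractions import Fraction
from math import factorial

N = 7   # we keep coefficients of z^0 through z^N

def mul(a, b):
    """Product of two power series truncated at order N, given as coefficient lists."""
    c = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

# Truncated exponential generating function of Cyc, reusing G(n, k) from above:
# the number of cyclic orderings on an n-element set is G(n, 1).
cyc = [Fraction(G(n, 1), factorial(n)) for n in range(N + 1)]

for k in range(4):
    # |G_k| = |Cyc|^k / k!
    gk = [Fraction(1)] + [Fraction(0)] * N    # start with the constant series 1
    for _ in range(k):
        gk = mul(gk, cyc)
    gk = [c / factorial(k) for c in gk]
    # n! times the z^n coefficient should count permutations of n with k cycles
    assert all(gk[n] * factorial(n) == G(n, k) for n in range(N + 1))
```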

But the generating function for cyclic orderings is famous: there are \((n-1)!\) cyclic orderings on an \(n\)-element set when \(n \ge 1\), and none on the empty set, so

\[ \begin{array}{ccl} |\textrm{Cyc}|(z) &=& \displaystyle{ \sum_{n \ge 1} |\textrm{Cyc}(n)| \frac{z^n}{n!} } \\ \\ &=& \displaystyle{ \sum_{n \ge 1} \frac{z^n}{n} } \\ \\ &=& -\ln(1-z) \end{array} \]

Thus, we get

\[ \displaystyle{ |G_k| = \frac{1}{k!} (-\ln(1-z))^k } \]

This is fairly complicated: if we actually wrote this out as a power series, the coefficients would involve Stirling numbers of the first kind. That shouldn't be surprising, since these numbers count the permutations of \(n\) having \(k\) cycles. But it's easier to work with \(|G|\), which is why this 'bivariate generating function' idea is a good thing. Let's see why:

\[ \begin{array}{ccl} |G|(z,u) &=& \displaystyle{ \sum_{k \ge 0} |G_k|(z) \;u^k } \\ \\ &=& \displaystyle{ \sum_{k \ge 0} \frac{1}{k!} (-\ln(1-z))^k u^k } \\ \\ &=& \exp\left( -u\ln(1-z) \right) \\ \\ &=& (1-z)^{-u} \end{array} \]

See? It's much prettier to combine all the \(|G_k|\) into a single \(|G|\) this way.
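Here's one more Python check of this identity, reusing the brute-force \(G(n,k)\) from above. By the generalized binomial theorem, \(n!\) times the coefficient of \(z^n\) in \((1-z)^{-u}\) is the rising factorial \(u(u+1)\cdots(u+n-1)\), so its coefficient of \(u^k\) should be \(|G(n,k)|\). (The helper rising_factorial_poly is my own name for this sketch.)

```python
def rising_factorial_poly(n):
    """Coefficients in u of u(u+1)(u+2)...(u+n-1), constant term first."""
    poly = [1]                       # the empty product
    for m in range(n):               # multiply the polynomial by (u + m)
        new = [0] * (len(poly) + 1)
        for i, c in enumerate(poly):
            new[i + 1] += c          # the  u * poly  part
            new[i] += m * c          # the  m * poly  part
        poly = new
    return poly

# The u^k coefficient of u(u+1)...(u+n-1) should be |G(n,k)|.
for n in range(6):
    poly = rising_factorial_poly(n)          # n+1 coefficients
    assert all(poly[k] == G(n, k) for k in range(n + 1))
```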

The function \(|G|\) is not exactly what we need to solve our puzzle, but we're very close now. We want to know the average number of cycles of a random permutation of an \(n\)-element set. Call this \(a_n\). Since there are \(|G(n,k)|\) permutations with \(k\) cycles, we have

\[ \displaystyle{ a_n = \sum_{k \ge 0} k \frac{|G(n,k)|}{n!} } \]

But notice, this is something we can compute using \(|G|\), since

\[ \begin{array}{ccl} \displaystyle{\left. \frac{\partial}{\partial u} |G|(z,u) \right|_{u = 1} } &=& \displaystyle{ \left. \frac{\partial}{\partial u} \sum_{n \ge 0} \sum_{k \ge 0} |G(n,k)| \, \frac{z^n}{n!} u^k \right|_{u = 1} } \\ \\ &=& \displaystyle{ \sum_{n \ge 0} \sum_{k \ge 0} k |G(n,k)| \, \frac{z^n}{n!} } \\ \\ &=& \displaystyle{ \sum_{n \ge 0} a_n \, z^n } \end{array} \]

So, to work out the average \(a_n\) we just differentiate \(|G|\) with respect to \(u\), set \(u = 1\), and work out the coefficients of its Taylor series as a function of \(z\). Let's do it!

We saw

\[ \displaystyle{ |G|(z,u) = (1-z)^{-u} } \]

For those whose calculus is rusty this could be the hardest step in the whole computation:

\[ \displaystyle{ \frac{\partial}{\partial u} |G|(z,u) = -\ln(1-z) \, (1-z)^{-u} } \]

This gives

\[ \displaystyle{ \left. \frac{\partial}{\partial u} |G|(z,u) \right|_{u =1} = -\frac{\ln(1-z)}{1-z} } \]

Now we just need to work out the Taylor series of this function. I take it back: this is the hardest step. We have

\[ -\frac{\ln(1-z)}{1-z} = \sum_{n \ge 1} H_n z^n \]

where \(H_n\) is the harmonic number

\[ H_n = \frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n} \]

I'll show this in a minute. Given this, we get

\[ a_n = H_n \]

Or, in something more like plain English: the average number of cycles in a random permutation of an \(n\)-element set is

\[ \frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n} \]
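For skeptics, here's a direct brute-force check in Python, reusing cycle_count from the first sketch: the exact average number of cycles over all permutations of an \(n\)-element set really is the \(n\)th harmonic number, at least for small \(n\).

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def average_cycles(n):
    """Exact average number of cycles over all permutations of an n-element set,
    reusing cycle_count from the first sketch."""
    total = sum(cycle_count(p) for p in permutations(range(n)))
    return Fraction(total, factorial(n))

def harmonic(n):
    """The nth harmonic number as an exact fraction."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

for n in range(1, 8):
    assert average_cycles(n) == harmonic(n)
```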

Asymptotically this is just \(\ln n\). For a better estimate, use \(\ln n + \gamma\), where \(\gamma\) is Euler's constant.

So, we're done! This offers more evidence that a random permutation of a huge set has rather few, mostly rather large cycles. For example, suppose we take a random permutation of a set with a million elements. Then our calculation shows that on average it has only about 14 cycles. From last time, we know that the median size of the largest cycle is roughly 606,000, while its mean size is roughly 624,000.
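Here's the arithmetic behind that 'about 14', as a short numerical check in Python (with a few digits of Euler's constant hard-coded):

```python
from math import fsum, log

n = 10**6
H_n = fsum(1.0 / i for i in range(1, n + 1))   # the millionth harmonic number
gamma = 0.5772156649                            # Euler's constant, truncated
print(H_n)             # about 14.39
print(log(n) + gamma)  # about 14.39
```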

But how do we do that last step in our computation — the one that led to the harmonic numbers? For this, note that if we multiply any formal power series

\[ f(z) = \sum_{n=0}^\infty c_n z^n\]

by the geometric series

\[ \frac{1}{1-z} = \sum_{n=0}^\infty z^n \]

we get a series whose coefficients are the partial sums of the sequence \(c_n\):

\[ \frac{1}{1-z} f(z) = \sum_{n=0}^\infty \left(\sum_{i=0}^n c_i \right) z^n \]

Since the power series for \(-\ln(1-z)\) has coefficients \(c_n = 1/n\) starting at \(n = 1\), when we multiply it by \(1/(1-z)\) we get a power series whose coefficients are the harmonic numbers!
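Here's that last trick as a few lines of Python: multiply the truncated series for \(-\ln(1-z)\) by the truncated geometric series and compare the result with the harmonic numbers. (The exact rational arithmetic via Fraction and the truncation order N are my own choices for this sketch.)

```python
from fractions import Fraction

N = 10
log_series = [Fraction(0)] + [Fraction(1, n) for n in range(1, N + 1)]  # -ln(1-z) up to z^N
geom = [Fraction(1)] * (N + 1)                                          # 1/(1-z) up to z^N

# coefficient of z^n in the product of the two truncated series
product = [sum(log_series[i] * geom[n - i] for i in range(n + 1)) for n in range(N + 1)]

harmonic_numbers = [sum(Fraction(1, i) for i in range(1, n + 1)) for n in range(N + 1)]
assert product == harmonic_numbers   # H_0 = 0, H_1 = 1, H_2 = 3/2, H_3 = 11/6, ...
```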

Some readers may be disappointed at how a calculation that started so nobly with functors devolved into some clever tricks with calculus. Others may feel the opposite way.

Personally I like the combination of functors and calculus. I'd like to get more of the calculations to proceed at the level of functors from \(S \times \mathbb{N}\) to \(\textrm{FinSet}\), or even from \(S \times S\) to \(\textrm{FinSet}\). But the main 'trick' here, the method of computing the mean of a family of probability distributions by differentiating a bivariate generating function with respect to \(u\) and then setting \(u = 1\), is very pretty. And it's not a one-off: it's quite generally useful. See Analytic Combinatorics for more examples!


You can also read comments on the n-Category Café, and make your own comments or ask questions there!


© 2019 John Baez