
Okay, we're almost done! We've been studying Nash equilibria for zero-sum 2-player normal form games. We've proved a lot of things about them, but now we'll wrap up the story by proving this:
Grand Theorem. For every zero-sum 2-player normal form game, a Nash equilibrium exists. Moreover, a pair of mixed strategies \( (p,q)\) for the two players is a Nash equilibrium if and only if each strategy is a maximin strategy.
Let's remember what we've proved in Part 16 and Part 18:
Theorem 1. For any zero-sum 2-player normal form game,
\[ \max_{p'} \min_{q'} \; p' \cdot A q' \;\le\; \min_{q'} \max_{p'} \; p' \cdot A q' \]
where \( p'\) and \( q'\) range over mixed strategies for players A and B, and \( A\) is the payoff matrix for player A.
Theorem 2. Given a zero-sum 2-player normal form game for which a Nash equilibrium exists, we have
\[ \max_{p'} \min_{q'} \; p' \cdot A q' \;=\; \min_{q'} \max_{p'} \; p' \cdot A q' \qquad ★ \]
Theorem 3. If \( (p,q)\) is a Nash equilibrium for a zero-sum 2-player normal form game, then \( p\) is a maximin strategy for player A and \( q\) is a maximin strategy for player B.
Theorem 4. Suppose we have a zero-sum 2-player normal form game for which ★ holds. If \( p\) is a maximin strategy for player A and \( q\) is a maximin strategy for player B, then \( (p,q)\) is a Nash equilibrium.
Today we'll prove two more results. The first one is easy if you know some topology. The second one is the real heart of the whole subject:
Theorem 5. For every zero-sum 2-player normal form game, a maximin strategy exists for each player.
Theorem 6. For every zero-sum 2-player normal form game, ★ holds.
Putting all these results together, it's easy to get our final result:
Grand Theorem. For every zero-sum 2-player normal form game, a Nash equilibrium exists. Moreover, a pair of mixed strategies \( (p,q)\) for the two players is a Nash equilibrium if and only if each strategy is a maximin strategy.
Proof. By Theorem 6 we know that ★ holds. By Theorem 5 we know that there exist maximin strategies for each player, say \( p\) and \( q.\) Theorem 4 says that if \( p\) and \( q\) are maximin strategies and ★ holds, then \( (p,q)\) is a Nash equilibrium. So, a Nash equilibrium exists.
Moreover, if \( (p,q)\) is any Nash equilibrium, Theorem 3 says \( p\) and \( q\) are maximin strategies. Conversely, since ★ holds, Theorem 4 says that if \( p\) and \( q\) are maximin strategies, \( (p,q)\) is a Nash equilibrium. █
Okay, let's dive in and get to work:
Theorem 5. For every zero-sum 2-player normal form game, a maximin strategy exists for each player.
Proof. We'll prove this only for player A, since the proof for player B is similar. Remember that a maximin strategy for player A is a mixed strategy that maximizes A's security level, which is a function
\[ f(p) = \min_{q'} \; p \cdot A q' \]
defined on the set of mixed strategies for player A. So, we just need to show that this function \( f\) really has a maximum. To do this, we note that \( f\)
is a continuous function defined on a compact set. As mentioned at the start of Part 17, this guarantees that \( f\) has a maximum. █
I apologize if this proof is hard to understand. All this stuff is standard if you know some topology, and a huge digression if you don't, so I won't go through the details. This is a nice example of how topology can be useful in other subjects!
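If you like to compute, here's a little numerical illustration of Theorem 5 in Python, with a made-up payoff matrix that's not from this series. Since \( p \cdot A q'\) is linear in \( q',\) the minimum defining the security level is attained at one of B's pure strategies, i.e. at a column of \( A,\) so we can approximate the maximum of \( f\) by scanning the simplex of mixed strategies:

```python
import numpy as np

# A made-up 2x2 payoff matrix for player A (not from this series).
A = np.array([[ 1.0, -1.0],
              [-2.0,  2.0]])

def security_level(p, A):
    """f(p) = min over mixed q' of p . A q'.
    Linearity in q' means the minimum is attained at a pure
    strategy for B, i.e. at a column of A."""
    return (p @ A).min()

# Approximate the maximin strategy by scanning the 1-simplex of
# mixed strategies p = (t, 1-t).
ts = np.linspace(0.0, 1.0, 10001)
ps = np.stack([ts, 1.0 - ts], axis=1)
values = np.array([security_level(p, A) for p in ps])
best = ps[values.argmax()]
print("approximate maximin strategy for A:", best)
print("approximate security level:", values.max())
```

For this particular matrix the scan finds a maximin strategy near \( (2/3, 1/3)\) with security level near zero. The compactness argument in the proof is what guarantees the true maximum exists.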
Now we finally reach the heart of the whole subject: von Neumann's minimax theorem. Our proof will be a condensed version of the one in Andrew Colman's 1982 book Game Theory and its Applications in the Social and Biological Sciences.
Theorem 6. For every zero-sum 2-player normal form game,
\[ \max_{p'} \min_{q'} \; p' \cdot A q' \;=\; \min_{q'} \max_{p'} \; p' \cdot A q' \qquad ★ \]
holds.
Proof. Let's write
\[ V = \max_{p'} \min_{q'} \; p' \cdot A q' \]
and
\[ W = \min_{q'} \max_{p'} \; p' \cdot A q' \]
Our goal is to prove ★, which says \( V = W.\) By Theorem 1 we know
\[ V \le W \]
So, we just need to prove
\[ V \ge W \]
Here's how we will do this. We will prove
\[ W > 0 \implies V \ge 0 \]
Since we'll prove this for any game of the sort we're studying, it'll be true even if we add some real number \( c\) to each entry of the payoff matrix \( A_{ij}.\) Doing this adds \( c\) to the expected payoff \( p' \cdot A q',\) so it adds \( c\) to \( V\) and \( W.\) So, it will follow that
\[ W > -c \implies V \ge -c \]
for any real number \( c,\) and this implies
\[ V \ge W \]
(If we had \( V < W,\) we could choose \( c\) with \( V < -c < W\) and get a contradiction.)
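Before we do, here's a quick numerical sanity check of this shifting trick, again in Python with a made-up matrix that's not from this series. It approximates \( V\) and \( W\) by a grid scan, using the fact that the inner minimum and maximum are attained at pure strategies:

```python
import numpy as np

# A made-up 2x2 payoff matrix (not from this series).
A = np.array([[ 1.0, -1.0],
              [-2.0,  2.0]])
ts = np.linspace(0.0, 1.0, 2001)

def V(A):
    """max over p of min over q' of p . A q' (inner min over columns)."""
    return max((np.array([t, 1.0 - t]) @ A).min() for t in ts)

def W(A):
    """min over q of max over p' of p' . A q (inner max over rows)."""
    return min((A @ np.array([t, 1.0 - t])).max() for t in ts)

# Adding c to every entry adds c to each expected payoff p' . A q',
# since the probabilities sum to 1 -- so it shifts V and W by c.
c = 5.0
print(V(A + c) - V(A))   # approximately c
print(W(A + c) - W(A))   # approximately c
```

Here `A + c` adds \( c\) to every entry of the matrix, and both differences come out equal to \( c\) as claimed.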
So, let's get going.
Assume \( W > 0.\) To prove that \( V \ge 0,\) remember that
\[ V = \max_{p'} \min_{q'} \; p' \cdot A q' \]
To show this is greater than or equal to zero, we just need to find some mixed strategy \( p\) for player A such that
\[ \min_{q'} \; p \cdot A q' \;\ge\; 0 \]
In other words, we need to find \( p\) such that
\[ p \cdot A q' \ge 0 \qquad ★★ \]
for all mixed strategies \( q'\) for player B.
How can we find a \( p\) for which ★★ is true? The key is to consider the set
\[ C = \{ A q' : \; q' \textrm{ is a mixed strategy for player B} \} \subseteq \mathbb{R}^m \]
where \( m\) is the number of pure strategies for player A.
For example, if
then \( C\) looks like this:
Since \( W > 0,\) for any point \( Aq' \in C\) we have
\[ \max_{p'} \; p' \cdot A q' \;\ge\; \min_{q'} \max_{p'} \; p' \cdot A q' \;=\; W \;>\; 0 \]
so there must exist a mixed strategy \( p'\) with
\[ p' \cdot A q' > 0 \]
Since \( p' = (p'_1, \dots, p'_m)\) is a mixed strategy, we have \( p'_i \ge 0\) for all \( 1 \le i \le m.\) But since we've just seen
\[ \sum_{i=1}^m p'_i \, (A q')_i \;=\; p' \cdot A q' \;>\; 0 \]
at least one of the numbers \( (Aq')_i\) must be positive. In other words, if we define a set \( N\) by
\[ N = \{ x \in \mathbb{R}^m : \; x_i \le 0 \textrm{ for all } 1 \le i \le m \} \]
then \( Aq'\) can't be in this set:
\[ A q' \notin N \]
So, we've seen that no point in \( C\) can be in \( N\):
\[ C \cap N = \emptyset \]
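Here's a small numerical check of this disjointness in Python, with a made-up matrix chosen so that \( W > 0\) (it's not the example from this post): every sampled point of \( C\) has a positive coordinate, so it stays out of \( N\):

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up payoff matrix with W > 0 (not the post's example).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Sample mixed strategies q' for player B and form the points A q' of C.
for _ in range(1000):
    q = rng.dirichlet([1.0, 1.0])
    x = A @ q
    # x lies in N iff every coordinate is <= 0; since W > 0,
    # some coordinate must be positive, so x avoids N.
    assert x.max() > 0.0
print("no sampled point of C lies in N")
```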
Here's what it looks like in our example:
Now the trick is to:
• let \( r\) be a point in \( N\) that's as close as possible to \( C,\)
and
• let \( s\) be a point in \( C\) that's as close as possible to \( r.\)
We need to use a bit of topology to be sure these points exist, since it means finding the minima of certain functions (namely, distances). But let's not worry about that now! We'll complete the proof with two lemmas:
Lemma 1. \( r \cdot (s - r) = 0,\) \( s_i - r_i \ge 0\) for all \( i,\) and \( s_i - r_i > 0\) for at least one \( i.\)
Lemma 2. If \( Aq'\) is any point in \( C,\) then
\[ A q' \cdot (s - r) \;\ge\; s \cdot (s - r) \]
Here's what the points \( s\) and \( r\) and the vector \( s - r\) look like in our example:
Check to see that Lemmas 1 and 2 are true in this example! We'll prove the lemmas later; right now let's see how they get the job done.
First, by Lemma 1, the numbers \( s_i - r_i\) are nonnegative and at least one is positive. So, we can define a mixed strategy \( p\) for player A by defining
\[ p_i = \frac{s_i - r_i}{c} \]
where \( c > 0\) is a number chosen to make sure \( \sum_i p_i = 1.\) (Remember, the probabilities \( p_i\) must be \( \ge 0\) and must sum to 1.) In other words,
\[ c = \sum_{i=1}^m (s_i - r_i) \]
Now, for any mixed strategy \( q'\) for player B, we have \( Aq' \in C,\) and thus by Lemmas 1 and 2
\[ A q' \cdot (s - r) \;\ge\; s \cdot (s - r) \;=\; (s - r) \cdot (s - r) \;>\; 0 \]
Here the first inequality is Lemma 2, the equality holds because \( r \cdot (s - r) = 0,\) and the last inequality holds because \( s_i - r_i > 0\) for at least one \( i.\) Dividing by \( c,\) we get
\[ p \cdot A q' \ge 0 \]
for all \( q'.\) But this is ★★, which is what we wanted to prove! So we are done! █
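To see the whole construction at work, here's a Python sketch using a made-up matrix with \( W > 0\) (not the example from this post). It approximates \( r\) and \( s\) by grid search (projecting a point onto \( N\) just clips each coordinate at zero), then builds \( p\) from \( s - r\) and checks ★★:

```python
import numpy as np

# A made-up payoff matrix with W > 0 (not the post's example).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Grid of mixed strategies q' = (t, 1-t); the rows of `points` sweep out C.
ts = np.linspace(0.0, 1.0, 100001)
qs = np.stack([ts, 1.0 - ts])
points = (A @ qs).T

# r: the point of N closest to C. Projecting x onto the
# nonpositive orthant N just clips each coordinate at 0.
proj = np.minimum(points, 0.0)
i = np.linalg.norm(points - proj, axis=1).argmin()
r = proj[i]

# s: the point of C closest to r.
j = np.linalg.norm(points - r, axis=1).argmin()
s = points[j]

# The mixed strategy p built from s - r, normalized to sum to 1.
d = s - r
p = d / d.sum()
print("p =", p)
print("worst-case expected payoff:", (p @ A).min())  # should be >= 0
```

For this matrix the grid search gives \( r = (0,0)\) and \( s = (1.8,\, 0.6),\) hence \( p = (3/4,\, 1/4),\) whose expected payoff against every \( q'\) works out to \( 1.5 \ge 0,\) exactly as the proof promises.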
I will give the proofs of Lemmas 1 and 2 in the next part.
You can also read comments on Azimuth, and make your own comments or ask questions there!
