
Suppose we have a 2-player normal form game. As usual, we assume:
• Player A has some set of choices \( i = 1, \dots, m.\)
• Player B has some set of choices \( j = 1, \dots, n.\)
• If player A makes choice \( i\) and player B makes choice \( j,\) the payoff to player A is \( A_{ij}\) and the payoff to player B is \( B_{ij}.\)
Earlier we studied 'pure strategies', where the players make the same choice each time. Now we'll study 'mixed strategies', where the players make their choices randomly. I want to show you that there's always a Nash equilibrium when we allow mixed strategies—even in games like rock, paper, scissors that don't have a Nash equilibrium with pure strategies!
But to do this, we need to define Nash equilibria for mixed strategies. And before that, we need to define mixed strategies!
First let's make up a name for the set of player A's choices:
\[ X = \{1, \dots, m\} \]
and a name for the set of player B's choices:
\[ Y = \{1, \dots, n\}. \]
Definition 1. A mixed strategy for player A is a probability distribution \( p\) on the set of their choices, \( X.\) A mixed strategy for player B is a probability distribution \( q\) on the set of their choices, \( Y.\)
Let's recall exactly what this means, since you'll need to know! Player A has a probability \( p_i\) of making any choice \( i = 1, \dots, m,\) and these probabilities obey
\[ p_i \ge 0 \]
and
\[ \sum_{i=1}^m p_i = 1. \]
Similarly, the probability that player B makes the choice \( j = 1, \dots, n\) is \( q_j,\) and these probabilities obey
\[ q_j \ge 0 \]
and
\[ \sum_{j=1}^n q_j = 1. \]
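In code, a mixed strategy is just a vector of probabilities. Here's a minimal Python sketch; the specific numbers are made up for illustration:

```python
import numpy as np

# A hypothetical mixed strategy for player A with m = 3 choices.
# Any list of nonnegative numbers summing to 1 would do.
p = np.array([0.5, 0.3, 0.2])

# The two defining conditions: p_i >= 0 and the p_i sum to 1.
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)

# To "play" the mixed strategy, sample a choice i = 1, ..., m from p.
rng = np.random.default_rng(0)
i = rng.choice([1, 2, 3], p=p)
```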
In our earlier discussions of probability, we would call \( X\) and \( Y\) sets of outcomes. An outcome is anything that can happen. But now the thing that can happen is that a player makes a certain choice in the game! So, now we're calling \( X\) and \( Y\) sets of choices. But you can also think of these choices as outcomes.
Now let's work out the expected value of the payoff to each player. To do this, we'll assume:
1) Player A uses mixed strategy \( p.\) 2) Player B uses mixed strategy \( q.\) 3) Player A and player B's choices are independent.
If you forget what 'independent' means, look at Part 8. The basic idea is player A's choice doesn't affect player B's choice, and vice versa. After all, this is a 'simultaneous game', where each player makes their choice not knowing what the other has done.
But mathematically, the point is that we must assume the players' choices are independent to know that the probability of player A making choice \( i\) and player B making choice \( j\) is the product
\[ p_i q_j. \]
Knowing this, we can work out the expected value of the payoff to player A. Here it is:
\[ \sum_{i=1}^m \sum_{j=1}^n A_{ij} \, p_i q_j. \]
I hope you see why. The probability that player A makes choice \( i\) and player B makes choice \( j\) is \( p_i q_j.\) The payoff to player A when this happens is \( A_{ij}.\) We multiply these and sum over all the possible choices for both players. That's how expected values work!
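As a sanity check, here's a small Python sketch of this double sum, using a made-up \( 2 \times 2\) payoff matrix and made-up mixed strategies:

```python
import numpy as np

# Hypothetical payoff matrix for player A, and hypothetical mixed strategies.
A = np.array([[1.0, -1.0],
              [0.0,  2.0]])
p = np.array([0.6, 0.4])   # player A's mixed strategy
q = np.array([0.5, 0.5])   # player B's mixed strategy

# Expected payoff to A: sum over all (i, j) of A_ij * p_i * q_j.
expected = sum(A[i, j] * p[i] * q[j]
               for i in range(2) for j in range(2))
print(expected)  # prints 0.4
```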
Similarly, the expected value of the payoff for player B is
\[ \sum_{i=1}^m \sum_{j=1}^n B_{ij} \, p_i q_j. \]
If you're in the mood, I can make this more formal. Remember that \( X \times Y\) is the set of all ordered pairs \( (i,j)\) where \( i \in X\) and \( j \in Y.\) A pair \( (i,j)\) is an outcome where player A makes choice \( i\) and player B makes choice \( j.\) A's payoff is a function on the set \( X \times Y.\) Namely, if player A makes choice \( i\) and player B makes choice \( j,\) A's payoff is \( A_{ij}.\) There's also a probability distribution on \( X \times Y:\) namely, the probability of the outcome \( (i,j)\) is \( p_i \, q_j.\) So, the expected value of the payoff with respect to this probability distribution is
\[ \sum_{(i,j) \in X \times Y} A_{ij} \, p_i q_j. \]
But this is equal to what we've seen already:
\[ \sum_{i=1}^m \sum_{j=1}^n A_{ij} \, p_i q_j. \]
It looks like all of the students in this class have studied some linear algebra. So, I'll assume you know how to:
• take the dot product of vectors to get a number,
and
• multiply a vector by a matrix to get a new vector.
Click on the links if you want to review these concepts. They will let us write our formulas for expected payoffs much more efficiently!
Here's how. First, we think of the probability distribution \( p\) as a vector in \( \mathbb{R}^m;\) that is, a list of \( m\) numbers:
\[ p = (p_1, \dots, p_m). \]
Second, we think of the probability distribution \( q\) as a vector in \( \mathbb{R}^n:\)
\[ q = (q_1, \dots, q_n). \]
Third, we think of \( A\) and \( B\) as \( m \times n\) matrices, since that's what they are:
\[ A = \begin{pmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & & \vdots \\ A_{m1} & \cdots & A_{mn} \end{pmatrix}, \qquad B = \begin{pmatrix} B_{11} & \cdots & B_{1n} \\ \vdots & & \vdots \\ B_{m1} & \cdots & B_{mn} \end{pmatrix}. \]
Here's the cool part:
Theorem. If A has mixed strategy \( p\) and B has mixed strategy \( q,\) then the expected value of A's payoff is
\[ p \cdot A q \]
and the expected value of B's payoff is
\[ p \cdot B q. \]
Proof. We'll only prove the first one, since the second works just the same way. By definition, \( A q\) is a vector in \( \mathbb{R}^m\) with components
\[ (Aq)_i = \sum_{j=1}^n A_{ij} q_j. \]
Also by definition, the dot product of \( p\) with \( Aq\) is the number
\[ p \cdot Aq = \sum_{i=1}^m p_i (Aq)_i = \sum_{i=1}^m \sum_{j=1}^n p_i A_{ij} q_j. \]
But this agrees with our earlier formula for the expected value of A's payoff, namely
\[ \sum_{i=1}^m \sum_{j=1}^n A_{ij} \, p_i q_j. \]
So, we're done! █
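If you like, you can also check this theorem numerically. This Python sketch compares the double-sum formula with the dot-product formula \( p \cdot (Aq)\) for a randomly generated payoff matrix and random mixed strategies:

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 3, 4
A = rng.standard_normal((m, n))    # arbitrary payoff matrix for player A
p = rng.random(m); p /= p.sum()    # random mixed strategy for player A
q = rng.random(n); q /= q.sum()    # random mixed strategy for player B

# The double-sum formula for A's expected payoff...
double_sum = sum(p[i] * A[i, j] * q[j]
                 for i in range(m) for j in range(n))
# ...agrees with the dot-product formula p . (A q):
dot_form = p @ A @ q
assert np.isclose(double_sum, dot_form)
```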
It's not just quicker to write
\[ p \cdot A q \]
than
\[ \sum_{i=1}^m \sum_{j=1}^n A_{ij} \, p_i q_j. \]
It will also let us use tools from linear algebra to study games!
We'll have to look at examples to understand this stuff better, but let me charge ahead and define 'Nash equilibria' for mixed strategies. The idea is similar to the idea we've already seen. A pair of mixed strategies, one for A and one for B, is a Nash equilibrium if neither player can improve the expected value of their payoff by unilaterally changing their mixed strategy.
Let's make that precise. As before, the definition of Nash equilibrium involves two conditions:
Definition. Given a 2-player normal form game, a pair of mixed strategies \( (p,q),\) one for player A and one for player B, is a Nash equilibrium if:
1) For all mixed strategies \( p'\) for player A, \( p' \cdot A q \le p \cdot A q.\)
2) For all mixed strategies \( q'\) for player B, \( p \cdot B q' \le p \cdot B q.\)
Condition 1) says that player A can’t improve their payoff by switching their mixed strategy from \( p\) to any other mixed strategy \( p'.\) Condition 2) says that player B can’t improve their payoff by switching their mixed strategy from \( q\) to any other mixed strategy \( q'.\)
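Here's a Python sketch of this definition, applied to rock, paper, scissors. One fact worth noting, which we won't prove here: since \( p' \cdot A q\) is linear in \( p',\) it suffices to check condition 1) on pure strategies, and similarly for condition 2).

```python
import numpy as np

# Rock, paper, scissors payoff matrix for player A (zero-sum, so B = -A).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)
B = -A

def is_nash(p, q, A, B, tol=1e-9):
    """Check conditions 1) and 2). Since p' . A q is linear in p',
    it's enough to compare against the best pure strategy."""
    best_for_A = np.max(A @ q)   # best payoff over all pure strategies p'
    best_for_B = np.max(p @ B)   # best payoff over all pure strategies q'
    return (best_for_A <= p @ A @ q + tol and
            best_for_B <= p @ B @ q + tol)

p = q = np.ones(3) / 3           # both players choose uniformly at random
print(is_nash(p, q, A, B))       # prints True
```

By contrast, if player A always plays rock against a uniformly random opponent, the pair is not a Nash equilibrium: player B could improve by always playing paper.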
Now, one of our goals is to prove this big wonderful theorem:
Theorem. For any 2-player normal form game, there exists a pair of mixed strategies \( (p,q),\) one for player A and one for player B, that is a Nash equilibrium.
For zero-sum games this was proved by John von Neumann. Later John Nash proved a version for non-zero-sum games, and even for games with more than two players. These are famously smart mathematicians, so you should not expect the proof to be extremely easy. We will need to work!
But next time we'll start by looking at examples, to get our feet on the ground and start getting some intuition for these new ideas.
You can also read comments on Azimuth, and make your own comments or ask questions there!
