Game Theory (Part 6)

John Baez

We've been looking at games where each player gets a payoff depending on the choices that both players make. The payoff is a real number, which I often call the number of points. When we play these games in class, these points go toward your grade: 10% of your grade depends on the total number of points you earn in quizzes and games. But what do these points mean in other games, like Prisoner's Dilemma or Battle of the Sexes?

This leads us into some very interesting and deep questions. Let's take a quick look at them, without going very deep.

Maximizing the payoff

The main thing is this. When we're studying games, we'll assume each player's goal is to earn as many points as possible. In other words, they are trying to maximize their payoff.

They are not, for example, trying to make their payoff bigger than the other player's payoff. Indeed, in class you should not be trying to earn more points than me! One student said he was trying to do that. That's a mistake. You should be happier if

• you get 10 points and I get 20

than if

• you get -10 points and I get -20.

After all, it's only your total number of points that affects your grade, not whether it's bigger than mine.
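
Here's a tiny Python sketch of that comparison, just as an illustration (the outcome labels and payoff numbers are simply the ones from the bullets above): a player maximizing their own payoff prefers the first outcome, while a player trying to beat me would mistakenly prefer the second.

    # A minimal sketch: compare the two outcomes above by your own payoff,
    # not by whether your payoff beats mine.

    outcomes = {
        "first":  {"you": 10,  "me": 20},
        "second": {"you": -10, "me": -20},
    }

    # Maximize your own payoff: pick the outcome with the largest 'you' entry.
    best_for_you = max(outcomes, key=lambda o: outcomes[o]["you"])

    # The mistaken goal of beating me: pick the outcome where you - me is largest.
    best_lead_over_me = max(outcomes, key=lambda o: outcomes[o]["you"] - outcomes[o]["me"])

    print(best_for_you)       # first  -- you earn 10 points instead of -10
    print(best_lead_over_me)  # second -- you "beat" me, but lose 10 points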

So, you should always try to maximize your payoff. And I promise to do the same thing: I'll always try to maximize my payoff. You have to take my word on this, since my salary is not affected by my payoff! But I want to make your task very clear: you are trying to maximize your payoff, and you can assume I am trying to maximize mine.

(If I were doing something else, like sadistically trying to minimize your payoff, that would affect your decisions!)

Rational agents and utility

We can't understand how people actually play games unless we know what they are trying to do. In real life, people's motives are very complicated and sometimes mysterious. But in mathematical game theory, we start by studying something simpler: rational agents. Roughly speaking, a rational agent is defined to be a person or animal or computer program or something else that is doing the best possible job of maximizing some quantity, given the information they have. We call this quantity their payoff.

This is a rough definition, which we will try to improve later.

You shouldn't be fooled by the positive connotations of the word 'rational'. We're using it in a very specific technical way here. A madman in a movie theater who is trying to kill as many people as possible counts as 'rational' by our definition if they maximize the number of people killed, given the information they have.

The whole question of what really should count as 'rationality' is a very deep one. People have a lot of interesting ideas about it:

• Rationality, Wikipedia.

Utility

So: we say a rational agent does the best possible job of maximizing their payoff, given the information they have. In economics, this payoff is often called utility.

That's an odd word, but it comes from a moral philosophy called utilitarianism, which says—very roughly—that the goal of life is to maximize happiness. Perhaps because it's a bit embarrassing to talk about maximizing happiness, these philosophers called it 'utility'.

But be careful: while the moral philosophers often talk about agents trying to maximize the total utility of everyone, economists focus on rational agents trying to maximize their own utility.

This sounds very selfish. But it needn't be. If you want other people to be happy, your utility depends on their utility. If you were a complete altruist, perhaps maximizing your utility would even be the same as maximizing the total utility of everyone!

Again, there are many deep problems here, which I won't discuss. I'll just mention one: in practice, it's very hard to define utility in a way that's precise enough to measure, much less add up! See here for a bit more:

• Utility, Wikipedia.

• Utilitarianism, Wikipedia.

The assumption of mutual rationality

Game theory is simplest when

• all players are rational agents, and

• each player knows all the other players are rational agents.

Of course, in the real world nobody is rational all the time, so things get much more complicated. If you're playing against an irrational agent, you have to work harder to guess what they are going to do!

But in the games we play in class, I will try to be a rational agent: I will try my best to maximize my payoff. And you too should try to be a rational agent, and maximize your payoff—since that will help your grade. And you can assume I am a rational agent. And I will assume you are a rational agent.

So: I know that if I keep making the same choice, you will make the choice that maximizes your payoff given what I do.

And: you know that if you keep making the same choice, I will make the choice that maximizes my payoff given what you do.

Given this, we should both seek a Nash equilibrium. I won't try to state this precisely and prove it as a theorem... but I hope it's believable. You can see some theorems about this here:

• Robert Aumann and Adam Brandenburger, Epistemic conditions for Nash equilibrium.
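
To make the idea of mutual best responses a bit more concrete, here's a small Python sketch (my own illustration, not taken from the paper above). It checks every pair of pure strategies in a two-player game and keeps the pairs where each player's choice maximizes their own payoff, given the other player's choice. The payoff numbers are one common choice for Prisoner's Dilemma, used here just for illustration.

    # A sketch: find pure-strategy Nash equilibria of a two-player game by checking
    # that each player's choice is a best response to the other player's choice.

    def pure_nash_equilibria(payoffs):
        """payoffs maps (row_choice, col_choice) -> (row_payoff, col_payoff)."""
        rows = {r for (r, c) in payoffs}
        cols = {c for (r, c) in payoffs}
        equilibria = []
        for r in rows:
            for c in cols:
                row_payoff, col_payoff = payoffs[(r, c)]
                # The row player can't do better by switching, given the column player's choice...
                row_best = all(payoffs[(r2, c)][0] <= row_payoff for r2 in rows)
                # ...and the column player can't do better by switching either.
                col_best = all(payoffs[(r, c2)][1] <= col_payoff for c2 in cols)
                if row_best and col_best:
                    equilibria.append((r, c))
        return equilibria

    # Prisoner's Dilemma payoffs (one common choice of numbers, just for illustration).
    prisoners_dilemma = {
        ("cooperate", "cooperate"): (-1, -1),
        ("cooperate", "defect"):    (-3,  0),
        ("defect",    "cooperate"): ( 0, -3),
        ("defect",    "defect"):    (-2, -2),
    }

    print(pure_nash_equilibria(prisoners_dilemma))  # [('defect', 'defect')]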

Probabilities

All this is fine if a Nash equilibrium exists and is unique. But we've seen that in some games a Nash equilibrium doesn't exist — at least not if we only consider pure strategies, where each player makes the same choice every time. And in other games there is more than one Nash equilibrium.

In games like this, saying that players will try to find a Nash equilibrium doesn't settle all our questions! What should they do if there's none, or more than one?

We've seen one example: rock-paper-scissors. If we only consider pure strategies, this game has no Nash equilibrium. But I've already suggested the solution to this problem. The players should use mixed strategies, where they randomly make different choices with different probabilities.
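
Here's the same kind of best-response check applied to rock-paper-scissors, as a small sketch. The scoring of +1 for a win, -1 for a loss, and 0 for a tie is just one convenient choice of payoffs. The check confirms there's no pure-strategy Nash equilibrium, and then shows that the mixed strategy choosing each option with probability 1/3 earns an expected payoff of 0 against any pure choice the opponent makes.

    # A sketch: rock-paper-scissors, scored +1 for a win, -1 for a loss, 0 for a tie.
    choices = ["rock", "paper", "scissors"]
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    def payoff(mine, theirs):
        if mine == theirs:
            return 0
        return 1 if beats[mine] == theirs else -1

    # Check every pair of pure strategies for mutual best responses: none qualify.
    pure_equilibria = [
        (a, b) for a in choices for b in choices
        if all(payoff(a2, b) <= payoff(a, b) for a2 in choices)
        and all(payoff(b2, a) <= payoff(b, a) for b2 in choices)
    ]
    print(pure_equilibria)  # [] -- no pure-strategy Nash equilibrium

    # The mixed strategy that picks each choice with probability 1/3 earns an
    # expected payoff of 0 against any pure choice the opponent makes.
    for theirs in choices:
        expected = sum(payoff(mine, theirs) for mine in choices) / 3
        print(theirs, expected)  # 0.0 each time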

So, to make progress, we'll need to learn a bit of probability theory! That'll be our next topic.


You can also read comments on Azimuth, and make your own comments or ask questions there!


© 2013 John Baez
baez@math.removethis.ucr.andthis.edu