
Sunday, June 25, 2006

Dawid on probabilities

A few days ago our reading group ran through Phil Dawid's Probability, Causality and the Empirical World: A Bayes-de Finetti-Popper-Borel Synthesis and learned that, for him, not only does probability not exist, as for de Finetti, but neither does causality. A common characterisation of positions concerning probability splits them into four groups:

1) Frequentist: probabilities are limiting frequencies of outcomes in sequences of events.
2) Propensity theorist: an individual event has a propensity to display a certain outcome.
3) Subjective Bayesian: a probability is a subjective degree of belief in an outcome.
4) Objective Bayesian: relative to specific background knowledge, there is an objective value an agent should accord to their degree of belief in an outcome.

Dawid (pronounced 'David') holds a Bayesian position, made evident in his involvement in the Sally Clark case, in which a mother was unjustly jailed after her two children died. But for Dawid, while the position that probabilities are consistent assignments of degrees of belief is all well and good, and avoids the problems of taking probabilities to exist out there in the world, at some point you will want to calibrate the probabilities that an individual is spewing out. I could lock myself in a dark room keeping my degrees of belief consistent, but if I have England as 99% likely to win the World Cup, and other oddities, you will want a framework in which you can criticise me. This idea of calibration is put into practice in weather forecasting. You might score the forecaster's rain predictions by forming the sum of (x_i − y_i)², where x_i = 0 if it is dry on the i-th day and 1 if it rains, and y_i is the probability of rain forecast for that day. This is the Brier score.
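To make the scoring rule concrete, here is a minimal Python sketch of the Brier score; the outcomes and forecast probabilities are made-up numbers, purely for illustration.

```python
def brier_score(outcomes, forecasts):
    """Sum of squared differences between outcomes (0 = dry, 1 = rain)
    and the forecast probabilities of rain."""
    return sum((x - y) ** 2 for x, y in zip(outcomes, forecasts))

# A hypothetical week: 1 = rain, 0 = dry (made-up data)
outcomes = [0, 1, 1, 0, 0, 1, 0]
# Forecast probabilities of rain for the same days (also made up)
forecasts = [0.2, 0.7, 0.9, 0.1, 0.3, 0.6, 0.2]

print(brier_score(outcomes, forecasts))  # lower is better; 0 only for a perfect forecaster
```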

Intuitively, if a forecaster believes in their forecasts for rain they ought to be happy to accept bets made either for or against it raining at the odds they give. You'd think then that we could winkle out the bad from the good forecaster by noting which ones would go broke within a short space of time. The trouble is in telling when a 'good' forecaster is being very unlucky, and when a 'bad' forecaster is being very lucky. You are forced to appeal to an infinitely long series of predictions. And this is Dawid's position. Probabilities don't exist; they're theoretical tools that mediate between our theories and the world. The only way they hook onto the world is by what they rule out as impossible (this is the Borel part of the synthesis). For example, among other things, a forecaster who says 30% chance of rain every day is ruling out the possibility that in the long run it will rain on 40% of the days. This has the paradoxical consequence that if, say, two weather forecasters make predictions for rain over the next century and agree on every day except tomorrow, when one says 10% and the other says 90%, there is no way you can say which one was right. They've both ruled out various sets of sequences of outcomes, such as those for which the limiting frequency of rain differs from the limiting average of their forecast probabilities. But neither rules out anything that the other doesn't.
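To illustrate the 'ruling out' point with a toy simulation (my own, under the assumption that rain falls independently with probability 0.3 each day), the empirical frequency settles near 0.3, so sequences in which it rains on 40% of days in the long run are effectively excluded:

```python
import random

random.seed(0)
p_true = 0.3       # assumed chance of rain each day, purely for illustration
days = 100_000

rain = [random.random() < p_true for _ in range(days)]
frequency = sum(rain) / days

# The constant 30% forecaster rules out sequences whose limiting frequency
# of rain differs from 0.3; the simulated frequency settles near that value,
# far from the 40% alternative mentioned above.
print(f"empirical frequency of rain: {frequency:.3f}")
```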

Leaving aside the problem that runs aren't infinite, this game-theoretic interpretation of probability theory is certainly very interesting. Shafer and Vovk have written a book-length treatment of the idea in their Probability and Finance: It's Only a Game!. Game-theoretic ideas are also used to understand maximum entropy distributions, which correspond to equilibrium points of zero-sum games played between a decision maker and nature. Flemming Topsøe has many good papers on this, and there's also one by Dawid with Peter Grünwald. Different entropies match up with different loss functions, such as the Brier score above. Something to add to the pot is Dawid's The geometry of proper scoring rules, a longer version of a paper written with Steffen Lauritzen. Now we have game theory blending with the differential geometric approach of Amari known as Information Geometry, discussed in earlier posts. I wish I could understand all this properly.
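As a rough illustration of the maximum entropy side (my own sketch, using the standard die-with-mean-4.5 example rather than anything from the papers mentioned), the constrained maximum entropy distribution takes an exponential form, and one can solve for its Lagrange multiplier numerically; on the game-theoretic reading this is the equilibrium strategy against nature under log loss.

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)   # outcomes of a six-sided die
target_mean = 4.5         # assumed constraint (the classic 'Brandeis dice' example)

# The maximum entropy distribution subject to a mean constraint has the
# exponential form p_k proportional to exp(lam * k); find lam so the mean matches.
def mean_gap(lam):
    w = np.exp(lam * faces)
    p = w / w.sum()
    return float(p @ faces) - target_mean

lam = brentq(mean_gap, -10.0, 10.0)
p = np.exp(lam * faces)
p /= p.sum()
print(np.round(p, 4))     # probabilities tilted towards the higher faces
```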

4 Comments:

Anonymous said...

"Intuitively, if a forecaster believes in their forecasts for rain they ought to be happy to accept bets made either for or against it raining at the odds they give."

It is important to keep in mind that betting-on-outcomes is an operationalization of the subjective bayesian view of probabilities. This fact has at least two consequences which bayesian statisticians often seem to ignore. Firstly, this operationalization involves a representation of reality, and any representation has assumptions which may or may not be applicable in particular cases. Secondly, it is not the only possible representation.

The philosopher Mark Colyvan, at the University of Queensland, has written on some of the assumptions implicit in the use of the standard probability theory axioms, starting with the assumption that reality can be represented by propositions.

Some questions that arise:
How do I operationalize my subjective probabilities if I have religious (or other) objections to betting? How do two bayesians combine their subjective probabilities in a coherent manner, if their underlying assumptions are different?



-- Peter

June 26, 2006 10:02 AM  
Anonymous said...

Further to my comment, another question: How am I to understand subjective probabilities if I have a pre-deterministic view of reality? It strikes me that among the other implicit assumptions of subjective bayesianism (and perhaps also the propensity theory view) is one that says the future is not already pre-determined.


-- Peter

June 26, 2006 10:11 AM  
david said...

...the assumption that reality can be represented by propositions.

As my agreement with Brandom suggests, I too see this as a problem. Propositional attitudes emerge out of our interaction with the world. They are not prior to it. On the other hand, some parts of our engagement with the world can be addressed in propositional form, to the extent that we may even be able to make wagers.

How do I operationalize my subjective probabilities if I have religious (or other) objections to betting?

Other problems with the betting set-up include the fact that you need not desire £100000 ten times as much as £10000.

The betting idea is supposed to be taken somewhat figuratively. What you want is some ideal commodity which matches linearly with your desires. But you point to a larger problem that underlying all this is something akin to the subject of much modern economics, the utility-maximising individualist.

How do two bayesians combine their subjective probabilities in a coherent manner, if their underlying assumptions are different?

I don't see that they'd have to come to a common position. One would update her probabilities according to the information the other passed on, how much she trusted him, etc. They're not in the same situation.

How am I to understand subjective probabilities if I have a pre-deterministic view of reality?

A Bayesian can be a thoroughgoing determinist. I may know that given all the conditions of your throw of that die, the outcome is inevitable. But I don't know all these conditions. The Bayesian is in a much better position vis-a-vis determinism than the propensity theorist or frequentist.

If you've read my work, you'll know I'm not a great believer in a mathematicized philosophy. My interest in this game-theoretic version of probability theory is down to what I expect it might be able to do along the lines of the references I mentioned, especially in the context of machine learning.

June 26, 2006 2:23 PM  
walt said...

In subjective probability, you really can't distinguish between differences in evaluations of probability and differences in evaluation of the utility of money? What about Anscombe-Aumann?

June 27, 2006 8:31 PM  
