Friday, April 07, 2006

Natural distributions

I've been thinking of late about what we can say about the probability distributions we might expect the world to throw at us. This is very important in machine learning: an algorithm which, knowingly or not, 'expects' to encounter certain specific kinds of distribution, and which in fact does encounter them, is clearly at an advantage. But what kinds of consideration are important here? The universality of the Gaussian (normal) distribution, as made explicit in the Central Limit Theorem, might provide a pointer. Are there other distributions which arise robustly in different situations?

In Universality for mathematical and physical systems, Percy Deift describes distributions which govern data as far removed as the spacings between parked cars and the (scaled) spacings between the zeros of the Riemann zeta function. These distributions are studied in a field known as random matrix theory, which considers, for example, the eigenvalues of a random orthogonal matrix. Answers are beginning to emerge as to why these distributions are encountered, answers which parallel the three components of the central limit theorem: a statistical component (take independent, identically distributed random variables, centered and scaled), an algebraic component (add the variables), and an analytic component (take the limit in distribution as n → infinity). He signs off with the intriguing comment:
Our final comment/speculation is on the space D, say, of probability distributions. A priori, D is just a set without any "topography". But we know at least one interesting point on D, the Gaussian distribution F_G. By the central limit theorem, it lies in a "valley", and nearby distributions are drawn towards it. What we seem to be learning is that there are other interesting distributions, like F_1 or F_2, etc., which also lie in "valleys" and draw nearby distributions in towards them. This suggests that we equip D with some natural topological and Riemannian structure, and study the properties of D as a manifold per se.
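To get a feel for the kind of universality Deift has in mind, the spacing distributions are easy to play with numerically. Here is a minimal Python/NumPy simulation sketch (mine, not from the paper): it draws random real symmetric matrices, takes nearest-neighbour eigenvalue spacings from the bulk of the spectrum, and compares their histogram with the Wigner surmise, the classical approximation to the universal spacing law for such ensembles. The matrix size, number of trials and the crude mean-spacing "unfolding" are all arbitrary illustrative choices.

# Nearest-neighbour eigenvalue spacings of random real symmetric (GOE-type)
# matrices, compared with the Wigner surmise p(s) = (pi/2) s exp(-pi s^2 / 4).
import numpy as np

rng = np.random.default_rng(0)
N, trials = 200, 200
spacings = []

for _ in range(trials):
    A = rng.standard_normal((N, N))
    H = (A + A.T) / 2                     # random real symmetric matrix
    eig = np.sort(np.linalg.eigvalsh(H))
    bulk = eig[N // 4 : 3 * N // 4]       # stay away from the spectral edges
    s = np.diff(bulk)
    spacings.extend(s / s.mean())         # crude unfolding: normalise mean spacing to 1

spacings = np.array(spacings)
bins = np.linspace(0.0, 3.0, 16)
hist, edges = np.histogram(spacings, bins=bins, density=True)
centres = (edges[:-1] + edges[1:]) / 2
wigner = (np.pi / 2) * centres * np.exp(-np.pi * centres ** 2 / 4)
for c, h, w in zip(centres, hist, wigner):
    print(f"s = {c:4.2f}   empirical {h:5.3f}   Wigner surmise {w:5.3f}")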

2 Comments:

Anonymous said...

David --

It is common to model many variables with Normal distributions, but modelers often ignore interactions. If human weights are normally distributed, then human heights cannot be, since the relationship between weight and height is surely not perfectly linear.

In financial markets, where variables have traditionally been modeled as Normal, there is strong evidence that a distribution with fatter tails is appropriate (since rare events happen with more frequency than a Normal would suggest). People have used, for example, the Cauchy distribution instead. Unfortunately, its variance is infinite.

-- Peter

April 12, 2006 5:20 PM  
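To put a rough number on the fat-tail point in Peter's comment, one can compare upper tail probabilities P(X > t) for a standard Normal and a standard Cauchy. A minimal sketch, assuming SciPy is available (the thresholds are arbitrary illustrative values):

# Upper tail probabilities for a standard Normal versus a standard Cauchy.
from scipy.stats import norm, cauchy

for t in (2, 3, 5, 10):
    p_normal = norm.sf(t)     # survival function: P(X > t)
    p_cauchy = cauchy.sf(t)
    print(f"t = {t:2d}   Normal {p_normal:.2e}   Cauchy {p_cauchy:.2e}")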
Anonymous said...

The paragraph of Deift you quote reminds me of work by statisticians looking at the differential geometry of the space of probability distributions, initiated I think by Bradley Efron and by Ole Barndorff-Nielsen. Deift seems unaware of this work. To anyone interested, I recommend the two nice books by Kass & Vos, and by Amari & Nagaoka.

@BOOK{kass:vos:book97,
author = "R. E. Kass and P. W. Vos",
title = "Geometrical Foundations of Asymptotic Inference",
publisher = "John Wiley and Sons",
year = "1997",
series = "Wiley Series in Probability and Statistics",
address = "New York, NY, USA"}

@BOOK{amari:nagaoka:ams93,
author = "S. Amari and H. Nagaoka",
title = "Methods of Information Geometry",
publisher = "American Mathematical Society",
year = "2000",
volume = "191",
series = "Translations of Mathematical Monographs",
address = "Providence, RI, USA",
note = "Originally published in Japanese by Iwanami Shoten, Tokyo, in 1993. Translated by D. Harada"}

-- Peter

April 12, 2006 6:13 PM  
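(For concreteness: the "natural Riemannian structure" Deift speculates about, and which the information-geometry literature cited in the comment above develops, is usually taken to be the Fisher information metric. For a smooth parametric family p(x; θ) it reads, in LaTeX notation and under the usual regularity conditions,

g_{ij}(\theta) = \mathbb{E}_{\theta}\!\left[ \frac{\partial \log p(x;\theta)}{\partial \theta^{i}} \, \frac{\partial \log p(x;\theta)}{\partial \theta^{j}} \right]
               = -\,\mathbb{E}_{\theta}\!\left[ \frac{\partial^{2} \log p(x;\theta)}{\partial \theta^{i} \, \partial \theta^{j}} \right].

This is the metric whose geometry the Kass & Vos and Amari & Nagaoka books study.)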
