### Converging Realities

Happy New Year to everyone! Hampered by the overindulgence of the past few days, but invigorated by blasts of Atlantic air along the coastal path of Cornwall (please don't switch off the Gulf Stream - ten degrees Centigrade is very pleasant at Christmas when you're fifty degrees north of the Equator), I managed to finish Roland Omnes' 'Converging Realities' (Princeton). I have to review this book very soon. In fact, I should already have reviewed it, two deadlines having passed. I feel a certain sympathy with Omnes, not least because his choice of subtitle for this English translation - 'Toward a Common Philosophy of Physics and Mathematics' - conveys the same kind of desire for philosophy of mathematics to be something else as that expressed by the title of my book. Not that Omnes seems to be aware of the agenda that grips Anglophone philosophers of mathematics. And here lies a problem.

Something akin to a complementarity principle applies to the way philosophy of mathematics operates in the English-speaking world. Either you buy into an established agenda, which traces a lineage back to Quine and Putnam and ponders questions such as whether it is right to say that we are committed to the existence of mathematical objects because we use them in our best science. This gives you the advantage that work can be conducted in scholarly fashion: there is a set of recognised contributions to the debate, and a sense of progress is generated. The drawback, if you care, is that just about everyone else, and most especially mathematicians, thinks what you're doing is beside the point. The alternative, then, is to write an ambitious, unworked-out thesis. This is usually done by 'outsiders'. Insiders may enjoy reading such work, but are unlikely to do the kind of detailed filling-in necessary for it to become part of the discipline.

Omnes' book suffers from a host of errors concerning the theories of various philosophers, while others are treated far too briefly. But what is good about the book is that its positive thesis - physism: mathematics and the laws of physics are one and the same thing - forces us to attend to the changing relationship between mathematics and physics over the centuries. Some, when asked, suggest that it is not at all surprising that our mathematics fits the world - no more surprising than that our lungs are adapted to the world's atmosphere. But Omnes has a different story to tell. While mathematics and physics grew together to generate the classical science of Newtonian mechanics, they parted ways through the early twentieth century, only to be reunited later in that century. The separation happened as each partner independently went through a series of crises: physics through the relativistic and quantum revolutions, mathematics through the discovery of pathological functions, the incompleteness of Euclid's axioms, the foundational paradoxes, and so on. Given this divergence and convergence of paths, Omnes believes his 'physism' to be the most reasonable explanation. You might put it thus: string theorists don't learn to study noncommutative tori from dunking doughnuts in their coffee.

The best part of the book, perhaps through my ignorance, is an explanation of how physicists recover our everyday classical world from the quantum substrate. Omnes himself worked on this programme. 'Decoherence' is the buzzword. I'll leave an explanation to those better informed, but you can see how much philosophy would be changed if Omnes' thesis were to triumph and anyone who wanted to contribute to philosophy of mathematics had to know quantum mechanics.

I was reminded, reading this book, of John Baez's Quantum Quandaries. Here, sets as the basis of mathematics are taken to have arisen from our encounter with a classical world of discrete, identifiable, located things. As we depart from this world for those of quantum mechanics and general relativity, we develop different categories of objects - manifolds and cobordisms, Hilbert spaces and operators - which are more like each other than either is to sets and functions, both being symmetric monoidal categories with duals in which the monoidal operation is noncartesian. A question to finish on then. Can one characterise decoherence in terms of the way a symmetric monoidal category with duals 'looks' cartesian when one forms the tensor product of sufficiently many objects?

## 10 Comments:

"Can one characterise decoherence in terms of the way a symmetric monoidal category with duals 'looks' cartesian when one forms the tensor product of sufficiently many objects?"

I think I see what you are after. I am not sure yet what the best answer would be, but I don't think that 'sufficiently large tensor products' do the trick, at least not by themselves.

What physicists call 'decoherence' is an effect that takes place when you take a state in the tensor product of two Hilbert spaces and then perform a 'partial trace' on this state, namely a trace over only one of the two Hilbert spaces involved.

Technically this does not have to involve 'large spaces' at all. It applies equally well to the tensor product of two 2-dimensional Hilbert spaces, for instance.

The reason that 'large' Hilbert spaces play a role is that these give you - physically - the justification for performing that partial trace.

Usually one considers a Hilbert space S describing a small 'system' (like a single electron, for instance), and a Hilbert space E describing the 'environment' (all other degrees of freedom that couple to the system, like maybe other electrons, or phonons of the solid which the electron sits in).

In practice, the number of degrees of freedom in the 'environment' is huge and cannot be monitored. This means that for all practical purposes we can just as well average the evolution of the 'system' over all possible states of the 'environment'.

This averaging corresponds in the formalism to taking the 'partial trace' over just the Hilbert space of the environment.

You should write down what such a partial trace amounts to for a simple example, say of the tensor product of two 2D Hilbert spaces. (I can provide more details if my explanation here is not helpful.) The result is that it turns a pure state into a mixed state. THAT's what makes the system 'classical' in a sense.
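
The suggested exercise can be sketched in a few lines of Python (my own hypothetical illustration, not taken from the book or the comment): take the entangled state (|00> + |11>)/sqrt(2) of two qubits, form its density matrix, and trace out the second ('environment') factor. The purity tr(rho^2) drops from 1 to 1/2, which is exactly the pure-to-mixed transition described above.

```python
import math

# Bell state (|00> + |11>)/sqrt(2), indexed as |s e> with s = system qubit,
# e = environment qubit, so basis index i = 2*s + e. All amplitudes are real here.
a = 1 / math.sqrt(2)
psi = [a, 0.0, 0.0, a]

# Density matrix rho[i][j] = psi_i * psi_j (no conjugation needed for real amplitudes).
rho = [[psi[i] * psi[j] for j in range(4)] for i in range(4)]

def partial_trace_env(rho):
    """Trace out the second (environment) qubit of a two-qubit density matrix."""
    red = [[0.0, 0.0], [0.0, 0.0]]
    for s1 in range(2):
        for s2 in range(2):
            for e in range(2):
                red[s1][s2] += rho[2 * s1 + e][2 * s2 + e]
    return red

def purity(m):
    """tr(m^2): equals 1 for a pure state, strictly less than 1 for a mixed one."""
    n = len(m)
    return sum(m[i][k] * m[k][i] for i in range(n) for k in range(n))

reduced = partial_trace_env(rho)
# purity(rho) is (up to rounding) 1.0, while purity(reduced) is 0.5:
# the reduced state is the maximally mixed state I/2.
print(purity(rho), purity(reduced))
```

The reduced matrix here is diagonal with entries 1/2, 1/2: the interference terms of the pure state have vanished, which is the 'classical' behaviour the comment describes.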

The huge body of work on 'decoherence' is concerned with analysing what this very simple idea implies in detail for various systems, and in particular with studying the time-dependence of this process.

There is an obvious arrow-theoretic version of talking about partial traces. Apart from that, I don't know yet if there is a nice category-theoretic way to characterize how that makes Hilb 'more cartesian'. I bet there is some way to see this nicely, but right now I cannot see it.

Thanks! I suppose one ought to be careful about equating classical with cartesian. After all, the category of sets and relations is noncartesian, as is the category of sets and joint probability distributions.

"[...] category of sets and joint probability distributions."

That sounds good. Something along these lines will be the right thing to consider.

Decoherence is a way to go from "quantum probabilities" to "classical probabilities". It projects the set of all states of a system to a subset of such states. But the result is still something probabilistic.

If nobody else, John Baez should have an idea how to turn this into a proper category-theoretic statement.

Perhaps Sets and conditional probability distributions is the right kind of category: objects are sets, arrows A-->B are conditional probability distributions P(b|a), composition forms the Bayesian network A-->B-->C where P(c|a,b) = P(c|b),

so P(c|a) = Sum_b P(c|b).P(b|a).
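
As a sanity check on this proposal, here is a small Python sketch (with invented numbers) of composition by the formula above: an arrow A-->B is represented as a column-stochastic matrix P[b][a] = P(b|a), and the sum over b yields another column-stochastic matrix, so arrows do compose.

```python
def compose(Q, P):
    """(Q o P)(c|a) = Sum_b Q(c|b) * P(b|a), the formula from the comment."""
    nc, nb, na = len(Q), len(P), len(P[0])
    return [[sum(Q[c][b] * P[b][a] for b in range(nb)) for a in range(na)]
            for c in range(nc)]

# Hypothetical arrows between two-element sets: each column is a
# probability distribution over the codomain, given a point of the domain.
P = [[0.9, 0.2],   # P(b=0 | a)
     [0.1, 0.8]]   # P(b=1 | a)
Q = [[0.7, 0.4],   # Q(c=0 | b)
     [0.3, 0.6]]   # Q(c=1 | b)

QP = compose(Q, P)

# Each column of the composite is again a probability distribution,
# and the identity arrow would be the Kronecker delta P(b|a) = [a == b].
for col in range(2):
    assert abs(sum(QP[c][col] for c in range(2)) - 1.0) < 1e-9
```

This is just the category of finite sets and stochastic matrices, viewed concretely; the question in the thread is whether something like it receives the image of decoherence.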

I wonder if one could make any sense of Judea Pearl's work thinking of Bayesian networks as arrows in a monoidal category.

Perhaps it was no coincidence that I wrote this entry and the following one on successive days. If you look up John's Quantum Gravity notes for Fall 2003, you can find matrix mechanics over different rigs (rings without negatives), including the reals and something like the tropical semiring I mention in the later entry (Ex. 7 on p. 21).

I read Omnès' book "Alors l'un devint deux" some time ago. I have had a vague project for quite a while to write a book myself about a "Spinozian philosophy of mathematics". When I read through the contents of the book I said to myself "the b...d did it before me!". Fortunately, when I arrived at the chapter on Spinoza I realized I did not agree with Omnès at all. In short, he boldly explained why Spinoza was secretly a dualist. As a whole I found his book both irritating and very interesting and thought-provoking. I read your review of his new book and it seems to me he has changed his mind a little bit: in the older book he was advocating a position he named "birealism", which held that both the world of forms (logos) and empirical reality (physis) exist. Now it seems he has adopted the view that math and physics are one and the same thing, which is closer to what I have in mind.

One last thing: you say you agree with Omnès in rejecting Hersh's argument about why our mathematical ideas fit the world. In fact I don't really see a contradiction between the "lung argument" and "physism". The idea is that if we inhabited another planet we would have different lungs, but what if there were no other planets? What if there is a single possible model of lungs?

David said "[...] if one could make any sense of Judea Pearl's work [...]"

Could you briefly sketch what you have in mind here? What's the idea of Judea Pearl's work?

Fabien, Omnes' objection to the lung argument is that the parallel breaks down. Our lungs work to allow us to live in an atmosphere which hasn't changed (much) over our evolutionary history. The tasks we expect mathematics to perform, however, have changed radically. We needed to locate, re-locate, count, measure, organise, estimate, etc. in a world of identifiable, middle-sized objects situated in a 3-D Euclidean space. We now want mathematics to predict and understand the very large and the very small (and to manipulate the latter), where entities and properties are very different. The parallel would be to expect our lungs to cope in a very altered atmosphere. At the rate we're polluting the planet, we may need them to do just this.

Urs, a Bayesian network is just a way of representing a joint probability distribution over a number of variables, X_1,...,X_n. One could, of course, factor the joint distribution in many ways, like P(x_1,...,x_n) = P(x_1).P(x_2|x_1).P(x_3|x_2,x_1)..., but it is possible that there are conditional independencies like

P(x_4|x_2) = P(x_4|x_3,x_2,x_1), which would make for more efficient storage.

A Bayesian network for this distribution is a directed acyclic graph whose nodes are the X_i, such that we only need the conditional probability distribution of each node given the values of its parents.

The next thought is that sparse nets are likely to occur if one has captured the relevant causal structure. A classic example is a captain giving orders to a two-man firing squad. Variables: order to shoot given, soldier 1 fires, soldier 2 fires, condemned man dies. If we know who has fired, we don't need to know whether the order has been given.
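
The firing-squad example can be checked numerically. In the Python sketch below (all probabilities invented for illustration), the joint distribution factors along the graph as P(o).P(s1|o).P(s2|o).P(d|s1,s2), and one verifies that conditioning additionally on the order O changes nothing once both soldiers' actions are known.

```python
from itertools import product

# Network: O (order given) --> S1, S2 (soldiers fire) --> D (man dies).
p_o = {0: 0.5, 1: 0.5}                        # P(O), invented
p_s_given_o = {0: {0: 1.0, 1: 0.0},           # a soldier never fires unordered,
               1: {0: 0.1, 1: 0.9}}           # and fires with prob 0.9 if ordered

def p_d_given_s(d, s1, s2):
    # the condemned man dies iff at least one soldier fires
    return 1.0 if d == int(s1 or s2) else 0.0

def joint(o, s1, s2, d):
    """The factorisation dictated by the graph."""
    return p_o[o] * p_s_given_o[o][s1] * p_s_given_o[o][s2] * p_d_given_s(d, s1, s2)

def cond(d, s1, s2, o=None):
    """P(D=d | S1=s1, S2=s2 [, O=o]), computed by summing the joint."""
    if o is None:
        num = sum(joint(o2, s1, s2, d) for o2 in (0, 1))
        den = sum(joint(o2, s1, s2, d2) for o2 in (0, 1) for d2 in (0, 1))
    else:
        num = joint(o, s1, s2, d)
        den = sum(joint(o, s1, s2, d2) for d2 in (0, 1))
    return num / den if den else None

# Conditional independence: once we know who fired, the order tells us nothing.
for d, s1, s2, o in product((0, 1), repeat=4):
    lhs, rhs = cond(d, s1, s2), cond(d, s1, s2, o)
    if lhs is not None and rhs is not None:
        assert abs(lhs - rhs) < 1e-12
```

So the graph needs only the small conditional tables at each node, rather than the full sixteen-entry joint distribution: the efficiency gain mentioned above.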

Some (Glymour, Spirtes, etc.) have hoped to extract causal information from joint probability data by looking for sparse graphs to represent it. Pearl, in Causality (2000), has developed a calculus for reasoning with these networks in cases where one fixes the value of a variable - e.g., in a network for lung cancer, the variable for vitamin D consumption might be fixed, rather than observed and so possibly dependent on socio-economic status, etc.

These slides of Pearl are a good introduction.

Thanks a lot for the explanation and the helpful links.

The quantum generalization of a classical (joint) probability distribution on phase space is the Wigner distribution (e.g. I, II). It approaches a true ('classical') probability distribution in the classical limit. Maybe this gadget would mediate the transformation from Vect to BBN that we are looking for.
