### November 1-12

**November 2005**

**Saturday 12**

When I mentioned before (Nov 5) Alexander Borovik's notion of 'vertical integration', I had it slightly wrong. His term is 'vertical *unity*', which he contrasts with the usual form of unity so beloved by mathematicians:

Many eloquent speeches were made, and many beautiful books written in explanation and praise of the incomprehensible unity of mathematics. In most cases, the unity was described as a cross disciplinary interaction, with the same ideas being fruitful in seemingly different mathematical disciplines, and the technique of one discipline being applied to another. The vertical unity of mathematics, with many simple ideas and tricks working both at the most elementary and at rather sophisticated levels, is not so frequently discussed— although it appears to be highly relevant to the very essence of mathematics education.

Were this form of unity commonplace, it might give us hope that David Hilbert was correct when he said:

A mathematical theory is not to be considered complete until you have made it so clear that you can explain it to the first man whom you meet on the street.

Perhaps, though, people in the streets of 1920s Göttingen were particularly bright. So often the ingredients of sophisticated ideas are individually graspable while their composition seems opaque.

I recently leafed through the paper 'A Survey of Lagrangian Mechanics and Control on Lie algebroids and groupoids' by Jorge Cortes et al., ArXiv: math-ph/0511009, interested to see what was new with groupoids, the subject of chapter 9 of my book. One of the lines of advocacy for groupoids over groups is that they interact with other structures in novel ways, and this is certainly the case for Lie groupoids. So I was happy to see that the authors of the paper wanted to:

...show how the flexibility provided by Lie algebroids and groupoids allows us to analyze, within a single framework, different classes of situations such as systems subject to nonholonomic constraints, mechanical control systems, Discrete Mechanics and Field Theory

But it's no easy matter to take these Lie algebroids on board, and I'm not sure the authors of this paper did the best job of it. They did not follow Israel Gelfand's advice of giving the simplest nontrivial example straight after the definition. Eventually the case is presented of a ball rolling on a rotating plate, and one can start to see the motion of the ball fibred over the space of points of contact between ball and plate, and the 'anchor map' to the motion of the centre of the ball.

A little more insight into Lie algebroids came from Urs Schreiber's blog entry. Schreiber is a string theorist now working in Hamburg, who has recently linked up with John Baez to write a couple of papers on categorified gauge theory. Lie algebroids appear in this blog entry in the vector bundle version of Baez's n-categories table. Landsman's 'Lie Groupoids and Lie algebroids in physics and noncommutative geometry', ArXiv: math-ph/0506024, is also helpful.

A philosopher who can throw some light on the difficulty of grasping mathematical concepts, even when their components are simple, is Michael Polanyi. In his article 'Tacit Knowing: Its Bearing on Some Problems of Philosophy' (Reviews of Modern Physics 34(4), Oct. 1962, 601-616), Polanyi explains his idea that much of our grasping of things requires tacit knowledge of their constituents. For example, to understand a sentence one has to have tacit knowledge of its constituent words. If we choose to focus instead on the constituents themselves, we will not be able to comprehend the whole. Just concentrate on the individual words of this sentence to see what he means. Mathematical constructions involve towers of blended concepts, and one must have the constituents sufficiently well understood that one can flick between different focus points, allowing 'encapsulation' and 'de-encapsulation', to use terms from Borovik.

I also like Polanyi for his account of mathematical reality. You may be able to glimpse something of his notion from these two quotations taken from his 1958 book *Personal Knowledge*:

A new mathematical conception may be said to have reality if its assumption leads to a wide range of new interesting ideas. (Personal Knowledge: 116),

...while in the natural sciences the feeling of making contact with reality is an augury of as yet undreamed of future empirical confirmations of an immanent discovery, in mathematics it betokens an indeterminate range of future germinations within mathematics itself. (Personal Knowledge: 189)

This is an exemplification of his idea of reality as "that which may yet inexhaustibly manifest itself".

**Tuesday 8**

It's going to be a lot of fun and a huge amount of hard work for future historians and philosophers of science to make sense of the development of a theory of quantum gravity. Perhaps the most informative debate I've read about the current status of string theory is here. Some genuine mutual understanding seems to be achievable if participants debate reasonably charitably.

I've been keeping an eye on contributions to the notion of a field with one element. On the face of it the idea is absurd. Fields by definition must have at least two elements. But there's plenty of evidence that there has to be something like it. Here's Anton Deitmar in his paper 'Cohomology of F1-schemes' ArXiv: NT/0508642:

The analogy between number fields and function fields is one of the most striking phenomena in number theory. Unfortunately, it does not go all the way. In order to use methods of algebraic geometry for the integers, number theorists would like to view Spec Z as a geometrical object (a curve) over a 'field of one element' F_1. A field of one element does not exist. So one has to look for a replacement that would grant the desired geometrical methods for number theoretical problems.

This analogy was the principal one I studied in the chapter on analogy in my book. The idea of a field with one element goes back at least to Tits in 1957. Lots of formulas concerning fields of order p^{n} make sense when p = 1, so long as you give them the right interpretation. For example a vector space over F_1 should be seen as a finite pointed set. Some more motivation from Deitmar:

The F_1-viewpoint as it stands won't solve any problems in number theory, because, for instance, all prime numbers look the same from F_1. It is clear that something has to be added to make this theory useful to arithmetic. Based on the philosophy that all problems in arithmetic stem from the entanglement of addition and multiplication, this is an attempt to disentangle them, respectively, to investigate multiplication alone. Later on they will have to be joined again.

See weeks 184 and 187 of John Baez's This Week's Finds for some of his typically user-friendly exposition on this issue.
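The claim that formulas for fields of order p^n survive setting p = 1 can be checked concretely with the Gaussian binomial coefficients, which count the k-dimensional subspaces of F_q^n and degenerate, at q = 1, to ordinary binomial coefficients counting k-element subsets of an n-element set. A minimal sketch (the function names are my own):

```python
from math import comb

def q_int(n, q):
    """q-analogue of the integer n: 1 + q + ... + q^(n-1)."""
    return sum(q**i for i in range(n))

def q_binomial(n, k, q):
    """Gaussian binomial coefficient: for q a prime power, the number
    of k-dimensional subspaces of the vector space F_q^n."""
    num, den = 1, 1
    for i in range(k):
        num *= q_int(n - i, q)
        den *= q_int(i + 1, q)
    return num // den          # the division is always exact

# Over F_2, the number of 2-dimensional subspaces of F_2^4:
print(q_binomial(4, 2, 2))              # 35

# Formally setting q = 1 recovers the ordinary binomial coefficient,
# i.e. the count of 2-element subsets of a 4-element set:
print(q_binomial(4, 2, 1), comb(4, 2))  # 6 6
```

At q = 1 every q-integer q_int(n, 1) collapses to n, so the whole formula collapses to n!/k!(n-k)!, which is the arithmetic behind the slogan that an F_1-vector space is just a finite (pointed) set.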

As confirmation of the rule that in mathematics you are never more than a couple of steps away from any other field, Angus MacIntyre, in Model Theory: Geometry and Set-theoretic Aspects and Prospects, The Bulletin of Symbolic Logic 9(2), June 2003, advocates that model theorists look to Grothendieck for inspiration:

Van den Dries' insights are certainly close to those of Grothendieck in [Esquisse d'un programme - see link below in Sept 30], though my feeling is that the potential of [Esquisse] is far from exhausted... For my taste, he [Grothendieck] is unrivalled in terms of ability to select notions, axioms and theorems of maximum potential... This is a kind of "atomic" model theory, where set theory is again largely irrelevant. (203)

[Tameness, one of Grothendieck's issues in the Esquisse, is also discussed by MacIntyre.] Then:

I sense that we should be a bit bolder by now. There are many issues of uniformity associated with the Weil Cohomology Theories, and major definability issues relating to Grothendieck's Standard Conjectures. Model theory (of Henselian fields) has made useful contact with motivic considerations, including Kontsevich's motivic integration. Maybe it has something useful to say about "algebraic geometry over the one element field", ultimately a question in definability theory. (211)

Then, lo and behold, in the same article MacIntyre discusses the VC-dimension, just the thing I'm working on here in Tuebingen. See this Technical Report if you want to know what it has to do with Karl Popper. Please note that it is still just a draft.

**Saturday 5**

I'm out of the country so can't see if special efforts are being made to 'celebrate' the 400th anniversary of the foiling of the plot to kill James I and the aristocracy. More than 200 years later Catholics were still not allowed to vote in elections to Parliament.

Why are so many of the world's top mathematicians Russian? (Manin, Kontsevich, Drinfeld, Gelfand, Beilinson, Voevodsky, ...) Presumably much can be attributed to the attraction, for very intelligent people, of working in an area with little state interference at a time when other disciplines, such as economics, were controlled. Lack of opportunity for money-making outside the university must be another factor. But presumably the largest contributor was a policy of carefully selecting and hot-housing promising youngsters. They must have got something right as regards their teaching techniques.

It is not surprising, then, that one of the most important contributions to the conference 'Where will the next generation of UK mathematicians come from?', held at the University of Manchester in March 2005, came from the pen of the Russian émigré Alexander Borovik. His piece is entitled What is It That Makes a Mathematician? I like this description of the life of the mathematician:

Mathematicians are sometimes described as living in an ideal world of beauty and harmony. Instead, our world is torn apart by inconsistencies, plagued by non sequitur, and worst of all, made desolate and empty by missing links between words, and between symbols and their referents; we spend our lives patching and repairing it. Only when the last crack disappears, are we rewarded by brief moments of harmony and joy. And what do we do then? We start to work on a new problem, descending again into chaos and mental pain. We do that to earn the next fix of elation. (p. 3)

His discussion of 'vertical integration' is very important. It gives you hope that even the most advanced concepts are explicable to lesser mortals. Borovik's diagnosis of the crisis in British mathematics education is given here.

For more about scientific bets (Nov 3) see this New Scientist article.

**Thursday 3**

When I included a chapter on Bayesianism in Mathematics in my book, I did so with the hope that it would draw a few more philosophers to look at mathematical practice. There are many considerations affecting the plausibility of mathematical statements, from the verification of cases to the establishment of subtle analogies. Here's an example to reconstruct in Bayesian terms:

...it is my view that before Thurston's work on hyperbolic 3-manifolds and his formulation of the general Geometrization Conjecture there was no consensus amongst experts as to whether the Poincare Conjecture was true or false. After Thurston's work, notwithstanding the fact that it has no direct bearing on the Poincare Conjecture, a consensus developed that the Poincare Conjecture (and the Geometrization Conjecture) were true. Paradoxically, subsuming the Poincare Conjecture into a broader conjecture and then giving evidence, independent from the Poincare Conjecture, for the broader conjecture led to a firmer belief in the Poincare Conjecture. (John W. Morgan, 'Recent Progress on the Poincare Conjecture and the Classification of 3-Manifolds', Bulletin of the American Mathematical Society 2004, 42(1): 57-78)

It doesn't sound at all paradoxical to me, if you take Polya's "hope for a common ground" into account; see chapter 5 of my book.

There seems to be resistance to these ideas, though. George Polya had already worked out most things by the 1940s, but was largely ignored by Bayesian philosophers. When I was talking about this idea in 1999, nobody remembered a 1987 article by James Franklin, a mathematician at the University of New South Wales, entitled 'Non-deductive Logic in Mathematics', *British Journal for the Philosophy of Science* 38: 1-18 (available here).

Some Bayesian philosophers object to mathematics being treated in this 'quasi-empirical' way. They take it as a tenet that it is irrational to accord logically equivalent statements different degrees of belief. If A follows logically from B and Pr(A) is less than Pr(B), then you are incoherent, even if you do not know that this relation holds.
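The coherence constraint here is just monotonicity of probability over entailment: if one statement entails another, no probability measure can give the entailed statement less probability. A minimal sketch (the three-coin sample space and the two events are my own toy illustration, not from the discussion above):

```python
from itertools import product
import random

# Sample space of three coin tosses; event A = "all heads",
# event B = "at least two heads". A entails B (A is a subset of B),
# so coherence demands Pr(A) <= Pr(B) under any probability measure.
omega = list(product("HT", repeat=3))
A = {w for w in omega if w.count("H") == 3}
B = {w for w in omega if w.count("H") >= 2}
assert A <= B

def prob(event, weights):
    """Probability of an event under a weighting of the sample space."""
    return sum(weights[w] for w in event)

# However we randomise the measure, the inequality holds:
random.seed(1)
for _ in range(1000):
    raw = {w: random.random() for w in omega}
    total = sum(raw.values())
    weights = {w: v / total for w, v in raw.items()}
    assert prob(A, weights) <= prob(B, weights)
print("Pr(A) <= Pr(B) held for every sampled measure")
```

The objection in the paragraph above is that a real agent may simply not know that the entailment holds, yet the coherence requirement penalises her all the same.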

A much more interesting response is the Lakatosian one. Putting it in my own terms, it would run like this: the whole point of Proofs and Refutations was to show that mathematical concepts change their meanings. Imagine that in the early 1800s you bet someone that the relation V - E + F = 2 holds for all polyhedra. They accept, then point out that the cylinder has V = 0, E = 2, F = 2, and so is a counter-example to the relation. But you don't accept the cylinder as a polyhedron, and fighting breaks out. What can you do to formulate a precise bet? Appeal to the long term: I bet that 100 years from now the majority of mathematicians will understand the term 'polyhedron' in such a way that V - E + F = 2 holds for it. The shame of it is that, had you lasted that long, you would have lost. You sensed that there was an important relation in the air and that it was worth refining a definition of polyhedron within a theoretical framework with the resources to understand the relation. You just overlooked that 'polyhedron' might come to embrace torus-shaped entities, and so on.
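The arithmetic the bet turns on is easy to make concrete. A small sketch (the torus counts below come from the standard 3x3 grid of squares with opposite sides identified; the cylinder counts are the ones cited above):

```python
def euler_characteristic(V, E, F):
    """V - E + F for a finite polyhedral cell complex."""
    return V - E + F

# The cube: 8 vertices, 12 edges, 6 faces -> the classical value 2.
print(euler_characteristic(8, 12, 6))   # 2

# A polyhedral torus: a 3x3 grid of squares with opposite sides
# identified gives 9 vertices, 18 edges, 9 faces -> 0, not 2.
print(euler_characteristic(9, 18, 9))   # 0

# The cylinder of the story, with the cell counts cited above:
print(euler_characteristic(0, 2, 2))    # 0
```

Both putative counter-examples give 0 rather than 2; the dispute is never over the subtraction, only over whether such objects deserve the name 'polyhedron'.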

Replying to Lakatos, one can agree that concept-stretching is in many ways more important than the plausibility of results, but that there are many situations with enough of a solid framework to allow for precise bets. If someone asks you to bet on whether the 10^30th zero of the zeta function satisfies the Riemann Hypothesis, you have Saunders MacLane's immortal poetry to guide you:

Norm Levinson managed to show, better yet,

At two-to-one odds it would be a good bet,

If over a zero you happen to trip

It would lie on the line and not just in the strip.

I seem to recall von Neumann taking part in a mathematical bet, and I'm sure there must have been others. In a sense, any research career involves a series of gambles as to what is likely to work, what is likely to prove important, and so on. For some scientific examples, see Wikipedia's Scientific wager article.

**Update**

It wasn't von Neumann, it was Hermann Weyl. In his article, Predicativity, Solomon Feferman explains:

A story here, recounted in my book [In the Light of Logic, Oxford Univ. Press], is apropos:

...a famous wager was made in Zurich in 1918 between Weyl and George Polya, concerning the future status of the following two propositions: (1) Each bounded set of real numbers has a precise upper bound. (2) Each infinite subset of real numbers has a countable subset. [The latter requires the Axiom of Choice.] Weyl predicted that within twenty years either Polya himself or a majority of leading mathematicians would admit that the concepts of number, set and countability involved in (1) and (2) are completely vague, and that it is no use asking whether these propositions are true or false, though any reasonably clear interpretation would make them false... . the loser was to publish the conditions of the bet and the fact that he lost in the Jahresberichten der Deutschen Mathematiker Vereinigung... (Feferman 1998, p. 57)

The wager was never settled as such, for obvious political reasons. According to Polya (1972) ['Eine Erinnerung an Hermann Weyl', Mathematische Zeitschrift 126, 296-29], “The outcome of the bet became a subject of discussion between Weyl and me a few years after the final date, around the end of 1940. Weyl thought he was 49% right and I, 51%; but he also asked me to waive the consequences specified in the bet, and I gladly agreed.” Polya showed the wager to many friends and colleagues, and, with one exception, all thought he had won.

I'd still like to know if there has been a straightforward case of odds being offered on a conjecture.

**Tuesday 1**

Peter Woit's blog Not Even Wrong is the most prominent space on the Web for criticisms of string theory. Take the October 26 entry and its 90+ replies. The contributors there are expressing their philosophy of science as they wrestle with the problem of the right way to go about a theory of quantum gravity. Speaking about the ways researchers *ought* to conduct their studies, terms referring to the virtues or their lack, such as 'honest' or 'arrogant', naturally appear. Is there anything philosophers can contribute to the debate?

There's a battle-line weaving through the disciplines that study science - philosophy, history, sociology - between those who by and large believe science to be a rational process in some absolute sense and those who do not. Reacting to a simplistic tale of the confirmation of theories, sociologists submitted scientific episodes to close scrutiny and found 'rationality' nowhere to be seen, except as a word bandied about by participants who give it their own gloss. Anthropological studies in the laboratory would find each of two groups calling the other 'unscientific', but when questions were raised about the scientificity of their own work would reply with "it doesn't matter, truth will out in the long run." Clearly, a more sophisticated rationalist position is needed.

Such a position must recognise, as do participants in the Woit discussion, that rival programmes need not have precisely the same aims. Results that group A produces may be taken to be important by them, while group B takes them to be insignificant. A clear case of this, in the field where I'm working now, involves getting machines to classify hand-written digits. At one point, the frequentist camp had the most accurate classifier. But the Bayesian camp had a classifier which, although its error rate was higher, could tell you which digits it was least certain about. If its least certain 2% were excluded, it achieved extremely good error rates, and this performance could not be emulated by the frequentists. So even in what appears to be a very narrow field, where it would seem that there could be little disagreement as to the goals, we find conflicting appraisals of achievements.
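The reject-option effect described above is easy to simulate. The following toy model is my own invention, not the actual digit-classification experiments: by construction, most errors sit in a small low-confidence minority of cases, so discarding the least-certain 2% of predictions cuts the error rate on what remains:

```python
import random

random.seed(0)

# Toy stand-in for a probabilistic digit classifier: 3% of examples
# are "ambiguous" (low confidence, coin-flip accuracy), the rest are
# clear-cut (high confidence, rarely wrong).
examples = []
for _ in range(10_000):
    if random.random() < 0.03:
        confidence = random.uniform(0.1, 0.5)
        wrong_prob = 0.5
    else:
        confidence = random.uniform(0.8, 1.0)
        wrong_prob = 0.005
    correct = random.random() >= wrong_prob
    examples.append((confidence, correct))

def error_rate(items):
    return sum(1 for _, ok in items if not ok) / len(items)

# Reject the least-certain 2% of predictions, as in the anecdote.
examples.sort(key=lambda t: t[0])      # least confident first
cutoff = int(0.02 * len(examples))
kept = examples[cutoff:]

print(f"error on everything:           {error_rate(examples):.4f}")
print(f"error after rejecting the 2%:  {error_rate(kept):.4f}")
```

The frequentist classifier in the story could not play this game at all, since it reported no per-digit uncertainty; that asymmetry in what counts as an achievement is the point of the paragraph above.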

So what can we hope for? Surely great care in characterising the goals and achievements of a programme, with the understanding that this characterisation will need to be rethought as the programme unfolds. But along with this we need other intellectual virtues: honesty, and a freedom from the pride that prevents acknowledgement of the current weaknesses of one's own programme, along with the recognition that the other programme might have the resources to comprehend those weaknesses. This wedding of the language of the virtues to the rationality of enquiry is characteristic of the philosopher I have mentioned here before, Alasdair MacIntyre. The rivalry that has most influenced his work is that between the Aristotelians and Augustinians in thirteenth-century Paris. He has attempted to characterise what was necessary for Aquinas to be in a position to reconcile the two doctrines, and to use each to resolve the other's weaknesses. For one thing, it required someone to learn both languages as 'first' languages, something often frowned upon by native speakers of each.

Perhaps MacIntyre's most difficult advice to put into effect is that a research programme lay out what it considers to be its current weaknesses. For this to be possible, it would require a further virtue from any rival groups, that they are sufficiently just not to exploit unfairly such a confession for their own self-promotion.
