
Friday, February 10, 2006

Mathematics and first principles

Denis Lomas, who has written a review of my book for MAA Reviews (members only), recently reminded me of Mitteleuropa am Aldwych, a review by the philosopher Ian Hacking of a book of correspondence between Paul Feyerabend and Imre Lakatos. Lakatos may be best known to you for his dialogue Proofs and Refutations, in which a teacher and students argue about the meaning and proofs of the Euler conjecture that for a polyhedron V - E + F = 2, the letters standing for the number of vertices, edges and faces (so for a cube, 8 - 12 + 6 = 2). Here is Hacking:
Lakatos's contribution to the philosophy of mathematics was, to put it simply, definitive: the subject will never be the same again. For decades the philosophy of mathematics was about foundations, set theory, paradoxes, axioms, formal logic and infinity - an agenda set by Bertrand Russell, among others, beefed up by the truly wonderful discoveries of Kurt Gödel. Lakatos made us think instead about what most research mathematicians do. He wrote an amazing philosophical dialogue around the proof of a seemingly elementary but astonishingly deep geometrical idea pioneered by Euler. It is a work of art - I rank it right up there with the dialogues composed by Hume or Berkeley or Plato.

Praise indeed, and yet elsewhere, as I commented at pp. 7-8 of a paper discussing Michael Friedman's Dynamics of Reason, Hacking describes Lakatos as a 'deflator' when it comes to mathematics. By this he means that Lakatos is showing that as mathematics proceeds, if it is carried out properly with plenty of critical discussion, a point will be arrived at where the definitions are such that results will follow easily from them. A theory which was initially driven by (quasi)-empirical facts has become merely a collection of analytic statements, true by virtue of meaning. Now, I think this is to get Lakatos very wrong, as I suggest on p.8 of the Friedman review. Yes, it's all about having good definitions, but they're good for Lakatos to the extent that they're right, or at least more right than their predecessors.

Urs Schreiber and I have been discussing related ideas about when one feels one has understood a construction properly. The strange thing is that there's almost a disincentive to reformulate a field to make it as well-organised as possible so as to allow a principled understanding. Some of this may be down to the temporary advantage you'll gain if you alone thoroughly grasp a field and can produce a string of new results which appear to your rivals to be arrived at rather mysteriously. But there's also this other issue that you will be thought to have made the results of the field trivial, or true simply in virtue of meaning.

I don't recall Lakatos anywhere providing us with the philosophical resources to help us ward off the charges of deflation or trivialisation. Instead, I think we should look to Alasdair MacIntyre's revival of Aristotelianism, in particular pages 184-5 of MacIntyre A. (1998) 'First Principles, Final Ends and Contemporary Philosophical Issues' in K. Knight (ed.) The MacIntyre Reader, Polity Press, pp. 171-201. This paper will appear in the first of two volumes of collected papers with Cambridge University Press this April. It's no easy matter to read an extract from one of his papers from a standing start, but here is a taste of what he says:

That first principles expressed as judgments are analytic does not, of course, entail that they are or could be known to be true a priori. Their analyticity, the way in which subject-expressions include within their meaning predicates ascribing essential properties to the subject and certain predicates have a meaning such that they necessarily can only belong to that particular type of subject, is characteristically discovered as the outcome of some prolonged process of empirical enquiry. That type of enquiry is one in which, according to Aristotle, there is a transition from attempted specification of essences by means of prescientific definitions, specifications which require acquaintances with particular instances of the relevant kind (Posterior Analytics, 93a21-9), even although a definition by itself will not entail the occurrence of such instances, to the achievement of genuinely scientific definitions in and through which essences are to be comprehended. (184)

...the analyticity of the first principles is not Kantian analyticity, let alone positivist analyticity. (185)


MacIntyre goes on to discuss truth in terms of the adequacy of the intellect to its subject matter. Clearly, relating these ideas to mathematics needs an extended treatment, beyond what I have given in my How Mathematicians May Fail to be Fully Rational, e.g., pp. 12-13.

9 Comments:

dennis said...

In the same article Hacking goes on to remark about Proofs and Refutations:

"Traditional views about mathematics get played out, but also Lakatos's own ideas of mathematics as evolutionary and concrete, engaged in the creation of new concepts and the fixing of old ones so that, in the end, permanent necessary mathematical truth is created by a process of which the dialogue itself is both example and illustration."

Leaving aside your remarks about Hacking on Lakatos, the above is a view of part of the mathematical experience. A “fallible” process leads to permanent necessary mathematical truth. Many theorists in and around mathematics contend that mathematics in Lakatos’s account never emerges from the “fallible” process. Hacking provides a forceful alternative.

February 11, 2006 12:29 PM  
david said...

Hmm. There seems to be a tension, then, in what Hacking is saying about Lakatos. I think most people don't understand Lakatos's notion of fallibility properly, and that Hacking in this quotation is pointing us in the right direction. But to the extent that Lakatos sees proof analysis as the engine of conceptual change, he's going to get himself caught in a dead-end as a far more robust rigour emerges in the early 20th century. You wouldn't be able to produce falsifiers of the kind - a cylinder doesn't satisfy V - E + F = 2 - since you would never now be as lax with your definition of a polyhedron.
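
To make the count explicit: on a definition of 'polyhedron' lax enough to admit curved faces and edges, a cylinder has V = 0, E = 2 (the two circles where the discs meet the curved side) and F = 3 (the two discs plus the curved side), so V - E + F = 0 - 2 + 3 = 1, not 2. A modern definition rules the cylinder out from the start, so the falsifier never arises.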

In my book I try to develop his notion of a 'heuristic falsifier' to allow it to function for contemporary mathematics. E.g., in chapter 9, where I discuss the van Kampen conception of understanding the path structure of a space through those of its parts. This doesn't work for as simple a shape as the circle. What you need are groupoids rather than groups. Another example is where some of the spaces Connes considers don't fit well with classical topology.
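
To sketch the circle case: cover the circle with two overlapping arcs U and V. Each arc is contractible, but the overlap U ∩ V falls into two disjoint pieces, so the group form of the van Kampen theorem, which requires the overlap to be path-connected, simply doesn't apply - and applied naively it would deliver the trivial group, when the fundamental group of the circle is in fact the integers, Z. The groupoid form, computed relative to a set of base points with one point in each piece of the overlap, does apply and recovers Z.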

I also talk about this on pp. 18-19 of this. I think the best way forward is to treat fallibility in the following way: we may have sophisticated mathematical conceptions of space, dimension, symmetry, quantity, etc., but we should accept that mathematicians in the future will quite possibly find these conceptions limited.

Vladimir Arnold seems to be driving at the same point:

"Sylvester (1876) already described as an astonishing intellectual phenomenon, the fact that general statements are simpler than their particular cases. The antibourbakiste conclusion that he drew from this observation is even more striking. According to Sylvester, a mathematical idea should not be petrified in a formalised axiomatic setting, but should be considered instead as flowing as a river. One should always be ready to change the axioms, preserving the informal idea.

Consider for instance the idea of a number. It is impossible to discover quaternions trying to generalise real, rational, complex, or algebraic number fields."
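
For what it's worth, the quaternion case is easily stated: Hamilton's numbers a + bi + cj + dk multiply according to i^2 = j^2 = k^2 = ijk = -1, so that ij = k but ji = -k. Commutativity fails, so no process of generalisation that holds the field axioms fixed could ever have arrived at them; the axioms themselves had to flow.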

February 11, 2006 1:25 PM  
dennis said...

david, you write:

I think the best way forward is to treat fallibility in the following way: we may have sophisticated mathematical conceptions of space, dimension, symmetry, quantity, etc., but we should accept that mathematicians in the future will quite possibly find these conceptions limited.

A lot depends on what quite possibly will be found limited and on what “quite possibly” and “limited” mean. If all the results of mathematics quite possibly will be found limited, if “quite possibly” means the same as “likely”, and if “limited” means the same as “wrong”, then the claim becomes: all mathematical results will likely be found wrong. But this claim is not very convincing. One can point (as I think I remarked before) to hosts of results in finitary mathematics. These surely will endure.

dennis

February 11, 2006 6:39 PM  
dennis said...

I don't understand the MacIntyre quotation. Perhaps Lonergan, who also wrote in the Aristotelean tradition, is saying much the same thing:

As every schoolboy knows, a circle is a locus of coplanar points equidistant from a center. What every schoolboy does not know is the difference between repeating that definition as a parrot might and uttering it intelligently. So, with a sidelong bow to Descartes’s insistence on the importance of understanding very simple things, let us inquire into the genesis of the definition of the circle.…

Imagine a cartwheel with its bulky hub, its stout spokes, its solid rim.

Ask a question. Why is it round?

Limit the question. What is wanted is the immanent reason or ground of the roundness of the wheel. Hence a correct answer will not introduce new data such as carts, carting, transportation, wheelwrights, or their tools. It will refer to the wheel.

Consider a suggestion. The wheel is round because its spokes are equal. Clearly, that will not do. The spokes could be equal yet sunk unequally into the hub and rim. Again, the rim could be flat between successive spokes.

Still, we have a clue. Let the hub decrease to a point; let the rim and spokes thin out into lines; then, if there were an infinity of spokes and all were exactly equal, the rim would have to be perfectly round; inversely, were any of the spokes unequal, the rim could not avoid bumps or dents. Hence we can say that the wheel necessarily is round inasmuch as the distance from the center of the hub to the outside of the rim is always the same. (From Insight, Lonergan 1957, pp. 31-32)
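
(Put analytically: shrink the hub to the origin and let every spoke have the same length r; the rim is then exactly the locus x^2 + y^2 = r^2, and a single unequal spoke puts a point of the rim off that locus.)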

dennis

February 11, 2006 6:54 PM  
dennis said...

Sorry for so many posts. In a previous post I said:

One can point (as I think I remarked before) to hosts of results in finitary mathematics. These surely will endure.

This can be misinterpreted. Better:

One can point to a great many results in finitary mathematics -- results which will surely endure.

February 11, 2006 8:33 PM  
david said...

Of course particular finite combinatorial facts won't be overthrown. But their place in the system changes when they are found to be consequences of deeper laws. Here are two facts: if you add one stone to a 3 x 3 square of stones, they can be arranged to form a triangle; if you add 7 stones, you can form another square. At our current stage of knowledge, the latter is more important. In my Mathematical Kinds paper I argue this at greater length for the facts: reversing the decimal digits of '13' yields another prime; 13 = 2^2 + 3^2. The latter fact is much more important to us, and it's hard to conceive of this assessment being reversed.
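
Spelling out the arithmetic: 9 + 1 = 10 = 1 + 2 + 3 + 4, the fourth triangular number, while 9 + 7 = 16 = 4^2, an instance of the general law n^2 + (2n + 1) = (n + 1)^2. Likewise, reversing '13' gives 31, which happens to be prime, whereas 13 = 2^2 + 3^2 instantiates Fermat's theorem that an odd prime is a sum of two squares just in case it leaves remainder 1 on division by 4.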

What is at stake is conceptual organisation. In combinatorics, debates at the highest level will concern the importance of Joyal's theory of species or Rota's umbral calculus.

February 13, 2006 12:12 PM  
dennis said...

David, you wrote in the previous post:
"Of course particular finite combinatorial facts won't be overthrown. But their place in the system changes when they are found to be consequences of deeper laws. Here are two facts: if you add one stone to a 3 x 3 square of stones, they can be arranged to form a triangle; if you add 7 stones, you can form another square. At our current stage of knowledge, the latter is more important."

And in a post before that:

“I think the best way forward is to treat fallibility in the following way: we may have sophisticated mathematical conceptions of space, dimension, symmetry, quantity, etc., but we should accept that mathematicians in the future will quite possibly find these conceptions limited.”

Are there really two questions or issues here? One question is whether or not there are secure results (facts, propositions, etc.) (“secure” = “necessary and permanent”). The other question is the significance of these results relative to a current or future system. A feature of mathematical knowledge is that results survive changes of system. If they did not, the results would not be secure. This is quite different from their relative importance within systems. Conceivably their importance could both ebb and flow.

Thus, there seem to be two bases on which a result can be judged. One concerns its certainty; the other concerns its significance within a system. The latter can change, obviously. But that shifting has no bearing on its certainty.

If these two issues are not clearly separated – and in current discussions they typically are not – ascriptions of fallibility become too general, applying rightly to evolving systems, but wrongly to results. (Actually, the term “fallible” and its cognates may be inappropriate even in the limited context of applying to systems. “Evolving” or “developing” seems more appropriate.)
regards, DL

March 04, 2006 2:58 PM  
david said...

Denis,

I think Lakatos is not altogether clear on this separation, and this is largely because he sees proof analysis as the main engine for conceptual change, while understanding that the developing rigour of the twentieth century isn't going to offer much scope to find *falsifiers* for proved statements. With the proviso that terms at one time, even in our rigorous era, may later be found to need finessing into different variants, your separation is reasonable. I suppose the point I want to insist on is that too much philosophical attention has been lavished on certainty and far too little on significance. The bolder step to take is to say that the "bringing new concepts out of the dark" and "getting concepts right" aspects of mathematics are primary.

March 04, 2006 8:11 PM  
dennis said...

It might be interesting to look at the issue of certainty (rigour, etc.) from the point of view of the many ways in which it arises in mathematical practice. Results are discarded if shown to be wrong. There is a reluctance to sanction results based on long proofs, especially if they are understood only by a small community. Worry about results based on too many conjectures is another way in which the issue of certainty arises. Mathematicians get nervous if conjectures litter an arena. (You mention something like this in your book, I recall.) Similarly, there is the concern about computer-generated results. Can they be said to be certain (rigorous, secure, truly part of mathematical knowledge, etc.)? Also, some mathematicians like to prove things in a number of ways, in particular in ways that rely on simpler mathematics than that contained in the original proofs, again a reflection (in part anyway) of a concern for certainty. DL

March 04, 2006 11:56 PM  
