In my struggle as a philosopher to make my work responsive to mathematics as actually and historically practised, I have generally found it illuminating to search for similarities between mathematics and the natural sciences. Now, this strikes some people as wrong-headed. Even if there are similarities between these two knowledge-acquiring practices, why focus on these and not on what makes mathematics unique, say, its distinctive use of proof? But what if this choice between searching for similarities and searching for differences in the case of mathematics and science resembled that facing the zoologist wondering about that large sea creature - the whale? It seems clear to us now that we can be led to make important discoveries about the whale from previously acquired knowledge about the elephant (anatomy, physiology, genetics, behaviour). After locating commonalities we can start to think about which features are unique to whales, and, who knows, an apparently unique feature there might find an analogue back amongst the elephants. You are probably thinking that I am likening the less accessible species - the whale - to mathematics, and the more accessible one - the elephant - to science. But consider the possibility that certain features of knowledge-acquisition are easier to detect in the case of mathematics. This crazy idea was one that the Hungarian mathematician George Pólya adhered to. And in fact as early as 1941 he had worked out most of the principles of the probabilistic approach to epistemology known as Bayesianism. Much effort could have been saved if philosophers of science had listened to him.
Well, I believe there are many other reasons we might look to mathematics: as an example of an extraordinarily intricate web of coherence meeting hierarchical foundations; for clearer examples of the heuristic effects of thinking in different ways about the same object; for the relationship between rationality and aesthetics; and for the tireless working over of ideas after publication.
This having been said, enormously more work has been carried out on the natural sciences, so it is natural to look to studies of science for inspiration. In the introduction to my book I raise five debates, contributions to which should illuminate the nature of mathematics. The first three are brought over from Ian Hacking's discussion of the natural sciences in 'The Social Construction of What?' (Harvard University Press, 1999).
1. Inherent structurism/nominalism: Can we make any sense of the idea of carving the definition of a concept correctly? Can we agree with Lakatos here?
"As far as naïve classification is concerned, nominalists are close to the truth when claiming that the only thing that polyhedra have in common is their name. But after a few centuries of proofs and refutations, as the theory of polyhedra develops, and theoretical classification replaces naïve classification, the balance changes in favour of the realist." (Lakatos Proofs and Refutations: 92n)
Why does Frege describe the qualities of good mathematical concepts in the same terms as modern exponents of the theory of natural kinds?
[Kant] seems to think of concepts as defined by giving a simple list of characteristics in no special order; but of all ways of forming concepts, that is one of the least fruitful. If we look through the definitions given in the course of this book, we shall scarcely find one that is of this description. The same is true of the really fruitful definitions in mathematics, such as that of the continuity of a function. What we find in these is not a simple list of characteristics; every element is intimately, I might almost say organically, connected with others. (Frege, Foundations of Arithmetic: 100)
2. Inevitability: given a mathematics as sophisticated as our own, how probable was it that concepts such as natural numbers, groups, groupoids would be devised? Do any concepts 'force' themselves upon our attention, and do they dictate to us the way they are used?
3. Reasons for stability: Why do we persist in teaching certain ways of thinking about particular concepts - social inertia or because that's what they are like?
4. Connectivity of mathematics: How should we represent the connectivity of mathematics on a scale running from thoroughly fragmented to very unified? Why are there so many apparently surprising connections?
5. Miraculousness of applicability: Choose a position between the amazement of Eugene Wigner and its deflation by those who take empirical research as the source of much mathematics.
In 1908, Henri Poincaré claimed:
...the mathematical facts worthy of being studied are those which, by their analogy with other facts, are capable of leading us to the knowledge of a mathematical law, just as experimental facts lead us to the knowledge of a physical law. They are those which reveal to us unsuspected kinship between other facts, long known, but wrongly believed to be strangers to one another.
Here we have an expression of value coupled with an analogy to science. Chapter 4 of my book treats the subject of analogy but does not touch on the issue of 'mathematical law'. Given the enormous amount of effort philosophers of science have devoted to scientific laws, we might expect to be able to learn from them. In this article I argue we can do so, in particular, by contemplating the mathematical equivalent of a 'natural kind'. For example, just as we might characterise a term as worthless and gerrymandered (like 'groat', defined as being either red or a goat), or useful but not intrinsic (like the contrasting terms 'tree' and 'bush' for a gardener), or as designating a natural kind (such as 'electron' or 'gold'), similar assessments are passed on mathematical terms.
The example I give in the paper of a law-like statement is "Primes expressible as the sum of two squares are precisely 2 and those of the form 4n + 1". This appears to be a natural starting point for people interested in this question:
This regularity, at first blush so simple, is the germ of one of the major branches of the higher number theory and was the central theme of the development that began with Euler and Legendre in the eighteenth century, and continued down to our own time, with contributions by Gauss, Kummer, Takagi, and Artin. (Robert Langlands, Representation Theory: Its Rise and Its Role in Number Theory, 1990: 3).
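The regularity itself is easy to verify numerically. Here is a minimal sketch (the function names are my own), checking every prime below 200 against the 4n + 1 criterion:

```python
from math import isqrt

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def is_sum_of_two_squares(p):
    """True if p = a^2 + b^2 for some integers a, b >= 0."""
    for a in range(isqrt(p) + 1):
        b = isqrt(p - a * a)
        if a * a + b * b == p:
            return True
    return False

# The law: a prime is a sum of two squares iff it is 2 or of the form 4n + 1.
for p in filter(is_prime, range(2, 200)):
    assert is_sum_of_two_squares(p) == (p == 2 or p % 4 == 1)
```

Of course, no finite check amounts to a proof; the point is only that the law-like pattern is there on the surface, waiting for the deeper theory Langlands describes.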
I claim in the paper that law-like statements point beyond themselves to something deeper. The difficulty is in saying what exactly this is. Here's Langlands again:
The aesthetic tension between the immediate appeal of concrete facts and problems on the one hand, and, on the other, their function as the vehicle to express and reveal not so much universal laws as an entity of a different kind, of which the laws are the very mode of being, is perhaps more widely acknowledged in physics, where it has been accepted that the notions needed to understand perceived reality may bear little resemblance to it, than in mathematics, where oddly enough, especially among number theorists, conceptual novelty has frequently been deprecated as a reluctance to face the concrete and a flight from it. Developments of the last half-century have matured us, as an examination of Gerd Faltings's proof of the Mordell conjecture makes clear, but there is a further stage to reach. (ibid.: 32)
In my Rome paper, I speculate that an expected difference between a law-like mathematical fact and a "happenstantial" one is that the former may be categorified.
E.g., 8 × 3 = 6 + 8 + 10, 5 × 5 = 1 + 3 + 5 + 7 + 9, etc., can be categorified as cases of the Clebsch-Gordan rule for the decomposition of the tensor product of irreducible representations of the Lie group SU(2), the numerical identities becoming isomorphisms between vector spaces.
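At the level of dimensions the rule is simple enough to check mechanically. A sketch (the function name is my own): the irreducible SU(2)-representation of dimension m tensored with that of dimension n decomposes into irreducibles of dimensions |m − n| + 1, |m − n| + 3, ..., m + n − 1:

```python
def clebsch_gordan_dims(m, n):
    """Dimensions of the irreducible summands of V_m tensor V_n for SU(2),
    where V_d denotes the irreducible representation of dimension d."""
    return list(range(abs(m - n) + 1, m + n, 2))

# The numerical identities above are the dimension counts of the decompositions:
assert clebsch_gordan_dims(8, 3) == [6, 8, 10]       # 8 x 3 = 6 + 8 + 10
assert clebsch_gordan_dims(5, 5) == [1, 3, 5, 7, 9]  # 5 x 5 = 1 + 3 + 5 + 7 + 9
# And the dimensions always balance: dim(V_m tensor V_n) = m * n.
assert all(sum(clebsch_gordan_dims(m, n)) == m * n
           for m in range(1, 12) for n in range(1, 12))
```

The categorification replaces these equalities of numbers by isomorphisms of vector spaces, which is precisely what makes the pattern law-like rather than happenstantial.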
I should mention a possible problem with Poincaré's claim. When I spoke to this quotation at a philosophy of mathematics seminar in Cambridge, Prof. Garling brought up the case of three-dimensional topology, a subject to which Poincaré himself made very significant contributions, posing one of the central conjectures of the field: that a three-dimensional manifold homotopy equivalent to the 3-sphere is homeomorphic to it. Through the 20th century mathematicians sought to resolve analogues of this question in other dimensions. The point is that while it looks like the Poincaré conjecture is true in all dimensions, there will be no systematic way of showing this. Where a general proof establishes its truth in dimensions 5 and above, dimension 4 required intricate geometric arguments using instantons and other field theoretic devices of the physicists, and the three-dimensional case has possibly just recently been proved using a range of novel methods. The question was raised as to whether the interest in the anomaly of the 3D case (and the 4D case) is connected to its being at a dimension of special relevance to us humans. Most mathematicians present reckoned that this was not an important factor, and that the failure of analogical attempts in any limited range of dimensions could prove of interest.
This having been said, research in three dimensions links strongly into the mathematical network as a whole:
The work, for example, of Thurston in three-dimensional geometry, aims at a classification of geometries that one can put on three-dimensional manifolds. This is much deeper than the two-dimensional theory. The Thurston program is by no means completed yet, and completing that program certainly should be a major challenge.
The other remarkable story in three dimensions is the work of Vaughan-Jones with ideas essentially coming from physics. This gives us more information about three dimensions, which is almost orthogonal to the information contained in the Thurston program. How to link those two sides of the story together remains an enormous challenge, but there are recent hints of a possible bridge. So this whole area, still in low dimensions, has its links to physics, but it remains very mysterious indeed. (Atiyah M. 2002: 14-15)
While 'quantum topology', where Jones's work resides, utilises algebra tailored to the relevant dimension, there are significant relationships between the different dimensions.
So perhaps we should qualify Poincaré's claim as follows:
"... many of the mathematical facts worthy of being studied are those which, by their analogy with other facts, are capable of leading us to the knowledge of a mathematical law, just as experimental facts lead us to the knowledge of a physical law. They are those which reveal to us unsuspected kinship between other facts, long known, but wrongly believed to be strangers to one another. On the other hand, statements expressing analogues of known facts are also worthy of study: whether true but requiring very different methods of proof, which tie in with apparently unrelated facts, or false in ways whose failure can be connected to apparently unrelated facts."
The following comment made by the mathematician Terry Gannon might make the philosopher's pulse race:
"Math is not above metaphysics; like any area it grows by asking questions, and changing your perspective - even to a metaphysical one - should suggest new questions." (Monstrous Moonshine and the Classification of CFT, math.QA/9906167: 24)
"At last, they need us", we cry. But, alas, Gannon's kind of metaphysics does not relate to the notions of trope theory, Lewisian possible worlds, or temporal parts, but is rather a quest for the 'deeper common situation' which is the source of the 'ubiquity' in mathematics of certain meta-patterns, examples of which are the A-D-E simply laced Dynkin diagrams, modular functions, and the number 24. The common situation for the latter, for instance, is that 24 has the property that its divisors, and only they, are the values of n for which every x coprime to n satisfies x² ≡ 1 (mod n).
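The property of 24 just stated is finite to check for any bounded range of n. A quick sketch (the function name is my own):

```python
from math import gcd

def units_square_to_one(n):
    """True iff every x coprime to n satisfies x^2 congruent to 1 (mod n)."""
    return all(x * x % n == 1 % n for x in range(1, n + 1) if gcd(x, n) == 1)

# The n with this property are exactly the divisors of 24.
good = [n for n in range(1, 500) if units_square_to_one(n)]
assert good == [1, 2, 3, 4, 6, 8, 12, 24]
```

(The search stops at 500, but the property forces the group of units mod n to have exponent dividing 2, which pins n down to a divisor of 24 in general.)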
Still, there is something to say about the values surfacing here since a competing aesthetic finds this kind of repetition a little tiresome, more of a sign that mathematicians think within a limited psychological framework than that they are discovering the deepest of structural truths. (See, e.g., Zeilberger's Opinion 49.)
I would like to see philosophers of mathematics make contributions to our understanding of specific parts of mathematics other than set theory and arithmetic. In chapter 10 of my book I examine a research programme going by the name of higher-dimensional algebra. Four reasons why philosophers should investigate higher-dimensional algebra (or higher-dimensional category theory) are as follows:
(a) Many important constructions may profitably be seen as the 'categorification' of familiar constructions. Indeed, very important structures are reached quickly by categorifying simple structures such as the natural numbers or the integers. In a few steps you can hop from the integers to the use of quantum groups to provide invariants for knots, braids and tangles. In other words, a measure of the simplicity of a construction is given by the ease with which it may be expressed in this language.
(b) It provides a way of organising a considerable proportion of mathematics. It shows us that set theory - taken in its category theoretic sense, i.e., free of the unwanted structure provided by membership trees - talks about just one corner of the mathematical universe, and it offers the potential for a clearer idea of the structuralism operating within mathematics. There are many more shades of sameness between identity and difference than isomorphism. This points beyond most philosophers' conception of category theory as the study of structure-preserving mappings between structured sets. Already at the level of categories, where equivalent yet not isomorphic categories are the 'same' for most mathematical purposes, this conception is inadequate.
(c) These constructions have applications in mathematics, computer science, and physics. In particular, we hear that "higher-dimensional algebra is the perfect language for topological quantum field theory" (Baez). A project of casting the mathematical languages used in physics in this language suggests itself, both to explore the structural realist thesis and to suggest ways to reconcile different theories, perhaps even general relativity and quantum field theory.
(d) Higher-dimensional algebra blurs the distinction between topology and algebra. Pieces of algebraic notation are taken as dimensioned topological entities inhabiting a space. Deformation within that space then corresponds to calculation. In this way, higher-dimensional algebra accounts for many uses of diagrams as means with which to calculate and reason. As Leinster puts it, "There is no clear line between mathematical language and 'real' mathematics."
Some of the participants of this programme have a very strong image of what mathematics should be like:
"...it makes sense to spend at least a little time going back and thinking about simple things. This can be a bit embarrassing, because we feel we are supposed to understand these things completely already. But in fact we do not. And if we never face up to this fact, we will miss out on opportunities to make mathematics as beautiful and easy to understand as it deserves to be." (J. Baez and J. Dolan, 'From Finite Sets to Feynman Diagrams', Mathematics Unlimited - 2001 and Beyond)
From considerations of the simple notions of counting and division, they end up rendering the commutation relations between quantum field theory's annihilation and creation operators as:
"If we have a box with some balls in it, there is one more way to put an extra ball in and then take a ball out than there is to take a ball out and then put one in."
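The count behind that sentence can be made completely explicit. A toy sketch (names are my own): label the balls, enumerate the choices available in each order, and the two counts differ by exactly one, mirroring the commutation relation aa* − a*a = 1:

```python
def put_then_take(balls, new):
    """Put the new ball in, then choose any of the n + 1 balls to take out.
    Each choice is recorded as a pair (ball put in, ball taken out)."""
    after_put = balls | {new}
    return [(new, removed) for removed in sorted(after_put)]

def take_then_put(balls, new):
    """Choose any of the n balls to take out, then put the new ball in."""
    return [(removed, new) for removed in sorted(balls)]

box = {1, 2, 3}                           # a box with n = 3 balls
assert len(put_then_take(box, 4)) == 4    # n + 1 ways
assert len(take_then_put(box, 4)) == 3    # n ways
```

For a box of any size the difference is 1, which is the combinatorial content Baez and Dolan categorify into the annihilation/creation operator relations.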
Now, I believe that higher-dimensional algebra/category theory can lay claim to the epithet 'foundational', wresting the term from exclusively logical approaches. In this respect I can count on the support of Yuri Manin:
"I will understand 'foundations' neither as the para-philosophical preoccupation with the nature, accessibility, and reliability of mathematical truth, nor as a set of normative prescriptions like those advocated by finitists or formalists. I will use this word in a loose sense as a general term for the historically variable conglomerate of rules and principles used to organize the already existing and always being created anew body of mathematical knowledge of the relevant epoch. At times, it becomes codified in the form of an authoritative mathematical text as exemplified by Euclid's Elements. In another epoch, it is better expressed by the nervous self-questioning about the meaning of infinitesimals or the precise relationship between real numbers and points of the Euclidean line, or else, the nature of algorithms. In all cases, foundations in this wide sense is something which is relevant to a working mathematician, which refers to some basic principles of his/her trade, but which does not constitute the essence of his/her work." (Georg Cantor and His Heritage: 6, math.AG/0209244)
Manin then describes how twentieth century foundations saw the passage from Cantorian sets to categories, and then on to n-categories: "This vision, due initially to Grothendieck, extends the boundaries of classical mathematics, especially algebraic geometry, and exactly where it interacts with modern mathematical physics."
From 7 to 18 June 2004, I participated in a fascinating workshop on n-categories. On some days there were as many as 8 hours of lectures either treating or comparing the dozen or so definitions of n-categories or explaining their applications. I am utterly convinced that this research programme will mark a decisive turning point in the history of mathematics. Many of the speakers have made their lecture notes or slides available at the workshop site, so you can read about how n-categories can help reconcile quantum mechanics and general relativity, how they help the French nuclear industry's computers avoid deadlock, or, in my own contribution, how they can revitalise philosophy.
Like several of the other participants at this workshop, Eugenia Cheng has felt called upon to express her philosophical reflections about mathematics. Her essay on mathematical morality at her site is well worth reading. 'Morality' here has nothing to do with a mathematician's social obligations, rather it concerns the idea of getting to the bottom of a construction to see how it really works. Over several years John Baez has constructed a priceless site, and there are many interesting items at Ronnie Brown's.
For too long philosophers have been concerned with what can be said 'locally' about any specific piece of mathematical reasoning, e.g., can it be cast in some favoured formal system? More recently we find some interest in subtler local qualities, e.g., the connections of a piece of mathematics to the rest of the mathematical network brought out by studies of mathematical explanation and the nature of good proofs. We might wonder, however, whether we could also strive towards a more 'global' understanding of the network. On this score, Vladimir Arnold has made some fascinating suggestions in his 'Polymathematics'. (See also another account of these ideas in Lecture 2 of the Toronto Lectures on this page.) Arnold suggests that there is a way of thinking systematically about the mysterious relations between apparently diverse fields so frequently noted by mathematicians via various informal processes: "The informal complexification, quaternionization, symplectization, contactization etc., described below, are acting not on such small things, as points, functions, varieties, categories or functors, but on the whole of mathematics. I have successfully used these ideas many times as a method to guess new results. I hope therefore that in the future this method of the multiplication of mathematics will be as standard, as is now the transition from finite-dimensional linear algebra to the theory of integral equations and to functional analysis."
Although from Arnold's frequent declarations of support for Poincaré's style over Hilbert's, characterised often, though rather inaccurately, as the geometric over the algebraic, one might guess that he would disapprove of a systematising algebra such as category theory, he does remark that: "The main dream (or conjecture) is that all these trinities are united by some rectangular "commutative diagrams". I mean the existence of some "functorial" constructions connecting different trinities." (Arnold, lecture 2: 10)
I shall take heart from this dream and extend here a scheme I outlined in Chapter 10 of my book, an amalgamation of a scheme of Sir Michael Atiyah with one of Baez and Dolan, which derives in part from another giant of the twentieth century, Alexandre Grothendieck:
19th century: the study of functions of one (complex) variable; the codification of 0-category theory (set theory).
20th century: the study of functions of many variables; the codification of 1-category theory.
21st century: infinite-dimensional mathematics; the codification of n-category theory, and infinite-dimensional category theory.
Of course, this is not to say that in the nineteenth century linear algebra in higher dimensions wasn't being developed right up to algebras of functionals operating on infinite-dimensional function spaces. Atiyah's claim is about global, non-linear mathematics.
"What about the 21st century? I have said the 21st century might be the era of quantum mathematics or, if you like, of infinite-dimensional mathematics. What could this mean? Quantum mathematics could mean, if we get that far, 'understanding properly the analysis, geometry, topology, algebra of various non-linear function spaces', and by 'understanding properly' I mean understanding it in such a way as to get quite rigorous proofs of all the beautiful things the physicists have been speculating about." (Atiyah 2002: 14)
This work requires generalising the duality between position and momentum in classical mechanics: "This replaces a space by its dual space, and in linear theories that duality is just the Fourier transform. But in non-linear theories, how to replace a Fourier transform is one of the big challenges. Large parts of mathematics are concerned with how to generalise dualities in nonlinear situations. Physicists seem to be able to do so in a remarkable way in their string theories and in M-theory. Understanding those non-linear dualities does seem to be one of the big challenges of the next century as well." (Atiyah 2002: 15)
Atiyah M.(2002) 'Mathematics in the 20th Century', Bulletin of the London Mathematical Society 34(1), 1-15.
Probabilistic existence proofs
The existence of a mathematical object is established by proving that the probability of its existence is positive. The technique was first devised by Paul Erdős. In a few cases the proof can be translated into a counting argument, but this is not so in general. Rota calls for a "correspondence principle" to allow for this translation, or, failing this, a "new logic associated with probabilistic reasoning" (might Bayesianism be the key?). It would be good to prove the following jibe of his wrong: "philosophers of mathematics are too incompetent to deal with a problem which ought to be their bailiwick."
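The flavour of the technique can be seen in Erdős's original 1947 application, the lower bound for Ramsey numbers: if the expected number of monochromatic k-cliques in a random 2-colouring of the complete graph K_n is below 1, then some colouring has none, so such a colouring exists and R(k, k) > n. A sketch of the resulting bound (the function name is my own):

```python
from math import comb

def erdos_ramsey_bound(k):
    """Largest n with C(n, k) * 2^(1 - C(k, 2)) < 1.

    For such n, the expected number of monochromatic copies of K_k in a
    uniformly random 2-colouring of the edges of K_n is less than 1, so a
    colouring with none must exist -- a pure existence claim, exhibiting
    no such colouring -- and hence R(k, k) > n."""
    n = k
    while comb(n + 1, k) * 2 ** (1 - comb(k, 2)) < 1:
        n += 1
    return n

assert erdos_ramsey_bound(4) == 6   # so R(4, 4) > 6
```

Note that the argument delivers existence without construction: no explicit colouring is produced, which is just the feature Rota's hoped-for "correspondence principle" would have to account for.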
Like Baez and Dolan's categorification and Arnold's symplectisation, quaternionisation, etc., another important process applied to large parts of mathematics is q-deformation. In some sense, the mathematics of finite sets is mathematics in which a parameter q takes the value 1. Much fun is to be had studying q-deformed mathematics for other values of q (zero, infinity, prime power, complex number of modulus one). Curiously, this deformation, which finds powerful use in quantum physics, began in the nineteenth century long before quanta had been imagined. What is going on here? See this historical paper. Also explore "tropical" mathematics, which has been described as a dequantised form of classical mathematics.
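The simplest entry point is the Gaussian binomial coefficient, the q-deformation of C(n, k). A sketch (function names are my own): at q = 1 it collapses to the ordinary binomial coefficient, counting k-element subsets of an n-element set, while at a prime power q it counts k-dimensional subspaces of the vector space F_q^n:

```python
def q_int(n, q):
    """The q-integer [n]_q = 1 + q + ... + q^(n-1); at q = 1 this is just n."""
    return sum(q ** i for i in range(n))

def q_binomial(n, k, q):
    """Gaussian binomial coefficient: the q-deformation of C(n, k).
    The quotient of q-integer products is always an exact integer."""
    num = den = 1
    for i in range(k):
        num *= q_int(n - i, q)
        den *= q_int(k - i, q)
    return num // den

assert q_binomial(4, 2, 1) == 6    # q = 1: ordinary C(4, 2), subsets of a 4-set
assert q_binomial(4, 2, 2) == 35   # q = 2: 2-dimensional subspaces of F_2^4
```

Setting q = 1 throughout recovers the combinatorics of finite sets, in line with the slogan above that finite-set mathematics is the q = 1 case.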
The philosophy of geometry has become a sub-branch of philosophy of physics. But, naturally enough, mathematicians have been hard at work devising new ways of thinking about space. An excellent starting point is this expository article by Pierre Cartier, which relates two powerful mathematical approaches to space developed by Alexandre Grothendieck and Alain Connes. For comments by Connes on this relation see pp. 20-21 of A View of Mathematics.
Take a mathematical concept and examine arguments for why it should be studied. My study of groupoids revealed much about some mathematicians' images of mathematics, but other well-chosen examples should reveal new insights. Timothy Chow suggested to me that matroids would be a good choice.
In chapter 7 of my book I argue that Lakatos does not see how formalization can act as a springboard for the further elaboration of intuition, but rather focuses on the negative aspects of formalization as marking the end of a period of creative freedom. Similar thoughts about the continuing dialectic of mathematical development are beautifully captured by the Russian mathematician Yuri Manin:
"...the most fascinating thing about algebra and geometry is the way they struggle to help each other to emerge from the chaos of non-being, from those dark depths of subconscious where all roots of intellectual creativity reside. What one "sees" geometrically must be conveyed to others in words and symbols. If the resulting text can never be a perfect vehicle for the private and personal vision, the vision itself can never achieve maturity without being subject to the test of written speech. The latter is, after all, the basis of the social existence of mathematics.
A skillful use of the interpretative algebraic language possesses also a definite therapeutic quality. It allows one to fight the obsession which often accompanies contemplation of enigmatic Rorschach's blots of one's imagination.
When a significant new unit of meaning (technically, a mathematical definition or a mathematical fact) emerges from such a struggle, the mathematical community spends some time elaborating all conceivable implications of this discovery. (As an example, imagine the development of the idea of a continuous function, or a Riemannian metric, or a structure sheaf.) Interiorized, these implications prepare new firm ground for further flights of imagination, and more often than not reveal the limitations of the initial formalization of the geometric intuition. Gradually the discrepancy between the limited scope of this unit of meaning and our newly educated and enhanced geometric vision becomes glaring, and the cycle repeats itself."
(Yuri Manin, 'Von Zahlen und Figuren', math.AG/0201005)
Studies of this kind of process are desperately needed.
In the Preface to his book, Euler: The Master of Us All (The Mathematical Association of America, 1999), William Dunham, commenting on what appears by our modern standards of rigour to be illegitimate reasoning, claims (p. xviii) that "as often happens with Euler's "mistakes," we come to realise that, though this be mathematical madness, yet there is method in it." In his Mathemagics (A tribute to L. Euler and R. Feynman), Pierre Cartier rather sees the Eulerian style as an alternative way of doing mathematics, one continued in the twentieth century by Richard Feynman:
Abstract. The implicit philosophical belief of the working mathematician is today the Hilbert-Bourbaki formalism. Ideally, one works within a closed system: the basic principles are clearly enunciated once for all, including (that is an addition of twentieth century science) the formal rules of logical reasoning clothed in mathematical form.
My thesis is: there is another way of doing mathematics, equally successful, and the two methods should supplement each other and not fight. This other way bears various names: symbolic method, operational calculus, operator theory ...
In this article I make a case for this "other method" of doing mathematics, by discussing several instances where it has led to, respectively will (hopefully) lead to, fruitful insights and developments.
There is much to be said for being very tolerant towards apparently suspect manoeuvres. The robustness of a calculational system subjected to abuse suggests that there are important principles at stake, which eventually may be cast in however rigorous a setting one desires.
See also the book Knots and Physics (World Scientific) by the knot theorist Louis Kauffman for many examples of what appear to be dubious manipulations of notation.
After Lakatos and Polya, yet another Hungarian with interesting things to say about mathematics, this time Michael Polanyi. I recommend you read pages 184-193 of his Personal Knowledge (Routledge 1958). This may encourage you to read the rest of the book where you will find observations such as:
A new mathematical conception may be said to have reality if its assumption leads to a wide range of new interesting ideas. (Personal Knowledge: 116),
an aspect of his definition of reality as "that which may yet inexhaustibly manifest itself", implying "the presence of an indeterminate range of anticipations in any knowledge bearing on reality". This sums up much of my own philosophy of mathematics. In that the ramifications of a significant mathematical conceptualisation are never exhausted, there can be no timeless appraisal.
Polanyi's understanding of the nature of mathematics was easily sufficient to make him realise the role of values other than logical correctness:
We should declare instead candidly that we dwell on mathematics and affirm its statements for the sake of its intellectual beauty, which betokens the reality of its conceptions and the truth of its assertions. For if this passion were extinct, we would cease to understand mathematics; its conceptions would dissolve and its proofs carry no conviction. Mathematics would become pointless and would lose itself in a welter of insignificant tautologies and of Heath Robinson operations, from which it could no longer be distinguished. (Personal Knowledge: 192)
"Reality of its conceptions" presupposes that some proposed conceptions will fail to pass muster, a point surprisingly seldom made. In the following rare example where it is, the author, invoking the American counterpart of Heath Robinson, demonstrates his near complete lack of mathematical judgement:
Stated in realist terms, the extended number system [i.e., the complex numbers - DC] is presumed in effect to stake out a "natural kind" of reality. Far from "carving reality at the joints", however, the system can be shown to feature a flagrantly gerrymandered fragment of heterogeneous reality that is hardly suited to enshrinement at the centre of a serious science like physics, not to mention a rigorous one like pure mathematics. Couched in these ultra-realist terms, the puzzle might be thought to be one that someone with more pragmatic leanings - the system works, doesn't it? - need not fret over; and in fact such a one might even look forward to exploiting it to the discomfort of the realist. Fair enough. I should be happy to have my discussion of this Rube Goldberg contraption (as the extended number system pretty much turns out to be) serve as a contribution to the quarrel between anti-realist and realist that is being waged on a broad front today (J. Benardete, Metaphysics: The Logical Approach, OUP 1989: 106).
So much for the power of the "logical approach" to discern mathematical reality. I'd rather sign up to an approach which maintains that:
...while in the natural sciences the feeling of making contact with reality is an augury of as yet undreamed of future empirical confirmations of an immanent discovery, in mathematics it betokens an indeterminate range of future germinations within mathematics itself. (Personal Knowledge: 189)
On that score the complex numbers, of course, do extremely well (cf. p. 186). Imagine contemporary mathematics without them!
Rota G.-C. 1997, Indiscrete Thoughts, Palombi F. (ed.) Boston : Birkhäuser.
Gian-Carlo Rota, the recently deceased professor of mathematics and of philosophy at MIT, annoyed a few people with some of his philosophical writing, including The Pernicious Influence of Mathematics Upon Philosophy. Whatever one's view on this, there's plenty of thought-provoking material contained here.
Lakatos, I. 1976, Proofs and Refutations, The Logic of Mathematical Discovery, Worrall J. and Zahar E. (eds.), Cambridge University Press.
Hopelessly inaccurate even as a schematic history of early algebraic topology, somewhat superficial as a phenomenology of discovery, too captivated by its struggle with an imaginary opponent, the formalist, to make more of the excellent choice of its subject matter, this work still remains an oasis in a desert of neo-logicism.
Lawvere, F. W. and Schanuel, S. 1997, Conceptual Mathematics, a first introduction to categories, Cambridge University Press.
Probably the best way to learn some basic category theory if starting out with a modest background in mathematics.
Mac Lane, S. 1986, Mathematics: Form and Function, New York: Springer-Verlag.
One of those books full of promising leads for the young philosopher of mathematics.
Maddy, P. 1997, Naturalism in Mathematics, Oxford: Clarendon Press.
Winner of the 2003 Lakatos Award, Maddy argues for a means-end analysis of axiom choice in set theory, and a dispersal of philosophical fog. There's no reason why this approach should not be adopted for other branches of mathematics.
Brown, J. R. 1999, Philosophy of Mathematics: an introduction to the world of proofs and pictures, Routledge.
Accessible introduction to a range of philosophical topics, including less visited ones such as the nature of notation.
Aspray, W. and Kitcher, P. (eds.) 1988, History and Philosophy of Modern Mathematics, University of Minnesota Press.
Gillies, D. (ed.) 1992, Revolutions in Mathematics, Oxford University Press.
Kac M., Rota G.-C. and Schwartz J. 1986, Discrete Thoughts: Essays in Mathematics, Science, and Philosophy, Boston: Birkhäuser.
End of millennium surveys:
Alon N., Bourgain J., Connes A., Gromov M. and Milman V. (eds.) 2000, GAFA 2000 Visions in Mathematics: Towards 2000, Boston: Birkhäuser.
Arnold V., Atiyah M., Lax P. and Mazur B. (eds.) 2000, Mathematics: Frontiers and Perspectives, Providence: American Mathematical Society.
Engquist, B. and Schmid, W. (eds.) 2001, Mathematics Unlimited - 2001 and Beyond, Springer-Verlag.