I think this print is by Takeji Asano. The glowing lights in the houses and especially the boat are very inviting amid the darkness. You want to go inside.
I'll use 'ring' to mean 'commutative ring'.
The continuous real-valued functions on a topological space X form a ring, called C(X). Each point x of X gives a maximal ideal of this ring: the functions that vanish at x.

Indeed, it's all very nice: the quotient of C(X) by this ideal is just the real numbers, since an element of the quotient is determined by its value at x.

Even better, if X is compact Hausdorff, every maximal ideal of C(X) comes from a unique point of X this way.

In short, points of a compact Hausdorff space X correspond to maximal ideals of the ring C(X).
The problem starts when we try to generalize from rings of the form C(X) to arbitrary commutative rings.

At first all seems fine: for any maximal ideal m of a commutative ring R, the quotient R/m is a field, just as evaluating functions at a point of X gives real numbers.

But then the trouble starts. Any continuous map of spaces f: X → Y gives a ring homomorphism going back, from C(Y) to C(X). So we'd hope that any homomorphism of commutative rings gives a map between their sets of 'points', going back. But if we define 'points' to be maximal ideals it doesn't work. Given a homomorphism f: R → S, the preimage of a maximal ideal of S need not be a maximal ideal of R. For example, the preimage of the zero ideal of the rationals under the inclusion of the integers is the zero ideal of the integers, which is prime but not maximal.
Why not?

To tell if an ideal is maximal you have to run around comparing it with all other ideals! This depends not just on the ideal itself, but on its 'environment'. So the preimage of a maximal ideal, which lives in a different ring with a different environment, may fail to be maximal.
In short, the logic we're using to define 'maximal ideal' is too complex! We are quantifying over all ideals in the ring, so the concept we're getting is very sensitive to the whole ring — so maximal ideals don't transform nicely under ring homomorphisms.
It turns out prime ideals are much better. An ideal p in a commutative ring is prime if whenever a product ab lies in p, at least one of a and b lies in p.

Now things work fine: given a homomorphism f: R → S, the preimage of any prime ideal of S is a prime ideal of R.
Why do prime ideals work where maximal ideals failed?
It's because checking to see if an ideal is prime mainly involves checking things within the ideal — not its environment! None of this silly running around comparing it to all other ideals.
And we also get a substitute for our beloved fields: integral domains. An integral domain is a ring where if ab = 0 then a = 0 or b = 0. Just as an ideal m of R is maximal precisely when R/m is a field, an ideal p is prime precisely when R/p is an integral domain.
So: by giving up our attachment to fields, we can work with concepts that are logically simpler and thus 'more functorial'. We get a contravariant functor from rings to sets, sending each ring to its set of prime ideals.
With maximal ideals, life is much more complicated and messy.
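The slogan 'prime means the quotient is an integral domain, maximal means the quotient is a field' can be checked by brute force on a small example. Here is a sketch in Python for the ring Z/12 (the helper names are mine); it uses the fact that every ideal of Z/12 is (g) for a divisor g of 12, with quotient isomorphic to Z/g:

```python
# Brute-force check, for the ring Z/12, that an ideal is prime exactly
# when the quotient is an integral domain, and maximal exactly when the
# quotient is a field.  Every ideal of Z/12 is (g) for a divisor g of 12,
# and the quotient (Z/12)/(g) is isomorphic to Z/g.

def has_zero_divisors(d):
    """True if Z/d contains nonzero a, b with a*b = 0 (mod d)."""
    return any(a * b % d == 0 for a in range(1, d) for b in range(1, d))

def is_field(d):
    """True if every nonzero element of Z/d has a multiplicative inverse."""
    return all(any(a * b % d == 1 for b in range(1, d)) for a in range(1, d))

for g in [2, 3, 4, 6, 12]:      # proper ideals (g) of Z/12; (12) = (0)
    prime = not has_zero_divisors(g)
    maximal = is_field(g)
    print(f"ideal ({g % 12}): quotient Z/{g}, prime={prime}, maximal={maximal}")
```

In a finite ring the two notions coincide, as the output shows: only (2) and (3) are prime, and the same two are maximal. An infinite ring like the integers is where they come apart: there the zero ideal is prime but not maximal.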
Why weird? Only because I'd been avoiding this sort of math until now. For some reason I took a dislike to commutative algebra as a youth. Dumb kid.
I liked stuff I could visualize, so I liked the idea of a vector bundle: a bunch of vector spaces, one for each point in a topological space X, fitting together to form a topological space of their own.

In this example the vector bundle is the Möbius strip. The space X is the circle, and over each point of the circle sits a copy of the real line.
But this nice easy-to-visualize stuff is also commutative algebra!
For any compact Hausdorff space X, the continuous complex-valued functions on X form a commutative C*-algebra, and conversely every commutative C*-algebra with identity is of this form.

(I liked this sort of commutative algebra because it involved a lot of things I knew how to visualize, like topology and analysis. Also, C*-algebras describe the observables in classical and quantum systems, with commutative ones describing the classical ones.)
Now, given a vector bundle over X, its continuous sections form a module over the ring C(X), and this module is finitely generated and projective.

Again, we can turn this around: every finitely generated projective module over C(X) comes from some vector bundle over X. This is the Serre–Swan theorem.
So: whenever someone says 'finitely generated projective module over a commutative ring' I think 'vector bundle' and see this:
But to get this mental image to really do work for me, I had to learn how to 'localize' a projective module over a commutative ring 'at a point' and get something kind of like a vector space. And then I had to learn a bunch of basic theorems, so I could use this technology.
I could have learned this stuff in school like the other kids. I've sort of read about it anyway — you can't really avoid this stuff if you're in the math biz. But actually needing to do something with it radically increased my enthusiasm!
For example, I was suddenly delighted by Kaplansky's theorem. The analogue of a 'point' for a commutative ring R is a prime ideal, and we can 'localize' R at such a point, getting a local ring.

When you do this, any module over R gives a module over the localized ring, and Kaplansky's theorem says that any projective module over a local ring is free. So localizing a finitely generated projective module at a point gives a free module: the analogue of the fiber of a vector bundle at a point, which is just a vector space.

Furthermore, a map between modules over a commutative ring R has an inverse iff it has one 'localized at each point'. Just like a map between vector bundles over X is invertible iff it's invertible on each fiber.
I know all the algebraic geometers are laughing at me like a 60-year-old who just learned how to ride a tricycle and is gleefully rolling around the neighborhood. But too bad! It's never too late to have some fun!
By the way, the quotes above come from this free book, which I'm enjoying now:
Stirling's formula gives a good approximation of the factorial:

n! ~ √(2πn) (n/e)^n.

The easiest way to see where the √(2π) comes from is the Gaussian integral:

∫ e^{-x^2/2} dx = √(2π).

This is famous: when you square it, you get a double integral with circular symmetry, and the 2π comes from integrating over the angle.

But how do you get an integral that equals n!? Use the formula

n! = ∫₀^∞ x^n e^{-x} dx.

Next, write x^n e^{-x} as e^{n ln x - x}, so the integral is ∫₀^∞ e^{n ln x - x} dx: we can hope to approximate this by a Gaussian integral, expanding the exponent around its maximum.

There's just one problem: the exponent involves n in a way that doesn't let us pull it out as an overall factor multiplying a fixed function of x.

But we can solve this problem by writing x = ny. Then n ln x - x = n ln n + n(ln y - y) and dx = n dy, so

n! = n^{n+1} ∫₀^∞ e^{n(ln y - y)} dy.
Oh yeah — but what about proving Stirling's formula? Don't worry, this will be easy if we can do the hard work of approximating that integral. It's just a bit of algebra: if

∫₀^∞ e^{n(ln y - y)} dy ~ e^{-n} √(2π/n)

then

n! ~ n^{n+1} e^{-n} √(2π/n) = √(2πn) (n/e)^n.
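Stirling's approximation is also easy to test numerically. Here is a quick sanity check in Python (not part of the proof), comparing logarithms to avoid overflow, using the fact that math.lgamma(n + 1) computes ln n!:

```python
import math

# Compare ln(n!) with the log of Stirling's approximation
# sqrt(2*pi*n) * (n/e)^n.  The difference shrinks like 1/(12n).
for n in [10, 100, 1000]:
    log_factorial = math.lgamma(n + 1)
    log_stirling = 0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1)
    print(n, log_factorial - log_stirling)
```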
So, this proof of Stirling's formula has a 'soft outer layer' and a 'hard inner core'. First you did a bunch of calculus tricks. But now you need to work out the n → ∞ behavior of the integral ∫₀^∞ e^{n(ln y - y)} dy, and that's the hard part.
Luckily you have a pal named Laplace....
Laplace's method is not black magic. It amounts to approximating your
integral with a Gaussian integral, which you can do by hand.
Physicists use this trick all the time! And they always get a factor of √(2π) from each Gaussian integral, just as we did here.
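To see Laplace's method in action, here is a numerical sketch of my own (the cutoff at y = 10 and the midpoint rule are arbitrary choices; the integrand is negligible beyond the cutoff) checking that ∫₀^∞ e^{n(ln y - y)} dy approaches e^{-n} √(2π/n):

```python
import math

# Midpoint-rule estimate of the integral from the proof, compared with
# its Gaussian (Laplace) approximation exp(-n) * sqrt(2*pi/n).
def integral(n, b=10.0, steps=100_000):
    """Midpoint rule for the integral of exp(n(ln y - y)) over [0, b]."""
    h = b / steps
    return sum(math.exp(n * (math.log((i + 0.5) * h) - (i + 0.5) * h)) * h
               for i in range(steps))

for n in [5, 20, 80]:
    ratio = integral(n) / (math.exp(-n) * math.sqrt(2 * math.pi / n))
    print(n, ratio)   # ratio approaches 1 as n grows
```

The leftover discrepancy shrinks like 1/(12n), the first correction term in the Stirling series.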
You can read more details here:
But in math, there are always mysteries within mysteries. Gaussians show up in probability theory when we add up lots of independent and identically distributed random variables. Could that be going on here somehow?
Yes! See this:
So I don't think we've gotten to the bottom of Stirling's formula! Comments at the n-Category Café contain other guesses about what it might 'really mean'. But they haven't crystallized yet.
'Mathemagics' is a bunch of tricks that go beyond rigorous mathematics. Particle physicists use them a lot. Using a mathemagical trick called 'zeta function regularization', we can 'show' that infinity factorial is the square root of 2π:

1 · 2 · 3 · 4 · ⋯ = √(2π).

This formula doesn't make literal sense, but we can write the logarithm of the left side as the divergent sum ln 1 + ln 2 + ln 3 + ⋯ and try to assign it a finite value.

To understand this trick we need to notice that the Riemann zeta function ζ(s) = 1^{-s} + 2^{-s} + 3^{-s} + ⋯ has derivative ζ′(s) = -(ln 1) 1^{-s} - (ln 2) 2^{-s} - ⋯, so formally our divergent sum is -ζ′(0). Using the analytic continuation of ζ one can compute ζ′(0) = -½ ln(2π). So the regularized sum is ½ ln(2π), and 'therefore' infinity factorial is √(2π).
To learn much more about this, read Cartier's article:
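The constant ½ ln(2π) here is the same one hiding in Stirling's formula: it's the limit of ln n! - (n + ½) ln n + n as n → ∞. You can watch that limit converge numerically (a sketch of my own; it illustrates the constant rather than computing ζ′(0) directly):

```python
import math

# ln(n!) - (n + 1/2) ln(n) + n  tends to  (1/2) ln(2*pi)  as n grows:
# the same constant zeta regularization assigns to ln 1 + ln 2 + ln 3 + ...
target = 0.5 * math.log(2 * math.pi)
for n in [10, 1000, 100000]:
    value = math.lgamma(n + 1) - (n + 0.5) * math.log(n) + n
    print(n, value - target)   # differences shrink toward 0
```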
Stirling's formula for the factorial looks cool — but what does it really mean? This is my favorite explanation: if raindrops land on your head at an average rate of one per minute, the chance that exactly n drops land during the first n minutes is close to 1/√(2πn). You don't see the numbers e and π in this description in words, but they're hiding in it, and unpacking it gives Stirling's formula.
My description in words is informal. I'm really talking about a Poisson distribution. If raindrops land at an average rate of one per minute, the number that land during the first n minutes is a Poisson random variable with mean n: the chance that exactly k land is e^{-n} n^k / k!.

At time n this distribution has mean n and standard deviation √n, and for large n it's well approximated by a Gaussian with the same mean and standard deviation.

Next, what's the formula for a Gaussian with mean n and standard deviation √n? Its value at its peak is 1/√(2πn). So the chance of exactly n raindrops at time n is

e^{-n} n^n / n! ≈ 1/√(2πn).

Solving for n!, this is Stirling's formula.
I learned about this on Twitter: Ilya Razenshtyn showed how to prove Stirling's formula starting from probability theory this way. But it's much easier to use his ideas to check that my paragraph in words is a way of saying Stirling's formula.
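Here is a quick numerical check of that claim, comparing the Poisson probability e^{-n} n^n / n! with the Gaussian peak value 1/√(2πn), computed in logs to avoid overflow:

```python
import math

# Poisson(mean n) probability of exactly n events, versus the peak value
# of a Gaussian with mean n and standard deviation sqrt(n).
for n in [10, 100, 1000]:
    poisson = math.exp(-n + n * math.log(n) - math.lgamma(n + 1))
    gaussian_peak = 1.0 / math.sqrt(2 * math.pi * n)
    print(n, poisson / gaussian_peak)   # ratio tends to 1
```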
They called Democritus 'the laughing philosopher'. He not only came up with atoms: he explained how weird it is that science, based on our senses, has trouble explaining what it feels like to sense something.
And he did it in a way that would make a great comedy routine with two hand puppets.
By the way: a lot of my diary entries these days are polished versions of my tweets. Yesterday Dominic Cummings retweeted the above tweet of mine. He was chief advisor to Boris Johnson for about a year, and on Twitter he loves to weigh in on the culture wars, on the conservative side.
It's not as weird as when Ivanka Trump liked my tweet about rotations in 4 dimensions, but still it's weird. Or maybe not: Patrick Wintour in The Guardian reported that "Anna Karenina, maths and Bismarck are his three obsessions." But I wish some nice bigshots would like or retweet my stuff.
I love this movie showing a solution of the Kuramoto–Sivashinsky equation, made by Thien An. If you haven't seen her great math images on Twitter, check them out!
I hadn't known about this equation, and it looked completely crazy to me at first. But it turns out to be important, because it's one of the simplest partial differential equations that exhibits chaotic behavior.
As the image scrolls to the left, you're seeing how a real-valued function u(t,x) of time and one space coordinate evolves in time.
As time passes, bumps form and merge. I conjecture that they never split or disappear. This reflects the fact that the Kuramoto–Sivashinsky equation has a built-in arrow of time: it describes a world where the future is different than the past.
The behavior of these bumps makes the Kuramoto–Sivashinsky equation an excellent playground for thinking about how differential equations can describe 'things' with some individuality, even though their solutions are just smooth functions.
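The equation itself is simple to simulate. Here is a minimal pseudo-spectral sketch in Python using NumPy, for the form u_t = -u_xx - u_xxxx - u u_x with periodic boundary conditions; the grid size, domain length, time step, scheme and initial data are my own choices for illustration:

```python
import numpy as np

# Minimal pseudo-spectral solver for the Kuramoto-Sivashinsky equation
#     u_t = -u_xx - u_xxxx - u u_x
# with periodic boundary conditions.  Parameters are illustrative only.

N, L = 128, 32 * np.pi                       # grid points, domain length
dt, steps = 0.02, 5000                       # time step, number of steps
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers
lin = k**2 - k**4                            # symbol of -d_xx - d_xxxx

u = 0.1 * np.cos(x / 16) * (1 + np.sin(x / 16))   # small initial bump
u_hat = np.fft.fft(u)
for _ in range(steps):
    u = np.real(np.fft.ifft(u_hat))
    u_x = np.real(np.fft.ifft(1j * k * u_hat))
    # semi-implicit Euler: linear part implicit, nonlinear part explicit
    u_hat = (u_hat + dt * np.fft.fft(-u * u_x)) / (1 - dt * lin)

u = np.real(np.fft.ifft(u_hat))
print(float(u.min()), float(u.max()))   # solution stays bounded
```

Recording u after each step and plotting it as a scrolling heat map reproduces the kind of movie shown above: the tiny initial bump grows, saturates, and breaks up into the merging bumps discussed here.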
For much more on the Kuramoto–Sivashinsky conjecture, including attempts to make my conjecture precise and my work with Steve Huntsman and Cheyne Weis to get numerical evidence for it, read these:
Another nice illusion by Akiyoshi Kitaoka.
The Iranian mathematician Farideh Firoozbakht made a strong conjecture in 1982: the nth root of the nth prime, p_n^{1/n}, is a strictly decreasing function of n.
Firoozbakht's conjecture says the gaps between primes don't get too big too fast. As you can see here, it's stronger than Cramér's or Granville's conjectures on prime gaps — and it gets scarily close to being wrong at times, like when the largest prime gap jumps from 924 to 1132 at the prime 1693182318746371 — you can see that big horizontal black line above.
Farideh Firoozbakht checked her conjecture up to about 4 trillion using a table of large gaps between primes: those are what could make the conjecture false. In 2015 Alexei Kourbatov checked it up to 4 quintillion, in this paper here:
If true, Firoozbakht's conjecture would imply a lot of good stuff, listed in this paper by Ferreira and Mariano:
It's known that the Riemann Hypothesis implies the gap between consecutive primes p_n and p_{n+1} is O(√p_n ln p_n), which is much weaker than the bound Firoozbakht's conjecture would give.
Nobody knows how to prove Firoozbakht's conjecture using the Riemann Hypothesis. But conversely, it seems nobody has shown Firoozbakht implies Riemann!
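It's easy to verify the conjecture for small primes yourself. Here is a sketch in Python that sieves the primes below 100000 and checks the equivalent inequality p_{n+1} < p_n^{(n+1)/n}:

```python
# Check Firoozbakht's conjecture, p_n^(1/n) strictly decreasing in n
# (equivalently p_{n+1} < p_n^((n+1)/n)), for all primes below 100000.
limit = 100_000
sieve = bytearray([1]) * (limit + 1)
sieve[0:2] = b"\x00\x00"
for i in range(2, int(limit ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
primes = [i for i in range(limit + 1) if sieve[i]]

# primes[n - 1] is the nth prime p_n
ok = all(primes[n] < primes[n - 1] ** ((n + 1) / n)
         for n in range(1, len(primes)))
print(len(primes), ok)
```

Of course this range is trivial compared to the verifications mentioned above; it just makes the inequality concrete.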
There are even some good theoretical reasons for believing Firoozbakht's conjecture is false! Granville has a nice probabilistic model of the behavior of primes that predicts that there are infinitely many prime gaps bigger than about 1.12 (ln p_n)^2, while Firoozbakht's conjecture implies that for large n the gaps are smaller than roughly (ln p_n)^2 - ln p_n.
For more on Granville's probabilistic model of primes and its consequences for the largest prime gaps, see:
Farideh Firoozbakht, alas, died in 2019 at the age of 57. She studied pharmacology and then mathematics at the University of Isfahan, and later taught mathematics there. That, and her conjecture, is all I know about her. I wish I could ask her how she came up with her conjecture, and ask her a bit about what her life was like.
If anyone knows, please tell me!
I thank Maarten Morier for telling me about Firoozbakht's conjecture.