This month, before he goes to Oxford to begin a master's program in Mathematics and the Foundations of Computer Science, Brendan Fong is visiting the Centre for Quantum Technologies and working with me on stochastic Petri nets. He's proved two interesting results, which he wants to explain.
To understand what he's done, you need to know how to get the rate equation and the master equation from a stochastic Petri net. We've almost seen how. But it's been a long time since the last article in this series, so today I'll start with some review. And at the end, just for fun, I'll say a bit more about how Feynman diagrams show up in this theory.
Since I'm an experienced teacher, I'll assume you've forgotten everything I ever said.
(This has some advantages. I can change some of my earlier terminology—improve it a bit here and there—and you won't even notice.)
Definition. A Petri net consists of a set $S$ of species and a set $T$ of transitions, together with a function $i : S \times T \to \mathbb{N}$ saying how many things of each species appear as input for each transition, and a function $o : S \times T \to \mathbb{N}$ saying how many appear as output.
We can draw pictures of Petri nets. For example, here's a Petri net with two species, 'rabbit' and 'wolf', and three transitions, 'birth', 'predation' and 'death':

[Petri net diagram omitted.]

It should be clear that the transition 'predation' has one wolf and one rabbit as input, and two wolves as output.
A 'stochastic' Petri net goes further: it also says the rate at which each transition occurs.
Definition. A stochastic Petri net is a Petri net together with a function $r : T \to [0, \infty)$ giving a rate constant for each transition.
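If it helps to see this definition in code, here is one way the rabbit and wolf example might be encoded. The representation and names are my own invention, purely for illustration, and the rate constants are made up:

```python
# A stochastic Petri net, encoded directly from the definition:
# a list of species, and for each transition a name, an input vector,
# an output vector, and a rate constant.
species = ["rabbit", "wolf"]

# (name, input vector m, output vector n, rate constant r)
transitions = [
    ("birth",     (1, 0), (2, 0), 0.5),  # one rabbit in, two rabbits out
    ("predation", (1, 1), (0, 2), 0.1),  # a rabbit and a wolf in, two wolves out
    ("death",     (0, 1), (0, 0), 0.2),  # one wolf in, nothing out
]
```

The input and output vectors here package the functions $i$ and $o$ from the definition: the $j$th entry says how many things of the $j$th species the transition consumes or produces.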
Starting from any stochastic Petri net, we can get two things. First:

• The master equation. This says how the probability that we have a given number of things of each species changes with time.

• The rate equation. This says how the expected number of things of each species changes with time.
The master equation is stochastic: it describes how probabilities change with time. The rate equation is deterministic.
The master equation is more fundamental. It's like the equations of quantum electrodynamics, which describe the amplitudes for creating and annihilating individual photons. The rate equation is less fundamental. It's like the classical Maxwell equations, which describe changes in the electromagnetic field in a deterministic way. The classical Maxwell equations are an approximation to quantum electrodynamics. This approximation gets good in the limit where there are lots of photons all piling on top of each other to form nice waves.
Similarly, the rate equation can be derived from the master equation in the limit where the number of things of each species become large, and the fluctuations in these numbers become negligible.
But I won't do this derivation today! Nor will I probe more deeply into the analogy with quantum field theory, even though that's my ultimate goal. Today I'm content to remind you what the master equation and rate equation are.
The rate equation is simpler, so let's do that first.
Suppose we have a stochastic Petri net with $k$ different species. Let $x_i$ be the number of things of the $i$th species. Then the rate equation looks like this:

$$\frac{d x_i}{d t} = \; ???$$

It's really a bunch of equations, one for each $1 \le i \le k$. But what goes on the right-hand side?
The right-hand side is a sum of terms, one for each transition in our Petri net. So, let's start by assuming our Petri net has just one transition.
Suppose the $i$th species appears $m_i$ times as input to our transition, and $n_i$ times as output. Then the rate equation is

$$\frac{d x_i}{d t} = r \, (n_i - m_i) \, x_1^{m_1} \cdots x_k^{m_k}$$

where $r$ is the rate constant of the transition.
That's really all there is to it! But we can make it look nicer. Let's make up a vector

$$x = (x_1, \dots, x_k) \in [0, \infty)^k$$

that says how many things there are of each species. Similarly let's make up an input vector

$$m = (m_1, \dots, m_k) \in \mathbb{N}^k$$

and an output vector

$$n = (n_1, \dots, n_k) \in \mathbb{N}^k$$

for our transition. To be cute, let's also define

$$x^m = x_1^{m_1} \cdots x_k^{m_k}$$

Then we can write the rate equation for a single transition like this:

$$\frac{d x}{d t} = r \, (n - m) \, x^m$$
Next let's do a general stochastic Petri net, with lots of transitions. Let's write $T$ for the set of transitions and $r(\tau)$ for the rate constant of the transition $\tau \in T$. Let $m(\tau)$ and $n(\tau)$ be the input and output vectors of the transition $\tau$. Then the rate equation is

$$\frac{d x}{d t} = \sum_{\tau \in T} r(\tau) \, (n(\tau) - m(\tau)) \, x^{m(\tau)}$$
For example, consider our rabbits and wolves again. Suppose the rate constants for birth, predation and death are $\beta$, $\gamma$ and $\delta$, respectively. Let $x_1$ be the number of rabbits and $x_2$ the number of wolves. Then the rate equation says

$$\frac{d x_1}{d t} = \beta x_1 - \gamma x_1 x_2$$

$$\frac{d x_2}{d t} = \gamma x_1 x_2 - \delta x_2$$
If you stare at this, and think about it, it should make perfect sense. If it doesn't, go back and read Part 2.
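The general rate equation is also easy to compute with. Here's a minimal sketch (function names mine, rate constants made up) that evaluates the right-hand side $\sum_\tau r(\tau)(n(\tau) - m(\tau)) x^{m(\tau)}$ for the rabbit and wolf net:

```python
def rate_rhs(x, transitions):
    """Right-hand side of the rate equation:
    dx/dt = sum over transitions of r * (n - m) * x^m."""
    k = len(x)
    dxdt = [0.0] * k
    for m, n, r in transitions:
        xm = 1.0
        for i in range(k):
            xm *= x[i] ** m[i]           # x^m = x_1^{m_1} ... x_k^{m_k}
        for i in range(k):
            dxdt[i] += r * (n[i] - m[i]) * xm
    return dxdt

# Rabbit-wolf net: (input vector, output vector, made-up rate constant).
beta, gamma, delta = 0.5, 0.1, 0.2
rabbit_wolf = [((1, 0), (2, 0), beta),    # birth
               ((1, 1), (0, 2), gamma),   # predation
               ((0, 1), (0, 0), delta)]   # death

x = [10.0, 5.0]   # 10 rabbits, 5 wolves
dxdt = rate_rhs(x, rabbit_wolf)
# dxdt[0] = beta*x1 - gamma*x1*x2,  dxdt[1] = gamma*x1*x2 - delta*x2
```

With these numbers the two contributions to the rabbit equation happen to cancel exactly ($\beta x_1 = \gamma x_1 x_2 = 5$), so the rabbit population is momentarily stationary while the wolf population grows.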
Now let's do something new. In Part 6 I explained how to write down the master equation for a stochastic Petri net with just one species. Now let's generalize that. Luckily, the ideas are exactly the same.
So, suppose we have a stochastic Petri net with $k$ different species. Let $\psi_{n_1, \dots, n_k}$ be the probability that we have $n_1$ things of the first species, $n_2$ of the second, and so on. The master equation will say how all these probabilities change with time.
To keep the notation clean, let's introduce a vector

$$n = (n_1, \dots, n_k) \in \mathbb{N}^k$$

and let

$$\psi_n = \psi_{n_1, \dots, n_k}$$
Then, let's take all these probabilities and cook up a formal power series that has them as coefficients: as we've seen, this is a powerful trick. To do this, we'll bring in some variables $z_1, \dots, z_k$ and write

$$z^n = z_1^{n_1} \cdots z_k^{n_k}$$

as a convenient abbreviation. Then any formal power series in these variables looks like this:

$$\Psi = \sum_{n \in \mathbb{N}^k} \psi_n \, z^n$$
We call $\Psi$ a state if the coefficients $\psi_n$ are nonnegative and sum to 1, as probabilities should:

$$\sum_{n \in \mathbb{N}^k} \psi_n = 1$$
The simplest example of a state is a monomial:

$$z^n = z_1^{n_1} \cdots z_k^{n_k}$$

This is a state where we are 100% sure that there are $n_1$ things of the first species, $n_2$ of the second, and so on. We call any such state a pure state.
The master equation says how a state evolves in time. It looks like this:

$$\frac{d}{d t} \Psi(t) = H \Psi(t)$$
So, I just need to tell you what $H$ is!
It's called the Hamiltonian. It's a linear operator built from special operators that annihilate and create things of various species. Namely, for each species $1 \le i \le k$, we have an annihilation operator, which differentiates with respect to $z_i$:

$$a_i \Psi = \frac{\partial}{\partial z_i} \Psi$$

and a creation operator, which multiplies by $z_i$:

$$a_i^\dagger \Psi = z_i \Psi$$
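These operators are easy to experiment with. In the sketch below (the representation is mine), a formal power series in a single variable $z$ is a dictionary from exponents to coefficients, and we can check the basic commutation relation $a a^\dagger - a^\dagger a = 1$ on an example:

```python
# One species: represent a polynomial in z as {power: coefficient}.
def a(poly):
    """Annihilation operator: d/dz."""
    return {p - 1: p * c for p, c in poly.items() if p > 0}

def adag(poly):
    """Creation operator: multiply by z."""
    return {p + 1: c for p, c in poly.items()}

psi = {2: 0.25, 5: 0.75}   # 0.25 z^2 + 0.75 z^5, a made-up state

# The commutator (a a† - a† a) should act as the identity.
left  = a(adag(psi))
right = adag(a(psi))
comm = {p: left.get(p, 0) - right.get(p, 0) for p in set(left) | set(right)}
# comm equals psi again
```

This is the same commutation relation familiar from the quantum harmonic oscillator, which is one reason the quantum field theory analogy works so well here.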
How do we build $H$ from these? Remember that each transition $\tau \in T$ has a rate constant $r(\tau)$, an input vector $m(\tau)$ and an output vector $n(\tau)$. The Hamiltonian is

$$H = \sum_{\tau \in T} r(\tau) \left( {a^\dagger}^{n(\tau)} - {a^\dagger}^{m(\tau)} \right) a^{m(\tau)}$$

where as usual we've introduced some shorthand notations to keep from going insane. For example:

$$a^{m(\tau)} = a_1^{m_1(\tau)} \cdots a_k^{m_k(\tau)}$$

and

$${a^\dagger}^{n(\tau)} = {a_1^\dagger}^{n_1(\tau)} \cdots {a_k^\dagger}^{n_k(\tau)}$$
Now, it's not surprising that each transition $\tau$ contributes a term to $H$, proportional to its rate constant $r(\tau)$. The term itself is

$$\left( {a^\dagger}^{n(\tau)} - {a^\dagger}^{m(\tau)} \right) a^{m(\tau)}$$

How can we understand it? The basic idea is this. We've got two terms here. The first term:

$${a^\dagger}^{n(\tau)} a^{m(\tau)}$$

describes how $m_i(\tau)$ things of the $i$th species get annihilated and $n_i(\tau)$ things get created when the transition $\tau$ occurs. The second term:

$$- {a^\dagger}^{m(\tau)} a^{m(\tau)}$$

is a bit harder to understand, but it says how the probability that nothing happens—that we remain in the same pure state—decreases as time passes. Again this happens due to our transition $\tau$, at a rate proportional to $r(\tau)$.
In fact, the second term must take precisely the form it does to ensure 'conservation of total probability'. In other words: if the probabilities $\psi_n$ sum to 1 at some time, and they evolve according to the master equation, they will still sum to 1 at any later time.
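We can test this conservation law numerically. Below is a sketch (all names are mine) that represents a state as a dictionary mapping tuples $n$ to coefficients $\psi_n$, implements $a_i$ and $a_i^\dagger$, builds $H\Psi$ for any stochastic Petri net, and checks that the coefficients of $H\Psi$ sum to zero, so that $\frac{d}{dt}\sum_n \psi_n = 0$:

```python
from collections import defaultdict

def annihilate(state, i):
    """Annihilation operator a_i = d/dz_i on a state {n: psi_n}."""
    out = defaultdict(float)
    for n, c in state.items():
        if n[i] > 0:
            lowered = list(n)
            lowered[i] -= 1
            out[tuple(lowered)] += n[i] * c
    return dict(out)

def create(state, i):
    """Creation operator a_i^dagger: multiply by z_i."""
    out = defaultdict(float)
    for n, c in state.items():
        raised = list(n)
        raised[i] += 1
        out[tuple(raised)] += c
    return dict(out)

def apply_H(state, transitions):
    """Apply H = sum_tau r(tau) (adag^n(tau) - adag^m(tau)) a^m(tau)."""
    out = defaultdict(float)
    for m, n, r in transitions:
        lowered = state
        for i, mi in enumerate(m):              # apply a^m
            for _ in range(mi):
                lowered = annihilate(lowered, i)
        for target, sign in ((n, 1.0), (m, -1.0)):   # adag^n minus adag^m
            raised = lowered
            for i, ti in enumerate(target):
                for _ in range(ti):
                    raised = create(raised, i)
            for key, c in raised.items():
                out[key] += sign * r * c
    return dict(out)

# Rabbit-wolf Hamiltonian (made-up rates) applied to the pure state z1 z2.
rabbit_wolf = [((1, 0), (2, 0), 0.5),   # birth
               ((1, 1), (0, 2), 0.1),   # predation
               ((0, 1), (0, 0), 0.2)]   # death
psi = {(1, 1): 1.0}
h_psi = apply_H(psi, rabbit_wolf)
total = sum(h_psi.values())   # should vanish: probability is conserved
```

Each transition contributes a positive coefficient at its output state and an equal negative coefficient at the input state, so the coefficients of $H\Psi$ always sum to zero.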
Let's look at an example. Consider our rabbits and wolves yet again, and again suppose the rate constants for birth, predation and death are $\beta$, $\gamma$ and $\delta$. Here a state is a formal power series

$$\Psi = \sum_{n \in \mathbb{N}^2} \psi_n \, z^n$$

where

$$z^n = z_1^{n_1} z_2^{n_2}$$

and $\psi_n = \psi_{n_1, n_2}$ is the probability of having $n_1$ rabbits and $n_2$ wolves. This state evolves according to the master equation

$$\frac{d}{d t} \Psi(t) = H \Psi(t)$$

where the Hamiltonian is

$$H = \beta \left( {a_1^\dagger}^2 - a_1^\dagger \right) a_1 + \gamma \left( {a_2^\dagger}^2 - a_1^\dagger a_2^\dagger \right) a_1 a_2 + \delta \left( 1 - a_2^\dagger \right) a_2$$

and the three terms describe birth, predation and death, respectively.
In each case, the first term is easy to understand:

• Birth: the operator ${a_1^\dagger}^2 a_1$ annihilates a rabbit and creates two rabbits.

• Predation: the operator ${a_2^\dagger}^2 a_1 a_2$ annihilates a rabbit and a wolf and creates two wolves.

• Death: the operator $a_2$ annihilates a wolf.

The second term in each case is trickier, but I told you how it works: it accounts for the decrease in the probability that nothing happens.
How do we solve the master equation? If we don't worry about mathematical rigor too much, it's easy. The solution of

$$\frac{d}{d t} \Psi(t) = H \Psi(t)$$

should be

$$\Psi(t) = e^{t H} \Psi(0)$$

and we can hope that

$$e^{t H} = \sum_{j=0}^\infty \frac{(t H)^j}{j!}$$

so that

$$\Psi(t) = \Psi(0) + t H \Psi(0) + \frac{(t H)^2}{2!} \Psi(0) + \cdots$$
Of course there's always the question of whether this power series converges. In many contexts it doesn't, but that's not necessarily a disaster: the series can still be asymptotic to the right answer, or even better, Borel summable to the right answer.
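Here's a quick numerical check of this series in the simplest possible case: a single wolf that can die but do nothing else, with a made-up rate constant $\delta$. On the basis $\{1, z\}$ the Hamiltonian $\delta (a - a^\dagger a)$ is just a 2×2 matrix, so we can compare the truncated exponential series with the exact solution, in which the wolf survives with probability $e^{-\delta t}$:

```python
import math

delta, t = 0.2, 1.5   # made-up death rate and time

# Basis: index 0 = no wolf (z^0), index 1 = one wolf (z^1).
# H = delta*(a - a†a), so H z = delta*(1 - z) and H 1 = 0.
H = [[0.0,  delta],
     [0.0, -delta]]

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

psi0 = [0.0, 1.0]          # start 100% sure there is one wolf
term = psi0[:]
result = psi0[:]
for j in range(1, 30):     # truncated series for exp(tH) psi0
    term = [t / j * x for x in mat_vec(H, term)]
    result = [r + x for r, x in zip(result, term)]

exact = [1 - math.exp(-delta * t), math.exp(-delta * t)]
# result should agree with exact, and its entries should sum to 1
```

Note that each term $H^j \Psi(0)$ for $j \ge 1$ has coefficients summing to zero, so even the truncated series conserves total probability exactly.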
But let's not worry about these subtleties yet! Let's just imagine our rabbits and wolves, with the Hamiltonian

$$H = \beta \left( {a_1^\dagger}^2 - a_1^\dagger \right) a_1 + \gamma \left( {a_2^\dagger}^2 - a_1^\dagger a_2^\dagger \right) a_1 a_2 + \delta \left( 1 - a_2^\dagger \right) a_2$$

as above.
Now, imagine working out

$$\Psi(t) = \Psi(0) + t H \Psi(0) + \frac{(t H)^2}{2!} \Psi(0) + \cdots$$

We'll get lots of terms involving products of annihilation and creation operators applied to $\Psi(0)$. For example, suppose we start with one rabbit and one wolf, so that

$$\Psi(0) = z_1 z_2$$

And suppose we want to compute

$$H^2 z_1 z_2$$

as part of the task of computing

$$e^{t H} z_1 z_2$$
We can draw this as a sum of Feynman diagrams, including this:

[Feynman diagram omitted: one possible history of the rabbits and wolves.]
In this diagram, we start with one rabbit and one wolf at top. As we read the diagram from top to bottom, first a rabbit is born (the birth transition), then predation occurs (the predation transition), and we end up with one rabbit and two wolves at the bottom.
This is just one of four Feynman diagrams we should draw in our sum for $H^2 z_1 z_2$. So, even computing

$$\frac{(t H)^2}{2!} \, z_1 z_2$$

will involve a lot of Feynman diagrams... and of course computing

$$\Psi(t) = \sum_{j=0}^\infty \frac{(t H)^j}{j!} \, z_1 z_2$$

will involve even more, even if we get tired and give up after the first few terms. So, this Feynman diagram business may seem quite tedious... and it may not be obvious how it helps.
But it does, sometimes!
Now is not the time for me to describe 'practical' benefits of Feynman diagrams. Instead, I'll just point out one conceptual benefit. We started with what seemed like a purely computational chore, namely computing

$$e^{t H} \Psi(0) = \Psi(0) + t H \Psi(0) + \frac{(t H)^2}{2!} \Psi(0) + \cdots$$
But then we saw—at least roughly—how this series has a clear meaning! It can be written as a sum over diagrams, each of which represents a possible history of rabbits and wolves. So, it's what physicists call a 'sum over histories'.
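The 'sum over histories' picture is concrete enough to enumerate by computer. This sketch (names mine; rate constants and combinatorial factors omitted) lists every two-step sequence of transitions that can actually occur starting from one rabbit and one wolf; each such sequence is one possible history:

```python
from itertools import product

# Rabbit-wolf transitions: (name, input vector, output vector).
transitions = [("birth",     (1, 0), (2, 0)),
               ("predation", (1, 1), (0, 2)),
               ("death",     (0, 1), (0, 0))]

def fire(state, m, n):
    """Apply one transition if enough inputs are present, else None."""
    if all(s >= mi for s, mi in zip(state, m)):
        return tuple(s - mi + ni for s, mi, ni in zip(state, m, n))
    return None

start = (1, 1)   # one rabbit, one wolf
histories = []
for (name1, in1, out1), (name2, in2, out2) in product(transitions, repeat=2):
    mid = fire(start, in1, out1)
    if mid is None:
        continue
    end = fire(mid, in2, out2)
    if end is not None:
        histories.append((name1, name2, end))

for h in histories:
    print(h)   # e.g. ('birth', 'predation', (1, 2))
```

The history drawn in the Feynman diagram above, birth followed by predation, shows up here as `('birth', 'predation', (1, 2))`: one rabbit and two wolves at the end.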
Feynman invented the idea of a sum over histories in the context of quantum field theory. At the time this idea seemed quite mind-blowing, for various reasons. First, it involved elementary particles instead of everyday things like rabbits and wolves. Second, it involved complex 'amplitudes' instead of real probabilities. Third, it actually involved integrals instead of sums. And fourth, a lot of these integrals diverged, giving infinite answers that needed to be 'cured' somehow!
Now we're seeing a sum over histories in a more down-to-earth context without all these complications. A lot of the underlying math is analogous... but now there's nothing mind-blowing about it: it's quite easy to understand. So, we can use this analogy to demystify quantum field theory a bit. On the other hand, thanks to this analogy, all sorts of clever ideas invented by quantum field theorists will turn out to have applications to biology and chemistry! So it's a double win.
You can also read comments on Azimuth, and make your own comments or ask questions there!