We're in the middle of a battle: in addition to our typical man vs. equation scenario, it's a battle between two theories. For those good patrons following the network theory series, you know the two opposing forces well. It's our old friends, at it again:
Today we're reporting live from a crossroads, and we're facing a skirmish that gives rise to what some might consider a paradox. Let me sketch the main thesis before we get our hands dirty with the gory details.
First I need to tell you that the battle takes place at the intersection of stochastic and quantum mechanics. We recall from Part 16 that there is a class of operators called 'Dirichlet operators' that are valid Hamiltonians for both stochastic and quantum mechanics. In other words, you can use them to generate time evolution both for old-fashioned random processes and for quantum processes!
Staying inside this class allows the theories to fight it out on the same turf. We will be considering a special subclass of Dirichlet operators, which we call 'irreducible Dirichlet operators'. These are the ones where, starting from any state in our favorite basis of states, we have a nonzero chance of winding up in any other. When considering this subclass, we found something interesting:
Thesis. Let $H$ be an irreducible Dirichlet operator. Then, up to normalization, $H$ has exactly one eigenvector with eigenvalue zero, and it can be chosen to be positive. Normalized so its components sum to 1, it is the unique stochastic state that doesn't change with time; normalized so the squares of its components sum to 1, it is the unique (up to phase) quantum state that doesn't change with time. The stochastic equilibrium and the invariant quantum state are the same vector, normalized in two different ways!
This might sound like a riddle, but today as we'll prove, riddle or not, it's a fact. If it makes sense, well that's another issue. As John might have said, it's like a bone kicked down from the gods up above: we can either choose to chew on it, or let it be. Today we are going to do a bit of chewing.
One of the many problems with this post is that John had a nut loose on his keyboard. It was not broken! I'm saying he wrote enough blog posts on this stuff to turn them into a book. I'm supposed to be compiling the blog articles into a massive LaTeX file, but I wrote this instead.
Another problem is that this post somehow seems to use just about everything said before, so I'm going to have to do my best to make things self-contained. Please bear with me as I try to recap what's been done. For those of you familiar with the series, a good portion of the background for what we'll cover today can be found in Part 12 and Part 16.
As John has mentioned in his recent talks, the typical view of how quantum mechanics and probability theory come into contact looks like this:
The idea is that quantum theory generalizes classical probability theory by considering observables that don't commute.
That's perfectly valid, but we've been exploring an alternative view in this series. Here quantum theory doesn't subsume probability theory, but they intersect:
What goes in the middle you might ask? As odd as it might sound at first, John showed in Part 16 that electrical circuits made of resistors constitute the intersection!
For example, a circuit like this:
gives rise to a Hamiltonian
Oh—and you might think we made a mistake and wrote our Ω (ohm) symbols upside down. We didn't. It happens that ℧ is the symbol for a 'mho'—a unit of conductance that's the reciprocal of an ohm. Check out Part 16 for the details.
Let's recall how states, time evolution, symmetries and observables work in the two theories. Today we'll fix a basis for our vector space of states, and we'll assume it's finite-dimensional, so that all vectors have $n$ components for some fixed natural number $n$.

Vectors will be written as $n$-tuples $\psi = (\psi_1, \dots, \psi_n)$, with one component for each basis vector; we call these basis vectors the 'configurations' of our system.
Besides the configurations themselves, each theory has its own notion of 'state', which specifies the likelihood of finding the system in each configuration:
• Stochastic states are $n$-tuples of nonnegative real numbers:

$$\psi_i \ge 0$$

The probability of finding the system in the $i$th configuration is defined to be $\psi_i$, so these probabilities must sum to one:

$$\sum_{i=1}^n \psi_i = 1$$

or in the notation we're using in these articles:

$$\int \psi = 1$$

where we define

$$\int \psi = \sum_{i=1}^n \psi_i$$
• Quantum states are $n$-tuples of complex numbers:

$$\psi_i \in \mathbb{C}$$

The probability of finding a state in the $i$th configuration is $|\psi_i|^2$. These probabilities must sum to one, so we need

$$\sum_{i=1}^n |\psi_i|^2 = 1$$

or in other words

$$\langle \psi, \psi \rangle = 1$$

where the inner product of two vectors $\phi$ and $\psi$ is defined by

$$\langle \phi, \psi \rangle = \sum_{i=1}^n \overline{\phi_i}\, \psi_i$$
Now, the usual way to turn a quantum state $\psi$ into a stochastic one is to form the probabilities $|\psi_i|^2$. But today we'll be doing something different: whenever a vector has nonnegative components, we can normalize it either so its components sum to one, getting a stochastic state, or so the squares of its components sum to one, getting a quantum state.

This is very unorthodox, but it lets us evolve the same vector $\psi$ either stochastically or quantum mechanically, using the same Hamiltonian—provided that Hamiltonian is valid in both theories.
Time evolution works similarly in stochastic and quantum mechanics, but with a few big differences:

• In stochastic mechanics the state changes in time according to the master equation:

$$\frac{d}{dt} \psi(t) = H \psi(t)$$

which has the solution

$$\psi(t) = e^{tH} \psi(0)$$

• In quantum mechanics the state changes in time according to Schrödinger's equation:

$$\frac{d}{dt} \psi(t) = -i H \psi(t)$$

which has the solution

$$\psi(t) = e^{-itH} \psi(0)$$
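In code, the two evolutions look almost identical. Here's a minimal numerical sketch, assuming NumPy and SciPy and using a made-up 2×2 Dirichlet operator (not one from this post): the same matrix generates a stochastic evolution preserving the sum of the components and a quantum evolution preserving their norm.

```python
import numpy as np
from scipy.linalg import expm

# A made-up Dirichlet operator: self-adjoint, nonnegative off-diagonal
# entries, and columns summing to zero.
H = np.array([[-1.0,  1.0],
              [ 1.0, -1.0]])

psi0 = np.array([0.9, 0.1])           # a stochastic state: nonnegative, sums to 1

# Master equation solution: psi(t) = exp(tH) psi(0)
t = 0.7
psi_t = expm(t * H) @ psi0
print(psi_t.sum())                    # still sums to 1: exp(tH) is stochastic

# Schrödinger solution: psi(t) = exp(-itH) psi(0), applied to a unit vector
phi0 = psi0 / np.linalg.norm(psi0)
phi_t = expm(-1j * t * H) @ phi0
print(np.linalg.norm(phi_t))          # still has norm 1: exp(-itH) is unitary
```

So the same $H$ drives both kinds of time evolution, each preserving its own notion of total probability.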
The operator $H$ is called the Hamiltonian, and the conditions it must satisfy differ between the two theories:

• We need $H$ to be infinitesimal stochastic for it to generate time evolution in stochastic mechanics. This means its off-diagonal entries are nonnegative and its columns sum to zero:

$$H_{ij} \ge 0 \quad (i \ne j), \qquad \sum_i H_{ij} = 0$$

• We need $H$ to be self-adjoint for it to generate time evolution in quantum mechanics:

$$H = H^\dagger$$

where we recall that the adjoint of a matrix is the conjugate of its transpose:

$$(H^\dagger)_{ij} = \overline{H_{ji}}$$

We are concerned with the case where the operator $H$ generates valid time evolution in both theories:

• We say $H$ is a Dirichlet operator if it is both self-adjoint and infinitesimal stochastic.
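As a sanity check, both conditions are easy to test numerically. A small sketch, assuming NumPy, with a made-up 3×3 example (the helper names are ours):

```python
import numpy as np

def is_infinitesimal_stochastic(H, tol=1e-10):
    """Off-diagonal entries nonnegative, and each column sums to zero."""
    off = H - np.diag(np.diag(H))
    return bool(np.all(off >= -tol) and np.allclose(H.sum(axis=0), 0, atol=tol))

def is_dirichlet(H, tol=1e-10):
    """Self-adjoint AND infinitesimal stochastic."""
    return bool(np.allclose(H, H.conj().T, atol=tol)
                and is_infinitesimal_stochastic(H, tol))

# A made-up example: symmetric, columns sum to zero.
H = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 1.0,  0.0, -1.0]])
print(is_dirichlet(H))   # True
```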
As John explained in Part 12, besides states and observables we need symmetries, which are transformations that map states to states. These include the evolution operators which we only briefly discussed in the preceding subsection.
• A linear map $U$ that sends quantum states to quantum states is called an isometry. These are the operators with

$$U^\dagger U = 1$$

• A linear map $U$ that sends stochastic states to stochastic states is called a stochastic operator. These are the operators whose entries are nonnegative:

$$U_{ij} \ge 0$$

and whose columns sum to one:

$$\sum_i U_{ij} = 1$$

A notable difference here is that in our finite-dimensional situation, isometries are always invertible, but stochastic operators may not be! If $U$ is an isometry of a finite-dimensional space we also have

$$U U^\dagger = 1$$

and we say $U$ is unitary.
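Here's a quick numerical illustration of that difference, assuming NumPy. The matrix below is a made-up stochastic operator that crushes everything onto the first configuration, so it cannot have an inverse:

```python
import numpy as np

# Each column is a stochastic state (nonnegative, sums to 1), so U sends
# stochastic states to stochastic states.
U = np.array([[1.0, 1.0],
              [0.0, 0.0]])

print(np.all(U >= 0), np.allclose(U.sum(axis=0), 1))   # stochastic: True True
print(np.linalg.matrix_rank(U))                        # rank 1: not invertible
```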
Puzzle 1. Suppose $U$ is a stochastic $n \times n$ matrix whose inverse is also stochastic. Show that $U$ is a permutation matrix: a matrix with a single 1 in each row and each column, and 0s everywhere else.
It is quite hard for an operator to be a symmetry in both stochastic and quantum mechanics, especially in our finite-dimensional situation:
Puzzle 2. Suppose $U$ is an $n \times n$ matrix that is both stochastic and unitary. Show that $U$ is a permutation matrix.
'Observables' are real-valued quantities that can be measured, or predicted, given a specific theory.
• In quantum mechanics, an observable is given by a self-adjoint matrix $A$. The expected value of $A$ in the quantum state $\psi$ is

$$\langle \psi, A \psi \rangle$$

• In stochastic mechanics, an observable $A$ assigns a real number $A_i$ to each configuration. Its expected value in the stochastic state $\psi$ is

$$\sum_i A_i \psi_i$$

We can turn an observable in stochastic mechanics into an observable in quantum mechanics by making a diagonal matrix whose diagonal entries are the numbers $A_i$.
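A small sketch of this dictionary, assuming NumPy and using made-up numbers: the stochastic expected value $\sum_i A_i \psi_i$ agrees with the quantum expected value $\langle \phi, A \phi \rangle$ whenever the quantum state $\phi$ has probabilities $|\phi_i|^2 = \psi_i$.

```python
import numpy as np

# A stochastic observable: one number per configuration.
A_values = np.array([1.0, 2.0, 4.0])
A = np.diag(A_values)                 # its quantum counterpart

psi = np.array([0.5, 0.25, 0.25])     # a stochastic state
print(A_values @ psi)                 # stochastic expected value: 2.0

phi = np.sqrt(psi)                    # quantum state with the same probabilities
print(phi @ A @ phi)                  # quantum expected value: also ≈ 2.0
```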
Back in Part 16, John explained how a graph with positive numbers on its edges gives rise to a Hamiltonian in both quantum and stochastic mechanics—in other words, a Dirichlet operator.
Here's how this works. We'll consider simple graphs: graphs without arrows on their edges, with at most one edge from one vertex to another, and with no edge from a vertex to itself. And we'll only look at graphs with finitely many vertices and edges. We'll assume each edge is labelled by a positive number, like this:
If our graph has $n$ vertices, its edge labels give an $n \times n$ matrix $A$, where $A_{ij}$ is the number labelling the edge between the $i$th and $j$th vertices if there is such an edge, and $0$ otherwise. This matrix is symmetric and real, hence self-adjoint, so $A$ is a valid Hamiltonian in quantum mechanics.

How about stochastic mechanics? Remember that a Hamiltonian in stochastic mechanics needs to be 'infinitesimal stochastic'. So, its off-diagonal entries must be nonnegative, which is indeed true for our matrix $A$, but its columns must also sum to zero, which is not true in general.

But now comes the best news you've heard all day: we can improve $A$ to a Hamiltonian that is valid in both theories. Let $D$ be the diagonal matrix whose $i$th diagonal entry is the sum of the entries in the $i$th column of $A$, and set

$$H = A - D$$

It's easy to check that $H$ is self-adjoint, its off-diagonal entries are nonnegative, and its columns sum to zero. In other words, $H$ is a Dirichlet operator.
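This recipe—subtract from the edge-label matrix the diagonal matrix of its column sums—is easy to automate. A sketch assuming NumPy, with a made-up triangle graph; `dirichlet_from_graph` is our own helper, not anything from the series:

```python
import numpy as np

def dirichlet_from_graph(n, edges):
    """edges: list of (i, j, weight) with positive weights and i != j.
    Builds the symmetric edge-label matrix A, then returns A minus the
    diagonal matrix of its column sums."""
    A = np.zeros((n, n))
    for i, j, w in edges:
        A[i, j] = A[j, i] = w
    return A - np.diag(A.sum(axis=0))

# A made-up triangle graph with edge labels 1, 2, 3:
H = dirichlet_from_graph(3, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)])
print(H.sum(axis=0))        # columns sum to zero
print(np.allclose(H, H.T))  # self-adjoint
```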
In Part 16, we saw a bit more: every Dirichlet operator arises this way. It's easy to see. You just take your Dirichlet operator and make a graph with one edge for each nonzero off-diagonal entry. Then you label the edge with this entry. So, Dirichlet operators are essentially the same as finite simple graphs with edges labelled by positive numbers.
Now, a simple graph can consist of many separate 'pieces', called components. If it does, there's no way for a particle hopping along the edges to get from one component to another, either in stochastic or quantum mechanics. So we might as well focus our attention on graphs with just one component. These graphs are called 'connected'. In other words:
Definition. A simple graph is connected if it is nonempty and there is a path of edges connecting any vertex to any other.
Our goal today is to understand more about Dirichlet operators coming from connected graphs. For this we need to learn the Perron–Frobenius theorem. But let's start with something easier.
In quantum mechanics it's good to think about observables $A$ that have positive expected values:

$$\langle \psi, A \psi \rangle > 0$$

for every quantum state $\psi$. Matrices like this are called positive definite. But in this post we'll need a different, more naive, notion of positivity:

Definition. An $n \times n$ real matrix $T$ is positive if all its entries are positive:

$$T_{ij} > 0$$

for all $1 \le i, j \le n$.

Similarly:

Definition. A vector $\psi \in \mathbb{R}^n$ is positive if all its components are positive:

$$\psi_i > 0$$

for all $1 \le i \le n$.

We'll also define nonnegative matrices and vectors in the same way, replacing $> 0$ by $\ge 0$.
In 1907, Perron proved the following fundamental result about positive matrices:

Perron's Theorem. Given a positive square matrix $T$, there is a positive real number $r$, called the Perron–Frobenius eigenvalue of $T$, such that $r$ is an eigenvalue of $T$ and every other eigenvalue $\lambda$ of $T$ has $|\lambda| < r$. Moreover, there is a positive vector $\psi$ with $T\psi = r\psi$. Any other vector with this property is a scalar multiple of $\psi$. Furthermore, any nonnegative vector that is an eigenvector of $T$ must be a scalar multiple of $\psi$.

In other words, a positive matrix $T$ has a unique eigenvalue of largest absolute value. This eigenvalue is positive, and up to scalar multiples it has exactly one eigenvector, which we may choose to be positive; moreover, up to scalar multiples, that is the only nonnegative eigenvector $T$ has.
The conclusions of Perron's theorem don't hold for matrices that are merely nonnegative. For example, these matrices

$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

are nonnegative, but they violate lots of the conclusions of Perron's theorem.
Nonetheless, in 1912 Frobenius published an impressive generalization of Perron's result. In its strongest form, it doesn't apply to all nonnegative matrices; only to those that are 'irreducible'. So, let us define those.
We've seen how to build a matrix from a graph. Now we need to build a graph from a matrix! Suppose we have an $n \times n$ nonnegative matrix $T$. Then we can build a graph with $n$ vertices, drawing an edge from the $i$th vertex to the $j$th vertex whenever $T_{ij} > 0$.
But watch out: this is a different kind of graph! It's a directed graph, meaning the edges have directions, there's at most one edge going from any vertex to any vertex, and we do allow an edge going from a vertex to itself. There's a stronger concept of 'connectivity' for these graphs:
Definition. A directed graph is strongly connected if there is a directed path of edges going from any vertex to any other vertex.
So, you have to be able to walk along edges from any vertex to any other vertex, but always following the direction of the edges! Using this idea we define irreducible matrices:
Definition. A nonnegative square matrix $T$ is irreducible if the directed graph we just described is strongly connected.
Now we are ready to state:
The Perron–Frobenius Theorem. Given an irreducible nonnegative square matrix $T$, there is a positive real number $r$, called the Perron–Frobenius eigenvalue of $T$, such that $r$ is an eigenvalue of $T$ and every other eigenvalue $\lambda$ of $T$ has $|\lambda| \le r$. Moreover, there is a positive vector $\psi$ with $T\psi = r\psi$. Any other vector with this property is a scalar multiple of $\psi$. Furthermore, any nonnegative vector that is an eigenvector of $T$ must be a scalar multiple of $\psi$.

The only conclusion of this theorem that's weaker than those of Perron's theorem is that there may be other eigenvalues with $|\lambda| = r$. For example, this nonnegative matrix is irreducible:

$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$

Its Perron–Frobenius eigenvalue is 1, but it also has $-1$ as an eigenvalue. In general, Perron–Frobenius theory says quite a lot about the other eigenvalues on the circle $|\lambda| = r$, but we won't need that today.
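To see the theorem in action, here's a numerical sketch assuming NumPy, using a made-up irreducible nonnegative matrix whose eigenvalues are 2 and -1:

```python
import numpy as np

T = np.array([[0.0, 2.0],
              [1.0, 1.0]])   # irreducible, nonnegative; eigenvalues 2 and -1

evals, evecs = np.linalg.eig(T)
r = max(evals.real)           # Perron–Frobenius eigenvalue: 2
print(r)

# Every other eigenvalue has absolute value at most r (here, strictly less):
print(np.all(np.abs(evals[~np.isclose(evals.real, r)]) < r))   # True

# The eigenvector for r can be rescaled to be positive:
v = evecs[:, np.argmax(evals.real)].real
v = v / v.sum()
print(np.all(v > 0))          # True
```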
Perron–Frobenius theory is useful in many ways, from highbrow math to ranking football teams. We'll need it not just today but also later in this series. There are many books and other sources of information for those who want to take a closer look at the subject. If you're interested, you can search online or take a look at these:
I have not taken a look myself, but if anyone is interested and can read German, the original work appears here:
And, of course, there's this:
It's quite good.
Now comes the payoff. We saw how to get a Dirichlet operator $H$ from any connected finite simple graph with edges labelled by positive numbers. Now let's apply Perron–Frobenius theory to this operator.

Unfortunately, the matrix $H$ itself is not nonnegative: its diagonal entries can be negative. Luckily, we can fix this just by adding a big enough multiple of the identity matrix to $H$:

$$T = H + cI$$

where $c > 0$ is any number large enough that the diagonal entries of $T$ are all nonnegative.

Since the off-diagonal entries of $T$ and $H$ agree, and our graph is connected, the directed graph built from $T$ is strongly connected, so $T$ is an irreducible nonnegative matrix. Moreover, $T$ and $H$ have the same eigenvectors, with eigenvalues shifted by $c$. This lets us prove:
Lemma. Suppose $H$ is the Dirichlet operator coming from a connected finite simple graph with edges labelled by positive numbers. Then the eigenvalue of $H$ with the largest real part is real, and the corresponding eigenvector can be chosen to be positive. Any other eigenvector with this eigenvalue is a scalar multiple of this one.

Proof. The eigenvalues of $H$ are just the eigenvalues of $T = H + cI$ shifted down by $c$: if

$$T \psi = \lambda \psi$$

then

$$H \psi = (\lambda - c) \psi$$

By the Perron–Frobenius theorem the number $r$, the Perron–Frobenius eigenvalue of $T$, is real and has a positive eigenvector $\psi$, unique up to scalar multiples, while every other eigenvalue $\lambda$ of $T$ has $|\lambda| \le r$ and thus $\mathrm{Re}(\lambda) \le r$. Shifting down by $c$, the real number $r - c$ is the eigenvalue of $H$ with the largest real part, and it has the positive eigenvector $\psi$, unique up to scalar multiples.
But in fact we can improve this result, since for a Dirichlet operator the eigenvalue with the largest real part is always zero. To state the improved result, recall the subclass of operators we met in the introduction:
Definition. A Dirichlet operator is irreducible if it comes from a connected finite simple graph with edges labelled by positive numbers.
This meshes nicely with our earlier definition of irreducibility for nonnegative matrices. Now:
Theorem. Suppose $H$ is an irreducible Dirichlet operator. Then zero is the eigenvalue of $H$ with the largest real part. The corresponding eigenvector can be chosen to be positive, and any other eigenvector with eigenvalue zero is a scalar multiple of this one.
Proof. Choose $c > 0$ big enough that $T = H + cI$ is nonnegative. As before, $T$ is irreducible, so by the Perron–Frobenius theorem it has a positive eigenvector $\psi$, unique up to scalar multiples, with $T\psi = r\psi$, and thus $H\psi = (r - c)\psi$.

Since $H$ is infinitesimal stochastic, its columns sum to zero, so

$$\int H \phi = 0$$

for all vectors $\phi$. In particular,

$$(r - c) \int \psi = \int H \psi = 0$$

Since $\psi$ is positive we have $\int \psi > 0$, so we must have $r = c$, and therefore $H\psi = 0$. By the lemma, $r - c = 0$ is then the eigenvalue of $H$ with the largest real part, with positive eigenvector $\psi$, unique up to scalar multiples.
What's the point of all this? One point is that there's a unique stochastic state that is invariant under the time evolution generated by an irreducible Dirichlet operator $H$: take the positive eigenvector with $H\psi = 0$ and normalize it so that $\int \psi = 1$. Another is that the same vector, normalized instead so that $\langle \psi, \psi \rangle = 1$, is the unique invariant quantum state, up to phase. The stochastic equilibrium and its quantum counterpart really are the same vector, normalized in two different ways.
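Here's a numerical sketch of that punchline, assuming NumPy and using a made-up Dirichlet operator built from a connected 3-vertex graph:

```python
import numpy as np

# A made-up irreducible Dirichlet operator: symmetric, nonnegative
# off-diagonal entries, columns summing to zero, connected graph.
H = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 1.0,  0.0, -1.0]])

evals, evecs = np.linalg.eigh(H)      # ascending order; H is symmetric
print(np.isclose(evals[-1], 0.0))     # the largest eigenvalue is 0: True

v = evecs[:, -1]
v = v / v.sum()                       # rescale so the components are positive
print(np.all(v > 0))                  # True

equilibrium = v / v.sum()             # stochastic normalization: sums to 1
ground = v / np.linalg.norm(v)        # quantum normalization: unit length
print(equilibrium, ground)            # same direction, two normalizations
```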
There are many examples of irreducible Dirichlet operators. For instance, in Part 15 we talked about graph Laplacians. The Laplacian of a connected simple graph is always irreducible. But let us try a different sort of example, coming from the picture of the resistors we saw earlier:
Let's create a matrix
Remember how the game works. The matrix
We've set up this example so it's easy to see that the vector
So, this is the unique eigenvector for the eigenvalue 0. We can use Mathematica to calculate the remaining eigenvalues of $H$:
As we expect from our theorem, the largest real eigenvalue is 0. By design, the eigenstate associated to this eigenvalue is
(This funny notation for vectors is common in quantum mechanics, so don't worry about it.) All the other eigenvectors fail to be nonnegative, as predicted by the theorem. They are:
To compare the quantum and stochastic states, note that an eigenvector can be normalized to give a stochastic state only if all its components have the same sign. Here, only the eigenvector with eigenvalue 0 has that property: rescaled one way it gives the stochastic equilibrium state, and rescaled another way it gives the invariant quantum state.
And, it's easy to see that it works this way for any irreducible Dirichlet operator, thanks to our theorem. So, our thesis has been proved true!
Let us conclude with a couple more puzzles. There are lots of ways to characterize irreducible nonnegative matrices; we don't need to mention graphs. Here's one:
Puzzle 3. Let $T$ be a nonnegative $n \times n$ matrix. Show that $T$ is irreducible if and only if for all $1 \le i, j \le n$ we have $(T^m)_{ij} > 0$ for some natural number $m$.
You may be confused because today we explained the usual concept of irreducibility for nonnegative matrices, but also defined a concept of irreducibility for Dirichlet operators. Luckily there's no conflict: Dirichlet operators aren't nonnegative matrices, but if we add a big multiple of the identity to a Dirichlet operator it becomes a nonnegative matrix, and then:
Puzzle 4. Show that a Dirichlet operator $H$ is irreducible in the sense defined today if and only if the nonnegative matrix $H + cI$ is irreducible in the usual sense for all sufficiently large $c > 0$.
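For readers who want to experiment, irreducibility of a nonnegative matrix can also be tested numerically via a standard criterion: an $n \times n$ nonnegative matrix $T$ is irreducible if and only if $(I + T)^{n-1}$ has strictly positive entries. A sketch assuming NumPy (`is_irreducible` is our own helper):

```python
import numpy as np

def is_irreducible(T):
    """Standard criterion: a nonnegative n x n matrix T is irreducible
    iff (I + T)^(n-1) has all entries strictly positive."""
    n = T.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + T, n - 1)
    return bool(np.all(M > 0))

# Strongly connected: 0 -> 1 and 1 -> 0
print(is_irreducible(np.array([[0.0, 1.0],
                               [1.0, 0.0]])))   # True
# Not strongly connected: no directed path back from vertex 1 to vertex 0
print(is_irreducible(np.array([[0.0, 1.0],
                               [0.0, 0.0]])))   # False
```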
Irreducibility is also related to the nonexistence of interesting conserved quantities. In Part 11 we saw a version of Noether's Theorem for stochastic mechanics. Remember that an observable $O$ is a conserved quantity for the Hamiltonian $H$ if it commutes with $H$: that is, $[O, H] = 0$.
Puzzle 5. Let $H$ be an irreducible Dirichlet operator. Show that every conserved quantity $O$ for $H$ is a constant multiple of the identity.
In fact this works more generally:
Puzzle 6. Let $H$ be an irreducible infinitesimal stochastic operator. Show that every conserved quantity $O$ for $H$ is a constant multiple of the identity.
You can also read comments on Azimuth, and make your own comments or ask questions there!