Title
Good morning. I'm going to be talking today about a wonderful analogy that relates many different fields of research. This is joint work with John Baez at UC Riverside.

Original Rosetta Stone
When Rome conquered Egypt, it insisted on Latin for written communication, and within a couple of centuries no one knew how to read or write hieroglyphics or their shorthand version, demotic. Coptic was the language used by the Christian church in Egypt from as early as the fourth century A.D. It's a dialect of Egyptian written with the Greek alphabet, plus seven additional symbols borrowed from the demotic script to cover sounds of Egyptian speech that Greek letters couldn't express. In the mid-1600s, a Jesuit named Kircher collected many Coptic manuscripts and produced a grammar. When the Rosetta Stone was discovered about a century and a half later, an English linguist named Young worked out that the hieroglyphs set apart in boxes called cartouches were phonetic spellings of foreign names like Ptolemaios and Alexandrus. This was a surprise: for centuries everyone had assumed that hieroglyphics were purely pictograms. Then the French linguist Champollion used his knowledge of the demotic symbols preserved in Coptic to work out what the rest of the demotic said, and then worked out the correspondence between demotic symbols and hieroglyphs, showing that they're almost completely phonetic.

We find ourselves in a similar situation: we see glimpses of relationships between fields and have identified a few points of correspondence, like Young and his cartouches, but it remains to work out the full translation.

Pocket Version
Here's an abbreviated translation table: physical processes are to the state space of a physical system as cobordisms are to manifolds, as proofs are to propositions, as programs are to datatypes, as morphisms are to objects.

Objects
Every category has a collection of objects.
- String diagrams are a small generalization of Feynman diagrams; you can think of this string as the worldline of a particle of type X if you'd like.
- A vector in an n-dimensional Hilbert space is a list of n complex numbers, thought of as coordinates in an abstract space. You can add lists and multiply lists by a complex number, so they form a vector space. You can also take the dot product of two lists to find the angle between them in this space. We usually write such a list as a matrix with n rows and one column.
- Manifolds are smooth spaces, like a circle, a torus, or a hypersphere; they have no corners or edges.
- Linear logic is logic in which you can use any given proposition only once, so you can think of propositions as declarations of available resources.
- The data types we have in mind are like Java interfaces or abstract classes: they are just an API, although we can specify tests that have to pass.
- The category of all sets is denoted SET, in caps.

Morphisms
Given any two objects X and Y in a category, there's a set of morphisms from X to Y. Depending on the category, we denote an element of this set in different ways:
- A string diagram has a labelled vertex where the string's label changes from X to Y.
- Linear transformations map between Hilbert spaces; you can think of these as m-by-n matrices with complex entries.
- Manifolds don't have edges, but cobordisms do; the "borders" of a cobordism are its source and target manifolds. Here we have a pair of pants whose source is a circle X and whose target is a pair of circles Y.
- Logicians are usually interested only in provability rather than in which particular proof is used. We write this if you can derive Y from X. In linear logic, you can think of this as a process that consumes an item of kind X and produces one of kind Y.
- In Java, we denote methods like this,
- and when dealing with sets, we denote functions like this.
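The last two bullets can be made concrete. A minimal sketch, with an invented example: a morphism from the object String to the object Integer, written once as a Java method and once as a first-class Function value.

```java
import java.util.function.Function;

public class Morphism {
    // A morphism from String to Integer, written as an ordinary method...
    static int stringLength(String s) { return s.length(); }

    // ...or as a first-class Function value referencing that method.
    static Function<String, Integer> length = Morphism::stringLength;

    public static void main(String[] args) {
        System.out.println(length.apply("hello")); // 5
    }
}
```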
Morphisms compose associatively
- We stick string diagrams end to end.
- We multiply matrices.
- We stick cobordisms end to end.
- In logic, the cut rule lets us take two proofs and use them in sequence to get a new proof.
- We can compose methods and functions.

Identity morphisms
Every object in a category has an associated identity morphism.
- We don't bother to draw the identity vertex in string diagrams.
- We have identity matrices to act on Hilbert spaces.
- Here's the identity cobordism on a circle: it's topologically the same as the circle itself.
- In logic, we automatically get that we can derive X from X.
- Here's the identity method on X,
- and the identity function on X.

Monoidal categories
All of these categories are monoidal, which means you can take two objects and put them together to form a new object, and you can likewise put together two morphisms so they act in parallel on the pair of objects. This operation is denoted with the tensor product, which is cool because it looks like the X-Men symbol.
- We denote it in string diagrams either as two parallel strings, or as a vertex with two inputs and two outputs, or as a vertex with a single "pair-type" input and output.
- The tensor product of matrices takes a copy of the second matrix for each element of the first one and multiplies them (point at it). The dimensions multiply: an m x n matrix tensor an r x s matrix is mr x ns.
- For manifolds and cobordisms, it's disjoint union.
- In logic, we say "X and X'".
- Java doesn't have it natively, but we can imagine a variant where we can declare a "parallel data type" whose first part lives on one computer and whose second part lives on another. Methods mapping between such types would have parts that run in parallel.
- In the category of sets, we have the Cartesian product. Note that while we can use projection maps to "take apart" a pair of functions, we can't do that with matrices: there's no way to write the list (1, 0, 0, 1) as the tensor product of two lists.
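The dimension rule for the tensor product of matrices can be sketched directly. A minimal Kronecker-product implementation (the class and method names are invented for illustration):

```java
public class Tensor {
    // Kronecker product: an (m x n) matrix tensor an (r x s) matrix
    // gives an (mr x ns) matrix whose blocks are a[i][j] times b.
    static double[][] kron(double[][] a, double[][] b) {
        int m = a.length, n = a[0].length, r = b.length, s = b[0].length;
        double[][] out = new double[m * r][n * s];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < r; k++)
                    for (int l = 0; l < s; l++)
                        out[i * r + k][j * s + l] = a[i][j] * b[k][l];
        return out;
    }

    public static void main(String[] args) {
        double[][] c = kron(new double[][]{{1, 2}, {3, 4}},
                            new double[][]{{0, 1}, {1, 0}});
        // 2x2 tensor 2x2 = 4x4, as the dimension rule predicts.
        System.out.println(c.length + " x " + c[0].length); // 4 x 4
    }
}
```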
This is called entanglement in quantum mechanics. Similarly, we can't split an arbitrary cobordism from two circles to two circles into two pieces (go back four pages to the composition picture).

Monoidal unit
In each case there's an object called the monoidal unit, denoted I, such that (X tensor I) and (I tensor X) are both isomorphic to X:
- We don't even draw it in string diagrams; you might call it the "empty string".
- I is the one-by-one matrix; when you tensor with it, the dimensions stay the same.
- The empty manifold.
- The trivial proposition: you always have at least nothing.
- In Java, it's the void type; in other languages like Haskell or ML, it's called the unit type.
- The Cartesian product of a set X with the one-element set is isomorphic to X.

Braided monoidal categories
- We can braid strings. Here Y goes over X, which is different from X going over Y.
- When you swap identical particles in more than two space dimensions, only two things can happen: the wave function stays the same (in which case we call the particles bosons) or the wave function changes sign (in which case we call them fermions). But if you restrict particles to two dimensions, other possibilities arise. If you take a thin conductive film, put it in a strong magnetic field, and drop the temperature to near absolute zero, then the flux tubes get quantized and you can braid them: the change in the phase of the wave function depends on the handedness of the swap and the strength of the field. It turns out that you can change the phase of the wave function by any unit complex number instead of just +/-1, so these quasiparticles are called "anyons".
- You can braid cobordisms.
- If you have X and Y, then you have Y and X.
- This makes the parallel processors trade their data.
- You can permute the elements of a pair, but you can't braid them: SET is a symmetric monoidal category.
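The entanglement claim above, that the list (1, 0, 0, 1) is not the tensor product of two lists, can be checked concretely: reshape a length-4 list into a 2-by-2 matrix, and it factors as the tensor product of two 2-entry lists exactly when that matrix has determinant zero. A sketch, with invented names:

```java
public class Separability {
    // Reshape (x00, x01, x10, x11) as a 2x2 matrix and take its determinant.
    // The list factors as a tensor product of two 2-entry lists iff this is 0,
    // because a tensor product reshapes to a rank-one (outer-product) matrix.
    static double det2(double[] v) {
        return v[0] * v[3] - v[1] * v[2];
    }

    public static void main(String[] args) {
        double[] product = {1 * 3, 1 * 4, 2 * 3, 2 * 4}; // (1,2) tensor (3,4)
        double[] entangled = {1, 0, 0, 1};
        System.out.println(det2(product));   // 0.0: factors
        System.out.println(det2(entangled)); // 1.0: doesn't factor
    }
}
```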
Braided monoidal closed categories
"Closed" means that for any pair of objects X, Y, there's an object Z such that the set of morphisms from X to Y is isomorphic to the set of morphisms from I to Z. Z is usually denoted X -o Y. In many cases, Z is ("backwards X" tensor Y), so we can think of this map as enabling us to "bend" morphisms so that an input becomes a backwards output. Even when this isn't the case, it's still surprisingly useful to think of it that way:
- In string diagrams, we have this little "chain" decoration that says we have to treat these two wires as a single one.
- In quantum mechanics, a "backwards" particle is an antiparticle, so we can take the identity on this electron's state and "bend" it so that we get a photon splitting into a positron-electron pair. (Actually, I'm cheating a little: this only applies to the internal symmetries of the particle system; otherwise the particles would annihilate and leave *nothing*, not photons.) The corresponding operation on lists of complex numbers is the complex conjugate.
- Again, here we see an identity morphism on the circle being bent into a morphism from the empty manifold to a pair of circles. If we can orient the manifold at the top (say, put an arrow on this circle), then when we bend the morphism over, the orientation swaps, so it's a "backwards output".
- In logic, we have implication: if from X and Y we can derive Z, then from Y we can derive that X implies Z.
- In computer science, this operation is called "currying": consider the function plus, which takes two inputs, x + y = z. Notice that for each y, we can produce a function "+y" that takes x as its single input and produces z. Call that function of y "curried plus". Curried plus is a function from y to (x hom z), and in some sense x is a "backwards output", even though you can't act on the x part all by itself. You can curry in Java, but it's horribly complicated.
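For the record, the curried-plus example reads like this in Java using java.util.function.Function (less painful than it once was, though still noisier than Scheme or Haskell; the names here are invented):

```java
import java.util.function.Function;

public class Curry {
    // Uncurried plus takes the pair (x, y) at once; curried plus takes y
    // and returns the one-argument function "+y" from x to z.
    static Function<Integer, Function<Integer, Integer>> curriedPlus =
        y -> (x -> x + y);

    public static void main(String[] args) {
        Function<Integer, Integer> plusThree = curriedPlus.apply(3); // "+3"
        System.out.println(plusThree.apply(4)); // 7
    }
}
```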
This notation here is used by nearly every other language that supports currying, like Scheme or JavaScript.
- In the category of sets, you have a *set* of morphisms from X to Z, so it doesn't need any extra structure to be an object. It's usually denoted with exponential notation because for functions between finite sets, that's how many there are.

Model Theory
Here's a small category with two objects and two nontrivial morphisms. (I haven't drawn the identity morphisms.) At first glance, it's not a particularly interesting category; but if I change the names here, [next slide] we see it has something to do with graphs. We call this category Th(Graph), "the theory of graphs". [next slide] We call it that because if we map the objects of this category to finite sets and the morphisms to functions between those sets, we get the description of a particular graph. It's pretty easy to see that any such assignment gives a graph, and that all graphs arise this way.

This is a simple example of the notion of syntax and semantics. On the left, we have the syntax, the API, the abstract description of the mathematical gadget we're trying to describe. Its objects are simply placeholders, without any implicit meaning. On the right, we have the semantics; here is where the symbols acquire meaning: in this instance, V now means a set of vertices.

A map between categories is called a functor. A functor maps objects to objects and morphisms to morphisms in a way that preserves composition and identities. Functors can also preserve more structure, like the tensor product and closedness.

Model Theory (Java)
The syntax of a structure in Java is described by an interface.

Model Theory (Classical)
Implementations of this interface are functors from this category to the category of computably enumerable sets and partial recursive functions. But we can replace either side of this table with a different category.
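The functor picture for the theory of graphs can be sketched in code: a model chooses a finite set for the edges, a finite set for the vertices, and two functions, source and target, from edges to vertices. A minimal sketch with sets as int ranges and functions as arrays (all names invented):

```java
public class GraphModel {
    // A model of the theory of graphs in finite sets:
    // edges E = {0, 1, 2, 3}, vertices V = {0, 1, 2},
    // and two functions E -> V given as lookup tables.
    static int[] source = {0, 0, 1, 2};
    static int[] target = {1, 2, 2, 0};

    public static void main(String[] args) {
        for (int e = 0; e < source.length; e++)
            System.out.println("edge " + e + ": "
                + source[e] + " -> " + target[e]);
    }
}
```

Any such pair of arrays with entries in range determines a graph, and every finite graph arises this way, which is the point of the slide: the category on the left is pure syntax, and a choice of arrays is its semantics.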
Model Theory (Quantum)
For instance, we can replace the right side with the category of Hilbert spaces and linear transformations. The functors between these two categories are quantum algorithms. [next slide] We can also replace the left-hand side: by choosing a cobordism category, we get a model of quantum gravity. Rather than calling the objects and morphisms on the left manifolds and cobordisms, we can call them n-dimensional space and (n+1)-dimensional spacetime. We assign a Hilbert space of states to each region of space and a linear transformation to each region of spacetime connecting them, telling how the state evolves over time. Such an assignment is called a topological quantum field theory. TQFTs are just a toy model of quantum gravity (there's no matter, just spacetime!), but they also describe the anyons I mentioned earlier. The phase of those anyons is a complex number, and multiplication of complex numbers is commutative: it doesn't matter in what order the factors appear. Microsoft's Project Q is trying to design a system with noncommutative anyons; Wang et al. have shown that if that's possible, then it can be used for universal quantum computation. Topological quantum computation is interesting because perturbations are unlikely to change the topology of the system, so it will be robust against decoherence.

Almost any pairing is likely to produce interesting mathematics of this depth. And if you can think of another braided monoidal closed category, you open up a new field of research for each pairing with any of these. This Rosetta Stone helps us figure out where to look for interesting results: just check whether your favorite theorem applies when you map it across to another column.