Categorical semantics was born in Lawvere's celebrated 1963 thesis on algebraic theories:
The role of semantics enters when we consider "models" of an algebraic theory. Loosely speaking, a model is just one of the things the theory seeks to describe. For example, a "model" of Th(Grp) is just a group. Technically, an algebraic theory T is a category with finite products, and a model is a functor that preserves finite products:

Z: T → Set

from T to the category of sets. The basic idea is simple: if for example T = Th(Grp), then Z maps the abstract concept of "group" to a specific set, the abstract concept of "multiplication" to a specific multiplication on the chosen set, and so on, thus picking out a specific group.
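For concreteness, here is a small sketch in code (our own illustration, with hypothetical names, not notation from any standard library) of what a model Z of Th(Grp) amounts to: it assigns a concrete set to the abstract "group" object and concrete functions to the abstract operations, and the group axioms then hold as honest equations between functions. We take the integers mod 5 under addition:

```python
# A sketch (hypothetical names) of a model Z of Th(Grp) in Set:
# the abstract group object goes to a concrete set, and each
# abstract operation goes to a concrete function on that set.
# Here: the integers mod 5 under addition.

carrier = set(range(5))            # Z applied to the "group" object
mul = lambda a, b: (a + b) % 5     # Z applied to "multiplication"
inv = lambda a: (-a) % 5           # Z applied to "inverse"
unit = 0                           # Z applied to "identity"

# Because Z preserves the equations of the theory, the group
# axioms hold as actual equations between functions:
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in carrier for b in carrier for c in carrier)
assert all(mul(a, unit) == a == mul(unit, a) for a in carrier)
assert all(mul(a, inv(a)) == unit for a in carrier)
```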
Dual to the concept of semantics is the concept of syntax, which deals with symbol manipulation. Just as semantics deals with models, syntax deals with "proofs". For example, starting from Th(Grp) we can prove consequences of the group axioms merely by juggling equations. In the case of algebraic theories, the syntax often goes by the name of universal algebra:
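As a tiny illustration of such equation-juggling (a standard exercise, not drawn from the text above), here is an equational proof that a group has only one identity element. Suppose e and e′ both satisfy the identity axioms. Then:

e = e·e′   (since e′ is an identity, x·e′ = x for all x)
  = e′     (since e is an identity, e·x = x for all x)

so e = e′, using nothing but the axioms of Th(Grp).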
The relevance of all this to computer science becomes visible when we note that a proof in Th(Grp), or indeed in any algebraic theory, can be seen as a rudimentary form of computation. The "input" of the computation is a set of assumptions, while the "output" is the equation to be proved.
Treating proofs as computations may seem strained, but it becomes less so when we move to richer formalisms which allow for more complex logical reasoning. One of the most well-known is the lambda calculus, invented by Church and Kleene in the 1930s as a model of computation. Any function computable by the lambda calculus is also computable by a Turing machine, and according to the Church-Turing thesis these are all the functions computable by any sort of systematic process. Moreover, computations in the lambda calculus can actually be seen as proofs.
The aptness of this way of thinking was brought out in Landin's classic paper:
If this is befuddling, take our category to be Set and take HOM = hom; then the equation

hom(x × a, b) ≅ hom(x, hom(a, b))

says:

"a function from x × a to b is the same as a function from x to the set of functions from a to b."

CCCs just generalize this idea to other categories! In the category Set we often write the object hom(a,b) as an exponential b^{a}, and people also do this for other CCCs.
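In Set this hom-set bijection is just "currying", which is easy to exhibit in code (a sketch of our own; curry and uncurry here are hand-rolled helpers, not notation from the text above):

```python
# The bijection hom(x × a, b) ≅ hom(x, hom(a, b)) in Set is
# "currying". These helpers (our own names) exhibit it:

def curry(f):
    """Turn f : x × a → b into a function x → (a → b)."""
    return lambda x: lambda a: f(x, a)

def uncurry(g):
    """Turn g : x → (a → b) back into a function x × a → b."""
    return lambda x, a: g(x)(a)

add = lambda x, a: x + a           # a function from x × a to b
assert curry(add)(2)(3) == add(2, 3) == 5
assert uncurry(curry(add))(2, 3) == add(2, 3)
```

Going back and forth between the two forms changes nothing, which is exactly what the bijection asserts.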
Lambek's discovery introduced a semantics for the lambda calculus, since it lets us speak of models of theories formulated in the lambda calculus:

Z: T → Set

just as we could for algebraic theories. These are again just functors that preserve finite products. In computer programming, the importance of a model is that it gives a picture of what a program actually accomplishes. A model Z sends any program to an actual function between sets.
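As a toy illustration (entirely our own, with hypothetical syntax and names), one can think of Z as an interpreter: it takes a program, here a tiny syntax tree with one free variable, and returns an actual function between sets:

```python
# A toy sketch (hypothetical syntax and names): Z interprets a
# "program" -- a small syntax tree with one free variable -- as
# an actual function from the set of integers to itself.

def Z(term):
    """Send a term to the function between sets it denotes."""
    kind = term[0]
    if kind == "var":                  # the free variable x
        return lambda n: n
    if kind == "succ":                 # successor applied to a subterm
        sub = Z(term[1])
        return lambda n: sub(n) + 1
    raise ValueError(f"unknown term: {kind}")

program = ("succ", ("succ", ("var",)))   # the program "x + 2"
f = Z(program)                           # Z sends it to a real function
assert f(5) == 7
```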
There's no way to list all the interesting references to CCCs and the λ-calculus, but here are some great online places to get started, starting out easy and working up to the harder ones:
For more discussion of these concepts visit the n-Category Café and check out my course notes.