# Lecture 6 - Computing Adjoints

I've already said that left and right adjoints give *the best approximate ways to solve a problem that has no solution*, namely finding the inverse of a monotone function that has no inverse. I've defined them and given you some puzzles about them. But now let's review these puzzles and extract some valuable lessons!

We took the function \(f : \mathbb{N} \to \mathbb{N}\) that doubles any natural number

[ f(a) = 2a . ]

This function has no inverse, since you can't divide an odd number by 2 and get a natural number! But if you did the puzzles, you saw that \(f\) has a "right adjoint" \(g : \mathbb{N} \to \mathbb{N}\). This is defined by the property

[ f(a) \le b \textrm{ if and only if } a \le g(b) . ]

or in other words,

[ 2a \le b \textrm{ if and only if } a \le g(b) . ]

Using our knowledge of fractions, we have

[ 2a \le b \textrm{ if and only if } a \le b/2 ]

but since \(a\) is a natural number, this implies

[ 2a \le b \textrm{ if and only if } a \le \lfloor b/2 \rfloor ]

where we are using the floor function to pick out the largest integer \(\le b/2\). So,

[ g(b) = \lfloor b/2 \rfloor. ]
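We can check this adjunction concretely. The sketch below (my own illustration, not from the text) uses Python's floor division `//`, which computes exactly \(\lfloor b/2 \rfloor\), and verifies the defining property \(f(a) \le b\) if and only if \(a \le g(b)\) on a range of natural numbers:

```python
def f(a):
    return 2 * a          # the doubling map

def g(b):
    return b // 2         # floor division: floor(b/2)

# The right-adjoint property: f(a) <= b iff a <= g(b),
# checked exhaustively for small natural numbers.
assert all((f(a) <= b) == (a <= g(b))
           for a in range(50) for b in range(50))
```

Of course a finite check is not a proof, but it is a good way to catch a wrong guess for an adjoint.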

Moral: the right adjoint \(g\) is the "best approximation from below" to the nonexistent inverse of \(f\).

If you did the puzzles, you also saw that \(f\) has a left adjoint! This is the "best approximation from above" to the nonexistent inverse of \(f\): it sends \(b\) to the smallest natural number that's \(\ge b/2\), namely \(\lceil b/2 \rceil\).
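Here is the analogous check for the left adjoint (again my own illustration). A function \(h\) is a left adjoint of \(f\) when \(h(b) \le a\) if and only if \(b \le f(a)\), and the ceiling \(\lceil b/2 \rceil\) satisfies this:

```python
import math

def f(a):
    return 2 * a              # the doubling map

def h(b):
    return math.ceil(b / 2)   # smallest natural number >= b/2

# The left-adjoint property: h(b) <= a iff b <= f(a),
# checked exhaustively for small natural numbers.
assert all((h(b) <= a) == (b <= f(a))
           for a in range(50) for b in range(50))
```

For exact integer arithmetic one could also write `(b + 1) // 2` instead of `math.ceil(b / 2)`; the two agree on natural numbers.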

So, while \(f\) has no inverse, it has two "approximate inverses". The left adjoint comes as close as possible to the (perhaps nonexistent) correct answer while making sure to never choose a number that's *too small*. The right adjoint comes as close as possible while making sure to never choose a number that's *too big*.

The two adjoints represent two opposing philosophies of life: *make sure you never ask for too little* and *make sure you never ask for too much*. This is why they're philosophically profound. But the great thing is that they are defined in a completely precise, systematic way that applies to a huge number of situations!

If you need a mnemonic to remember which is which, remember left adjoints are "left-wing" or "liberal" or "generous", while right adjoints are "right-wing" or "conservative" or "cautious".

Let's think a bit more about how we can compute them in general, starting from the basic definition.

Here's the definition again. Suppose we have two preorders \((A,\le_A)\) and \((B,\le_B)\) and a monotone function \(f : A \to B\). Then we say a monotone function \(g: B \to A\) is a **right adjoint of \(f\)** if

[ f(a) \le_B b \textrm{ if and only if } a \le_A g(b) ]

for all \(a \in A\) and \(b \in B\). In this situation we also say that \(f\) is a **left adjoint of \(g\)**.

The names should be easy to remember, since \(f\) shows up on the *left* of the inequality \( f(a) \le_B b \), while \(g\) shows up on the *right* of the inequality \( a \le_A g(b) \). But let's see how they actually work!

Suppose you know \(f : A \to B\) and you're trying to figure out its right adjoint \(g: B \to A\). Say you're trying to figure out \(g(b)\). You don't know what it is, but you know

[ f(a) \le_B b \textrm{ if and only if } a \le_A g(b) ]

So, you go around looking at choices of \(a \in A\). For each one you compute \(f(a)\). If \(f(a) \le_B b\), then you know \(a \le_A g(b)\). So, you need to choose \(g(b)\) to be greater than or equal to every element of this set:

[ \{a \in A : \; f(a) \le_B b \} ]

In other words, \(g(b)\) must be an **upper bound** of this set. But you shouldn't choose \(g(b)\) to be any bigger than it needs to be! After all, you know \(a \le_A g(b)\) *only if* \(f(a) \le_B b\). So,
\(g(b)\) must be a **least upper bound** of the above set.

Note that I'm carefully speaking about *a* least upper bound. Our set could have two different least upper bounds, say \(a\) and \(a'\). Since they're both the least, we must have \(a \le a'\) and \(a' \le a\). This doesn't imply \(a = a'\), in general! But it does if our preorder \(A\) is a "poset". A **poset** is a preorder \((A, \le_A)\) obeying this extra axiom:

[ \textrm{ if } a \le a' \textrm{ and } a' \le a \textrm{ then } a = a' ]

for all \(a,a' \in A\).

In a poset, our desired least upper bound may still not *exist*. But if it does, it's *unique*, and Fong and Spivak write it this way:

[ \bigvee \{a \in A : \; f(a) \le_B b \} ]

The \(\bigvee\) symbol stands for "least upper bound", also known as **supremum** or **join**.

So, here's what we've shown:

If \(f : A \to B\) has a right adjoint \(g : B \to A\) and \(A\) is a poset, this right adjoint is unique and we have a formula for it:

[ g(b) = \bigvee \{a \in A : \; f(a) \le_B b \} . ]
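On a finite poset this formula can be computed directly by brute force. Here's a sketch (my own, with hypothetical function names) that forms the set \(\{a \in A : f(a) \le_B b\}\) and then searches for its least upper bound, assuming that join exists:

```python
def right_adjoint(f, A, leq):
    """Right adjoint of a monotone f on a finite poset (A, leq),
    via the formula g(b) = join of {a in A : f(a) <= b}.
    Returns None when the needed join doesn't exist."""
    A = list(A)
    def g(b):
        S = [a for a in A if leq(f(a), b)]
        # Upper bounds of S: elements above everything in S.
        uppers = [u for u in A if all(leq(a, u) for a in S)]
        # A least upper bound is an upper bound below every other upper bound.
        for u in uppers:
            if all(leq(u, v) for v in uppers):
                return u
        return None
    return g

# Example: the doubling map on a truncated copy of the naturals.
g = right_adjoint(lambda a: 2 * a, range(100), lambda x, y: x <= y)
assert g(7) == 3    # floor(7/2)
assert g(10) == 5   # floor(10/2)
```

This is quadratic in the size of \(A\) per query, so it's only a conceptual aid, but it matches the formula term for term.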

And we can copy our whole line of reasoning and show this:

If \(g : B \to A\) has a left adjoint \(f : A \to B\) and \(B\) is a poset, this left adjoint is unique and we have a formula for it:

[ f(a) = \bigwedge \{b \in B : \; a \le_A g(b) \} . ]

Here the \(\bigwedge\) symbol stands for "greatest lower bound", also known as the **infimum** or **meet**.
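The dual formula can be brute-forced the same way. This sketch (again my own illustration) forms \(\{b \in B : a \le_A g(b)\}\) and searches for its greatest lower bound, assuming that meet exists:

```python
def left_adjoint(g, B, leq):
    """Left adjoint of a monotone g on a finite poset (B, leq),
    via the formula f(a) = meet of {b in B : a <= g(b)}.
    Returns None when the needed meet doesn't exist."""
    B = list(B)
    def f(a):
        S = [b for b in B if leq(a, g(b))]
        # Lower bounds of S: elements below everything in S.
        lowers = [l for l in B if all(leq(l, b) for b in S)]
        # A greatest lower bound is a lower bound above every other lower bound.
        for l in lowers:
            if all(leq(m, l) for m in lowers):
                return l
        return None
    return f

# Example: recover doubling as the left adjoint of floor division by 2.
f = left_adjoint(lambda b: b // 2, range(100), lambda x, y: x <= y)
assert f(3) == 6
assert f(10) == 20
```

Note that for \(a\) near the top of the truncated range the set \(\{b : a \le g(b)\}\) is empty, and its meet is the top element rather than \(2a\), so the recovery only works away from the artificial cutoff.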

We're making progress: we can now actually compute left and right adjoints! Next we'll start looking at more examples.
