After watching about half of Bartosz Milewski's video lectures, I've come to understand adjoint functors to the point that they're getting useful. I can likely use all of this for programming language design.
An adjunction between categories is a pair of functors
F : D → C and
G : C → D
such that there is the following bijection between morphism sets:
C(FY, X) ~= D(Y, GX)
If you fiddle with this bijection you find out it also means the following. You may substitute X =
FY (this is allowed as they're representing objects of the same category), and you get the following relation between
identity morphisms of the
C category and morphisms of the
D category:
C(FY, FY) ~= D(Y, GFY)
And if you do the same thing on the other side and substitute Y = GX,
you get the following:
C(FGX, X) ~= D(GX, GX)
We also get the corresponding natural transformations between functors:
counit : F.G --> Id_C
unit : Id_D --> G.F
For example with unit, this means that for every morphism
f : D(X, Y),
unit(Y) . f = (G.F)(f) . unit(X)
That means we can apply the
f first and then unit-transform the result,
or we can unit-transform first and then apply the
G.F image of f.
For the Maybe type in Haskell, this means the following holds:
(return . f) = (fmap f . return)
And you can therefore know the following expressions are equivalent:
(return . (+1)) 3 :: Maybe Int
(fmap (+1) . return) 3 :: Maybe Int
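This can be checked directly; a minimal Prelude-only sketch (the names lhs and rhs are mine):

```haskell
-- Naturality of return for Maybe: (return . f) = (fmap f . return)
lhs :: Maybe Int
lhs = (return . (+1)) 3      -- apply f first, then inject with return

rhs :: Maybe Int
rhs = (fmap (+1) . return) 3 -- inject first, then map f over the result

-- both sides evaluate to Just 4
```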
This happens because monads have corresponding adjoint functors, and the monad's
unit and
join operators can be produced mechanically from the adjunction's unit and counit.
Universal mapping properties
Universal mapping properties are constructs that may be identified in categories.
For example, in a category corresponding to intuitionistic logic, we have products and exponential objects.
A product of objects a and b is an object
a ∧ b such that from any other object
k, with any morphisms f : k → a and g : k → b that you can pick, there is a unique morphism
h (that means there's only one morphism like this that we can find)
such that the following diagram commutes:

        k
      / | \
    f/  |h \g
    /   |   \
   v    v    v
   a <- a∧b -> b
     fst    snd
And we know that the
h can be identified as the product
f ∧ g.
We can transform any proof
a → b into
(a ∧ d) → (b ∧ d).
Also, we can always factorize the pair construction away
when it meets either one of the elimination forms:
fst (a ∧ b) = a.
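A Haskell sketch of these two facts; first is hand-rolled here (a version of it also lives in Data.Bifunctor) so the snippet stays Prelude-only:

```haskell
-- Transform a proof a -> b into (a ∧ d) -> (b ∧ d)
first :: (a -> b) -> (a, d) -> (b, d)
first f (a, d) = (f a, d)

-- Elimination factorizes a freshly built pair away:
--   fst (a, b) = a  and  snd (a, b) = b
example :: (Int, String)
example = first (+1) (3, "ctx")  -- (4, "ctx")
```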
Expressive type declarations
This also seems to extend to types, and we can represent simple algebraic type declarations in point-free style:
f ∧ g : A → B ∧ C
f : A → B
g : A → C
We get functors from these type declarations. Instead of passing a type into 'A' we may also pass a function.
There are likely some details I don't get right here yet.
There are universal mapping properties that can be represented as adjunctions. For example, the aforementioned product pattern is an adjunction between any category and a category constructed by "duplicating" it:
(C * C) <--> C
We have a trivial left functor that just produces
<a,a> for any object a.
It does that for morphisms likewise.
Now we can state that this trivial left functor produces an adjunction when you have a right functor that forms products, collapsing the duplicated category back to the category it duplicated:
dup : C → C * C
prod : C * C → C
Since the category is duplicated on the left side, we find out this construct defines three morphisms and relates them. Once we name them, we get the following rules going:
prod . dup : a → a ∧ a
counit : dup . prod --> Id_CxC, with components <fst, snd> : (a∧b * a∧b) → (a * b)
unit : Id_C --> prod . dup
And we can name
prod.dup as a pairing:
pair = prod . dup
So we get to identify what products are and how they behave when they appear. This gives a lot of behavior resembling typeclasses in Haskell.
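Concretely, the adjunction bijection says a pair of functions out of a common source is the same thing as one function into a pair. A Haskell sketch, with fromPair, toPair, and pair as names of my own choosing:

```haskell
-- One direction of the bijection C×C(dup a, (b,c)) ~= C(a, b ∧ c):
-- a morphism in C×C out of dup a is a pair of functions,
-- and it corresponds to one function into a pair.
fromPair :: (a -> b, a -> c) -> (a -> (b, c))
fromPair (f, g) = \a -> (f a, g a)

-- The other direction recovers the pair of functions.
toPair :: (a -> (b, c)) -> (a -> b, a -> c)
toPair h = (fst . h, snd . h)

-- unit : Id_C --> prod . dup
pair :: a -> (a, a)
pair a = (a, a)
```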
Likewise we can build upon this.
Take a category
C that has products,
then identify that there's a functor
that introduces some object alongside the item it transforms:
introduce c : a → a ∧ c
The functor producing exponentials is a right adjoint to this functor:
exp c : a → a^c
Now we can again identify new structures and name them:
exp c . introduce c : a → (a ∧ c)^c
curry c = exp c . introduce c
The counit is a bit harder to represent because it transforms the composite back into the identity functor, but we can name it 'eval':
eval : (a^c) ∧ c → a
And we again get the rule that
eval composed after
curry c rewrites to identity.
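A Haskell sketch of this adjunction; curry' mirrors the Prelude's curry, and eval is written out by hand:

```haskell
-- curry from the adjunction (− ∧ c) ⊣ (−)^c
curry' :: ((a, c) -> b) -> a -> (c -> b)
curry' f a = \c -> f (a, c)

-- eval is the counit: apply a function to an argument
eval :: (c -> b, c) -> b
eval (f, c) = f c

-- eval after currying gives back the original function:
--   eval (curry' f a, c) = f (a, c)
check :: Int
check = eval (curry' (\(a, c) -> a + c) 3, 4)  -- 7
```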
To get an example of how this behaves when we don't find an adjoint,
let's look for a left adjoint to the introduce functor:
G = introduce c : a → a ∧ c
F = ?? : a ∧ c → a
So we'd have to find a bijection like this:
C(F Y, X) ~= D(Y, (introduce c) X)
Substituting X = F Y, we'd have a way to transform any identity of
F Y into a morphism Y →
(F Y) ∧ c; substituting Y = X ∧ c,
F (X ∧ c) would transform into the identity of X ∧ c.
This seems like an impossible construct in intuitionistic logic,
as it doesn't have a way to take an arbitrary
Y
and then introduce the type
c along with it.
It might work in linear logic, with the product replaced
by the multiplicative product,
because there we cannot eliminate the
c until we've retrieved the type
X ⊗ c inside F.
I'm not sure what kind of universal property it'd correspond to.
Monads/Comonads from adjunctions
Adjunctions introduce potential ways to construct monads and comonads. The unit and counit of the adjunction become the respective operations in the monad and comonad being constructed.
counit : F.G --> Id_C
unit : Id_D --> G.F
There's a simple way to construct
join : G.F.G.F --> G.F
duplicate : F.G --> F.G.F.G
join = G counit F
duplicate = F unit G
That's another interesting property.
We know that duplication and product form an adjunction:
unit : Id_C --> prod.dup
counit : dup.prod --> Id_CxC
This produces an interesting monad.
The unit for the monad duplicates an object.
The join takes in
((a,a),(a,a)) and produces (a,a),
picking the first element of the first pair and the second element of the second.
So it'd be a monad that operates on pairs of the same type at once.
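A minimal Haskell sketch of this monad, with unitP and joinP as hypothetical names:

```haskell
-- The monad G.F from dup ⊣ prod sends a to a ∧ a.
type Pair a = (a, a)

unitP :: a -> Pair a
unitP a = (a, a)

-- join is prod applied to the counit <fst, snd>:
-- first element of the first pair, second element of the second.
joinP :: Pair (Pair a) -> Pair a
joinP ((w, _), (_, z)) = (w, z)
```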
The comonad forms in the product category, so I'm not sure it can be used directly.
The counit would be
(a∧b * a∧b) --> (a * b)
and duplicate would take
(a∧b * a∧b) to ((a∧b)∧(a∧b) * (a∧b)∧(a∧b)).
If you do the same with the exponential object,
it's a bit easier because the left adjoint is the product functor.
The produced monad is the state monad:
s → (a,s)
The comonad is
(s → a, s); it has eval as its counit, and its duplicate is
(s → a, s) → (s → (s → a, s), s).
I've read about this one but didn't outright recognize it.
It's a notion of a store that has some reading position or "point" associated to it.
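Both structures can be written out by hand in Haskell; this is a sketch with names of my own choosing, not the transformers/comonad library API:

```haskell
-- State monad from (− ∧ s) ⊣ (−)^s: the composite G.F sends a to s -> (a, s)
newtype State s a = State { runState :: s -> (a, s) }

returnState :: a -> State s a
returnState a = State (\s -> (a, s))

joinState :: State s (State s a) -> State s a
joinState (State f) = State (\s -> let (inner, s') = f s
                                   in runState inner s')

-- Store comonad from F.G: a pair of a lookup function and a "point"
data Store s a = Store (s -> a) s

extract :: Store s a -> a   -- this is eval as the counit
extract (Store f s) = f s

duplicate :: Store s a -> Store s (Store s a)
duplicate (Store f s) = Store (Store f) s
```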
The immediate use of category theory would seem to be a more concise way to define things and build interfaces that neatly fit and compose together in different ways.
Understanding monads through the Kleisli category would seem to help when using monads in Haskell.
Functor representations for type declarations would seem to be immediately useful as-is. This also raises an interesting question.
Most dependently typed languages use lambda-based type declarations.
Eg. Idris uses
(x:A) -> B.
What would be the correct point-free way to describe this type?
For a function
A → B, we have a profunctor.
Technically we could construct the following representation of a function type:
Function : a → (() → Type) → (a → Type) → Type
But now we can't use the environment variable when filling the argument type. This means that we can't translate all types into point-free forms.
I'll watch the rest of the lectures before I continue with my projects. This is probably going to take me a few weeks.