# Dependent types and pullbacks

Dependent types are conventionally presented as functions such as:

``X → Type``

For example, a value whose type depends on a boolean value might be represented as:

``f : 2 → Type``

A few weeks ago I figured it would be helpful if you could find a point in some model category that represents a dependent type. I thought it might even be necessary in order to model dependent types. In that post I suggested there are inverted representations for dependent types.

For booleans it's fairly trivial. The boolean type must be `1+1`. There's only one function that produces `1`, and that's `const`. To form the `+` you can start with the expansion and fill it in.

``alt ∘ (inl ∘ const + inr ∘ const) : X+Y → 2``
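
As a sketch of this construction, here's the indexing morphism in Python, with sums encoded as tagged pairs (the encoding is my own, not from the post):

```python
# Sums X+Y as tagged pairs; inl/inr are the injections.
def inl(x): return ("inl", x)
def inr(y): return ("inr", y)

# const : A → 1, collapsing any value to the unit value ().
def const(_): return ()

# alt f g : X+Y → Z does case analysis on the tag.
def alt(f, g):
    def go(v):
        tag, payload = v
        return f(payload) if tag == "inl" else g(payload)
    return go

# alt ∘ (inl ∘ const + inr ∘ const) : X+Y → 2, with 2 = 1+1.
# The index of a value is its tag, re-injected into 1+1.
index = alt(lambda x: inl(const(x)), lambda y: inr(const(y)))

assert index(inl(42)) == ("inl", ())
assert index(inr("hi")) == ("inr", ())
```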

If this conversion decomposes into pieces, and it is possible for the whole language, then we could represent any dependent type as some simple type together with a function that gives the index.

To keep the type the same, its index should stay the same between morphisms. This means there's an identity between two morphisms, and it's fairly easy to express in category theory: take some morphism `m`. We need some way to represent pairs of points and morphisms, so let's say `[f]a` represents a morphism `f` going off from the point `a`.

```
m : [f]a → [g]b
f = g ∘ m
```

This condition forms a slice category built over some point `c` where the point-morphisms converge. If you need some visualization, think of a pinboard.
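
A finite-set sketch of the slice condition `f = g ∘ m` (the sets and maps are made up for illustration):

```python
# Two points of the slice C/2: a set together with its index into 2.
a = [0, 1, 2]
b = ["x", "y"]
f = {0: True, 1: True, 2: False}   # index of each element of a
g = {"x": True, "y": False}        # index of each element of b

m = {0: "x", 1: "x", 2: "y"}       # candidate morphism m : a → b

# m is a slice morphism [f]a → [g]b precisely when f = g ∘ m.
assert all(f[p] == g[m[p]] for p in a)
```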

We could pin morphisms to booleans and write it down like this:

``C/2``

Switching between different slice categories would correspond to changing the antecedent in a sequent calculus system.

You may see it resembles the assignment axiom schema in Hoare logic. You can always "walk backwards" without breaking things.

Also, how far is this from being a groupoid? Inversion is missing, and we aren't using functors as morphisms.

Now let's introduce some ideas so that we get dependent products and sums.

## Fibered product

A pullback, or fibered product, resembles a product, but it forms a bifunctor on slice categories. I would guess it's the right adjoint to some functor with the signature `C/X → C/X * C/X`.

So we could write a pullback as:

``[f]X *Z* [g]Y``

It has natural transformations to form a pair and get projections. And there's a requirement that the following holds:

``f ∘ fst = g ∘ snd``
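
For finite sets, a hedged sketch of the fibered product and its commuting square:

```python
# Pullback (fibered product) of finite sets over a common base Z:
# pairs (x, y) whose images in Z agree.
def pullback(X, f, Y, g):
    return [(x, y) for x in X for y in Y if f(x) == g(y)]

X = [0, 1, 2, 3]
Y = ["a", "bb", "ccc"]
f = lambda x: x % 2          # f : X → Z with Z = {0, 1}
g = lambda s: len(s) % 2     # g : Y → Z

P = pullback(X, f, Y, g)
fst = lambda p: p[0]
snd = lambda p: p[1]

# The defining square commutes: f ∘ fst = g ∘ snd on the pullback.
assert all(f(fst(p)) == g(snd(p)) for p in P)
assert len(P) == 6
```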

This fibered product takes a similar role to cartesian product in cartesian closed categories.

Note that the slice is built in an ordinary category, which means you have a fine bifunctor; but recall that it's not pointing into the `C` category, and the lifting from `C` to the `C/X` category needs to match things up.

## Pullback functor

The pullback functor is a functor taking `[k]X` to `[snd]([k]X *Y* [f]Z)`. It's a functor from `C/Y` to `C/Z`, where `f : Z → Y` is any morphism.
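
In sets, base change along `f` gives a concrete sketch of this functor (the particular sets and maps are invented):

```python
# Pullback of [k]X along [f]Z over the common base Y.
def pullback(X, k, Z, f):
    return [(x, z) for x in X for z in Z if k(x) == f(z)]

# Pullback functor f* : C/Y → C/Z, taking [k]X to [snd]([k]X ×_Y [f]Z).
def pullback_functor(f, Z):
    def star(X, k):
        P = pullback(X, k, Z, f)
        snd = lambda p: p[1]      # the new indexing map, into Z
        return P, snd
    return star

# Example: a family X over Y = {0, 1}, reindexed along f : Z → Y.
X = ["p", "q", "r"]
k = {"p": 0, "q": 0, "r": 1}.get   # k : X → Y
Z = ["z1", "z2"]
f = {"z1": 0, "z2": 0}.get         # f : Z → Y

P, snd = pullback_functor(f, Z)(X, k)
# Every z in Z now carries a copy of the fiber of X over f(z).
assert sorted(P) == [("p", "z1"), ("p", "z2"), ("q", "z1"), ("q", "z2")]
```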

If you take the right and left adjoints of the pullback functor, you get the dependent products and sums.

For the dependent product you get the unit/counit:

```
unit : [k]X → ∏{[f]Z} [snd]([k]X *Y* [f]Z)
counit : [snd]((∏{[f]Z} [k]Y) *Y* [f]Z) → [k]Y
```

For the dependent sum you get:

```
unit : [k]X → [snd]((Σ{[f]Z} [k]X) *Y* [f]Z)
counit : (Σ{[f]Z} [snd]([k]X *Y* [f]Z)) → [k]X
```

Additionally, to form adjunctions these natural transformations must satisfy the usual triangle identities associated with adjoints.
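
Concretely in sets, a dependent type over an index set is a family of fibers, Σ is the disjoint union of the fibers, and ∏ is the set of sections. A small made-up example over the booleans:

```python
from itertools import product

# A family of fibers over the index set 2 = {True, False}.
family = {True: ["t1", "t2"], False: ["f1"]}

# Σ: the disjoint union of the fibers, tagged by index.
sigma = [(i, x) for i in family for x in family[i]]

# ∏: the sections, i.e. one choice of element per fiber.
pi = [dict(zip(family, choice))
      for choice in product(*(family[i] for i in family))]

assert len(sigma) == 2 + 1       # sum of fiber sizes
assert len(pi) == 2 * 1          # product of fiber sizes
assert all(s[i] in family[i] for s in pi for i in family)
```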

These definitions seem insane, but they're consistent. They live inside a slice category, so you may be wondering: what is the morphism of a dependent sum or product? It's the composition of `f` with the morphism inside the sum/product.

```
Σ{[f]Z} [g]X --> f ∘ g
∏{[f]Z} [g]X --> f ∘ g
```

So you may be wondering how these things represent products and sums. They seem nonsensical at first, but it turns out to be fairly simple. To say "for every boolean, P holds", you could write:

``∏{[const]2} [f]P``

You might think that the fibered product works as an equality type. It doesn't work for that use, because you can't use a function to represent that the same value appears on both sides of a product. You need the equalizer.

## Equalizers/Coequalizers

Equalizers are objects similar to products or fibered products, except that they pick out the morphisms that make two given morphisms equal. We could represent them like this:

``[f] = [g] X Y``

Instead of projections, there's an `eq` morphism that retrieves the equalizing morphism.
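
For finite sets, a minimal sketch of an equalizer, with `eq` as the inclusion:

```python
# Equalizer of two finite-set maps f, g : X → Y:
# the subset of X on which they agree.
def equalizer(X, f, g):
    return [x for x in X if f(x) == g(x)]

X = range(10)
f = lambda x: x % 3
g = lambda x: x % 6

E = equalizer(X, f, g)
eq = lambda e: e       # the inclusion E ↪ X

# f ∘ eq = g ∘ eq by construction.
assert all(f(eq(e)) == g(eq(e)) for e in E)
assert E == [0, 1, 2, 6, 7, 8]
```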

A theorem that some operator is commutative can now be expressed as a point:

``∏{[const](A*A)} [eq]([f] = [g] A A)``

There's also a dual notion called the coequalizer. Coequalizers are likely useful in their own right, but there's an immediate context where they become interesting.

The algorithm that searches for a coequalizer of a pair of morphisms is a unification algorithm. This is seen by treating substitutions as morphisms between different variable sets.

Consider unification of the following substitutions:

```
{x ↦ f(k)}
{x ↦ f(g(j))}
```

The result is a substitution that makes them equal, and any other substitution that makes them equal factors through it. Therefore it is:

``{k ↦ g(z), j ↦ z}``
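
Here's a minimal first-order unification in Python. It computes the most general unifier directly (binding `k` to `g(j)`) rather than the fresh-variable coequalizer form above; the term encoding is my own:

```python
# Terms: variables are strings, applications are tuples (head, *args).
def walk(t, s):
    """Follow variable bindings in substitution s."""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return a most general unifier of a and b, or None."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str):            # variable: bind it
        s[a] = b                      # (occurs check omitted for brevity)
        return s
    if isinstance(b, str):
        s[b] = a
        return s
    if a[0] == b[0] and len(a) == len(b):   # same head, same arity
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# Unify f(k) with f(g(j)): the mgu sends k to g(j); renaming both
# sides to a fresh z recovers the coequalizer substitution above.
mgu = unify(("f", "k"), ("f", ("g", "j")))
assert mgu == {"k": ("g", "j")}
```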

The resulting language forms a good basis for studying the inference rules of logic programming languages.

That's cool, but you may be wondering, would the equalizer work as the equality type? For it to be useful we should be able to eliminate options.

## Leibniz equality

There's a notion of what equality means in a formal system. It's stated as:

```
f = g,
iff given any h,
h ∘ f = h ∘ g
```

And this is also reflected by coequalizer/equalizer in how they behave. Now if you consider an equality type such as:

``[inl] = [inr] 2 1``

Take two morphisms `a`, `b`, such that they are not equal. Now you can apply the earlier rule.

``[alt [a] [b] ∘ inl] = [alt [a] [b] ∘ inr] 2 1``

And we get a proposition that unequal points are equal.

``[a] = [b] 2 1``

Now you can conclude that since the morphisms `a` `b` are not equal, the `inl/inr` are not equal either.

``[inl] = [inr] 2 1 → 0``
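
The discrimination step can be sketched in Python (encoding sums as tagged pairs again, my own convention): composing `alt a b` with the injections recovers `a` and `b`, so `inl = inr` would force `a = b`:

```python
def inl(x): return ("inl", x)
def inr(y): return ("inr", y)

# alt a b : X+Y → Z, case analysis on the tag.
def alt(a, b):
    def go(v):
        tag, payload = v
        return a(payload) if tag == "inl" else b(payload)
    return go

a = lambda _: "a"
b = lambda _: "b"
h = alt(a, b)

# h ∘ inl = a and h ∘ inr = b: since a ≠ b, inl ≠ inr.
assert h(inl(())) == "a"
assert h(inr(())) == "b"
assert h(inl(())) != h(inr(()))
```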

This suggests that the notions of equality and dependent typing introduced in this way could end up being useful.

I've thought that this same model would not work with linear logic but maybe it does.

The representation of logic as sequents has the problem that it relies on copying.

For example, if you needed to track file offsets:

``offset fd = offset fd + 4``

You can't quite do it this way, because the type signature for retrieving the offset likely looks like this:

``offset : File ⊸ Int ⊗ File``
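
A hedged sketch of how such a signature might behave in Python, with a use-once handle standing in for linearity (the class and field names are invented):

```python
# offset : File ⊸ Int ⊗ File — the handle is consumed and a fresh one
# returned alongside the offset, so no call site can reuse a stale handle.
class File:
    def __init__(self, fd, pos=0):
        self.fd, self.pos, self.spent = fd, pos, False

    def offset(self):
        if self.spent:
            raise RuntimeError("linear value used twice")
        self.spent = True                    # consume this handle
        return self.pos, File(self.fd, self.pos)

f0 = File(fd=3)
pos, f1 = f0.offset()      # Int ⊗ File: the offset plus a new handle
assert pos == 0

reused = False
try:
    f0.offset()            # violates linearity: f0 was already consumed
except RuntimeError:
    reused = True
assert reused
```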

Even if we treated `File` as an interactive element, it's still likely possible to treat it with the same rules of equality.

Here's a small puzzle. Say we wanted to state that some operation writes into the file. We can't state it's the same file, because otherwise we could remove all write operations from the program:

``f ∘ file = f ∘ write ∘ (file ⊗ item), for any 'f'``

Fortunately it'd seem simple to remove something from the slice. An alternative would be to use the slice `Int ⊗ 1`. Linear logic still has the terminal object, but it has behavior that resembles the initial object.

So how would you represent that some operation increments a file handle? Well, likely with the following slice:

``Int ⊗ (FileId ⊗ 1)``

The morphism in that slice would be:

```
  [(id⊗((id⊗const) ∘ ref)) ∘ offset]File
⊸ [(increment⊗((id⊗const) ∘ ref)) ∘ offset]File
```

Now this thing could do lots of things, but it wouldn't switch file handles and you'd know how it manipulates the offset.

## Conclusion

It seems that slice categories can be used to model dependent types and that interpretation would even work with linear types.