# Linear continuations and classical linear logic

I didn't understand Andrzej Filinski's "Linear Continuations" the first time I stumbled on it, but I finally understand most of that paper. The paper led me to examine how *-autonomous categories model classical linear logic.

First take a symmetric monoidal closed category.
It's a category that comes with a tensor product `(⊗)`, a unit object `I`, an exponential `(^)`, and the following natural transformations:

`insr a : a → a ⊗ I`

`delr a : a ⊗ I → a`

`exch a b : a ⊗ b → b ⊗ a`

`assl a b c : a ⊗ (b ⊗ c) → (a ⊗ b) ⊗ c`

`ap a b : (a ^ b) ⊗ b → a`

`cur a b : a → (a ⊗ b) ^ b`

And we may use them to introduce these:

`insl a : a → I ⊗ a`

`dell a : I ⊗ a → a`

`assr a b c : (a ⊗ b) ⊗ c → a ⊗ (b ⊗ c)`

With the following laws:

`dell a ∘ exch a I ∘ insr a = delr a ∘ insr a`

`assr b c a ∘ exch a (b ⊗ c) ∘ assr a b c = (b ⊗ exch a c) ∘ assr b a c ∘ (exch a b ⊗ c)`

`exch b a ∘ exch a b = a ⊗ b`

`ap (a ⊗ b) b ∘ (cur a b ⊗ b) = (a ⊗ b)`

`(ap a b ^ b) ∘ cur (a^b) b = a ^ b`
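As a sanity check (my sketch, not from the paper), these combinators can be written in plain Haskell, reading `(⊗)` as `(,)`, `I` as `()`, and `a ^ b` as `b -> a`. Haskell functions are cartesian rather than linear, so nothing enforces single use; this only checks the shapes.

```haskell
-- A non-linear sketch of the SMCC combinators in plain Haskell:
-- (,) plays ⊗, () plays I, and (b -> a) plays a ^ b.
insr :: a -> (a, ())
insr x = (x, ())

delr :: (a, ()) -> a
delr (x, ()) = x

exch :: (a, b) -> (b, a)
exch (x, y) = (y, x)

assl :: (a, (b, c)) -> ((a, b), c)
assl (x, (y, z)) = ((x, y), z)

ap :: (b -> a, b) -> a
ap (f, y) = f y

cur :: a -> (b -> (a, b))
cur x y = (x, y)
```

With these definitions the listed laws hold by simple equational reasoning.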

For a programmer, the interesting thing about this might be that it forms a small concatenative programming language. But it's fairly barren. Let's give it sums (coproducts) and bangs (a comonad of some sort?):

`inl a b : a → a + b`

`inr a b : b → a + b`

`alt a : a + a → a`

`dupl a : a → !a { cp a }`

`extr a : !a → a`

`copy a : !a → !a ⊗ !a`

`drop a : !a → I`
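In the same non-linear Haskell sketch (again my illustration, not the paper's): `Either` plays the sum, and since every Haskell value is freely copyable, `!a` degenerates to `a` and `copy`/`drop` come for free — the `cp` side condition is exactly what a linear language needs to recover them.

```haskell
-- Non-linear sketch: Either plays +; with unrestricted values,
-- !a degenerates to a, and copy/drop need no side condition.
inl :: a -> Either a b
inl = Left

inr :: b -> Either a b
inr = Right

alt :: Either a a -> a
alt = either id id

copy :: a -> (a, a)
copy x = (x, x)

drop' :: a -> ()
drop' _ = ()
```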

And here are the conditions for when duplication is accepted:

`cp I`

`∀x y. cp (x ⊗ y) ← cp x ∧ cp y`

`∀x y. cp (x + y) ← cp x ∧ cp y`

`∀x. cp !x`

`∀x. cp x** ← cp x` (explained later)

After these additions it's still quite barren, but at least you can now copy things and represent choices.

Next, if we add a dualizing object `I*`, this category becomes a *-autonomous category.

The dualizing object turns the natural transformation `a → I* ^ (I* ^ a)` into a natural isomorphism.

```
callcc a : I* ^ (I* ^ a) → a

callcc a
  ∘ (ap I* a ∘ exch a (I* ^ a)) ^ (I* ^ a)
  ∘ cur a (I* ^ a)
  = a

(ap I* a ∘ exch a (I* ^ a)) ^ (I* ^ a)
  ∘ cur a (I* ^ a)
  ∘ callcc a
  = I* ^ (I* ^ a)
```

Yup, it's call-with-current-continuation.
The notation for callcc can be shortened a bit by recognizing that `a* = I* ^ a`:

`callcc a : a** → a`
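In Haskell's `Cont` monad this is literally `callCC`, with `a*` read as "a consumer of `a`". A small sketch of the escape behaviour:

```haskell
import Control.Monad.Cont

-- callCC receives the current continuation; invoking it discards
-- the rest of the computation, matching callcc a : a** → a.
escape :: Int
escape = runCont (callCC (\k -> k 42 >> pure 0)) id
-- `pure 0` is never reached; escape evaluates to 42.
```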

Now we can flip every morphism upside down:

`(insr a)* : a* → (a ⊗ I)*`

`(delr a)* : (a ⊗ I)* → a*`

`(exch a b)* : (a ⊗ b)* → (b ⊗ a)*`

`(assl a b c)* : (a ⊗ (b ⊗ c))* → ((a ⊗ b) ⊗ c)*`

`(ap a b)* : ((a ^ b) ⊗ b)* → a*`

`(cur a b)* : a* → ((a ⊗ b) ^ b)*`

`(inl a b)* : (a + b)* → a*`

`(inr a b)* : (a + b)* → b*`

`(alt a)* : a* → (a + a)*`

`(dupl a)* : (!a)* → a* { cp a }`

`(extr a)* : a* → (!a)*`

`(copy a)* : (!a ⊗ !a)* → (!a)*`

`(drop a)* : I* → (!a)*`

`(callcc a)* : a* → a***`

This reveals classical linear logic. We can introduce the following "view" to make it more obvious:

`(a ⊗ b)* = a* ⅋ b*`

`(a ^ b)* = a* % b*`

`(a + b)* = a* & b*`

`(!a)* = ?(a*)`
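The view is just De Morgan duality. As an illustration (hypothetical `F`/`dual` names, not from the paper), pushing negation to the leaves can be computed on a small formula type:

```haskell
-- Hypothetical sketch: negation normal form via the view rules.
data F
  = One | Bot | Neg F
  | Tensor F F | Par F F
  | Plus F F | With F F
  | Ofc F | WhyNot F
  deriving (Show, Eq)

-- dual f computes f* with negations pushed inward, using a** = a.
dual :: F -> F
dual One          = Bot
dual Bot          = One
dual (Neg x)      = x
dual (Tensor x y) = Par    (dual x) (dual y)
dual (Par x y)    = Tensor (dual x) (dual y)
dual (Plus x y)   = With   (dual x) (dual y)
dual (With x y)   = Plus   (dual x) (dual y)
dual (Ofc x)      = WhyNot (dual x)
dual (WhyNot x)   = Ofc    (dual x)
```

Note that `dual` is an involution, which is the `a ≃ a**` isomorphism on the syntax.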

Looking at the inverted rules through this view reveals classical linear logic, though in a somewhat weird way, because there's inversion everywhere.

`(insr a)* : a* → a* ⅋ I*`

`(delr a)* : a* ⅋ I* → a*`

`(exch a b)* : a* ⅋ b* → b* ⅋ a*`

`(assl a b c)* : a* ⅋ (b* ⅋ c*) → (a* ⅋ b*) ⅋ c*`

`(ap a b)* : (a* % b*) ⅋ b* → a*`

`(cur a b)* : a* → (a* ⅋ b*) % b*`

`(inl a b)* : a* & b* → a*`

`(inr a b)* : a* & b* → b*`

`(alt a)* : a* → a* & a*`

`(dupl a)* : ?(a*) → a* { cp a }`

`(extr a)* : a* → ?(a*)`

`(copy a)* : ?(a*) ⅋ ?(a*) → ?(a*)`

`(drop a)* : I* → ?(a*)`

`(callcc a)* : a* → a***`

But it's still possible to obtain "clean" rules for additives:

`pair a = (alt a*)* ∘ (ap I* a ∘ exch a (I* ^ a)) ^ (I* ^ a) ∘ cur a (I* ^ a)`

`: a → a** & a**`

`fst a b = callcc a ∘ (inl a* b*)*`

`: a** & b** → a`

`snd a b = callcc b ∘ (inr a* b*)*`

`: a** & b** → b`
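The construction has a direct CPS reading (my sketch, with an answer type `r` and `a*` read as `a -> r`): `a & b` as `(a* + b*)*` is a consumer of a projection request.

```haskell
-- Sketch: a & b as (a* + b*)*, i.e. a consumer of a projection choice.
type Neg r a = a -> r
type With r a b = Either (Neg r a) (Neg r b) -> r

pairW :: a -> b -> With r a b
pairW x _ (Left ka)  = ka x
pairW _ y (Right kb) = kb y

fstW :: With r a b -> Neg r a -> r
fstW w ka = w (Left ka)

sndW :: With r a b -> Neg r b -> r
sndW w kb = w (Right kb)
```

Only one projection ever runs, which is the linear-logic reading of `&`: both components are offered, one is chosen.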

The weirdness can be explained through the earlier "view" rules.
We really only have certain connectives:
we're not able to represent `a & b`,
but we are able to represent `a** & b**`,
and that's isomorphic to it because `a ≃ a**`.

This could be hidden into the "view".

`(a* ⊗ b*)* = a ⅋ b`

`(a* ^ b*)* = a % b`

`(a* + b*)* = a & b`

`(!a*)* = ?a`

`(insr a*)* : a** → (a* ⊗ I)*`

`(delr a*)* : (a* ⊗ I)* → a**`

`(exch a* b*)* : a ⅋ b → b ⅋ a`

`(assl a* b* c*)* : a** ⅋ (b ⅋ c) → (a ⅋ b) ⅋ c**`

`(ap a* b*)* : ((a* ^ b*) ⊗ b*)* → a**`

`(cur a* b*)* : a** → ((a* ⊗ b*) ^ b*)*`

`(inl a* b*)* : a & b → a**`

`(inr a* b*)* : a & b → b**`

`(alt a*)* : a** → a & a`

`(dupl a*)* : ?a → a** { cp a* }`

`(extr a*)* : a** → ?a`

`(copy a*)* : ?a ⅋ ?a → ?a`

`(drop a*)* : I* → ?a`

`(callcc a*)* : a** → a****`

The structures are close to being symmetric.
We could apply `a ≃ a**` here and there to finish it.
Though there's still one oddly structured thing here.

I've understood that `a ^ b` is isomorphic to `a ⅋ b*`; if that's so, we should be able to transform `ap` and `cur` to reflect that.

So let's see if we can find these:

`cur_cl : a → (a ⊗ b) ⅋ b*`

`ap_cl : (a ⅋ b*) ⊗ b → a`
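Before the derivations, a CPS intuition (my sketch, with answer type `r` and `a*` read as `a -> r`): embedding an ordinary function into its classical closure form is the easy, effect-free direction, while extracting one back out is what needs `callcc`.

```haskell
-- Sketch: a ⅋ b* expands to (a* ⊗ b**)*, i.e. a consumer of a pair
-- of an a-consumer and a b-double-negation. A plain function b -> a
-- embeds without any control effects:
type Neg r a = a -> r

toClassical :: (b -> a) -> (Neg r a, Neg r (Neg r b)) -> r
toClassical f (ka, kkb) = kkb (\b -> ka (f b))
```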

First expand the structures:

`cur_cl : a → ((a ⊗ b)* ⊗ b**)*`

`ap_cl : (a* ⊗ b**)* ⊗ b → a`

The `ap_cl` is easy.
First take `a** → a` and construct the first function;
you get to fill a hole:

`ap_cl = callcc a ∘ (_ ^ a*) ∘ cur ((a* ⊗ b**)* ⊗ b) a*`

`_ : ((a* ⊗ b**)* ⊗ b) ⊗ a* → I*`

Next we can do the following steps:

`(a* ⊗ b**)* ⊗ (b ⊗ a*) {assr}`

`(a* ⊗ b**)* ⊗ (b** ⊗ a*) {opposite of callcc}`

`(a* ⊗ b**)* ⊗ (a* ⊗ b**) {exch}`

`I* {ap}`

This worked because we can hide the closure in the continuation. OK, how about going in the other direction?

`cur_cl : a → ((a ⊗ b)* ⊗ b**)*`

We can probably start with trying to use `cur`:

`a ⊗ ((a ⊗ b)* ⊗ b**) → I*`

Oh okay, this is easy again.

`((a ⊗ b)* ⊗ b**) ⊗ a {exch}`

`(a ⊗ b)* ⊗ (b** ⊗ a) {assr}`

`(a ⊗ b)* ⊗ (b ⊗ a) {callcc}`

`(a ⊗ b)* ⊗ (a ⊗ b) {exch}`

`I* {ap}`

Since we can construct these, the exponential object can be hidden inside the other connectives.

With classical linear logic we can work with types in "negation normal form". There's maybe no benefit, but I like the symmetry in the inference rules, and types that form functors are easier to recognize in classical linear logic.

The categorical model built here provides a ridiculously simple method to evaluate programs represented with these combinators.

Introduce a semantic layer with continuations.

```haskell
data Sem r
  = Nil
  | Box (Cont r (Sem r))
  | Cons (Cont r (Sem r)) (Cont r (Sem r))
  | Fun (Cont r (Sem r) -> Cont r (Sem r))
  | X (Sem r)
  | Y (Sem r)
```

Then describe the evaluator, as a step-through of each rule.

```haskell
class Eval r where
  delr :: Cont r (Sem r) -> Cont r (Sem r) -> Cont r (Sem r)

eval :: Eval r => NT -> Sem r -> Cont r (Sem r)
eval (Id _) x = pure x
eval Nop Nil = pure Nil
eval (Insr _) x = pure (Cons (pure x) (pure Nil))
eval (Delr _) (Cons x y) = delr x y
eval (Exch _ _) (Cons x y) = pure (Cons y x)
eval (Assl _ _ _) (Cons x w) = do
  ~(Cons y z) <- w            -- lazy pattern: Cont has no MonadFail
  pure (Cons (pure (Cons x y)) z)
eval (Split f g) (Cons x y) = pure (Cons (x >>= eval f) (y >>= eval g))
eval (Cur f) x = pure (Fun (\y -> eval f (Cons (pure x) y)))
eval (Ap _ _) (Cons x y) = do
  ~(Fun f) <- x
  f y
eval (Alt f g) (X x) = eval f x
eval (Alt f g) (Y x) = eval g x
eval (Inl _ _) x = pure (X x)
eval (Inr _ _) x = pure (Y x)
eval (Dupl f) x = pure (Box (eval f x))
eval (Extr _) (Box x) = x
eval (Copy _) x = pure (Cons (pure x) (pure x))
eval (Drop _) x = pure Nil
eval (Comp f g) x = eval g x >>= eval f
eval (CallCC _) (Fun f) = callCC (\g -> f (pure (Fun (\x -> x >>= g))))
```

That's the evaluator. Another great thing here is that these constructs are easy to typecheck.

Linear logic produces an awful lot of connectives, but at least typechecking is easy.

```haskell
data FT
  = Var Int
  | Unit
  | Counit        -- dualizing object
  | Tensor FT FT
  | Exp FT FT
  | Sum FT FT
  | Ofc FT
  deriving (Show, Eq)

par :: FT -> FT -> FT
par x y = Exp Counit (Tensor (Exp Counit x) (Exp Counit y))

prod :: FT -> FT -> FT
prod x y = Exp Counit (Sum (Exp Counit x) (Exp Counit y))

whynot :: FT -> FT
whynot x = Exp Counit (Ofc (Exp Counit x))
```

```haskell
data NT
  = Id Int        -- a → a
  | Nop           -- I → I
  | Insr FT       -- a → a ⊗ I
  | Delr FT       -- a ⊗ I → a
  | Exch FT FT    -- a ⊗ b → b ⊗ a
  | Assl FT FT FT -- a ⊗ (b ⊗ c) → (a ⊗ b) ⊗ c
  | Split NT NT   -- f ⊗ g
  | Cur NT        -- a → (a ⊗ b) ^ b
  | Ap FT FT      -- (a ^ b) ⊗ b → a
  | Alt NT NT     -- a + a → a
  | Inl FT FT     -- a → a + b
  | Inr FT FT     -- b → a + b
  | Dupl NT       -- a → !a where 'a' is compatible
  | Extr FT       -- !a → a
  | Copy FT       -- !a → !a ⊗ !a
  | Drop FT       -- !a → I
  | Comp NT NT    -- f ∘ g
  | CallCC FT     -- I* ^ (I* ^ a) → a
  deriving (Show, Eq)
```

```haskell
-- Returns (target, source) of the morphism, or Nothing if ill-typed.
is_valid :: NT -> Maybe (FT, FT)
is_valid (Id i) = pure (Var i, Var i)
is_valid Nop = pure (Unit, Unit)
is_valid (Insr x) = pure (Tensor x Unit, x)
is_valid (Delr x) = pure (x, Tensor x Unit)
is_valid (Exch x y) = pure (Tensor y x, Tensor x y)
is_valid (Assl x y z) = pure (Tensor (Tensor x y) z, Tensor x (Tensor y z))
is_valid (Split f g) = do
  (x',x) <- is_valid f
  (y',y) <- is_valid g
  pure (Tensor x' y', Tensor x y)
is_valid (Cur f) = do
  (x',x) <- is_valid f
  case x of
    Tensor y z -> pure (Exp x' z, y)
    _ -> Nothing
is_valid (Ap x y) = pure (x, Tensor (Exp x y) y)
is_valid (Alt f g) = do
  (x',x) <- is_valid f
  (y',y) <- is_valid g
  guard (x' == y')
  pure (x', Sum x y)
is_valid (Inl x y) = pure (Sum x y, x)
is_valid (Inr x y) = pure (Sum x y, y)
is_valid (Dupl f) = do
  (x',x) <- is_valid f
  guard (is_dupl x)
  pure (Ofc x', x)
is_valid (Extr x) = pure (x, Ofc x)
is_valid (Copy x) = do
  guard (is_dupl x)
  pure (Tensor x x, x)
is_valid (Drop x) = do
  guard (is_dupl x)
  pure (Unit, x)
is_valid (Comp f g) = do
  (x',x) <- is_valid f
  (y',y) <- is_valid g
  guard (x == y')
  pure (x', y)
is_valid (CallCC x) = pure (x, Exp Counit (Exp Counit x))
```

```haskell
is_dupl :: FT -> Bool
is_dupl Unit = True
is_dupl (Tensor x y) = is_dupl x && is_dupl y
is_dupl (Sum x y) = is_dupl x && is_dupl y
is_dupl (Ofc _) = True
is_dupl _ = False
```

The really interesting part of this evaluator is that it can "lift off the ground" already! The IO monad can be substituted for the dualizing object.

```haskell
instance Eval (IO ()) where
  delr x y = plug_io (forkIO (runCont y (const (pure ())))) >> x
```

The `delr` describes a point where a computation can "separate" from the main computation, so we treat it as a fork of sorts.
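The fork can be sketched in isolation (hypothetical `detach` helper; assumes the `transformers` package's `Cont`): the discarded half of the computation is simply run on its own thread.

```haskell
import Control.Concurrent
import Control.Monad.Trans.Cont (Cont, cont, runCont)

-- Run a continuation-passing computation on its own thread,
-- discarding its final answer — the shape of delr above.
detach :: Cont (IO ()) a -> IO ()
detach c = () <$ forkIO (runCont c (const (pure ())))
```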

Next we can represent an interface. For example, this interface produces a result and then expects a choice.

```haskell
request :: FT
request = Sum response response

response :: FT
response = prod Unit Unit

response_id = Cur (Ap Counit (Sum (Exp Counit Unit) (Exp Counit Unit)))
```

And we implement it with a console. This could be anything else, including a non-IO object.

```haskell
console_impl :: Cont (IO ()) (Sem (IO ()))
console_impl = do
  state <- plug_io getLine
  case state of
    "False" -> pure (X (Fun (>>= printout_impl)))
    "True"  -> pure (Y (Fun (>>= printout_impl)))
    _       -> error "no parse"
  where
    printout_impl :: Sem (IO ()) -> Cont (IO ()) (Sem (IO ()))
    printout_impl (X (Fun f)) = do
      plug_io (putStrLn "» False")
      f (pure Nil)
    printout_impl (Y (Fun f)) = do
      plug_io (putStrLn "» True")
      f (pure Nil)

plug_io :: IO a -> Cont (IO b) a
plug_io iofn = cont (iofn >>=)
```

Next we can write programs that communicate with this interface.

```haskell
-- 'choice' is a helper whose definition is not shown here.
test_program1 :: NT
test_program1 = Alt
  (choice Inl)
  (choice Inr)

test_program1_valid = fmap snd (is_valid test_program1) == Just request

test_program2 :: NT
test_program2 = Alt
  (choice Inr)
  (choice Inl)

test_program2_valid = fmap snd (is_valid test_program2) == Just request
```

The really interesting thing about this one: the program doesn't form a proper function, because the result doesn't come out of its arse, yet the types can still enforce that a well-typed program actually produces a result.

Here are some test runs. The `other_nop` structure is there to try to mess with the program.

```haskell
other_nop :: NT
other_nop = Delr Unit `Comp` Exch Unit Unit `Comp` Insr Unit

test1 = runCont (console_impl >>= eval test_program1) (const (pure ()))
test2 = runCont (console_impl >>= eval test_program2) (const (pure ()))
test3 = runCont (console_impl >>= eval test_program2 >>= eval other_nop) (const (pure ()))
```

Here we pass two "consoles" in at the same time! We rely on Haskell's concurrency library to handle it right; otherwise we could reject this program, because linear logic doesn't require that we be able to duplicate a console. Also, the `Delr` here is mandatory: the return type must be a unit.

```haskell
test4 = runCont
  (pure (Cons console_impl console_impl)
    >>= eval (Delr Unit `Comp` Split test_program1 test_program2))
  (const (pure ()))
```

Btw, `Cont (IO ()) a` can be replaced by `ContT () IO a`.
This allows interpreting this code in contexts other than `IO`.
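A minimal sketch of that substitution (hypothetical `plugIO'` name): under `ContT () IO`, the `plug_io` embedding is just `lift`.

```haskell
import Control.Monad.Trans.Cont (ContT, evalContT)
import Control.Monad.Trans.Class (lift)

-- plug_io generalizes to lift under ContT () IO.
plugIO' :: IO a -> ContT () IO a
plugIO' = lift

demo :: IO ()
demo = evalContT $ do
  x <- plugIO' (pure (21 :: Int))
  plugIO' (print (x * 2))   -- prints 42
```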

I've written about representing products/exponentials and other categorical structures as graphical primitives. E.g. implement a renderer as a monad that can be fed to an abstract machine; then the following composition forms a graphical primitive that remembers how it was constructed:

`arc3pt ∘ (pt - - - ⊗ pt - - -) ⊗ pt - - -`

`: (1 ⊗ 1) ⊗ 1 → Arc`

Linear types can capture crucial concepts in user interaction, so they make a nice language for representing user interfaces.

At this point I have to say this is amazing, but I didn't discover any of this. Why isn't most of it in widespread use already?

I think I'll get to implementing a few of these ideas, but first I have to get the foundations right. For that I have to return to cartesian closed categories for a moment.

CCC is an interesting category because functors form such a category. Examining that category likely reveals a good way to implement structural recursion.

I'm also starting to understand crucial details about normalisation, or "partial evaluation", of these structures. Expansion of functors and contraction of natural transformations seem to be the main tools to do it right.