Typechecking for Classical Linear Logic sequents

This post contains a type checker for classical linear logic sequents, written in Haskell.

Let's start by encoding the types.

```haskell
data Ty
    = Terminal | Initial | TensorUnit | ParUnit
    | Prod Ty Ty | Sum Ty Ty | Tensor Ty Ty | Par Ty Ty
    | OfCourse Ty | WhyNot Ty
    | Pos Int | Neg Int
    deriving (Show, Eq)
```

These types can be grouped into additives, multiplicatives, and exponentials. The main property of this type system is that every type has a De Morgan dual.

```haskell
dual :: Ty -> Ty
dual Terminal = Initial
dual Initial = Terminal
dual TensorUnit = ParUnit
dual ParUnit = TensorUnit
dual (Prod x y) = Sum (dual x) (dual y)
dual (Sum x y) = Prod (dual x) (dual y)
dual (Tensor x y) = Par (dual x) (dual y)
dual (Par x y) = Tensor (dual x) (dual y)
dual (OfCourse x) = WhyNot (dual x)
dual (WhyNot x) = OfCourse (dual x)
dual (Pos i) = Neg i
dual (Neg i) = Pos i
```

The duality extends to sequents as well: a sequent that proves some proposition is also a refutation of the dual proposition.
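As a quick sanity check, duality is an involution: `dual (dual t) == t`. A standalone sketch reproducing the definitions above (with the `Initial` and `ParUnit` cases made explicit so `dual` is total):

```haskell
-- Reproduces the post's Ty and dual to check the involution property.
data Ty
    = Terminal | Initial | TensorUnit | ParUnit
    | Prod Ty Ty | Sum Ty Ty | Tensor Ty Ty | Par Ty Ty
    | OfCourse Ty | WhyNot Ty
    | Pos Int | Neg Int
    deriving (Show, Eq)

dual :: Ty -> Ty
dual Terminal = Initial
dual Initial = Terminal
dual TensorUnit = ParUnit
dual ParUnit = TensorUnit
dual (Prod x y) = Sum (dual x) (dual y)
dual (Sum x y) = Prod (dual x) (dual y)
dual (Tensor x y) = Par (dual x) (dual y)
dual (Par x y) = Tensor (dual x) (dual y)
dual (OfCourse x) = WhyNot (dual x)
dual (WhyNot x) = OfCourse (dual x)
dual (Pos i) = Neg i
dual (Neg i) = Pos i

main :: IO ()
main = do
    let samples = [Tensor (Pos 0) (WhyNot (Neg 1)), Prod TensorUnit Terminal]
    print (all (\t -> dual (dual t) == t) samples)
```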

```haskell
polarity :: Ty -> Bool
polarity Initial      = True
polarity TensorUnit   = True
polarity (Prod _ _)   = True
polarity (Tensor _ _) = True
polarity (OfCourse _) = True
polarity (Pos _)      = True
polarity Terminal     = False
polarity ParUnit      = False
polarity (Sum _ _)    = False
polarity (Par _ _)    = False
polarity (WhyNot _)   = False
polarity (Neg _)      = False
```

Polarity isn't too important for typechecking, but it appears to be useful for evaluating terms.

Now we have to encode the CLL sequents as terms. It helps to notice that these sequents have a lot of redundant structure, which means that some of them can slide past each other. This allows dividing the inference steps into "positive" and "negative" ones.

```haskell
data Tm
    = Cut Ty Tm Tm
    | Pair Tm Tm
    | Opt Tm Tm
    | Exp Tm
    | Tt
    | Ax
    | Absurd Int
    | Sr [Tn] Tm
    deriving (Show, Eq)

data Tn
    = Drop
    | Dup Tn Tn
    | Inst Tn
    | Var Int
    | Ff
    | Inl Tn
    | Inr Tn
    | Split Tn Tn
    deriving (Show, Eq)
```

Positive terms introduce some structure or "control flow" whereas negative terms behave like signals.

I'm going to explain what these structures mean when introducing typechecking. For now, notice that even if you removed `Cut`, this structure would not represent a normal form, because some terms here can still slide past each other.

Type checking

Every `Tm` represents a sequent with a context. Therefore these structures have an arity.

```haskell
arity :: Tm -> Int
arity (Sr ctx _)  = length ctx
arity (Cut _ a b) = arity a + arity b - 2
arity (Pair a b)  = arity a + arity b - 1
arity (Opt a b)   = arity a
arity (Exp a)     = arity a
arity Tt = 1
arity Ax = 2
arity (Absurd n) = n
```

Arity is either fixed, or it is a sum over the composed sequents. In `Cut` two variables are eliminated, so the arity decreases by 2. In `Pair` two variables are combined, so the arity decreases by 1. The arity of `Opt` is measured from its left branch, and the result of this function is only correct if the term is valid.

Type checking takes a type for every variable in the context and decides whether a term forms a valid proof for the proposition that the type stands for.

```haskell
check :: [Ty] -> Tm -> Bool
check env (Sr ctx a)
    = case infer_env env ctx (init_ctx (arity a)) >>= full of
        Just env' -> check env' a
        Nothing   -> False
check env (Cut ty a b) = let
    (env1, env2) = splitAt (arity a-1) env
    in check (ty:env1) a && check (dual ty:env2) b
check (Tensor x y:env) (Pair a b) = let
    (env1, env2) = splitAt (arity a-1) env
    in check (x:env1) a && check (y:env2) b
check (Prod x y:env) (Opt a b)  = check (x:env) a && check (y:env) b
check (OfCourse ty:env) (Exp a) = all is_whynot env && check (ty:env) a
check [TensorUnit] Tt           = True
check [t,v]        Ax = polarity v && (dual t == v)
check env (Absurd n)  = any terminal_present env && (length env == n)
check _   _           = False

infer_env :: [Ty] -> [Tn] -> [Maybe Ty] -> Maybe [Maybe Ty]
infer_env []       []     ctx = Just ctx
infer_env (ty:tys) (x:xs) ctx = infer ty x ctx >>= infer_env tys xs
infer_env _        _      _   = Nothing

infer :: Ty -> Tn -> [Maybe Ty] -> Maybe [Maybe Ty]
infer ty            (Var n)     ctx = edit_ctx n ty ctx
infer (WhyNot _)    Drop        ctx = Just ctx
infer ty@(WhyNot _) (Dup x y)   ctx = infer ty x ctx >>= infer ty y
infer (WhyNot ty)   (Inst x)    ctx = infer ty x ctx
infer ParUnit       Ff          ctx = Just ctx
infer (Sum a _)     (Inl x)     ctx = infer a x ctx
infer (Sum _ b)     (Inr y)     ctx = infer b y ctx
infer (Par a b)     (Split x y) ctx = infer a x ctx >>= infer b y
infer _             _           _   = Nothing
```

I threw that all out at once since it ended up being fairly short. Now let's explain these structures.

• `Sr` restructures the context and constructs terms corresponding to negative types.
• `Cut` eliminates two variables that are dual to each other. The variables must appear in opposite contexts that are then merged.
• `Pair` constructs a tensor by pairing two variables.
• `Opt` constructs an option that's resolved by a suitable sum type.
• `Exp` constructs anything that can be duplicated and erased.
• `Tt` constructs a tensor unit.
• `Ax` eliminates two variables, but in the other direction.
• `Absurd` can be used when there's a structure in context that cannot be constructed.

`Drop`, `Dup`, and `Inst` provide counterparts for `Exp`. Likewise `Inl` and `Inr` counterpart `Opt`. `Ff` forms a par unit to counterpart the tensor unit, and `Split` is the counterpart of `Pair`.

The context-handling ensures that variables are used up exactly once.

```haskell
init_ctx :: Int -> [Maybe Ty]
init_ctx n = replicate n Nothing

edit_ctx :: Int -> Ty -> [Maybe Ty] -> Maybe [Maybe Ty]
edit_ctx 0 ty (Nothing:xs) = Just (Just ty:xs)
edit_ctx n ty (x:xs) = (x:) `fmap` edit_ctx (n-1) ty xs
edit_ctx _ _ _ = Nothing

full :: [Maybe a] -> Maybe [a]
full []          = Just []
full (Just x:xs) = (x:) `fmap` full xs
full _           = Nothing
```
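To see concretely how these helpers enforce linearity, here's a standalone sketch (using `Int` payloads in place of `Ty`): filling the same slot twice fails, and `full` rejects a context with an unused slot.

```haskell
init_ctx :: Int -> [Maybe a]
init_ctx n = replicate n Nothing

-- Fill slot n; fails if it is already occupied.
edit_ctx :: Int -> a -> [Maybe a] -> Maybe [Maybe a]
edit_ctx 0 ty (Nothing:xs) = Just (Just ty:xs)
edit_ctx n ty (x:xs) = (x:) `fmap` edit_ctx (n-1) ty xs
edit_ctx _ _ _ = Nothing

-- Succeeds only when every slot has been filled exactly once.
full :: [Maybe a] -> Maybe [a]
full []          = Just []
full (Just x:xs) = (x:) `fmap` full xs
full _           = Nothing

main :: IO ()
main = do
    let ctx = init_ctx 2 :: [Maybe Int]
    print (edit_ctx 0 10 ctx >>= edit_ctx 0 20)          -- slot 0 used twice
    print (edit_ctx 0 10 ctx >>= full)                   -- slot 1 never filled
    print (edit_ctx 0 10 ctx >>= edit_ctx 1 20 >>= full) -- every slot used once
```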

`Absurd` and `Exp` require that the context is constrained in a certain way.

```haskell
is_whynot :: Ty -> Bool
is_whynot (WhyNot _) = True
is_whynot _ = False

terminal_present :: Ty -> Bool
terminal_present Terminal = True
-- Both branches of a Prod must be provable from the same context,
-- so Terminal has to be reachable in both.
terminal_present (Prod a b) = terminal_present a && terminal_present b
terminal_present (Par a b) = terminal_present a || terminal_present b
terminal_present (WhyNot a) = terminal_present a
terminal_present _ = False
```

Now we should be able to typecheck some sequents. Let's try the SKI combinators.

```haskell
i_type = [Par (OfCourse (Neg 0)) (WhyNot (Pos 0))]
i_prog = [Split (Var 0) (Var 1)] `Sr` Exp ([Var 0, Inst (Var 1)] `Sr` Ax)

k_type = [Par (Par (OfCourse (Neg 0)) (WhyNot (Pos 1))) (WhyNot (Pos 0))]
k_prog = [Split (Split (Var 0) (Var 1)) (Var 2)] `Sr` Exp
    ([Var 0, Drop, Inst (Var 1)] `Sr` Ax)

s_type = [Par (Par (Par (OfCourse (Neg 2))
    (WhyNot (Pos 0)))
    (WhyNot (Tensor (Pos 1) (Neg 0))))
    (WhyNot (Tensor (Tensor (Pos 2) (Neg 1)) (Neg 0)))]
s_prog = [Split (Split (Split (Var 0) (Var 1)) (Var 2)) (Var 3)] `Sr` Exp
    ([Var 1, Dup (Inst (Var 3)) (Inst (Var 4)), Inst (Var 2), Inst (Var 0)] `Sr`
        Pair
            (Pair ([Var 1, Var 0] `Sr` Ax)
                  ([Var 1, Var 0, Var 2] `Sr` Pair
                      ([Var 1, Var 0] `Sr` Ax)
                      Ax))
            Ax)
```

Next it'd be time to consider how to normalise these structures.

Evaluation with MVars

There's a paper that implements interaction nets with Haskell's MVars.

The evaluation involves translating types into semantic values. We are building a kind of communication "protocol" that passes mutable variables around and labels them.

```haskell
data Sem = Nil | Cons (MVar Sem) (MVar Sem) | X (MVar Sem) | Y (MVar Sem)
```

We pick sequences of empty variables and treat them as the execution environment.

```haskell
blank_env :: Int -> IO [MVar Sem]
blank_env n = mapM id (replicate n newEmptyMVar)
```

Evaluation takes a term and interprets it in the given environment. Positive terms become "actors" that interface through mutable variables. They are activated through `eval`.

```haskell
eval :: Tm -> [MVar Sem] -> IO ()
```

`Ax` is like a schoolboy: what goes in one ear comes out the other.

```haskell
eval Ax [dst,src] = do
    forkIO (readMVar src >>= putMVar dst)
    pure ()
```
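The same forwarding trick works in isolation; a minimal standalone toy (with a plain `Int` payload instead of `Sem`):

```haskell
import Control.Concurrent

main :: IO ()
main = do
    src <- newEmptyMVar
    dst <- newEmptyMVar
    -- The Ax wiring: whatever arrives on src is forwarded to dst.
    _ <- forkIO (readMVar src >>= putMVar dst)
    putMVar src (42 :: Int)
    takeMVar dst >>= print
```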

`Tt` is also like a schoolboy, though in another way: it does nothing.

```haskell
eval Tt [src] = do
    forkIO (readMVar src >>= const (pure ()))
    pure ()
```

`Sr` sends negative terms as "messages".

```haskell
eval (Sr ctx a) env = do
    env' <- plug_ctx ctx env (init_box (arity a))
    eval a env'
```

Put two schoolboys ear-to-ear and you can route information around. That's what `Cut` is for.

```haskell
eval (Cut _ a b) env = do
    -- env1 gets the arity a - 1 variables of a; the cut variable k fills the rest.
    let (env1, env2) = splitAt (arity a - 1) env
    k <- newEmptyMVar
    eval a (k:env1)
    eval b (k:env2)
```

Pair splits a cake.

```haskell
eval (Pair a b) (p:env) = do
    let (env1, env2) = splitAt (arity a - 1) env
    forkIO $ do
        Cons p1 p2 <- readMVar p
        eval a (p1:env1)
        eval b (p2:env2)
    pure ()
```

Option selects a branch.

```haskell
eval (Opt a b) (p:env) = do
    forkIO $ do
        i <- readMVar p
        case i of
            X x -> eval a (x:env)
            Y y -> eval b (y:env)
    pure ()
```

Exponents are things that are cheap to copy around.

```haskell
eval (Exp a) (p:env) = do
    forkIO (exponent_impl p env)
    pure ()
    where
        n = arity a - 1
        exponent_impl q e = do
            i <- readMVar q
            case i of
                X x -> eval a (x:e)
                Cons x y -> do
                    -- each copy gets a fresh environment of the same shape as e
                    e1 <- blank_env n
                    e2 <- blank_env n
                    mapM_ id (zipWith putMVar e (zipWith Cons e1 e2))
                    exponent_impl x e1
                    exponent_impl y e2
                Nil -> mapM_ (\m -> putMVar m Nil) e
```

Finally we have terms that are never activated, because a valid environment cannot be built for them.

```haskell
eval (Absurd _) _ = undefined
```

Negative types manifest as the different kinds of messages that are sent around. `Drop` becomes `Nil`, and `Inst n` becomes `X n`. Likewise `Inl x` and `Inr y` become `X x` and `Y y`. `Split x y` can be thought of as a way to "open" a pair.

```haskell
plug_ctx :: [Tn] -> [MVar Sem] -> [MVar Sem] -> IO [MVar Sem]
plug_ctx []     []     box = pure box
plug_ctx (x:xs) (v:vs) box = plug x v box >>= plug_ctx xs vs

plug :: Tn -> MVar Sem -> [MVar Sem] -> IO [MVar Sem]
plug (Var n)   v box = pure (edit_box n v box)
plug Drop      v box = putMVar v Nil >> pure box
plug (Dup x y) v box = do
    a <- newEmptyMVar
    b <- newEmptyMVar
    putMVar v (Cons a b)
    plug x a box >>= plug y b
plug (Inst x)  v box = do
    a <- newEmptyMVar
    putMVar v (X a)
    plug x a box
plug Ff v box      = putMVar v Nil >> pure box
plug (Inl x) v box = do
    a <- newEmptyMVar
    putMVar v (X a)
    plug x a box
plug (Inr y) v box = do
    b <- newEmptyMVar
    putMVar v (Y b)
    plug y b box
plug (Split x y) v box = do
    a <- newEmptyMVar
    b <- newEmptyMVar
    putMVar v (Cons a b)
    plug x a box >>= plug y b
```
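For intuition about the message shapes, here's a standalone toy that plugs an `X`-labelled channel the way `Inst` does, and reads it back the way an actor would:

```haskell
import Control.Concurrent

data Sem = Nil | Cons (MVar Sem) (MVar Sem) | X (MVar Sem) | Y (MVar Sem)

main :: IO ()
main = do
    v <- newEmptyMVar
    a <- newEmptyMVar
    -- Producer side, like `plug (Inst x)`: label a fresh channel with X.
    putMVar v (X a)
    putMVar a Nil
    -- Consumer side, like an `eval` actor: read and dispatch on the label.
    msg <- readMVar v
    case msg of
        X chan -> do
            inner <- readMVar chan
            case inner of
                Nil -> putStrLn "got X wrapping Nil"
                _   -> putStrLn "got X wrapping something else"
        _ -> putStrLn "other message"
```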

When "plugging" values into a context, we construct a structure whose slots aren't filled yet; accessing a slot before it's filled would crash the program.

```haskell
init_box :: Int -> [a]
init_box n = replicate n undefined

edit_box :: Int -> a -> [a] -> [a]
edit_box 0 v (_:xs) = (v:xs)
edit_box n v (x:xs) = (x:edit_box (n-1) v xs)
edit_box _ v []     = [v]
```

So that's an evaluator.

I think it'd be possible to turn this evaluator into a normalizer, but it'd be fairly unappealing and would correspond to running the program with every variation of input it can react to.

Serving suggestion

Another thing I didn't do: I didn't try this out. But I think it could be a fun thing. It's like a functional programming environment, but you can give types to interactions.

For example, an echo program would be:

```
echo :: Stream `Par` ~Stream
```

You can model the communication by stacking up interactions. E.g. the `Stream` type would be:

```
Stream = Sum TensorUnit (Prod TensorUnit (TextBlock `Tensor` Stream))
```

To implement a `Stream`, you'd send a signal of whether you're closed or not, then prepare either to close or to continue producing results.

Note that such a notion of a stream could be entirely denotational. The type isn't saying which stream it is, and if you get something to implement the type, you can treat it as a stream.

I'd think you could implement the evaluator in JavaScript, building the "communication protocol" from promises.

Also, it's a fairly fun program representation, and you don't need to leave the traditional functional programming style behind; typed lambda calculus translates in several ways into classical linear logic.

You can download CLL.hs and try it out. It's embedded in this blogpost.

The type checker is about 130 lines.

Concluding remarks

Optimal reduction could be a nice algorithm for evaluating and normalizing classical linear logic sequents, but it has the problem of accumulating control operators (garbage nodes, croissants, and brackets) that eventually clog up the reduction. The algorithm likely removes something it should keep around during the reduction.

Building models like these may reveal what optimal reduction would need in order to collect its garbage nodes.