# Small typechecking fragment for concurrent linear expression types

I conclude this week with a small fragment of the concurrent expression language. Weekly Haskell picked up last week's post, so I thought this would be a treat for those readers.

The language resembles Haskell, but instead of intuitionistic logic it builds on linear logic. Abstraction and application are broken into even more primitive units: binding, structure, choice and substitution.

I'm implementing the type checking routine in Python because we will likely need some mutable constructs. Here's the Haskell description of the structures we're working with:

```haskell
data Formula = Var Integer Bool
             | Conj Formula Formula
             | Disj Formula Formula
             | T | F
             deriving (Show, Eq)
```

The conjunction and disjunction form the multiplicative fragment of classical linear logic. These operators are associative and commutative.

To implement the type checking, we rely on two common deep inference rules:

```
  true
────── ai↓
a ∨ ~a

(A ∨ C) ∧ B
─────────── s
(A ∧ B) ∨ C
```

We can apply these rules anywhere inside the formulas we have constructed. The idea is that you can construct a valid program for any formula that rewrites to `true` using these rules.
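To make the ai↓ axiom concrete, here is a tiny sketch in Python. The encoding of literals as `(index, negative)` pairs is my own, not part of the post's code: two literals rewrite to `true` exactly when they are duals.

```python
# Hypothetical sketch of the ai↓ axiom: a literal is an
# (index, negative) pair, and a ∨ ~a rewrites to true exactly
# when the two literals are duals of each other.
def dual(lit1, lit2):
    index1, neg1 = lit1
    index2, neg2 = lit2
    return index1 == index2 and neg1 != neg2

print(dual((0, False), (0, True)))   # a ∨ ~a: duals
print(dual((0, False), (1, True)))   # different variables: not duals
```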

Here's an overall example of how the type checking would progress.

```
x = a + b
y = x + c
```

These equations are rewritten into the type expression:

`a ∨ b ∨ c ∨ ~y ∨ (~a ∧ ~b ∧ x) ∨ (~x ∧ ~c ∧ y)`
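This rewriting step can be mechanized. Here is a hypothetical helper sketching it (`type_expression` is my own name, not part of the post's code): free inputs and the negated final output appear as bare literals, and each equation contributes a clause of its negated inputs conjoined with its output.

```python
# Hypothetical sketch: build the type expression for a list of
# equations of the form (output, [inputs]). Free inputs and the
# negated final output appear as bare literals; each equation
# contributes a clause ~in1 ∧ ~in2 ∧ out.
def type_expression(equations, final):
    defined = {out for out, _ in equations}
    inputs = {v for _, ins in equations for v in ins}
    free = sorted(inputs - defined)
    parts = free + ["~" + final]
    for out, ins in equations:
        parts.append("(" + " ∧ ".join("~" + v for v in ins) + " ∧ " + out + ")")
    return " ∨ ".join(parts)

print(type_expression([("x", ["a", "b"]), ("y", ["x", "c"])], "y"))
# → a ∨ b ∨ c ∨ ~y ∨ (~a ∧ ~b ∧ x) ∨ (~x ∧ ~c ∧ y)
```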

Using the rules above, you can solve this expression. There are six choices you can make, but they all end up producing essentially the same result.

```
((a ∨ ~a) ∧ (b ∨ ~b) ∧ x) ∨ (~x ∧ (c ∨ ~c) ∧ (y ∨ ~y))
──────────────────────────────────────────────────────
(true ∧ true ∧ x) ∨ (~x ∧ true ∧ true)
──────────────────────────────────────
x ∨ ~x
──────
true
```

There is some promise that valid programs are non-deadlocking; consider the following example:

```
x = a + y
y = x + c
```

```
a ∨ c ∨ (~a ∧ ~y ∧ x) ∨ (~x ∧ ~c ∧ y)
─────────────────────────────────────
((a ∨ ~a) ∧ ~y ∧ x) ∨ (~x ∧ (c ∨ ~c) ∧ y)
─────────────────────────────────────────
(true ∧ ~y ∧ x) ∨ (~x ∧ true ∧ y)
─────────────────────────────────
(~y ∧ x) ∨ (~x ∧ y)
─────────────────── 'x' choice
~y ∧ (x ∨ (~x ∧ y))
───────────────────
~y ∧ (x ∨ ~x) ∧ y
─────────────────
~y ∧ (true ∧ y)
───────────────
~y ∧ y

(~y ∧ x) ∨ (~x ∧ y)
─────────────────── 'y' choice
~x ∧ ((~y ∧ x) ∨ y)
───────────────────
~x ∧ (~y ∨ y) ∧ x
─────────────────
~x ∧ (true ∧ x)
───────────────
~x ∧ x
```

You can verify that the expression holds a loop: the computation of `x` would wait for the computation of `y` indefinitely, and vice versa.

It appears that we can break the solution into small, finite steps by following two rules:

- Make minimal, productive, but legal motions.
- Check whether each motion keeps the formula valid.

Brown matter would appear to fly when we apply the `s` rule such that a free variable in a compound expression moves into a conjunction with its counterpart. We can avoid this by ensuring that the free variable sets of `C` and `B` remain disjoint.

```
(A ∨ C) ∧ B
─────────── s
(A ∧ B) ∨ C

fv(C) ∩ fv(B) ≡ ∅
```

What is a productive motion? One where the moved part shares a variable with its destination:

`fv(A) ∩ fv(C) ≢ ∅`
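A minimal sketch of these two side conditions, with free-variable sets represented as plain Python sets (the function names are mine, not part of the post's code):

```python
# Hypothetical sketch of the two side conditions on the s rule,
# with free-variable sets represented as plain Python sets.
def legal_motion(fv_c, fv_b):
    # the switch keeps fv(C) and fv(B) disjoint
    return not (fv_c & fv_b)

def productive_motion(fv_a, fv_c):
    # moving C next to A only pays off when they share a variable
    return bool(fv_a & fv_c)

print(legal_motion({"c"}, {"x"}))       # disjoint: safe to switch
print(productive_motion({"a"}, {"a"}))  # shared variable: progress
```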

To implement the minimal motion, we have to solve the disjunction by repeatedly taking the smallest unit we can move, and then applying the productive motion rule to reinsert it into its place.

To take the smallest mobile unit, we can do the following:

```haskell
get_head :: Formula -> Formula -> (Formula, Formula)
get_head (Disj l m) r = get_head l (Disj m r)
get_head l r          = (l, r)
```
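For comparison, here is a sketch of `get_head` in Python, representing formulas as nested tuples `("∨", l, r)` (my own hypothetical encoding, not the classes used later in the post):

```python
# Sketch of get_head with formulas as nested tuples:
# ("∨", l, r) is a disjunction, anything else is a leaf.
# Peels off the leftmost leaf and rotates the rest to the right.
def get_head(l, r):
    while isinstance(l, tuple) and l[0] == "∨":
        _, a, b = l
        l, r = a, ("∨", b, r)
    return l, r

print(get_head(("∨", ("∨", "a", "b"), "c"), "rest"))
# → ('a', ('∨', 'b', ('∨', 'c', 'rest')))
```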

However, it may be simpler to just implement a function that simplifies formulas and then use it to implement the type checking. This has the benefit that if the typecheck fails, we get a result that we can study.

```haskell
typecheck :: Formula -> Bool
typecheck a = simplify a == T
```

```haskell
simplify :: Formula -> Formula
simplify (Var x p) = Var x p
simplify T = T
simplify F = F
simplify (Conj l r) = case (simplify l, simplify r) of
    (F, _) -> F
    (_, F) -> F
    (T, x) -> x
    (x, T) -> x
    (x, y) -> Conj x y
-- ai↓: a ∨ ~a rewrites to true.
simplify (Disj (Var x p) (Var y n))
    | x == y && p /= n = T
simplify (Disj (Disj l m) r) = simplify (Disj l (Disj m r))
simplify (Disj l (Disj m r))
    | intersect l m = simplify (Disj (simplify (Disj m l)) r)
    | intersect l r = simplify (Disj m (simplify (Disj r l)))
-- s, with the side condition that the moved part stays
-- disjoint from the untouched conjunct: fv(C) ∩ fv(B) ≡ ∅.
simplify (Disj l (Conj a b))
    | intersect l a && not (intersect l b) = simplify (Conj (Disj a l) b)
    | intersect l b && not (intersect l a) = simplify (Conj a (Disj b l))
simplify (Disj l r) = case (simplify l, simplify r) of
    (F, x) -> x
    (x, F) -> x
    (T, _) -> T
    (_, T) -> T
    (x, y) -> Disj x y
```

As long as we somehow ensure that we won't run into problems with the trivialization rule `true ∨ a = true`, this can likely work. By the way, here's how to intersect with bignum integer bitmaps:

```haskell
import Data.Bits (bit, (.&.), (.|.))

intersect :: Formula -> Formula -> Bool
intersect l r = (fv l .&. fv r) /= 0

-- Free variables as a bignum bitmap: bit i is set when variable i occurs.
fv :: Formula -> Integer
fv (Var x _)  = bit (fromIntegral x)
fv (Conj a b) = fv a .|. fv b
fv (Disj a b) = fv a .|. fv b
fv T = 0
fv F = 0
```

I haven't tested the above code yet. There are plenty of places where I could have messed it up.
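For comparison, the same bignum-bitmap idea can be sketched in Python (the helper names `fv_bits` and `bits_intersect` are mine); the point is that set union becomes bitwise OR and a nonempty intersection shows up as a nonzero bitwise AND:

```python
# Sketch: free-variable sets as integer bitmaps. Variable i maps
# to bit i; union is |, and a nonempty intersection shows up as a
# nonzero bitwise AND.
def fv_bits(indices):
    bits = 0
    for i in indices:
        bits |= 1 << i
    return bits

def bits_intersect(l, r):
    return (l & r) != 0

a = fv_bits([0, 1])   # {0, 1}
b = fv_bits([1, 2])   # {1, 2}
c = fv_bits([3])      # {3}
print(bits_intersect(a, b))  # share variable 1
print(bits_intersect(a, c))  # disjoint
```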

## What next?

I took some pieces from the symmetric interaction combinator projects Victor Maia has made recently and put them into a small Python script. Interaction combinators could turn out to be a useful method for evaluating concurrent expression programs.

Recursion may turn out to be a similar problem to what it is for Hindley-Milner type systems, maybe even a bit worse. For Haskell, fixed-point issues only mean that programs may fail to halt. For CLE programs they might allow deadlock as well.

Additive operators would bring another pair of disjunction and conjunction into the language. These primitives add the notions of choice and interaction into the mix.

We could implement boolean logic with the following truth tables:

```
or  = (F → F → F) | (T → F → T) | (F → T → T) | (T → T → T)
and = (F → F → F) | (T → F → F) | (F → T → F) | (T → T → T)
not = (F → T) | (T → F)
```

The types for these expressions are easy to construct, but things change if we then go and implement a ternary or:

`or(a → b → x), or(x → d → y)`

Last time when I picked the rules and tried this, I got a table of roughly the following shape:

```
(F → F → F → F) | (T → F → F → T) | (F → T → F → T) | (T → T → F → T) |
(F → F → T → T) | (T → F → T → T) | (F → T → T → T) | (T → T → T → T)
```
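This composed table can be brute-forced with a quick sketch, using plain Python dicts for the additive truth table (`ternary_or` is my name for the composition, not part of the language):

```python
# Sketch: checking the composed ternary-or table by brute force,
# with the additive or-table written as a Python dict.
OR = {(0, 0): 0, (1, 0): 1, (0, 1): 1, (1, 1): 1}

def ternary_or(a, b, d):
    x = OR[(a, b)]      # or(a → b → x)
    return OR[(x, d)]   # or(x → d → y)

rows = [(a, b, d, ternary_or(a, b, d))
        for d in (0, 1) for a in (0, 1) for b in (0, 1)]
for row in rows:
    print(row)
# every row with any input set yields 1, matching the table above
```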

While this seems nice and all that, it proposes that something gets evaluated way too early.

Finally, there are the exponentials of linear logic, which I haven't considered in any manner yet. If implemented, they would introduce duplication and co-duplication into the language.

## Python program

Now let's examine that Python program a bit closer.

```python
# -*- encoding: utf-8 -*-
def main():
    pa = Var(0, False)
    na = Var(0, True)
    pb = Var(1, False)
    nb = Var(1, True)
    pc = Var(2, False)
    nc = Var(2, True)
    px = Var(3, False)
    nx = Var(3, True)
    py = Var(4, False)
    ny = Var(4, True)

    expr1 = disj(disj(pa, disj(pb, disj(pc, ny))), disj(
        conj(na, conj(nb, px)),
        conj(nx, conj(nc, py))))
    print("check", expr1)
    print(simplify(expr1))

    expr2 = disj(disj(pa, pc), disj(
        conj(na, conj(ny, px)),
        conj(nx, conj(nc, py))))
    print("check", expr2)
    print(simplify(expr2))
```

The output of this program is:

```
check ((p0∨(p1∨(p2∨n4)))∨((n0∧(n1∧p3))∨(n3∧(n2∧p4))))
T
check ((p0∨p2)∨((n0∧(n4∧p3))∨(n3∧(n2∧p4))))
((n3∧p4)∨(n4∧p3))
```

The expressions correspond to the earlier ones we covered. The program correctly captures the cycle in the latter example.

Defining new structures in Python takes a lot more space than it does in Haskell.

```python
# Linear logic typechecking structures
class Var:
    def __init__(self, index, negative):
        self.index = index
        self.fv = {index} # 'fv' stands for the free-variable set.
        self.negative = negative

    def __repr__(self):
        if self.negative:
            return "n{}".format(self.index)
        else:
            return "p{}".format(self.index)

class Operator:
    def __init__(self, op, lhs, rhs):
        self.op = op # CONJ, DISJ
        self.lhs = lhs
        self.rhs = rhs
        self.fv = lhs.fv | rhs.fv

    def __repr__(self):
        return "({}{}{})".format(self.lhs, self.op, self.rhs)

class Constant:
    def __init__(self, name):
        self.name = name
        self.fv = set()

    def __repr__(self):
        return self.name

T = Constant("T")
F = Constant("F")
```

Additionally, I gave myself some helper functions to make things a bit easier. Defining new structures is single-handedly the most annoying thing to do in Python.

```python
def conj(*args):
    return Operator(CONJ, *args)

def disj(*args):
    return Operator(DISJ, *args)

CONJ = "∧"
DISJ = "∨"
```

The simplification function follows pretty much the same story as before. The only difference from the earlier version is that this program picks the variables first from the disjunction groups.

```python
def simplify(formula):
    if not isinstance(formula, Operator):
        return formula
    if formula.op == CONJ:
        lhs = simplify(formula.lhs)
        rhs = simplify(formula.rhs)
        if lhs is T: # Unit cases
            return rhs
        elif rhs is T:
            return lhs
        elif lhs is F or rhs is F: # Trivial cases
            return F
        else:
            return conj(lhs, rhs)
    assert formula.op == DISJ, "unhandled operator in {}".format(formula)
    stack = []
    ungroup(formula, stack)
    stack.sort(key=lambda a: isinstance(a, Var))
    while len(stack) > 1:
        lhs = stack.pop()
        for i in range(len(stack)):
            rhs = stack[i]
            if isinstance(rhs, Var) and lhs.index == rhs.index:
                if lhs.negative ^ rhs.negative:
                    return T
            if isinstance(rhs, Operator) and intersect(lhs, rhs):
                if intersect(rhs.lhs, lhs) and intersect(rhs.rhs, lhs):
                    continue
                elif intersect(rhs.lhs, lhs):
                    stack[i] = conj(disj(lhs, rhs.lhs), rhs.rhs)
                else:
                    stack[i] = conj(rhs.lhs, disj(lhs, rhs.rhs))
                break
        else:
            return roll(stack, lhs)
    return simplify(stack[0])
```

When simplification of the expression fails to reach a primitive unit, I found it neat to still simplify the sub-expressions, so that the result only contains the things that could not be worked with.

```python
def roll(stack, rhs):
    rhs = simplify(rhs)
    if rhs is T:
        return T
    while len(stack) > 0:
        top = simplify(stack.pop())
        if top is T:
            return T
        rhs = disj(top, rhs)
    return rhs
```

In the Haskell example I use rewriting rules, of a sort, to form a list from the disjunctive elements. In the Python code it makes more sense to produce an actual list of the elements and work on that.

```python
def ungroup(formula, group):
    if isinstance(formula, Operator) and formula.op == DISJ:
        ungroup(formula.lhs, group)
        ungroup(formula.rhs, group)
    else:
        group.append(formula)

def intersect(lhs, rhs):
    return len(lhs.fv & rhs.fv) != 0

if __name__ == "__main__":
    main()
```

The next week is probably dedicated to something else again. When I keep these notes, I remember fairly well where I was going and can resume whenever I'm motivated to look into it again.

I'm also listening to ideas from the readers of this post. CLE is an experiment in creating a normalization-as-computation language founded on linear logic. It's not a pet language or some perfectly thought-out plan unfolding. If you come up with ideas on how to solve something here, do show them to me.