Concatenative calculus corresponding to linear logic

Lambda calculus tends to be ubiquitous in teaching material because it is so simple that you can describe the rules in a tiny grammar:

e ::= x ↦ e | e e | x

This minimal language is expressive enough to qualify as a serious programming language in its own right. However, when practical toolkits are founded on lambda calculus, they tend to lose this simplicity and grow in complexity.
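As a rough illustration of how small that core is, here is the grammar as a Python AST with a naive environment-based evaluator. This is a minimal sketch of my own, just for illustration; nothing later depends on it.

from dataclasses import dataclass

@dataclass
class Var:                 # x
    name: str

@dataclass
class Lam:                 # x ↦ e
    param: str
    body: object

@dataclass
class App:                 # e e
    fn: object
    arg: object

def evaluate(e, env):
    if isinstance(e, Var):
        return env[e.name]
    if isinstance(e, Lam):
        return lambda v: evaluate(e.body, {**env, e.param: v})
    if isinstance(e, App):
        return evaluate(e.fn, env)(evaluate(e.arg, env))

print(evaluate(App(Lam("x", Var("x")), Var("y")), {"y": 42}))   # prints 42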

Rather than being a competence issue, there is a problem in simply typed lambda calculus that demands this expansion: simply typed lambda calculus encodes only propositional logic.

This has been solved in multiple ways, and for intuitionistic predicate logic there seem to be ways to keep the system simple. When linear logic is encoded in lambda calculus, however, the result is no longer a simple language at all.

It's also not a simple thing to fix by switching calculi. I've tried several designs now and they repeat the same issue, growing in complexity as soon as the linear logic fragment is extended beyond the multiplicative fragment.

Contemplating this tricky problem, I remembered a paper: "An Algorithmic Interpretation of a Deep Inference System" by Kai Brünnler and Richard McKinley.

Grammar

The grammar for the whole language is described as:

t ::= tcon | x | t t | ∀x:t.t | ∃x:t.t | x[y/] | x[!y/]
e ::= e . e | tcon | constant | e e | ∀x:t.e | ∃x:t.e
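A minimal sketch, assuming a Python implementation, of how this grammar could be carried around as data. All of the names here are my own guesses, not part of the calculus itself.

from dataclasses import dataclass
from typing import Union

# Types: t ::= tcon | x | t t | ∀x:t.t | ∃x:t.t | x[y/] | x[!y/]
@dataclass
class TCon:    name: str                             # type constructor
@dataclass
class TVar:    name: str                             # x
@dataclass
class TApp:    fn: "Ty"; arg: "Ty"                   # t t
@dataclass
class TForall: var: str; dom: "Ty"; body: "Ty"       # ∀x:t.t
@dataclass
class TExists: var: str; dom: "Ty"; body: "Ty"       # ∃x:t.t
@dataclass
class TSubst:  var: str; value: str; inverse: bool   # x[y/] and x[!y/]

Ty = Union[TCon, TVar, TApp, TForall, TExists, TSubst]

# Expressions: e ::= e . e | tcon | constant | e e | ∀x:t.e | ∃x:t.e
@dataclass
class Compose:  left: "Ex"; right: "Ex"              # e . e
@dataclass
class Con:      name: str                            # a type constructor used as a proof
@dataclass
class Constant: name: str                            # a named inference rule such as ax
@dataclass
class EApp:     fn: "Ex"; arg: "Ex"                  # e e
@dataclass
class EForall:  var: str; dom: Ty; body: "Ex"        # ∀x:t.e
@dataclass
class EExists:  var: str; dom: Ty; body: "Ex"        # ∃x:t.e

Ex = Union[Compose, Con, Constant, EApp, EForall, EExists]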

With linear logic, we provide the following type constructors:

A ⊗ A
A ⊕ A
A & A
A ⅋ A
unit
bottom
!A
?A
~A

The type constructors double as rules to build proofs. For example, if we have a ⊗ b as an expression, we treat it as the proof built by applying the function f ↦ g ↦ (x,y) ↦ (f x, g y), of type (a → b) → (c → d) → a ⊗ c → b ⊗ d, to the proofs a and b.
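In a functional host language this is just the familiar "map over both sides of a pair" combinator. A tiny Python sketch of the idea, with a Python pair standing in for ⊗:

def tensor(f, g):
    # (a → b) → (c → d) → a ⊗ c → b ⊗ d, with a pair standing in for ⊗
    return lambda pair: (f(pair[0]), g(pair[1]))

inc_and_show = tensor(lambda x: x + 1, str)
print(inc_and_show((41, 10)))   # prints (42, '10')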

Additionally we have 38 constants that declare the inference rules present:

id : a → a
fapp y  : (∀x:A. p) → p[y/x]
fvan    : p[*/x] → (∀x:A. p*)             (p*: x does not occur in p)
eproj y : p[y/x] → (∃x:A. p)
evan    : (∃x:A. p*) → p[*/x]             (p*: x does not occur in p)
sll : (a⅋c)⊗b → (a⊗b)⅋c
srl : a⊗(b⅋c) → (a⊗b)⅋c
slr : (c⅋a)⊗b → c⅋(a⊗b)
srr : a⊗(c⅋b) → c⅋(a⊗b)
cut : a⊗~a → ~1
ax  : 1 → a⅋~a
hcl : x&y → x
hcr : x&y → y
hil : (a⅋c) & (b⅋c) → (a&b) ⅋ c
hir : (c⅋a) & (c⅋b) → c ⅋ (a&b)
hw  : x → x&x
dcl : x → x⊕y
dcr : y → x⊕y
dil : (a⊕b) ⊗ c → a⊗c ⊕ b⊗c
dir : c ⊗ (a⊕b) → c⊗a ⊕ c⊗b
dw  : x⊕x → x
qdup  : ?a ⅋ ?a → ?a
qdrop : ~1 → ?a
qins  : a → ?a
ql : !a ⊗ ?b → ?(!a ⊗ b)
qr : ?a ⊗ !b → ?(a ⊗ !b)
edup  : !a → !a ⊗ !a
edrop : !a → 1
eins  : !a → a
el : !(?a ⅋ b) → ?a ⅋ !b
er : !(a ⅋ ?b) → !a ⅋ ?b
com : a ∘ b → b ∘ a
asl : (a ∘ b) ∘ c → a ∘ (b ∘ c)
asr : a ∘ (b ∘ c) → (a ∘ b) ∘ c
z  : 1 → 0
za : 0 → 0 ⅋ c
x  : ~0 → ~1
xa : ~0⊗c → ~0

Update: If you are not certain about some of these rules, most of them can be thrown into llprover for a check.

This ought to provide the inference rules for the full fragment of linear logic. The set is not minimal, in the sense that we could likely implement many of the rules in terms of the others. Most rules have a dual that is also named.

The free variables in inference rules may be substituted out. For example, ax nat would produce 1 → nat ⅋ ~nat.
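A rough sketch of what that instantiation amounts to, using a made-up nested-tuple representation of signatures (my own, purely illustrative):

# ax : 1 → a ⅋ ~a, with free type variables represented as plain strings
AX = ("arrow", ("unit",), ("par", "a", ("neg", "a")))

def instantiate(ty, subst):
    # Replace free type variables according to subst, leaving the rest alone.
    if isinstance(ty, str):
        return subst.get(ty, ty)
    return (ty[0],) + tuple(instantiate(t, subst) for t in ty[1:])

print(instantiate(AX, {"a": "nat"}))
# ('arrow', ('unit',), ('par', 'nat', ('neg', 'nat')))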

For type signatures there's a canonicalization pass that ensures they're in proper form. It applies substitution rules downwards in the tree, eliminates unit expressions with rules such as 1 ⊗ a ≡ a, and eliminates negation using the following rules:

~(A ⊗ B) ≡ ~A ⅋ ~B
~(A & B) ≡ ~A ⊕ ~B
unit ≡ ~unit
bottom ≡ ~bottom
~!A ≡ ?~A
~∀x:A.p ≡ ∃x:A.~p
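A small sketch, in the same made-up tuple representation as above, of how negation gets pushed down to the atoms during canonicalization (units and quantifier domains left out for brevity):

def dual(ty):
    # Push ~ inward until it only rests on atoms.
    if isinstance(ty, str):
        return ("neg", ty)                              # stop at an atom
    tag = ty[0]
    if tag == "neg":    return ty[1]                    # ~~A ≡ A
    if tag == "tensor": return ("par",    dual(ty[1]), dual(ty[2]))
    if tag == "par":    return ("tensor", dual(ty[1]), dual(ty[2]))
    if tag == "with":   return ("plus",   dual(ty[1]), dual(ty[2]))
    if tag == "plus":   return ("with",   dual(ty[1]), dual(ty[2]))
    if tag == "bang":   return ("quest",  dual(ty[1]))
    if tag == "quest":  return ("bang",   dual(ty[1]))
    if tag == "forall": return ("exists", ty[1], ty[2], dual(ty[3]))
    if tag == "exists": return ("forall", ty[1], ty[2], dual(ty[3]))
    raise ValueError(ty)

print(dual(("tensor", "a", ("bang", "b"))))
# ('par', ('neg', 'a'), ('quest', ('neg', 'b')))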

Type checking is a combined top-down and bottom-up procedure that instantiates the rules and applies unification and constraint solving to produce a type signature for the expression.

During the top-down traversal the expression is normalized and the quantifiers are type checked. The bottom-up pass solves the constraints and produces composed types for expressions.

To solve the quantifiers there is explicit syntax for substitution. The substitutions are treated as constraints and eliminated during unification. It should be fairly straightforward if we use the following rules to do it:

p[x/] = q[y/] ├ p=q, x=y
p[/] = q[y/]  ├ q = (p[/])[!y/]
t = p[x/]     ├ p = t[!x/]
t[!x/] = q    ├ t = q[x/]

fvan and evan require occurrence constraints. The occurrence constraints require additional checks and a few additional propagation rules during substitution. These "quantifier vanishing" rules allow quantifiers to be proved through assumptions.

Evaluation of programs is done through a bunch of rewriting rules. The simplest rewriting rule is the composition of two type constructors:

a ∘ b . c ∘ d ≡ (a.c) ∘ (b.d)

Some more fairly simple, trivial rules:

x . id . y ≡ x . y
com . com ≡ id
asl . asr ≡ id
fvan . fapp x ≡ id
a & b . hcl ≡ a
a & b . hcr ≡ b
hw . hcl ≡ id

ax ⊗ a . slr . b ⅋ cut ≡ a . b
edup . a ⊗ edrop ≡ a
ax . com ≡ ax
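To give a feel for how evaluation could go, here is a deliberately simplified Python sketch: a program flattened into a list of named steps, with a handful of the peephole rules above applied until nothing matches. The real system would have to match on the tree structure of ⊗, ⅋ and friends; this only conveys the rewriting idea.

RULES = [
    (["id"],         []),   # x . id . y ≡ x . y
    (["com", "com"], []),   # com . com ≡ id
    (["asl", "asr"], []),   # asl . asr ≡ id
    (["hw", "hcl"],  []),   # hw . hcl ≡ id
]

def rewrite(program):
    # Repeatedly replace the first matching left-hand side until a fixpoint.
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            for i in range(len(program) - len(lhs) + 1):
                if program[i:i + len(lhs)] == lhs:
                    program = program[:i] + rhs + program[i + len(lhs):]
                    changed = True
                    break
            if changed:
                break
    return program

print(rewrite(["ax", "com", "com", "id", "hw", "hcl"]))   # ['ax']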

I have a concern here when it comes to the axiom and cut rules. Given enough structure, I am not at all certain that every cut can be eliminated through rewriting alone.

There is some indication that things work out well, though. Rules of the following kind can be applied:

ax ⊗ a . srl ≡ a . (ax ⊗ id) ⅋ id

Likewise, most objects seem to have a clear direction of travel that eventually reduces them down to id. This suggests that some rules behave like structure, whereas other rules follow some common strategy for traveling through that structure.

This whole system could be implemented in perhaps a few thousand lines of code. But is it any good?

We should be able to run computations and prove theorems with it. Let's do a few dry runs and write code that we cannot yet execute. After all, we know how to type check it on paper, and that's enough to try it!

Example: SKI-combinators

For a light start, let's define the SKI combinators with this calculus. They are these three:

I = x ↦ x
K = x ↦ y ↦  x
S = x ↦ y ↦ z ↦ xz(yz)
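Just to keep the target behaviour in mind, this is what the three combinators do operationally, written as plain Python closures. This is my own illustration, not the linear encoding that follows.

I = lambda x: x
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

print(S(K)(K)(42))   # S K K behaves like I, so this prints 42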

We can start by defining calls. They're of the form:

app :: (?~a ⅋ b) ⊗ !a → b

The program satisfying this definition is:

app ≡ com ⊗ id . slr . id ⅋ cut

The I can just be encoded with an axiom.

I :: unit → ?~a ⅋ a
I ≡ ax . id ⅋ eins

K can be encoded with axiom and weakening:

K :: unit → ?~a ⅋ (?~b ⅋ a)
K ≡ ax . id ⅋ (qdrop ⅋ id)

S can be encoded with axiom and contraction:

S :: unit → ?(!a ⊗ (!b ⊗ ~c)) ⅋ (?(!a ⊗ ~b) ⅋ (?~a ⅋ c))
S ≡ ax
  . ((id ⊗ ax . srl) ⅋ qdup)
  . asr
  . asl ⅋ com
  . id ⊗ (id ⊗ ax . srr)
  . srl
  . asl
  . (id ⊗ ql . ql) ⅋ (ql ⅋ id)

Anyway, now we can construct SKI expressions. To make it a bit easier, we can provide an operator for putting applications together:

a @ b ≡ a ⊗ !b . app
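For example, assuming @ associates to the left, the classic combination S K K would be written (S @ K) @ K, and it ought to come out with the same unit → ?~a ⅋ a signature as I.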

Example: Addition

To prove theorems, we'd start from some basic things.

nat ≃ (1 ⊕ nat) 
zero : 1 → nat
succ : nat → nat
match nat : nat → (1 ⊕ nat)

I am not certain how this would be defined cleanly in a working implementation. Despite that, I know that we'd end up with these rules because they are fairly well explained in existing proof assistants.

Additionally we need the induction rule for the newly defined natural numbers:

nat_ind : ∀P:(nat → type).
  (P zero) ⊗ !(∀n:nat. ?~(P n) ⅋ P (succ n)) → ∀n:nat. P n

It ensures we can form recursive clauses as long as the rules of induction are obeyed.
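Behaviourally, nat_ind is just the familiar recursor over the naturals. Here is a sketch of that behaviour alone, in Python over machine integers, which of course says nothing about the linear typing:

def nat_ind(base, step):
    # base : P(zero), step : n, P(n) -> P(succ n); returns a function n -> P(n)
    def go(n):
        acc = base
        for i in range(n):
            acc = step(i, acc)
        return acc
    return go

double = nat_ind(0, lambda n, acc: acc + 2)
print(double(5))   # prints 10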

Recall that addition on Peano numerals would be:

a + zero ≡ a
a + succ b ≡ succ (a+b)
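As a behavioural sketch in Python again (a Peano-style encoding of my own), the two equations read directly as a recursive definition, which should also be the shape that the nat_ind encoding further below builds, with succ as the step:

zero = None
def succ(n): return ("succ", n)

def add(a, b):
    if b is None:                 # a + zero ≡ a
        return a
    return succ(add(a, b[1]))     # a + succ b ≡ succ (a + b)

print(add(succ(succ(zero)), succ(zero)))
# ('succ', ('succ', ('succ', None)))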

To encode it with nat_ind and this calculus, let's see...

add : nat ⊗ nat → nat
add ≡ (match nat ⊗ id) . dil . (id ⊕ _). dw

Here's a sketch, but we have nothing to place into the hole. The hole is of the form nat ⊗ nat → nat, so we could place add itself there. We could allow such recursion if we only wanted a programming language, but without obtaining a theorem prover the effort would be wasted.

Recall that in lambda calculus there is the fixed point combinator, which avoids the need to introduce recursive clauses yet still breaks the language as a logic. The type signature for the fixed point combinator was:

(a → a) → a

If we had a fixed-point combinator in our calculus, it would produce a construct fitting into ?~a ⅋ a. Here's what it would be for add:

1 → ?((nat ⅋ nat) ⊗ ~nat) ⅋ ((~nat ⊗ ~nat) ⅋ nat)

To obtain this, you convert add's signature by replacing every a → b with ?~a ⅋ b and then normalizing.

Let's try to apply induction by replacing P n with nat. I suggest we allow opening a quantifier with application, like this: (∀x:nat. nat) zero ≡ nat. Then we could do the following:

nat_ind (∀x:nat. nat) : nat ⊗ !(∀n:nat. ?~nat ⅋ nat) → ∀n:nat. nat

We still end up with a problem, though. This expression requires that we pass the first natural number as an argument, but then pass the other natural number from the "upwards" direction with the fapp x rule.

However, that is quite impossible. We probably need another rule:

fdep : (∀x:a. b) ⊗ a → b

I left this one out at first because I was not certain it would be important for getting this to work. The inference rules now number 39.

Well, now that we have it, we can do this:

add ≡ (id ⊗ !(ax . qins ⅋ succ) . nat_ind (∀x:nat. nat)) ⊗ id . fdep

I think it typechecks and does exactly what was intended.

Example: Commutativity of addition

Now we'd probably like to prove that our concept of addition is commutative. But there's a problem.

Our addition is defined as nat ⊗ nat → nat. What we would want to prove is this:

same (nat ⊗ nat → nat) (add) (com . add)

At this point a wheel popped off my mental unicycle and a few more spokes came loose. I had to stop for a moment. I don't think I'll do this entirely within this post.

Image of a unicycle

First of all... I hadn't prepared for needing to insert a type such as (nat ⊗ nat → nat) so that it is explicitly plugged into an expression.

Let's cheat a bit and look at type theory and existing work.

The induction principle states that to prove P(a) for all natural numbers 'a', we only have to prove: P(0) and ∀a:nat. P(a) → P(succ(a)).

I've examined how to do it there before, so I know that to prove a+b ≡ b+a I would have to prove a+0 = 0+a and (a+b ≡ b+a) → (a+succ(b) ≡ succ(b)+a).

Proving succ(b)+a = succ(b+a) and a = 0+a required using the induction rule again.

Could we prove the simpler one, a = 0+a? Would it be this?

same (nat → nat) (id) (zero ⊗ id . add)

Looks about right.
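For comparison, this is the lemma that a conventional type-theory prover would dispatch with the standard induction principle. A sketch in Lean 4, which has nothing to do with our calculus but shows the shape of the argument:

-- prove 0 + a = a by induction on a
theorem zero_add_example (a : Nat) : 0 + a = a := by
  induction a with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]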

So, back in our calculus, precisely what we'd like to prove is...

same (nat → nat)
    (id)
    (zero ⊗ id
    . (id ⊗ !(ax . qins ⅋ succ) . nat_ind (∀x:nat. nat)) ⊗ id . fdep)

Reducing it a bit results in:

same (nat → nat)
    (id)
    ((zero ⊗ !(ax . qins ⅋ succ) . nat_ind (∀x:nat. nat)) ⊗ id . fdep)

In the intuitionistic world we'd reduce this down to proving 0 = 0+0 and (n = 0+n) → (succ n = 0+succ n). In our calculus they would look like this:

same (1 → nat) zero (zero ⊗ zero . add) 

(same (nat → nat) id (zero ⊗ id . add))
→ (same (nat → nat) succ (zero ⊗ succ . add))

The first one reduces trivially into something that we can take as an axiom. We could invent a construct such as refl : 1 → same a b b to handle this. At this point we can't do the type checking without reducing the expressions inside the same clause, though. Also, our type-checking algorithm would have to work through proof expressions!

To do the second one... we have to... well, I guess I could apply succ to the assumption expression to obtain the result. You'd do something like ∀a:(nat → nat).(succ . a). And it would need a bit of extension so that it can be handled, again...

This stuff should be familiar if you've used an interactive theorem prover before, or read some type theory.

Note that the reduction may not always produce the same "normal form". This is something to keep in mind because the type checker would have to check the equivalence between two program structures. It won't succeed on every run if the structure ends up in a different form depending on the details of how it was evaluated.

But maybe things don't need to succeed like that? Maybe we could search among the reduction paths? I'm not certain about that; it might take a long time to try every branch and every obtainable normal form.

Example: Hanoi discs (ended up being just an attempt)

Before I tried these simple things with addition, I tried to formulate the Hanoi puzzle. Now that I've tried the addition, I think I'll attempt it again later.

Recall the Hanoi puzzle post from a few weeks ago. I explained how to do it there, but wouldn't it be fun if you could demonstrate it with this horrible calculus?

Recall the signature for motion is:

motion : ∀a∀b∀c:disc.∀i∀j:index.
  !(a<b) ⊗ top(a,i) ⊗ top(b,j) ⊗ above(a,c)
  → top(c,i) ⊗ top(a,j) ⊗ above(a,b)

Declaring a stack of discs growing downwards:

∃a,b:disc.∃i:index.
stack(a, b, i) ≡
    (top(a, i) ⊗ above(a,b) ⊗ !(a<b)) 
    ⊕ (∃u:disc. stack(u, a, i) ⊗ above(a,b) ⊗ !(a<b))

Recall that we want to prove we can move a stack of discs from one place to another.

∀a,b,x:disc.∀i,j:index.
stack(a,x,i) ⊗ top(b,j) → top(x,i) ⊗ stack(a,b,j)

We would have to add some more antecedents to allow it to be solved, but for now let's look at what we need to prove. We need to prove that it can be done with just one disc:

∀a,b,x:disc.∀i,j:index.
(top(a, i) ⊗ above(a,x) ⊗ !(a<x)) ⊗ top(b,j) → top(x,i) ⊗ stack(a,b,j)

Then we need to prove that it can be done with many discs:

∀a,b,x,u:disc.∀i,j:index.
(∀d:disc.∀k:index. stack(u,a,i) ⊗ top(d,k) → top(a,i) ⊗ stack(u,d,k))
→ (stack(u, a, i) ⊗ above(a,x) ⊗ !(a<x)) ⊗ top(b,j)
→ top(x,i) ⊗ stack(a,b,j)

I am not certain how the induction rule could be built to allow these functions to be plugged in, so we're stuck here for the moment.

What am I doing? Neat calculus. So what? What's the point?

With this system, we can prove things about interactive systems. Are two interactions exactly the same interaction?

But if you know me, you know that I used to do programming with practical interests. I used to code Python and even attempted to build the "next" Python.

It often happened that my efforts with Python were doomed to fail from the beginning and I just didn't know it early enough. I occasionally created quite complicated and interesting programs, only to find they had fatal flaws that would pop out when I depended on my own tools the most.

I remember losing an AI competition with Python because an exception handler was syntactically slightly malformed and meant something different from what I meant. I also remember finding bad flaws in a parser that I ended up using for months. I think that flaw is still in there somewhere, and I can't be sure I got rid of all of them. The choice between static and dynamic typing is a bit like choosing your Turing tarpit.

Sometimes you like getting quick, incorrect results. Eventually you just get tired, though, of there being too many days when some mundane thing in your repertoire starts acting up weird.

I really don't care about commutativity of addition or about Hanoi towers, or even about whether my software is going to be correct in the future. I am looking for predictability and an ability to plan ahead a bit. I want to know what a program is doing before I run it, and I want to know which of those things I can be certain of.

Additionally, solving this might have visible perks that eventually appear in the software you write. User experience often suffers when a program fails only after running for a long time. If you've ever used Linux and compiled software, you know what I mean. Wouldn't it be nice if the program stopped before the actual flaw happens? Have your program prove a theorem based on what the user wants to do and you get massively better behaving software.

Unfortunately, many theorem proving systems are big and complex, and they also aren't designed to work this way. Also, a whole lot of the things we'd like to reason about are interactions between different sorts of processes or elements of a system. I suppose founding such a system on a logic that can directly reason over interactions would be in order then, if it can be affordably done.

Here I have something that looks like it might work for that. I haven't figured out a great name for it yet, though. I've considered naming it "puke", after Forth of course! Or maybe I should declare it to be "wrath", you know, the result of computing circles_of_hell(Forth+1). I haven't made up my mind yet.
