# Exploring normalisation of Linear Logic

I've been trying to implement a normaliser for linear logic, and so far I've been struggling: I'm not sure how to get results comparable to the lambda calculus while keeping the system in the shape I'd like it to be in.

Deep-inference open deduction rules present the dualities of linear logic neatly and otherwise seem well suited to representing proofs on a computer.

The abstract syntax:

```
proof := prop rule ";" prop
       | prop rule ";" proof
rule  := ε
       | term
       | rule(term)
prop  := proof
       | term:prop
       | term(prop*)
       | prop ∧ prop
       | prop ∨ prop
       | prop ⊓ prop
       | prop ⊔ prop
       | !prop
       | ?prop
       | ∀x:T.prop
       | ∃x:T.prop
       | ~prop
       | 1
       | 0
```
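To make the grammar concrete, here is roughly how the abstract syntax might look as Python dataclasses. This is a minimal sketch; the class names and the string encoding of connectives are my own choices, not fixed by the grammar.

```python
from dataclasses import dataclass
from typing import Tuple, Union

class Prop:
    """Base class for propositions."""

@dataclass(frozen=True)
class App(Prop):
    """term(prop*): an atom applied to zero or more propositions."""
    head: str
    args: Tuple[Prop, ...] = ()

@dataclass(frozen=True)
class Bin(Prop):
    """The binary connectives ∧, ∨, ⊓, ⊔."""
    op: str
    left: Prop
    right: Prop

@dataclass(frozen=True)
class Neg(Prop):
    """~prop."""
    body: Prop

@dataclass(frozen=True)
class One(Prop):
    """The unit 1 (0 would be analogous)."""

@dataclass(frozen=True)
class Step:
    """prop rule ";" (prop | proof): one inference step.
    The empty rule ε is encoded as the empty string."""
    source: Prop
    rule: str
    target: Union[Prop, "Step"]

def conclusion(p: Union[Prop, Step]) -> Prop:
    """Follow a chain of steps to its final proposition."""
    while isinstance(p, Step):
        p = p.target
    return p
```

For example, the axiom step from 1 to ~x ∨ x would be encoded as `Step(One(), "axiom", Bin("∨", Neg(App("x")), App("x")))`.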

It should cover linear logic in whole. Additionally, I'd later like to bring in concepts from type theory, so that we could construct proofs such as:

```
        1
  ------------- axiom
     x ∨ ~x
  -------------
               ~x
            --------- same
               x=x
  ∀x:A. x ∨ --------- [y/x]
            ∃y:A. x=y
  -------------------
  ∀x:A. x ∨ ∃y:A. x=y
```

I'm not sure it'd be quite this easy, but I suppose it could be close.

The language would encode it like the following:

```
∀x:A. (1 axiom; x ∨ (~x; same(x); x=x subs(x); ∃y:A. x=y))
```

This way, perhaps, you could use ideas from type theory to prove theorems with linear logic.

## Inference of empty rules

If I track the occurrences of propositions in the rules, I have a relatively easy way to determine which rule is used, and I can label it before normalising the proof.

If the proof's succedent uses the multiplicative disjunction (∨), I know that it's one of the following rules:

```
   1
------ axiom
~a ∨ a

(a ∨ c) ∧ b
----------- switch
(a ∧ b) ∨ c

!(a ∨ ?b)
--------- exponential injection
 !a ∨ ?b
```

If the proof's succedent uses the additive disjunction (⊔), it must be one of the following:

```
  a
----- select
a ⊔ b

   (a ⊔ b) ∧ c
----------------- additive injection
(a ∧ c) ⊔ (b ∧ c)
```

If the additive disjunction is on the antecedent side, it is likely the rule:

```
a ⊔ a
----- merge
  a
```

If the negative exponential ? is on the succedent, the rule must be one of the following:

```
a
-- create
?a

~1
-- weaken
?a

?a ∨ ?a
------- contraction
  ?a
```

Remaining operations may be explicitly marked:

```
1
-- exponent introduction
!1

A[x/t]
------ existential introduction (variation of select?)
∃x.A
```
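The labelling idea above could be sketched as a table from the topmost connective, and which side of the step it appears on, to the candidate rule names; a labelling pass would then narrow the candidates down by matching the actual premise shapes. This is a sketch, and the key and name strings are my own:

```python
# Candidate rules for an unlabelled (ε) step, keyed by which side of the
# step the connective appears on and what the connective is.  This mirrors
# the tables above; choosing between the candidates still requires
# matching the full shape of the premise and conclusion.
CANDIDATES = {
    ("succedent", "∨"): ["axiom", "switch", "exponential injection"],
    ("succedent", "⊔"): ["select", "additive injection"],
    ("antecedent", "⊔"): ["merge"],
    ("succedent", "?"): ["create", "weaken", "contraction"],
    ("succedent", "!"): ["exponent introduction"],
    ("succedent", "∃"): ["existential introduction"],
}

def candidate_rules(side: str, connective: str) -> list:
    """Rules an unlabelled step might be, given its topmost connective."""
    return CANDIDATES.get((side, connective), [])
```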

## Normalisation of proofs

In systems of this kind, normalisation corresponds to computation and is necessary during theorem proving.

I can imagine a few systems that linear logic could be encoded into, but I have no idea how to do it in reverse and decode the results back to linear logic. Then there's the question of how to verify that the whole thing has the properties it should have, e.g. that it terminates and is perhaps strongly normalising.

The papers I've read, such as Girard's Geometry of Interaction, have been beyond my understanding.

It'd seem that conversion into some other format is necessary, but I'm not certain about the details.

## (So far) a failed attempt at writing a normalisation algorithm

Here I describe what I've tried so far. I haven't completely run out of ideas yet, and usually just describing the problem you have helps a bit.

The obvious way to normalise linear logic would be to apply ideas from Girard's proof nets or the Geometry of Interaction. Part of the problem is that I don't entirely understand all of this.

I do understand that I can implement a mechanism called a 'wire'. A wire has endpoints, and you might depict one like this:

```
~a|---------|a
```

When you plug two wires together at dual ends, the connection annihilates and they fuse into a new wire.

```
~a|---------|a    ~a|---------|a

~a|---------------------------|a
```

There are wires that have empty ends. When you connect two such wires together, they annihilate entirely.

```
o--------|1     ~1|--------o
```

There are variations of the wire that multiplex other wires: when connected, they annihilate and form multiple new wires.

```
~a|----|                      |----|a
       |---|a∧b    ~a∨~b|---|
~b|----|                      |----|b

~a|-----------------------------|a
~b|-----------------------------|b
```
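This much of the wiring I can sketch directly in code: plain wires fuse when cut at dual ends, and a multiplexing cut (a∧b against ~a∨~b) annihilates into one wire per branch. The names below are my own, and a real implementation would also check De Morgan duality of the compound labels:

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Wire:
    """A wire with a negative end and a positive end, e.g. ~a|----|a."""
    neg: str
    pos: str

def dual(label: str) -> str:
    """~a <-> a, for atomic labels."""
    return label[1:] if label.startswith("~") else "~" + label

def plug(left: Wire, right: Wire) -> Wire:
    """Cut left's positive end against right's negative end: the
    connection annihilates and the free ends form one new wire."""
    assert right.neg == dual(left.pos), "ends are not dual"
    return Wire(left.neg, right.pos)

@dataclass(frozen=True)
class Multiplex:
    """A multiplexing end: one compound port (e.g. a∧b) fanning out to
    several branch ends (e.g. ~a and ~b)."""
    label: str
    branches: tuple

def cut_multiplex(m1: Multiplex, m2: Multiplex) -> List[Wire]:
    """Cut a∧b against ~a∨~b: both nodes annihilate and the branch ends
    pair up into plain wires (here we trust that the labels are dual)."""
    assert len(m1.branches) == len(m2.branches)
    return [Wire(a, b) for a, b in zip(m1.branches, m2.branches)]
```

So `plug(Wire("~a", "a"), Wire("~a", "a"))` gives back a single `Wire("~a", "a")`, matching the first picture above.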

Finally, I could have a few more wires, but these seem especially troublesome and complicated to implement.

I could have a wire that produces copies of whatever is plugged into it, whenever the connection results in further computation.

```
~1|-----|
~1|-----|****|1     ~1|-------o
~1|-----|

~1|-----o
~1|-----o
~1|-----o
```
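This copying behaviour resembles the contraction-against-promotion step in proof nets: when n demanding wires are cut against the copier, whatever structure sits behind it gets duplicated n times. A naive deep-copy sketch (the names are mine, and real duplication would have to respect sharing and free variables):

```python
from copy import deepcopy

def cut_copier(demands: list, box) -> list:
    """Cut a copier holding `box` (the structure behind its |1 end in
    the picture) against n demanding wires: the copier annihilates and
    every demand receives its own independent copy of the box."""
    return [deepcopy(box) for _ in demands]
```

In the picture above, three ~1 demands cut against a copier over a terminated wire would yield three terminated wires.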

Then I could have a wire that chooses among items.

```
~a|-------|     a|-/--------|a∨~c
          |--------|b∨~c
           \--------|c

~a|----|           x------|b∧~c
       |---|a∨~c
 c|----|
```

For the quantifiers I could have a plug that instantiates a variable and another end that responds to it. In one way, the structure I've chosen quite naturally represents the idea of embedding proofs inside propositions. But here's another problem: the instance I would have to copy is not a term but a wire.

It is a bit of a mystery what kind of wiring the quantifiers should hold. I suppose some variable-filled annotations would be involved there?

I can associate a wire with each proposition in the proof, so for each proof I can build a different wiring. I suppose I could get it to work, but I can see how the implementation of exponentials, quantifiers and choice makes it much more difficult.

I'm not certain how this kind of a wiring diagram would translate back to a proof.