Sequents as a computational medium

In this post I treat sequent calculus as a computational medium and present cut elimination for it in the Python programming language.

Sequent calculus may be used with different inference rule systems. Here we use linear logic because of its potential to model interactions. For now the implementation is limited to the multiplicative fragment.

A bit of background

I examined interaction nets as a way to implement cut elimination for a theorem prover and a programming environment. Interaction nets form a neat computational medium, but it turned out to be excruciating to translate them back into easily presentable structured proofs.

Yves Lafont's "From Proof-Nets to Interaction Nets" presents a well-formedness requirement for interaction nets that encloses them into little boxes. After trying to implement the well-formedness checker a few times, I ended up running experiments where the interaction nets preserve some structure that seemed necessary to produce convenient printouts of the proofs.

The structures I built started resembling the one-sided sequent calculus used by Girard to present linear logic. The structure retained in splitting the rules allows them to be converted back to text.

Direct printouts of the structures produce a rather natural-looking one-sided or two-sided sequent proof. If I eventually have to give up the pursuit of the presentation that I want to generate, this wouldn't be the worst presentation to settle on.

Programmatic presentation

The structure I am using is a sequent tree with connectives overlaid onto it as a graph structure. Variables and propositions are replaced by lines. The connectives are attached to sequents and provide a clue about which rule is used at each step of the proof.

To prove (x -o y) -o (z -o w) -o x*z -o y*w, the program might present the following sequent tree:

----- ax   ----- ax
~x, x      ~y, y
---------------- (~x * ~y)     ----- ax
x, y, ~x * ~y                  ~w, w
------------------------------------ (y * w)   ----- ax
x, ~x * ~y, ~w, y * w                          ~z, z
---------------------------------------------------- (~z * ~w)
x, ~x * ~y, y * w, ~z * ~w, z
---------------------------------------------------- (x # z)
~x * ~y, y * w, ~z * ~w, x # z
---------------------------------------------------- ((x # z) # (y * w))
~x * ~y, ~z * ~w, (x # z) # (y * w)
---------------------------------------------------- ...
((~x * ~y) # ((~z * ~w) # ((x # z) # (y * w))))

The same proof is represented in Python as the following structure:

x,y,z,w = Ax(), Ax(), Ax(), Ax()           # the four axiom links
a = Tensor(x.neg, y.neg)                   # (~x * ~y)
b = Tensor(z.neg, w.neg)                   # (~z * ~w)
c = Par(x.pos, z.pos)                      # (x # z)
d = Tensor(y.pos, w.pos)                   # (y * w)
cd = Par(c, d)                             # ((x # z) # (y * w))
bcd = Par(b, cd)
abcd = Par(a, bcd)                         # the full conclusion
p4 = Bq(z, Bq(Bq(x, y, a), w, d), b)       # the three tensor steps of the tree
p0 = Uq(Uq(Uq(Uq(p4, c), cd), bcd), abcd)  # the four par steps
conj_map = Root(p0)

The proposition proved here is related to the presentation that I would actually want to use. For example, suppose I wanted to reason about chopping onions and cooking potatoes:

onions                    potatoes
----------- chop          --------------- cook
onion rings               cooked potatoes
-----------------------------------------
onion rings * cooked potatoes

The proof for conjunction-map would let me represent this proof as:

onions             potatoes
----------- chop * --------------- cook
onion rings        cooked potatoes

In short, I would have the proofs onions -o onion rings and potatoes -o cooked potatoes that I would plug into the proof. As a result I'd get onions*potatoes -o onion rings*cooked potatoes. Basically I'd get this by writing conj_map(chop, cook), as sketched below.
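As a hedged sketch, the plugging could look roughly like this in terms of calculus.py's Cut objects. Only Cut itself appears in calculus.py; its constructor arguments and the attribute names inputs and conclusion are guesses for illustration, and chop and cook stand for the two small proofs.

from calculus import Cut   # Cut exists in calculus.py; everything below is assumed

def plug(conj_map, chop, cook):
    # Cut the (x -o y) input of conj_map against chop's conclusion, and the
    # (z -o w) input against cook's conclusion. Eliminating these cuts (see
    # the iterator convention near the end of this post) then leaves a direct
    # proof of onions*potatoes -o onion rings*cooked potatoes.
    return [Cut(conj_map.inputs[0], chop.conclusion),   # hypothetical attributes
            Cut(conj_map.inputs[1], cook.conclusion)]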

Such proofs allow translating programs into different program representations: find a proof that corresponds to the structure of what you're compiling from, apply the rule to the structure, and simplify the structure at the other end.

Um... wait?

Technically I could translate the interaction nets into the proof structures using this same trick.

  1. Translate the input program into small proofs.
  2. Rewrite those proofs in the desired presentation.
  3. Glue the proofs together.

Unfortunately my source presentation is "inert" and therefore cannot simplify the proofs obtained this way. Of course, if we translate back to interaction nets, the source presentation can no longer be called "inert", as it can then simplify the proofs. However, the simplified proofs would look like interaction-net proofs clumsily translated into the source proof presentation.

The sequent-calculus medium can cut this cycle and provide those structured proofs. Or else I just have to solve the problem directly in the presentation I want! Though the whole point of the exercise is to remove this inability to simplify proofs, and then there would no longer be any point in using the interaction-net representation for that purpose.

I'm excited about using this same idea to target low-level representations with a theorem prover, but I have to acknowledge that this possibility makes it seem a lot more interesting than what it probably ends up being.

C-reduction

The point of the C-reduction is to implement cut-elimination.

Cut elimination corresponds to β-reduction in lambda calculus. It happens by sliding the cut upwards in the proof. Since there are two different ways to slide the cut upwards, the choice results in proofs that are technically the same but structurally different.

The cut eventually hits the elements it cuts away. If the pair is a par/tensor pair, the cut is replaced by two smaller cuts. If either one is an axiom element, the axiom and the cut annihilate each other, leaving whatever was at their ends connected.
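To make the two cases concrete, here is a self-contained toy model of these reductions. Formulas are plain trees and a cut is a pair of dual trees; this is only a sketch of the idea, not the actual sequent structures of calculus.py.

from dataclasses import dataclass

@dataclass(frozen=True)
class Wire:            # an atomic proposition, i.e. one end of an axiom link
    name: str

@dataclass(frozen=True)
class Tensor:
    fst: object
    snd: object

@dataclass(frozen=True)
class Par:
    fst: object
    snd: object

def reduce_cuts(pending):
    links = []                                    # surviving atomic connections
    while pending:
        left, right = pending.pop()
        if isinstance(left, Wire):                # axiom case: the cut and the
            links.append((left, right))           # axiom annihilate, leaving
        elif isinstance(right, Wire):             # the two ends connected
            links.append((right, left))
        else:                                     # par/tensor case: one cut is
            pending.append((left.fst, right.fst)) # replaced by two smaller cuts
            pending.append((left.snd, right.snd)) # on the subformulas
    return links

# cutting x*z against its dual ~x # ~z resolves into two atomic links
print(reduce_cuts([(Tensor(Wire("x"), Wire("z")),
                    Par(Wire("~x"), Wire("~z")))]))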

L-reduction

The point of the L-reduction is to separate structures that no longer interact, for presentation purposes.

Say we have a stack of par connectives and a tensor on top of them. The par connectives that can be moved inside the tensor should be moved there. This does not matter for the sequent representation, but it allows parallel objects to be identified when converting back to the deep-inference proof representation.

If there are Cut-objects in the proof, they are slid upwards.

If connectives in the proof structure do not share interacting elements, they should be separated from each other. This is highly desirable for separation of concerns. There are some trivial rules that approach this, but also a potentially difficult global analysis that reshapes the whole proof.

For example, suppose we find two axioms above a single tensor. The tensor can only interact with one of them, so the other can be slid below the tensor. Likewise, if a tensor ends up separated this way, we can slide it down further. A minimal sketch of the test appears below.
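Here is a minimal sketch of what the trivial check could look like, modelling wires as plain strings instead of calculus.py's graph structures; the function name and set representation are illustrative only.

def can_slide_below(step_wires, tensor_active):
    # a step may be slid below the tensor iff it touches neither of the
    # wires the tensor rule acts on
    return not (step_wires & tensor_active)

# the axiom ~w, w does not touch the (~x, ~y) tensor pair, so it can slide
assert can_slide_below({"~w", "w"}, {"~x", "~y"})
assert not can_slide_below({"~x", "x"}, {"~x", "~y"})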

I haven't implemented the L-reduction rules yet. Their intent is to improve the proof presentation, and I don't have a way to measure that yet.

What else isn't there?

If this turns out to be a well-working approach, I will attempt to implement the rest of linear logic: additives, exponentials and quantifiers.

How does it work?

You can obtain the little interpreter here: calculus.py

The interpreter doesn't print out anything. It comes with a few test programs that try to ensure the reduction code works like it should. It runs a few operations and checks the consistency of the structure.

The cut-elimination procedure is an iterator and yields the new cuts it produces. This means you have to force it, e.g. by calling list(cut_elimination(cut)), even if you are not interested in the subsequent cuts.
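For instance, a minimal usage sketch of that convention, where cut is assumed to be an existing Cut object already wired into a proof:

spawned = list(cut_elimination(cut))                # forcing the iterator runs the step
while spawned:                                      # drain until no cuts remain
    spawned.extend(cut_elimination(spawned.pop()))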

Miscellaneous

The nondeterminism of cut elimination in the sequent calculus of classical linear logic may limit the usefulness of the system as a theorem prover, but I doubt it is an issue that cannot be overcome.

Now it's time to set this aside for a moment. The weather outside is nice.
