Focusing proofs, explanation

Paper: "Logic Programming with Focusing Proofs in Linear Logic" by Jean-Marc Andreoli.

Andreoli's work on focusing proofs has practical applications in programming language design. It shows that certain linear logic rules can be applied as soon as they become applicable, because they are reversible (invertible).

In logic programming, proofs are answers to queries. In normalisation-as-computation languages such as ML, proofs are programs.

I'm writing down a few illustrations of how a proof procedure based on Andreoli's work proves propositions.

Proof examples

In linear logic, the multiplicative conjunction distributes over the additive disjunction.

a ∧ (b ⊔ c) ≡ (a ∧ b) ⊔ (a ∧ c)

We also have this linear distributivity between multiplicative propositions.

a ∧ (b ∨ c) -o (a ∧ b) ∨ c

Another consequence of linear logic's De Morgan duality is the ability to invert proofs by flipping them around and negating them.

(a -o b) -o (~b -o ~a)

The exponential isomorphism states that the distinction between additive and multiplicative operators collapses if contraction and weakening are allowed everywhere.

!(a ⊓ b) ≡ !a ∧ !b

Most of these rules are employed when proving linear logic propositions, so the proofs we are going to construct aren't exciting. Proving them all should be relatively mechanical, not requiring much thought.

Let's prove first that we can invert proofs. The inversion proposition desugars into:

(a ∧ ~b) ∨ (b ∨ ~a)
~~~~~~~~~~~~~~~~~~~
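The desugaring step can be sketched in Python under a hypothetical encoding of my own: formulas are nested tuples, with tag names such as "tensor" for ∧, "par" for ∨, "plus" for ⊔, "with" for ⊓, "lolli" for -o, and "atom"/"natom" for an atom and its negation.

```python
# A minimal sketch of desugaring, assuming the hypothetical tuple
# encoding described above. Negation is pushed inward by De Morgan
# duality, and a -o b expands into ~a ∨ b.

def neg(f):
    """Negate a formula, swapping each connective for its dual."""
    tag = f[0]
    if tag == "atom":
        return ("natom", f[1])
    if tag == "natom":
        return ("atom", f[1])
    duals = {"tensor": "par", "par": "tensor",
             "plus": "with", "with": "plus",
             "ofcourse": "whynot", "whynot": "ofcourse"}
    if tag in ("ofcourse", "whynot"):
        return (duals[tag], neg(f[1]))
    return (duals[tag], neg(f[1]), neg(f[2]))

def desugar(f):
    """Expand every a -o b into ~a ∨ b, recursively."""
    tag = f[0]
    if tag in ("atom", "natom"):
        return f
    if tag == "lolli":
        return ("par", neg(desugar(f[1])), desugar(f[2]))
    if tag in ("ofcourse", "whynot"):
        return (tag, desugar(f[1]))
    return (tag, desugar(f[1]), desugar(f[2]))
```

Running desugar on the inversion proposition yields the par of a ∧ ~b with b ∨ ~a, since ~(~a ∨ b) flips into a ∧ ~b.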

The proof procedure we use expects you to select the propositions you work on. It doesn't directly apply the axioms described in the logic system.

Certain propositions such as 1, 0, a⊔b, a∧b and !a are synchronous, whereas their duals are asynchronous. The topmost connective in the proposition is asynchronous, which allows propagating the selection deeper inside the proposition.

(a ∧ ~b) ∨ (b ∨ ~a)
 ~~~~~~     ~~~~~~
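Propagating a selection through the asynchronous ∨ can be sketched as a small function, again assuming a hypothetical tuple encoding where ("par", x, y) stands for x ∨ y and ("atom", n) / ("natom", n) stand for an atom and its negation:

```python
# Sketch: push a selection through every topmost ∨ (par) connective,
# collecting the synchronous or atomic subformulas underneath.
# The tuple encoding here is my own assumption, not the author's.

def frontier(f):
    """Return the subformulas a selection reaches once all topmost
    asynchronous ∨ connectives have been decomposed."""
    if f[0] == "par":
        return frontier(f[1]) + frontier(f[2])
    return [f]
```

Applied to the inversion proposition, the selection stops at the synchronous ∧ subformula and at the atoms b and ~a.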

The procedure eventually reaches these atomic propositions:

(a ∧ ~b) ∨ (b ∨ ~a)
 ~   ~~     ~   ~~

The ∨ is associative and commutative. It means that we can shuffle and reorder its operands as we wish. Below left would be the full proof:

inversion proof

In practice we don't demand that the applications of associativity and commutativity be written out in full, because they are trivial and unnecessarily verbose. We also no longer place parentheses, for the same reason.
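One way to make associativity and commutativity implicit, as described above, is to keep the outer ∨-context as a flat multiset; closing an atomic pair then becomes a simple deletion. A sketch, assuming a hypothetical tuple encoding where ("atom", n) and ("natom", n) stand for a and ~a:

```python
# Sketch: a sequent is a Python list standing in for a multiset, so
# no explicit associativity/commutativity steps are needed. The
# encoding is a hypothetical one of my own.

def close_axiom(sequent):
    """Remove one dual pair a, ~a from the sequent and return the
    remainder, or None when no pair matches."""
    for f in sequent:
        if f[0] != "atom":
            continue
        dual = ("natom", f[1])
        if dual in sequent:
            rest = list(sequent)
            rest.remove(f)
            rest.remove(dual)
            return rest
    return None
```

This is only the bookkeeping for the atomic elimination steps shown in the displays; selecting which propositions may interact is still the prover's job.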

Linear distributivity as an example

Linear distributivity is the next simple rule to prove. It desugars into:

~a ∨ (~b ∧ ~c) ∨ (a ∧ b) ∨ c
~~   ~~~~~~~~~   ~~~~~~~   ~

Now only synchronous connectives remain; all asynchronous connectives have been processed. The order in which we descend into the expression doesn't matter, so we can solve the leftmost one first.

~a ∨ (~b ∧ ~c) ∨ (a ∧ b) ∨ c
~~    ^^   %%    ~~~~~~~   ~

The % in the notation describes that the proposition is not selected, but it'll have to be dealt with later. Since the proposition we are examining is atomic, it must be solved by switching in some proposition it can interact with. Like this:

~a ∨ ((~b ∨ (a ∧ b)) ∧ ~c) ∨ c
~~     ^^   ^^^^^^^    %%    ~

Note that the asynchronous connectives are treated before doing anything to the synchronous connectives.

~a ∨ ((~b ∨ (a ∧ b)) ∧ ~c) ∨ c
~~     ~~    ^   %     %%    ~
~~                           ~

The branch is examined again. We can eliminate the a here.

     ((~b ∨ (  ∧ b)) ∧ ~c) ∨ c
       ~~    ^   %     %%    ~
                             ~

This closes the branch, so we can take on the other side now.

     ((~b ∨ (  ∧ b)) ∧ ~c) ∨ c
       ~~        ^     %%    ~
                             ~

Finally the last atomic pair is eliminated.

     (                 ~c) ∨ c
                       ^^    ~

End of proof.

linear distributivity proof

The 's' rule used here is the linear distributivity rule. Uses of such structural rules would usually be compacted as well, since they are simple to derive. It's the same rule that was just proven, so we could only have generated this proof if we already knew beforehand the rule we were proving.

Distributivity as an example

Next, let's do the proof for distributivity. Two proofs seem to be necessary, one for each direction.

(a ∧ (b ⊔ c) -o (a ∧ b) ⊔ (a ∧ c))
(a ∧ (b ⊔ c) o- (a ∧ b) ⊔ (a ∧ c))

Desugaring these propositions results in:

(~a ∨ (~b ⊓ ~c)) ∨ (( a ∧  b) ⊔ ( a ∧  c))
( a ∧ ( b ⊔  c)) ∨ ((~a ∨ ~b) ⊓ (~a ∨ ~c))

For the first proposition to prove we end up with this:

 ~a ∨ (~b ⊓ ~c)  ∨ (( a ∧  b) ⊔ ( a ∧  c))
 ~~    ^^   %%      ~~~~~~~~~~~~~~~~~~~~~

The additive conjunction ⊓ is asynchronous. Focusing during proving is the approach of applying the asynchronous rules immediately once they have been selected.

Next the distributive switching rule is applied.

  ~a ∨ ~b ∨ (( a ∧  b) ⊔ ( a ∧  c)) ⊓ ~a ∨ ~c ∨ (( a ∧  b) ⊔ ( a ∧  c))
  ~~   ^^   ~~~~~~~~~~~~~~~~~~~~~~~   %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The proof procedure would be less eager about this, because it discovers only later that it also needs to switch the ~a in. The remaining available rule here is for the additive operator, and it's synchronous.


It has a rule that lets us choose a branch. This produces a choice point.

  ~a ∨ ~b ∨ ( a ∧  b) ⊓ ~a ∨ ~c ∨ (( a ∧  b) ⊔ ( a ∧  c))
  ~~   ^^   ~~~~~~~~~   %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The rest of this was already seen in the linear distributivity example, so let's skip to the right branch.

  ~a ∨ ~c ∨ (( a ∧  b) ⊔ ( a ∧  c))
  ~~   ~~    ~~~~~~~~~~~~~~~~~~~~~

This time we have the same situation as before, except the first choice point fails.

  ~a ∨ ~c ∨ (( a ∧  c))
  ~~   ~~    ~~~~~~~~~

This is again the same thing, but just in case, let's show how it goes:

  ~a ∨ ~c ∨ (( a ∧  c))
  ~~   ~~      ^    %

Then finally:

       ~c ∨         c  
       ~~           ^

Proving the right side ends up the same way.
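The choice point used above can be modelled with a generator that yields one successor sequent per ⊔-branch, so a failed branch can be retried by moving on to the next alternative. A sketch, assuming a hypothetical tuple encoding where ("plus", x, y) stands for x ⊔ y:

```python
# Sketch: enumerate the alternatives created by the first additive
# disjunction in a flat sequent. A prover backtracks by asking this
# generator for the next alternative when a branch fails. The tuple
# encoding is my own assumption.

def choices(sequent):
    """Yield one successor sequent per branch of the first ⊔ (plus)
    formula found; yield nothing when no ⊔ remains."""
    for i, f in enumerate(sequent):
        if f[0] == "plus":
            for branch in (f[1], f[2]):
                yield sequent[:i] + [branch] + sequent[i + 1:]
            return
```

Because ⊔ is synchronous, committing to a branch can fail, which is exactly why a choice point must be recorded here and nowhere in the asynchronous steps.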

Exponential isomorphism as an example

Finally let's look at the exponentials and prove one half of the exponential isomorphism.

!(a ⊓ b) -o !a ∧ !b

This seems like it could be a bit difficult.

?(~a ⊔ ~b) ∨ (!a ∧ !b)
~~~~~~~~~~   ~~~~~~~~~

How do we treat this? The ? means that we can use no copies, or as many copies as we need. We could just be lazy and keep the ? around until the connective inside it is needed, then instantiate it.

?(~a ⊔ ~b) ∨ (!a ∧ !b)
~~~~~~~~~~    ^^   %%

?(~a ⊔ ~b) ∨ (!(a ∨ (~a ⊔ ~b)) ∧ !b)
~~~~~~~~~~     ^^^^^^^^^^^^^^^   %%

?(~a ⊔ ~b) ∨ (!(a ∨ ~a) ∧ !b)
~~~~~~~~~~     ^^^^^^^^   %%

?(~a ⊔ ~b) ∨ (!(  1   ) ∧ !b)
~~~~~~~~~~                %%

?(~a ⊔ ~b) ∨ (            !b)
~~~~~~~~~~                ^^

The second step is trivial as it's the same as before. Note that non-exponential expressions cannot be moved inside the exponential.
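The lazy treatment of ? shown above boils down to two structural moves: contraction (take another copy) and weakening (drop the ? when it's no longer needed). A sketch, with a hypothetical tuple encoding where ("whynot", f) stands for ?f:

```python
# Sketch: the two structural rules for ?-formulas in a flat sequent.
# The tuple encoding is an assumption of mine, not the author's.

def contract(sequent, i):
    """Contraction: duplicate the ?-formula at index i, so one copy
    can be instantiated while the ? is kept around."""
    assert sequent[i][0] == "whynot"
    return sequent + [sequent[i]]

def weaken(sequent, i):
    """Weakening: drop the unused ?-formula at index i."""
    assert sequent[i][0] == "whynot"
    return sequent[:i] + sequent[i + 1:]
```

Keeping these as explicit sequent operations is what lets the prover stay lazy: it only contracts when a copy is actually switched inside an exponential, and weakens at the very end.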

Not done with this

Writing an automated theorem prover in Python is a tedious task. Lots of things are. I look forward to having something else to work with.

Fortunately I can limit the scope a bit and, for now, create a variation of this algorithm that only finds proofs that do not require backtracking.
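The backtracking-free variant can be sketched by restricting to the asynchronous fragment, ∨ and ⊓ plus atoms, where every rule is invertible and no choice point ever arises. The tuple encoding below (("par", x, y) for x ∨ y, ("with", x, y) for x ⊓ y, ("atom", n)/("natom", n) for atoms) is a hypothetical one of my own:

```python
# A minimal sketch of a prover that never backtracks: it only applies
# invertible rules, so any order of application works. Sequents are
# flat lists; the encoding is my own assumption.

def prove(sequent):
    """Decide ⊢ sequent in the ∨/⊓/atom fragment."""
    for i, f in enumerate(sequent):
        rest = sequent[:i] + sequent[i + 1:]
        if f[0] == "par":        # ∨ flattens into the context
            return prove(rest + [f[1], f[2]])
        if f[0] == "with":       # ⊓ splits into two premises
            return prove(rest + [f[1]]) and prove(rest + [f[2]])
    # only atoms remain: close with the axiom ⊢ a, ~a
    return (len(sequent) == 2 and
            sequent[0][1] == sequent[1][1] and
            {sequent[0][0], sequent[1][0]} == {"atom", "natom"})
```

For example, ⊢ (~a ∨ a) ⊓ (~b ∨ b) goes through without ever creating a choice point; adding ∧, ⊔ or the exponentials is what would reintroduce backtracking.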
