A peek inside llprover's innards

LLProver is a linear logic prover by Naoyuki Tamura. This software has been a great help to me when studying linear logic, and I thought it was worthwhile to look inside it as well.

This program can produce derivation trees for expressions that can be proven directly from the rules of linear logic. It supports different subsets of that logic. Below is an example session using the SWI-Prolog version:

$ prolog -g "ll." llprover.pl
Linear Logic Prover ver 1.3 for SICStus Prolog
        http://bach.seg.kobe-u.ac.jp/llprover/
        by Naoyuki Tamura (tamura@kobe-u.ac.jp)
ill(0)> a, b --> a*b.
a,b-->a*b.
Trying to prove with threshold = 0
Succeed in proving a,b --> a*b (0 msec.)
twosided:pretty:1 =
------- Ax  ------- Ax
a --> a     b --> b
------------------- R*
    a,b --> a*b
yes

The software has excellent documentation, so I'm not going to describe in detail how this program works; instead I will pick out some things that I found interesting.

The SWI-Prolog version of llprover was the version I studied.

Structure of the program

The program is roughly divided into several smaller parts that are clearly visible in the code.

I was only interested in the proof search and how the deduction rules were encoded.

The proof search resembles Prolog's own, except that it collects a proof while proving.

Iterative deepening

The proof search is implemented with iterative deepening. The variables N and M take care of this inside the prove/3 clauses.

Basically we care about finding the simple proofs first, so the system searches for proofs of depth 0, then proofs of depth 1, and keeps going until it hits the limit or finds a proof.

Essentially it's equivalent to breadth-first search, but in practice it uses much less memory relative to what it finds, especially if the depth is kept relatively low. This is reasonable because useful proofs are unlikely to be very deep.
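The idea can be sketched as a self-contained toy. Everything here, including the names id_prove and prove_at_depth and the toy step/2 rule, is my own hypothetical illustration, not llprover's actual code:

```prolog
% A toy "logic": step/2 is the single deduction rule, o is the axiom.
step(s(X), X).

% prove_at_depth(+Goal, +Depth, -Proof) fails when the derivation
% would need more than Depth rule applications.
prove_at_depth(o, _, [axiom]).
prove_at_depth(G, D, [G|P]) :-
    D > 0,
    step(G, G1),
    D1 is D - 1,
    prove_at_depth(G1, D1, P).

% Iterative deepening: retry with depth 0, 1, 2, ... up to a limit,
% committing to the first (i.e. shallowest) proof found.
id_prove(G, Proof) :-
    between(0, 10, D),
    prove_at_depth(G, D, Proof),
    !.
```

A query like ?- id_prove(s(s(o)), P). first fails at depths 0 and 1, then succeeds at depth 2 with P = [s(s(o)), s(o), axiom].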

Proof focusing

Proof focusing is the interesting part. It's a slightly hard concept to explain, but otherwise the whole thing is simple to understand.

Say you have a list [a, b, c, d, e] and you're tasked with removing b, c and d from it. There are 3*2*1 = 6 different orders in which you can do this, but every time you end up with the same result, [a, e]. This happens because each removal does not affect the others.
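This is easy to demonstrate in SWI-Prolog (remove_all/3 is my own little helper, not part of llprover):

```prolog
% Remove the elements of Order from List one by one, in that order.
remove_all([], List, List).
remove_all([X|Xs], List, Rest) :-
    select(X, List, List1),
    remove_all(Xs, List1, Rest).

% ?- permutation([b,c,d], Order), remove_all(Order, [a,b,c,d,e], R).
% Six solutions, one per order -- and R = [a, e] in every one of them.
```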

This also happens in linear logic proof search, and the manifestation is even worse, because these "easy" derivations intermix with the difficult ones and produce permutations that aren't useful to explore and can be pruned away.

Proof focusing identifies the "easy" rules and invokes them before the others, without giving them a chance to cause unnecessary permutations inside the proof search.

The select_rule/4 and rule_constraint/3 seem to take care of proof focusing.
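The idea can be sketched independently of llprover's actual predicates (pick_rule/3 and the rule_applies/4 facts below are hypothetical stand-ins, not llprover's code):

```prolog
% Two toy rules applicable to the same goal s1: an invertible one
% and a non-invertible one.
rule_applies(inv,    s1, r_and, [s2, s3]).
rule_applies(noninv, s1, r_or1, [s2]).

% Focusing: if an invertible rule applies, commit to it with a cut,
% so it never becomes a backtracking choice point. Non-invertible
% rules are tried only when no invertible rule fits.
pick_rule(Sequent, Rule, Premises) :-
    rule_applies(inv, Sequent, Rule, Premises), !.
pick_rule(Sequent, Rule, Premises) :-
    rule_applies(noninv, Sequent, Rule, Premises).
```

Here ?- pick_rule(s1, R, Ps). yields only R = r_and; the r_or1 alternative is cut away.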

Rule representation

The rules used by the prover are encoded into the program. Here's the clause that applies the right rule for the additive conjunction:

rule([ill,0], inv, r(/\), S, [S1, S2], [r(N),r(N),r(N)]) :-
    match(S,  ([X]-->[Y1,[A/\B],Y2])),
    match(S1, ([X]-->[Y1,[A],Y2])),
    match(S2, ([X]-->[Y1,[B],Y2])),
    length(Y1, N).

The clause's arguments, in order:

  1. Identifies the system where this rule holds. Different systems subsume different logics.
  2. Identifies whether the rule is "invertible" and requires proof focusing to be applied.
  3. Rule's name, that gets written to the derivation tree.
  4. Input sequent.
  5. Output sequents that are left to be proven.
  6. Position of the rule. It's not entirely clear what this means without examining the running program, so that's what we'll do next.
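With this clause loaded together with the match/2 helpers shown further down, the rule can be queried in isolation (my own experiment, outside the prover's search loop):

```prolog
?- rule([ill,0], inv, Name, ([a] --> [b /\ c]), Premises, Pos).
Name = r(/\),
Premises = [([a] --> [b]), ([a] --> [c])],
Pos = [r(0), r(0), r(0)].
```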

The match/2 predicate is provided by the program's own utilities. The definition is:

match((X-->Y), (P-->Q)) :- append_all(P, X), append_all(Q, Y).

append_all([], []).
append_all([P], P).
append_all([P|Ps], X) :- Ps=[_|_], append(P, X1, X), append_all(Ps, X1).

append([], Z, Z).
append([W|X], Y, [W|Z]) :- append(X, Y, Z).

length/2 can be found in the SWI-Prolog manual. In the rule it states that Y1 has the length N.

Apparently not every Prolog implementation has provided these correctly, so the author had to write them in himself. match/2 locates and concatenates lists; that's all it does.
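For example, with the definitions above loaded, match/2 can pick out the segments around a chosen formula:

```prolog
% Split the left-hand side [a,b,c] around the formula b:
?- match(([a,b,c] --> [d]), ([Y1, [b], Y2] --> [[d]])).
Y1 = [a],
Y2 = [c].
```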

To clarify for myself what's happening here, I did a trace for the input a->b/\c --> (a->b)/\(a->c). Here's the result:

S   = [a->b/\c]-->[ (a->b)/\ (a->c)]
S1  = [a->b/\c]-->[a->b]
S2  = [a->b/\c]-->[a->c]
N   = 0
X   = [a->b/\c]
Y1  = Y2    = []
A   = a->b
B   = a->c

So it appears that the 'Pos' ends up being [r(0), r(0), r(0)]. Where does it end up? It doesn't seem to be used in the subsequent proving; it goes directly into the result.

Here's the examination from the end of the first clause of prove/3:

S    = [a->b/\c]-->[ (a->b)/\ (a->c)]
P    = [r(/\), [r(0), r(0), r(0)], ([a->b/\c]-->[ (a->b)/\ (a->c)]), [r(->), [r(0), [l(...)|...]], ([... -> ...]-->[... -> ...]), [l(...)|...]], [r(->), [r(0), [...|...]], ([...]-->[...]), [...|...]]]
N    = M = N1 = 0
Rule = r(/\)
Ss   = [ ([a->b/\c]-->[a->b]), ([a->b/\c]-->[a->c])]
Pos  = [r(0), r(0), r(0)]
Ps   = [[r(->), [r(0), [l(0), r(0)]], ([a->b/\c]-->[a->b]), [l(->), [l(1), r(...)|...], ([...|...]-->[...]), [...|...]|...]], [r(->), [r(0), [l(0), r(0)]], ([a->b/\c]-->[a->c]), [l(->), [l(...)|...], (... --> ...)|...]]]

The 'Pos' appears inside the proof tree:

P = [
    r(/\),
    [r(0), r(0), r(0)],
    ([a->b/\c]-->[ (a->b)/\ (a->c)]),
    [r(->), [r(0), [l(...)|...]], ([... -> ...]-->[... -> ...]), [l(...)|...]],
    [r(->), [r(0), [...|...]], ([...]-->[...]), [...|...]]]

So it's very clear from here! It tells how the rule was applied: r(0) says that 0 formulas were skipped before the rule was applied on the right-hand side of the sequent. It also appears to tell where the remaining pieces left to prove come from; that's why it's repeated three times.

Miscellaneous thoughts

The project uses "red" cuts a lot, meaning that most of the cuts are not only there for pruning useless searches; they also change the behavior of the program. For example, proof focusing is implemented entirely as a cut in the proof procedure.

Now that I've studied this and written down the findings, I think it'll be easier for me to apply the ideas of linear logic elsewhere.
