Ensuring correctness of my logic programming implementations

I haven't been certain whether the resolution algorithms I sketched have been correct, and most likely they aren't. A few weeks ago I figured out a way to verify the correctness of logic programming implementations.

Logic programming environments are given a query, e.g. ∃A,B. append(A,B,[2,3]). The answers to this query would be:

append([], [2,3], [2,3]) {A=[], B=[2,3]}
append([2],  [3], [2,3]) {A=[2], B=[3]}
append([2,3], [], [2,3]) {A=[2,3], B=[]}

The query is a proposition and the environment finds a proof for it. To illustrate, let's show the first proof as a sequent.

-----   -----------
[]=[] ∧ [2,3]=[2,3]
---------------------------------------------------------------------------- choose left
([]=[] ∧ [2,3]=[2,3]) ⊔ (∃H,L,R. []=[H|L] ∧ [2,3]=[H|R] ∧ append(L,[2,3],R))
---------------------------------------------------------------------------- open
append([], [2,3], [2,3])
------------------------ ([2,3]/B)
∃B. append([],B,[2,3])
------------------------ ([]/A)
∃A,B. append(A,B,[2,3])

There are a slew of insights here remaining to be picked up.

  1. Quantifier substitutions must occur in order, e.g. the quantifier above cannot refer to the quantifier below it.
  2. After a certain point, variations in the proof no longer yield answers that differ from the existing ones.
  3. Eventually the expression reduces to something trivial to prove.

The proof is constructed from the rules of the logic used as the foundation. If we can verify that a resolution algorithm follows these rules, then we know that it's correct up to this definition.

Abstract syntax

These are the main connectives of the language:

true
~true
X = Y
X ≠ Y
A ∧ B
A ∨ B
∀X. A
∃X. A
A ⊓ B
A ⊔ B
~A    (rewritten away with De Morgan rules)

The following dualities apply:

~(true) ≡ ~true
~(X = Y) ≡ X ≠ Y
~(A ∧ B) ≡ ~A ∨ ~B
~(A ⊓ B) ≡ ~A ⊔ ~B
~(∀X. A) ≡ ∃X. ~A

The constructors for the propositions in the language:

(true)  :: Prop
(~true) :: Prop
(=) :: Term -> Term -> Prop
(≠) :: Term -> Term -> Prop
(∧) :: Prop -> Prop -> Prop
(∨) :: Prop -> Prop -> Prop
(⊓) :: Prop -> Prop -> Prop
(⊔) :: Prop -> Prop -> Prop
(∀) :: (Term -> Prop) -> Prop
(∃) :: (Term -> Prop) -> Prop

Terms are Herbrand structures, consisting of a finite number of constants and function terms.
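
One way to realize this syntax is a Haskell sketch with higher-order abstract syntax for the two quantifiers. The ASCII constructor names are my own: :&: and :+: stand in for ⊓ and ⊔. The dualities then become a negation function that rewrites ~ away.

data Term = Fun String [Term]   -- constants are nullary function terms
  deriving (Eq, Show)

data Prop
  = Truth                   -- true
  | Absurd                  -- ~true
  | Term :=:  Term          -- X = Y
  | Term :/=: Term          -- X ≠ Y
  | Prop :/\: Prop          -- A ∧ B
  | Prop :\/: Prop          -- A ∨ B
  | Prop :&:  Prop          -- A ⊓ B
  | Prop :+:  Prop          -- A ⊔ B
  | Forall (Term -> Prop)   -- ∀X. A
  | Exists (Term -> Prop)   -- ∃X. A

-- ~ is not a constructor: the dualities above rewrite it away
neg :: Prop -> Prop
neg Truth      = Absurd
neg Absurd     = Truth
neg (x :=: y)  = x :/=: y
neg (x :/=: y) = x :=: y
neg (a :/\: b) = neg a :\/: neg b
neg (a :\/: b) = neg a :/\: neg b
neg (a :&: b)  = neg a :+: neg b
neg (a :+: b)  = neg a :&: neg b
neg (Forall f) = Exists (neg . f)
neg (Exists f) = Forall (neg . f)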

On-the-paper proofs

We can write and work out the proofs on paper. The rules make it clear when we have succeeded, although on paper we don't need an algorithm for working them out.
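
Still, the success rules are mechanical enough that a checker for them can be sketched. Using the Prop type above and covering only the rules that appear in the append sequent earlier (unfolding a defined relation such as append is not representable in this Prop type), it might look like this; the proof-term names are my own:

data Proof
  = Trivially            -- closes true, t = t, and ground t ≠ u
  | Both Proof Proof     -- proves A ∧ B from proofs of both halves
  | ChooseLeft Proof     -- proves A ⊔ B from a proof of A
  | ChooseRight Proof    -- proves A ⊔ B from a proof of B
  | Witness Term Proof   -- proves ∃X. A from a term t and a proof of A[t/X]

check :: Prop -> Proof -> Bool
check Truth      Trivially       = True
check (x :=: y)  Trivially       = x == y    -- sound for ground Herbrand terms
check (x :/=: y) Trivially       = x /= y
check (a :/\: b) (Both p q)      = check a p && check b q
check (a :+: _)  (ChooseLeft p)  = check a p
check (_ :+: b)  (ChooseRight q) = check b q
check (Exists f) (Witness t p)   = check (f t) p
check _          _               = False

-- e.g. the axiom leaves at the top of the proof tree:
--   check (Fun "[]" [] :=: Fun "[]" []) Trivially  ==  True

A resolution algorithm that emits such proof terms can then be validated step by step against the rules.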

To illustrate how this approach works, let's do an example:

append(X,Y,Z) :=
      (X=[] ∧ Y=Z)
    ⊔ (∃H,L,R. X=[H|L] ∧ Z=[H|R] ∧ append(L,Y,R))

? ∃A,B,C. ~append(A,B,C)

There exists a dual for append that we can write out as:

~append(X,Y,Z) :=
      (X≠[] ∨ Y≠Z)
    ⊓ (∀H,L,R. X≠[H|L] ∨ Z≠[H|R] ∨ ~append(L,Y,R))
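
As an aside, both clauses can be written down with the constructors sketched earlier. nil and cons are assumed encodings of the list constructors:

nil :: Term
nil = Fun "[]" []

cons :: Term -> Term -> Term
cons h t = Fun "|" [h, t]

-- the recursive call unfolds lazily; forcing one layer of it is
-- what "open" did in the proof tree earlier
appendP :: Term -> Term -> Term -> Prop
appendP x y z =
        ((x :=: nil) :/\: (y :=: z))
    :+: Exists (\h -> Exists (\l -> Exists (\r ->
          (x :=: cons h l) :/\: (z :=: cons h r) :/\: appendP l y r)))

-- the dual, following the rewritten clause above
negAppendP :: Term -> Term -> Term -> Prop
negAppendP x y z =
        ((x :/=: nil) :\/: (y :/=: z))
    :&: Forall (\h -> Forall (\l -> Forall (\r ->
          (x :/=: cons h l) :\/: (z :/=: cons h r) :\/: negAppendP l y r)))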

Now we can use that rule:

        (A≠[] ∨ B≠C) ⊓ (∀H,L,R. A≠[H|L] ∨ C≠[H|R] ∨ ~append(L,B,R))
        -----------------------------------------------------------
∃A,B,C. ~append(A,B,C)

No matter how deep it goes, this isn't going to be a productive run, so we are left with no choice but to constrain the query with itself.

∃A,B,C. ~append(A,B,C) {~append(A,B,C)}

The situation changes considerably if any of the three variables is subsequently constrained.

Trivialities and equivalences

If we know that some statement is certainly 'true' or 'false', we can rewrite it with certainty and drop it out. E.g. A ∧ false ≡ A ⊓ false ≡ false.

Expressions of the form f(î) = f(ĵ) can be rewritten to the form i0 = j0 ∧ i1 = j1 ∧ .... Respectively, the corresponding ≠-expressions can be rewritten to ∨-sequences in a similar manner.
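
A sketch of these rewrites over the Prop type from earlier. The decomposition is only valid for ground Herbrand terms, which is what the equations contain once quantifier selections have been made:

simp :: Prop -> Prop
simp (a :/\: b) = conj (:/\:) (simp a) (simp b)
simp (a :&: b)  = conj (:&:)  (simp a) (simp b)
simp (a :\/: b) = disj (:\/:) (simp a) (simp b)
simp (a :+: b)  = disj (:+:)  (simp a) (simp b)
simp (Fun f xs :=: Fun g ys)          -- f(î) = f(ĵ) decomposes pointwise
  | f == g && length xs == length ys = simp (foldr (:/\:) Truth (zipWith (:=:) xs ys))
  | otherwise                        = Absurd
simp (Fun f xs :/=: Fun g ys)         -- the ≠ form becomes an ∨-sequence
  | f == g && length xs == length ys = simp (foldr (:\/:) Absurd (zipWith (:/=:) xs ys))
  | otherwise                        = Truth
simp (Forall f) = Forall (simp . f)
simp (Exists f) = Exists (simp . f)
simp p          = p

-- false is absorbing for both conjunctions, true for both disjunctions,
-- so A ∧ false ≡ A ⊓ false ≡ false drops out as required
conj, disj :: (Prop -> Prop -> Prop) -> Prop -> Prop -> Prop
conj _ Absurd _  = Absurd
conj _ _ Absurd  = Absurd
conj _ Truth q   = q
conj _ p Truth   = p
conj op p q      = op p q
disj _ Truth _   = Truth
disj _ _ Truth   = Truth
disj _ Absurd q  = q
disj _ p Absurd  = p
disj op p q      = op p q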

Quantifier elimination

Quantifiers can be eliminated through selection. E.g.

A[t/X]
------ [t/X]
 ∃X. A

∃X. A ≡ A[t0/X] ⊔ A[t1/X] ⊔ ... ⊔ ∃X. A{X≠t0..}
∀X. A ≡ A[t0/X] ⊓ A[t1/X] ⊓ ... ⊓ ∀X. A{X≠t0..}

Note that it's also important to cover the case that no selection is made. It's easily done by keeping the quantifier and memoizing the previously selected values of the variable.
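
With the higher-order representation from earlier, the existential half of this might be sketched as below; the tried values are memoized as disequations on the residual body, matching ∃X. A{X≠t0..} above, and the ∀ rule is dual:

expandExists :: [Term] -> (Term -> Prop) -> Prop
expandExists ts body =
    foldr (\t rest -> body t :+: rest) residual ts
  where
    -- the case that no selection is made: keep the quantifier,
    -- constrained away from the already-selected terms
    residual = Exists (\x -> foldr (\t a -> (x :/=: t) :/\: a) (body x) ts)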

Trivial eliminations occur when the quantified variable appears suitably high in the proposition.

∃X. X=Y ∧ B
∀X. X≠Y ∨ B

If the contents of the quantifier would trivialize were the variable bound to anything else, then we can immediately bind it and get rid of the quantifier: ∃X. X=Y ∧ B ≡ B[Y/X], and dually ∀X. X≠Y ∨ B ≡ B[Y/X].
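
One way to sketch this immediate binding for the existential case is to probe the higher-order body with a fresh marker constant: if the body has the shape ∃X. X=t ∧ B and t doesn't mention X, re-applying the body to t performs the binding, and simp from earlier removes the resulting t=t conjunct. The marker device is my own and assumes its name occurs in no real term; the ∀X. X≠Y ∨ B case is handled dually.

bindTrivial :: Prop -> Prop
bindTrivial (Exists body) =
    case body marker of
      (v :=: t) :/\: _
        | v == marker && not (mentions t) -> simp (body t)
      _ -> Exists body
  where
    marker = Fun "?marker" []            -- assumed fresh
    mentions (Fun f ts) = f == "?marker" || any mentions ts
bindTrivial p = p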

Quantifiers can climb up in the expression past subexpressions in which their variable no longer appears, e.g. A ∧ (∃X. B) ≡ ∃X. A ∧ B when X does not occur in A. They can't climb past opposing quantifiers, though.

Constrained recursion

To solve the problem of non-productive recursion, we can introduce a rule: a clause is opened up to a recursion point, but if no productive step is found by the next iteration, it is closed back up.

Let's take another example, with member.

member(X,Z) := ∃Y,T. Z=[Y|T] ∧ (X = Y ⊔ member(X, T)) 

~member(X,Z) := ∀Y,T. Z≠[Y|T] ∨ (X ≠ Y ⊓ ~member(X, T)) 

? ∃L. ~member(5, [3,2|L])

Now there is clear productivity present in the derivation.

          3 ≠ Y ∨ [2|L]≠T
          ---------------
∃L. ∀Y,T. [3,2|L]≠[Y|T]   ∨ (5 ≠ Y ⊓ ~member(5, T)) 
---------------------------------------------------
? ∃L. ~member(5, [3,2|L])

3 ≠ Y and its pair would trivialize the expression if true, so we can bind.

    false
    -------------------
    3 ≠ 3 ∨ [2|L]≠[2|L]    true
    -------------------    -----
∃L. [3,2|L]≠[3,2|L]     ∨ (5 ≠ 3 ⊓ ~member(5, [2|L]))
-----------------------------------------------------
? ∃L. ~member(5, [3,2|L])

Subsequently more expressions trivialize and now we have:

∃L. ~member(5, [2|L])
-------------------------
? ∃L. ~member(5, [3,2|L])

Well the next step would be the same:

∃L. ~member(5, L)
-------------------------
∃L. ~member(5, [2|L])
-------------------------
? ∃L. ~member(5, [3,2|L])

And finally we can check.

∃L. ∀Y,T. L≠[Y|T] ∨ (5 ≠ Y ⊓ ~member(5, T)) 
-------------------------------------------
∃L. ~member(5, L)

There is no longer any productivity present, so it can be closed back up. The final answer is:

∃L. ~member(5, [3,2|L]) {~member(5,L)}
