Logic and recursion

I read Coq'Art this week, and then continued with The Little Typer.

Coq'Art helped me understand the Coq system, but despite that, the rules of induction and coinduction still remained vague to me.

The Little Typer was a bit too trivial and verbose for my taste. It would be much shorter if it stated everything directly instead of using the dialogue format. Despite this, the information in the book is useful, and it gets interesting once you get through the first few chapters.

Next week I'm going to read the HOL theorem prover's logic manual. I'm hoping it covers something the earlier books didn't explain or mention well.

Euclidean algorithm Coq proof

In the last two posts I discussed the Euclidean algorithm and its formal proof. I mentioned I would try to write it in Coq.

The Coq standard library already provides a description of the greatest common divisor and a certified program to compute it. The proofs were difficult enough that I didn't understand them well.

I'll set that idea aside for the moment and see what else I need to learn first.

Recursive proofs & programs

I noticed that one thing here forms a harder problem than I thought. It gets magnified a lot when examining formal proofs.

It is this: recursive proofs are necessary, but introducing unrestricted recursion into a proof system destroys it. If you could prove things through recursion, by applying a proof to itself, you could prove every statement trivially.

────── refer to itself
Absurd

The lambda calculus encoding for this would be the following program:

absurd :: _|_
absurd = absurd

And if you wanted to see the BASIC program encoding, it'd look like this:

10 GOTO 10

Recursion and looping are the problems in computing that correspond to the problems formal proofs face.

I think the significance of this is the same whether you look at it from the programming side or the formal-proof side: it is difficult to ensure that your program terminates if recursion or looping is present.

Programmers tend to solve this problem by designing iterative constructs that are guaranteed to terminate. This seems to be the best way to solve the problem when it happens to be an available option.

Curry discovered that with the fixed-point combinator we can introduce recursion into lambda calculus without having to write self-referring structures.

f ↦ (x ↦ f (x x)) (x ↦ f (x x))

This combinator composes the function 'f' indefinitely many times. Beta reduction of the inner expression yields:

f ↦ f ((x ↦ f (x x)) (x ↦ f (x x)))
f ↦ f ∘ f ((x ↦ f (x x)) (x ↦ f (x x)))
f ↦ f ∘ f ∘ f ((x ↦ f (x x)) (x ↦ f (x x)))

Select a suitable function and you get iteratively running programs this way:

fix f = (x ↦ f (x x)) (x ↦ f (x x))

length = fix (length ↦ list ↦ 
    ⎡ Nil       ⇒ 0               ⎤
    ⎣ a :: rest ⇒ 1+(length rest) ⎦ list)

So in short, we can write a function that abstracts over its own recursive call, and the combinator ties it back into itself.
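As a sketch, here's how the combinator and the length function above could be written in Python, which is untyped just like the pure lambda calculus. The inner applications are eta-expanded so Python's strict evaluation doesn't loop forever; the encoding is my own illustration.

```python
# Curry's fixed-point combinator, eta-expanded for strict evaluation:
# fix f = (x ↦ f (x x)) (x ↦ f (x x))
fix = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# length written without any self-reference; fix ties the knot.
length = fix(lambda length: lambda lst:
             0 if not lst else 1 + length(lst[1:]))

print(length([1, 2]))  # prints 2
```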

(List Nat -> Nat) -> List Nat -> Nat
──────────────────────────────────── fix
List Nat -> Nat

In short the rule is:

A -> A
────── fix
  A

And if you have this kind of a rule, you have introduced recursion into your proof system.
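A minimal sketch of why such a rule is dangerous, again in Python: applying fix to the identity function is exactly the absurd = absurd "proof" from above, and evaluating it never terminates. (In strict Python this naive fix diverges for any argument.)

```python
import sys

def fix(f):
    # The rule "from A -> A conclude A", as unrestricted recursion.
    return f(fix(f))

sys.setrecursionlimit(100)
try:
    absurd = fix(lambda proof: proof)  # a "proof" of anything
except RecursionError:
    print("diverged: the proof never terminates")
```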

Here's a short trace of how this kind of program computes values:

length [1,2]
──────────────────────────────────────────────────
fix (length ↦ list ↦ 
    ⎡ Nil       ⇒ 0               ⎤
    ⎣ a :: rest ⇒ 1+(length rest) ⎦ list) [1,2]
──────────────────────────────────────────────────
(list ↦ 
    ⎡ Nil       ⇒ 0               ⎤
    ⎣ a :: rest ⇒ 1+(length rest) ⎦ list) [1,2]
    where length =
        fix (length ↦ list ↦ 
            ⎡ Nil       ⇒ 0               ⎤
            ⎣ a :: rest ⇒ 1+(length rest) ⎦ list)
──────────────────────────────────────────────────
⎡ Nil       ⇒ 0               ⎤
⎣ a :: rest ⇒ 1+(length rest) ⎦ [1,2]
    where length =
        fix (length ↦ list ↦ 
            ⎡ Nil       ⇒ 0               ⎤
            ⎣ a :: rest ⇒ 1+(length rest) ⎦ list)
──────────────────────────────────────────────────
1+(length [2])
    where length =
        fix (length ↦ list ↦ 
            ⎡ Nil       ⇒ 0               ⎤
            ⎣ a :: rest ⇒ 1+(length rest) ⎦ list)
──────────────────────────────────────────────────
1+((list ↦ 
    ⎡ Nil       ⇒ 0               ⎤
    ⎣ a :: rest ⇒ 1+(length rest) ⎦ list) [2])
    where length =
        fix (length ↦ list ↦ 
            ⎡ Nil       ⇒ 0               ⎤
            ⎣ a :: rest ⇒ 1+(length rest) ⎦ list)
──────────────────────────────────────────────────
1+(⎡ Nil       ⇒ 0               ⎤
   ⎣ a :: rest ⇒ 1+(length rest) ⎦ [2])
    where length =
        fix (length ↦ list ↦ 
            ⎡ Nil       ⇒ 0               ⎤
            ⎣ a :: rest ⇒ 1+(length rest) ⎦ list)
──────────────────────────────────────────────────
1+1+(length [])
    where length =
        fix (length ↦ list ↦ 
            ⎡ Nil       ⇒ 0               ⎤
            ⎣ a :: rest ⇒ 1+(length rest) ⎦ list)
──────────────────────────────────────────────────
1+1+((list ↦ 
    ⎡ Nil       ⇒ 0               ⎤
    ⎣ a :: rest ⇒ 1+(length rest) ⎦ list) [])
    where length =
        fix (length ↦ list ↦ 
            ⎡ Nil       ⇒ 0               ⎤
            ⎣ a :: rest ⇒ 1+(length rest) ⎦ list)
──────────────────────────────────────────────────
1+1+(⎡ Nil       ⇒ 0               ⎤
     ⎣ a :: rest ⇒ 1+(length rest) ⎦ [])
    where length =
        fix (length ↦ list ↦ 
            ⎡ Nil       ⇒ 0               ⎤
            ⎣ a :: rest ⇒ 1+(length rest) ⎦ list)
──────────────────────────────────────────────────
1+1+0
──────────────────────────────────────────────────
1+1
──────────────────────────────────────────────────
2

Type-inferring the expression as simply typed lambda calculus reveals interesting details about the fixed-point combinator.

        x x   :: c = c -> d               x x   :: f = f -> h
        ───                               ───
     f (x x)  :: a = d -> b, d         f (x x)  :: a = h -> g, h
─────────────                     ─────────────
(x ↦ f (x x)) :: c -> b           (x ↦ f (x x)) :: c = f -> g
─────────────                     ─────────────
(x ↦ f (x x))                     (x ↦ f (x x)) :: c -> b, c
───────────────────────────────────────────────
f ↦ (x ↦ f (x x))                 (x ↦ f (x x)) :: a -> b

Listing out all the constraints in the above inference proof, we get:

c = c -> d
a = d -> b
f = f -> h
a = h -> g
c = f -> g

We can do constraint elimination to retrieve:

──────────────────────────── elim (c := f -> g), elim (a := h -> g)
f -> g = (f -> g) -> d
h -> g = d -> b
f = f -> h
──────────────────────────── elim ->
f = f -> g
g = d
h = d
g = b
f = f -> h
──────────────────────────── elim (g := d)
f = f -> d
h = d
d = b
f = f -> h
──────────────────────────── elim (h := d)
f = f -> d
d = b
f = f -> d
──────────────────────────── elim (d := b)
f = f -> b
f = f -> b
──────────────────────────── remove duplicate
f = f -> b
──────────────────────────── rename b to a
f = f -> a

a -> b rewritten to...
───────────── elim (a := h -> g)
(h -> g) -> b
───────────── elim (g := d)
(h -> d) -> b
───────────── elim (h := d)
(d -> d) -> b
───────────── elim (d := b)
(b -> b) -> b
───────────── rename b to a
(a -> a) -> a

We obtain a fixed-point constraint that we can't eliminate: f = f -> a. It remains inside the fixed-point combinator. We also obtain the type (a -> a) -> a.
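The irreducible constraint f = f -> a is exactly what a type checker's occurs check rejects. A toy unifier sketches the failure; the type representation and names here are my own illustration, not any particular checker's internals.

```python
# Toy first-order unification with an occurs check. Types are either
# ("var", name) or ("fun", domain, codomain).

def walk(t, subst):
    # Follow substitutions until we reach a function type or a free variable.
    while t[0] == "var" and t[1] in subst:
        t = subst[t[1]]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t[0] == "var":
        return t[1] == v
    return occurs(v, t[1], subst) or occurs(v, t[2], subst)

def unify(a, b, subst):
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if a[0] == "var":
        if occurs(a[1], b, subst):
            raise TypeError("occurs check failed: " + a[1])
        return {**subst, a[1]: b}
    if b[0] == "var":
        return unify(b, a, subst)
    # Both function types: unify domains, then codomains.
    return unify(a[2], b[2], unify(a[1], b[1], subst))

f, a = ("var", "f"), ("var", "a")
try:
    unify(f, ("fun", f, a), {})  # the constraint f = f -> a
except TypeError as e:
    print(e)                     # occurs check failed: f
```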

Even with recursion present, proofs may remain valid or become invalid depending on how the recursion behaves.

Just like we can isolate recursion into a fixed-point combinator, we can do the same for well-founded recursion. In The Little Typer they do this to introduce inductive definitions.

Python programmers can be seen as isolating well-founded recursion into iterators:

for x in range(10):
    print(x)

The above Python statement prints the numbers from 0 to 9, and the programmer does not need to worry about termination proofs.
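In the same spirit, a fold isolates well-founded recursion over a list: each step consumes exactly one element, so termination comes for free. A small sketch; expressing it through functools.reduce is my own choice of illustration.

```python
from functools import reduce

# The recursion isolated inside reduce consumes one element per step,
# so it is well-founded by construction.
length = lambda lst: reduce(lambda acc, _: acc + 1, lst, 0)

print(length([1, 2]))  # prints 2
```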

A little thing about encoding proofs

I think all these little models and systems are excellent study subjects. Due to their simplicity, they reveal all sorts of problems in a way that lets them be studied in the detail they deserve.

Though when examining systems such as lambda calculus, I think it is important to focus on the proofs they encode. Their computational meaning often isn't as interesting or as easy to understand.

For example, here are the typing rules of simply typed lambda calculus with the proofs they encode.

x : T ∈ Γ
───────── variable, 'x' with type T appears in environment
Γ ⊢ x : T

Γ ⊢ a : V -> T   Γ ⊢ b : V
────────────────────────── apply
Γ ⊢ a b : T

Γ, x : X ⊢ e : Y
────────────────── abstract ('x' introduced to the environment)
Γ ⊢ λx.e : X -> Y
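These three rules transcribe almost directly into a checker. Here's a sketch in Python, where the term and type encodings (the "var", "app", and "lam" tuples) are my own illustration:

```python
# Terms: ("var", x), ("app", fun, arg), ("lam", x, xtype, body).
# Types: a base-type name, or ("fun", domain, codomain).

def check(env, term):
    kind = term[0]
    if kind == "var":                       # variable: x : T in the environment
        return env[term[1]]
    if kind == "app":                       # apply
        ftype = check(env, term[1])
        atype = check(env, term[2])
        assert ftype[0] == "fun" and ftype[1] == atype, "type error"
        return ftype[2]
    if kind == "lam":                       # abstract: extend the environment
        _, x, xtype, body = term
        return ("fun", xtype, check({**env, x: xtype}, body))

identity = ("lam", "x", "Nat", ("var", "x"))
print(check({}, identity))  # prints ('fun', 'Nat', 'Nat')
```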

There are also other ways to represent the proofs; this is just one way to do it. For example, we might try to do it with the tools available from deep inference:

────────── variable (atom introduction)
~x:T ∨ x:T 

(a:V -> T) ∧ b:V
──────────────── apply
a b : T

~x:X ∨ e:Y
───────────── abstract
λx.e : X -> Y

~x:X ∨ ~x:X
─────────── contract
   ~x:X

Empty
───── weaken
~x:X

(A ∨ B) ∧ C
─────────── structural
A ∨ (B ∧ C)

Units:
    Empty ∨ X = X
    Unit  ∧ X = X
(∧), (∨): commutative, associative

Deep inference gets its name from the idea that the rules of inference can be applied deep inside structures rather than only at the top level. It also does away with sequents.