Illative combinatory logic
In the Journal of Symbolic Logic (March 1996) there is a paper that describes systems of illative combinatory logic that are consistent. The paper was difficult to understand, so I wrote these notes in case I forget it again.
Illative combinatory logic is an extension of lambda calculus. These days we are familiar with computers, and we understand lambda calculus so well in that context that it may cause confusion when it is used in another one. In illative logic the terms are considered to have assertive value. The basic terms and their intuitions are the following:
Xz says "z is of type X", or z ∈ X
Lx says "x is a type".
ΞXY is interpreted as X ⊆ Y, or (∀x ∈ X)Yx
The term λa.X corresponds to the class {a | X}
Hx = L(Kx), Hx says "x is a proposition"
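To keep these straight for myself I sketched the term language in Haskell. This is only my own illustration, not anything from the paper; the representation and the names Var, Con, App and Lam are made up here.

-- A minimal sketch of the term language; constants such as "Xi", "L" and "K"
-- are just uninterpreted names in this representation.
data Term
  = Var String        -- variables
  | Con String        -- constants: "Xi", "L", "K", ...
  | App Term Term     -- application  M N
  | Lam String Term   -- abstraction  λx.M
  deriving (Eq, Show)

-- Hx as shorthand for L(Kx), "x is a proposition".
h :: Term -> Term
h x = App (Con "L") (App (Con "K") x)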
The system allows rewriting the terms in the same way as in lambda calculus. We are allowed to use beta and eta reduction on these terms:
(λx.M)N → M[x := N]
λx.Mx → M {x not in FV(M)}
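Here is a rough sketch in Haskell of single-step beta and eta reduction over the Term type above. It is only my own illustration; the substitution is naive and assumes no variable capture occurs.

-- Free variables of a term.
freeVars :: Term -> [String]
freeVars (Var x)   = [x]
freeVars (Con _)   = []
freeVars (App a b) = freeVars a ++ freeVars b
freeVars (Lam x b) = filter (/= x) (freeVars b)

-- Naive substitution M[x := N]; assumes no variable capture.
subst :: String -> Term -> Term -> Term
subst x n (Var y) | x == y    = n
                  | otherwise = Var y
subst _ _ (Con c)   = Con c
subst x n (App a b) = App (subst x n a) (subst x n b)
subst x n (Lam y b) | y == x    = Lam y b
                    | otherwise = Lam y (subst x n b)

-- (λx.M)N → M[x := N]; only looks at an outermost redex.
betaStep :: Term -> Maybe Term
betaStep (App (Lam x m) n) = Just (subst x n m)
betaStep _                 = Nothing

-- λx.Mx → M, when x is not free in M.
etaStep :: Term -> Maybe Term
etaStep (Lam x (App m (Var y)))
  | x == y && x `notElem` freeVars m = Just m
etaStep _ = Nothing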
For example, say we have:
(λx.a x) b
Using beta reduction we get:
(a x)[x := b] = a b
But we may also use eta reduction and directly get:
a b {x not in FV(a)}
This may be confusing, because although we reduced the term, nothing happened to the original one, even though it was reduced to a more direct form. After the beta/eta reduction we have both of the following terms, and they are considered equal:
(λx.a x) b
a b
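In the Haskell sketch (again only my own illustration) both routes end at the same term:

-- Beta-reducing (λx.a x) b in one step:
example1 :: Maybe Term
example1 = betaStep (App (Lam "x" (App (Var "a") (Var "x"))) (Var "b"))
-- Just (App (Var "a") (Var "b")),  i.e.  a b

-- Eta-reducing the abstraction first and then applying it to b:
example2 :: Maybe Term
example2 = fmap (\m -> App m (Var "b"))
                (etaStep (Lam "x" (App (Var "a") (Var "x"))))
-- Just (App (Var "a") (Var "b")),  i.e.  a b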
Another potentially confusing thing is the assertive value of the lambda abstractions: we are representing set-builder notation through functions. Since all of our terms are potentially types, lambda abstractions represent classes here.
The deduction rules also allow using beta reduction backwards, as beta expansion. We may wrap clauses into meaningless lambdas or abstract out equal elements from them; for example, since (λz.X)V reduces to X when z is not free in X, expansion lets us rewrite X as (λz.X)V, and abstracting out equal elements turns X X into (λx.x x)X.
The symbol 'Xi' (Ξ)
Ξ is an assertion meaning that the terms of one type are also terms of another type. I don't entirely understand it myself, but it appears we can use it, in combination with the K combinator, to represent implication:
X ⊃ Y = Ξ(KX)(KY)
I suppose the idea here is that we can use anything to open the KX. It then reduces to plain X. If we can prove X somehow, then we can use that same thing to open the KY and prove Y as well.
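As a sanity check, here is the encoding in the Haskell sketch from above; k and imp are my own names, not the paper's.

-- K = λa.λb.a, so K X reduces to λb.X: a constant function ignoring its argument.
k :: Term
k = Lam "a" (Lam "b" (Var "a"))

-- X ⊃ Y encoded as Ξ (K X) (K Y).
imp :: Term -> Term -> Term
imp x y = App (App (Con "Xi") (App k x)) (App k y)

Since (KX)z beta-reduces to X and (KY)z to Y for any z, reading Ξ(KX)(KY) as (∀z ∈ KX)(KY)z says exactly that whenever X holds, Y holds as well.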
Rest of the connectives?
The paper describes how to get the rest of the logical connectives by deriving them from implication and universal quantification.
I guess it's easy once you understand the rules of intuitionistic logic.
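I don't remember whether the paper uses exactly these definitions, but the standard second-order intuitionistic encodings go like this:

⊥ = ∀p. p
¬A = A ⊃ ⊥
A ∧ B = ∀p. (A ⊃ B ⊃ p) ⊃ p
A ∨ B = ∀p. (A ⊃ p) ⊃ (B ⊃ p) ⊃ p
∃x.A = ∀p. (∀x. A ⊃ p) ⊃ p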
Sound interpretation
Here is one set of deduction rules that provides a sound interpretation for the illative logic:
X ∈ R ⇒ R ⊢ X
R ⊢ X, X = Y ⇒ R ⊢ Y
R ⊢ Ξ(λx.X)(λx.Y), R ⊢ (λx.X)V ⇒ R ⊢ (λx.Y)V
R, X ⊢ Y, R ⊢ L(λx.X), x not in FV(R) ⇒ R ⊢ Ξ(λx.X)(λx.Y)
R, X ⊢ HY, R ⊢ L(λx.X), x not in FV(R) ⇒ R ⊢ H(Ξ(λx.X)(λx.Y))
If you find these difficult to understand, it is worthwhile to study sequent calculus.
Update: I was told that these are natural deduction rules, but it seems to only mean that there is exactly one asserted proposition on the right side of the turnstile.
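To see how the rules fit together, here is my attempt at a small derivation, proving HX ⊢ X ⊃ X when x is not free in X. Take it with a grain of salt.

1. HX, X ⊢ X              (first rule: X is in the basis)
2. HX ⊢ HX                (first rule: HX is in the basis)
3. HX ⊢ L(λx.X)           (second rule: L(λx.X) = L(KX) = HX by conversion)
4. HX ⊢ Ξ(λx.X)(λx.X)     (fourth rule, from 1 and 3, x not free in HX)
5. HX ⊢ X ⊃ X             (second rule: Ξ(λx.X)(λx.X) = Ξ(KX)(KX) = X ⊃ X)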
How do the rules block paradoxes?
The type constraint, which requires that we can derive from the basis that X is a proposition (the L(λx.X) premise), would appear to block the paradoxes. For example, Curry's paradox as described in the paper can be derived up to the point where we may conclude X from Y, but we cannot generalize the rule, because Y is not a proposition.
Y = (λy.(yy) ⊃ X)(λy.(yy) ⊃ X)
Y = (λy.(yy) ⊃ X)(λy.(yy) ⊃ X) ⊃ X
Y ⇔ Y ⊃ X
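In the Haskell sketch this fixed point can be written out directly; curryTerm is my own name, and this assumes X contains no free variable named y.

-- N = λy.(yy ⊃ X),  Y = N N
curryTerm :: Term -> Term
curryTerm x = App n n
  where n = Lam "y" (imp (App (Var "y") (Var "y")) x)

-- One beta step turns Y into Y ⊃ X:
--   betaStep (curryTerm x) == Just (imp (curryTerm x) x)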
Y ⊢ Y        Y ⊢ Y ⊃ X
──────────────────────
Y ⊢ X
Therefore we cannot make the following deduction, which would let us prove anything with our system: the premise ⊢ L(λx.Y), saying that Y is a proposition, cannot be derived.
Y ⊢ X        ⊢ L(λx.Y)
──────────────────────
⊢ Y ⊃ X
─────── Y ⊃ X ⇔ Y
⊢ X