BCI combinators & Linear logic

It's well known that with the B, C, I combinators you can encode a fragment of linear logic proofs.

B : (b -> c) -> ((a -> b) -> (a -> c))
C : (a -> (b -> c)) -> (b -> (a -> c))
I : (a -> a)
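
As a sanity check, here's a small sketch of B, C, I as curried functions (Python is my choice here, not the article's):

```python
# B composes: B f g x = f (g x)
B = lambda f: lambda g: lambda x: f(g(x))
# C flips the argument order: C f y x = f x y
C = lambda f: lambda y: lambda x: f(x)(y)
# I is the identity: I x = x
I = lambda x: x

# Example: compose "double" after "increment", flip a subtraction.
print(B(lambda x: 2 * x)(lambda x: x + 1)(3))  # → 8
print(C(lambda a: lambda b: a - b)(1)(5))      # → 4
print(I("unchanged"))                          # → unchanged
```

Note that each combinator uses every argument exactly once, which is what makes them linear.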

These combinators allow you to generate structures that resemble the multiplicative operators, but they do not seem to be enough to introduce the additive operators. For those, an additional construct appears to be necessary. Here are the reduction rules for the additive constructor and its destructors:

{a, b} c ⇒ {a c, b c}
fst {a, b} ⇒ a
snd {a, b} ⇒ b
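
A minimal sketch of these three rules in Python (the Pair class and its names are my own, not standard notation):

```python
class Pair:
    """An additive pair {a, b}: a pending choice between two alternatives."""
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def __call__(self, c):
        # {a, b} c => {a c, b c}: application distributes over the pair,
        # so the choice stays unresolved while computation proceeds.
        return Pair(self.a(c), self.b(c))

def fst(p):
    # fst {a, b} => a
    return p.a

def snd(p):
    # snd {a, b} => b
    return p.b

# A choice between two functions, applied before the choice is resolved:
choice = Pair(lambda x: x + 1, lambda x: x * 2)
both = choice(3)             # {4, 6}
print(fst(both), snd(both))  # → 4 6
```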

These constructs let you encode a choice that gets resolved through fst and snd. I think this has been known for some time, but few people seem to be aware of it.

We can define quite simple typing rules for these objects:

a      :: A
b      :: B
{a, b} :: A & B
fst    :: (A & B) -> A
snd    :: (A & B) -> B

The fst and snd Church-encode the booleans.
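
To see the correspondence, note that fst {a, b} ⇒ a mirrors true a b ⇒ a. A quick sketch, using plain Python tuples for the pair (an assumption of mine):

```python
# fst/snd pick a component, just as Church booleans pick a branch.
fst = lambda p: p[0]
snd = lambda p: p[1]

# "if b then t else e" becomes "b {t, e}": the boolean resolves the choice.
select = lambda b, t, e: b((t, e))

print(select(fst, "then-branch", "else-branch"))  # → then-branch
print(select(snd, "then-branch", "else-branch"))  # → else-branch
```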

Structures derived this way can be duplicated and destroyed easily. This allows Church-style encodings of numbers and other inductively defined structures, such that they remain available for duplication and discarding.
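
For instance, Church numerals can be written in the usual way; the point above is that the additive construct is what makes duplicating f admissible in the linear setting (a Python sketch, names mine):

```python
# Church numerals: a number n applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Reading a numeral back out. Note that "two" uses its argument f twice,
# which is exactly the duplication the additive construct has to license.
to_int = lambda n: n(lambda k: k + 1)(0)

two = succ(succ(zero))
print(to_int(two))             # → 2
print(to_int(two(succ)(two)))  # 2 + 2 → 4
```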

Victor Maia's recently introduced Symmetric Interaction Calculus proposes that lambdas, too, can be copied around through superposition. I haven't yet figured out how to encode its rules in terms of BCI.

And of course you want to try the stripped-down W combinator, which requires that the argument function be duplicated. Here's how its reduction went, following the rules Maia proposes in his blog:

(λx. x x) (λy. y y)
───────────────────
x = (λy. y y)
x x
───────────────────
x = (λy. y y) (z&w)
(λz.x) (λw.x)
───────────────────
x = (y = z&w; y y)
(λz.x) (λw.x)
─────────────────────────────────
x = (y = z&w i&j; (λi.y) (λj.y))
(λz.x) (λw.x)

This ended up stuck because I refused to rewrite the code into a recursive form. I find it a bit odd, though.

Maybe there's a reason it could work: each copy of the lambda ends up actually being used only once.

Recently I've been really interested in linear logic, and these combinators were the tool that let me dig into the subject.

I expect this to get interesting after reading "Linear Logic for Non-Linear Storytelling". That paper suggests this kind of logic can put actions in correspondence with proofs.

The paper also suggests why pure lambda calculus or pure Prolog must expose the evaluation strategy before side-effectful programs make sense: perhaps intuitionistic logic does not sufficiently express causes and effects.