Concurrent linear expression language
Classical linear logic does not impose a perspective on its expressions.
This means that an expression such as A ⊸ A, describing a function, is exactly the same as A ⅋ ~A.
I interpret this property to mean that when you have a function "eat" with a type such as food ⊸ satiated, the meaning is roughly "food leads to being satiated". But you're also allowed to flip this around into ~satiated ⊸ ~food: "the demand for being satiated leads to demand for food".
With multiple variables the meaning of a clause becomes even more complicated and multifaceted. There is no obvious way to express this in linear lambda calculus, which presents functions from a perspective where they can be oriented in exactly one way.
Now I describe a programming language that exposes this property through its type system.
The theory
The language is founded on these three deduction rules:
⊢ Γ, A, B
────────── I⅋
⊢ Γ, A ⅋ B

⊢ Γ, A    ⊢ Δ, B
───────────────── I⊗
⊢ Γ, Δ, A ⊗ B

⊢ Γ, A    ⊢ Δ, ~A
────────────────── Cut
⊢ Γ, Δ
They are the introduction rules for the multiplicative operators, plus the cut rule.
We are also going to use the additive operators, so we need their introduction rules as well:
⊢ Γ, A    ⊢ Γ, B
───────────────── I&
⊢ Γ, A & B

⊢ Γ, A
─────────── I⊕1
⊢ Γ, A ⊕ B

⊢ Γ, B
─────────── I⊕2
⊢ Γ, A ⊕ B
They form the rationale for superposing clause definitions.
Every proposition in linear logic has a dual:
A ⊗ B = ~(~A ⅋ ~B)
A ⊕ B = ~(~A & ~B)
We can use these dualities to extend the cut rule so that it can eliminate the structures introduced by all the other rules.
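For example, since ~(A ⊗ B) = ~A ⅋ ~B, a cut on A ⊗ B against ~A ⅋ ~B eliminates both structures at once and can be replaced by two smaller cuts; as a derived rule the step looks like this:

⊢ Γ, A    ⊢ Δ, B    ⊢ Θ, ~A, ~B
──────────────────────────────── Cut on A ⊗ B / ~A ⅋ ~B
⊢ Γ, Δ, Θ

First cut ⊢ Γ, A against ⊢ Θ, ~A, ~B on A, then cut the result against ⊢ Δ, B on B.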
This logic forms a type system for the language. Sequents correspond to the expressions in the language. The language allows you to build new expressions by connecting them together with variables. The above deduction rules describe which ways of connecting expressions are correct.
Description of the language
The language is formed from expressions and variables. The variables represent communication channels while the expressions manipulate channels.
Declaration ::= Expression ':' Expression*
Expression ::= Term '(' Expression [','|';'] ... ')'
Expression ::= Term
Expression ::= Variable
Annotation ::= Term '::' TypeExpression
TypeExpression ::= TypeExpression ['⅋'|'⊗'|'&'|'⊕'] TypeExpression
TypeExpression ::= '~' TypeExpression
Here's an example of an expression that takes in input/output streams and uses them to produce a string:
prompt(In, Out; In_, Out_, String)
The comma represents multiplicative disjunction, and after the first semicolon it flips to represent conjunction. Further semicolons represent disjunction between the conjunction groups. The point of this is to allow easy expression and composition of commonly used forms that take in multiple values and return multiple values.
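For instance, assuming hypothetical InStream and OutStream types for the channels, the prompt expression above could plausibly be annotated as:

prompt :: In:~InStream ⅋ Out:~OutStream ⅋ (In_:InStream ⊗ Out_:OutStream ⊗ String)

The negations mark the consumed inputs, and the ⊗-group after the semicolon is what the expression produces.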
Connecting two clauses together by variables corresponds to a use of the cut rule. A consequence of this is that llprover can be used for typechecking clauses written in this language.
To understand what the point might be, consider the following two declarations:
a_clause(B, C, X, D):
    A = plus(B, C)
    D = plus(A, X)

b_clause(B, X):
    A = plus(B, C)
    C = plus(A, X)
The a_clause typechecks because its two expressions connect together through the A variable, and the channel flow in the program can be determined.
The b_clause does not typecheck because it would require applying the cut rule to (A ⅋ ~C) and (~A ⅋ C).
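To see why, assume plus :: ~Nat ⅋ ~Nat ⅋ Nat. In a_clause the two uses of plus then connect through a single cut on A:

plus(B, C; A) :: B:~Nat ⅋ C:~Nat ⅋ A:Nat
plus(A, X; D) :: A:~Nat ⅋ X:~Nat ⅋ D:Nat
──────────────────────────────────────────── Cut A
a_clause :: B:~Nat ⅋ C:~Nat ⅋ X:~Nat ⅋ D:Nat

In b_clause the two expressions would have to be cut on A and C simultaneously, but ~(A ⅋ ~C) is ~A ⊗ C rather than ~A ⅋ C, so no single cut applies.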
Examples
In the examples we represent type annotations Haskell-style: the name, then a double colon ::, followed by the type annotation.
Let's start with the simplest function. This function connects two channels together; the channels become a single channel.
simplest(A, B):
    A = B
simplest :: A ⅋ ~A
Here's another function. This structure describes truth.
true(t)
true :: t
Control flow can be represented by superposing declarations for expressions.
Here's "AND":
and(f,f,f)
and(f,t,f)
and(t,f,f)
and(t,t,t)
and :: (t&f) ⅋ (t&f) ⅋ (t ⊕ f)
The 'and' program has several valid type annotations. It may be necessary to syntactically distinguish between different forms of superposing structures.
I find it a bit surprising that a type system could take in something that resembles a truth table and give back a type annotation for it. To check that the whole thing was plausible, I fed the following sequent to llprover:
(a0+b0+c0) /\ (a0+b1+c0) /\ (a1+b0+c0) /\ (a1+b1+c1) -> (a0 /\ a1) + (b0 /\ b1) + (c0 \/ c1)
The left side of that sequent describes the individual declarations, and on the right side we have our own type declaration. llprover proved that this sequent is valid, so it seems the type annotation is correct and that you could perhaps implement 'and' this way.
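As a usage sketch, applying the superposed declarations to constants should presumably collapse the superposition to the single matching row:

and(t, f, R)

Only the and(t, f, f) declaration matches here, so R connects to f.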
You may see where this goes. Here's another set of superposed declarations:
choice(t, left(A)):
    A = 1

choice(f, right(A)):
    A = 5
choice :: (t&f) ⅋ (left(Integer) ⊕ right(Integer))
We can compose 'true' and 'choice':
true(A)
choice(A, Result)
The type reduces like this:
true :: A:t
choice :: A:(t&f) ⅋ Result:(left(Integer) ⊕ right(Integer))
─────────────────────────────────────────────────────────── Cut A
Result:(left(Integer) ⊕ right(Integer))
Overall the rules follow from linear logic: you must have a license to duplicate or destroy channels, and the ability to destroy a channel is not granted by default.
Operation
Programs are inherently concurrent. Every operation must have a local effect, and the operation occurs immediately when it can.
To synchronize events, you have to tie their channels together. The underlying logic allows this: connecting channels together is how you signal that you're ready.
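As a sketch, assuming a pair constructor analogous to left/right above, a hypothetical join clause could synchronize two channels by not producing its output before both inputs are available:

join(X, Y; P):
    P = pair(X, Y)

join :: X:~A ⅋ Y:~B ⅋ P:(A ⊗ B)

Any consumer of P then waits on X and Y at once.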
Fibonacci alone may not be sufficient to explain the operation, but here's a fibonacci anyway:
fibonacci(A; R) | A <= 2: R = 1
fibonacci(A; R) | A > 2: R = fibonacci(A-2) + fibonacci(A-1)
fibonacci :: A:~Nat ⅋ R:Nat
I use some shorthand here. Y = a(X) is meant to be equivalent to a(X; Y), so there should be no practical problem there.
Guards are shorthand so that we don't need to perform a comparison and then stratify its result into another expression that chooses what to do; a sketch of that expansion follows.
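Assuming a hypothetical comparison clause leq :: ~Nat ⅋ ~Nat ⅋ (t ⊕ f) and a helper fib_case, the guarded fibonacci could be stratified like this:

fibonacci(A; R):
    leq(A, 2; T)
    fib_case(T, A; R)

fib_case(t, A; R): R = 1
fib_case(f, A; R): R = fibonacci(A-2) + fibonacci(A-1)

fib_case :: (t & f) ⅋ A:~Nat ⅋ R:Nat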
We have duplication of the A variable here, and recursion occurring. This may cause problems in storing and describing the program. In practice, though, we cannot unfold the whole thing immediately, because there is a choice between two sets of scenarios and we don't know which one happens until we do the computation.
Let's trace fibonacci(5; R).
fibonacci(5; R) | 5 > 2
───────────────────────────────────
R = fibonacci(5-2) + fibonacci(5-1)
───────────────────────────────────
R = fibonacci(3) + fibonacci(4) | 3 > 2, 4 > 2
──────────────────────────────────────────────
R = R1 + R2
R1 = fibonacci(3-2) + fibonacci(3-1)
R2 = fibonacci(4-2) + fibonacci(4-1)
────────────────────────────────────
R = R1 + R2
R1 = fibonacci(1) + fibonacci(2) | 1 <= 2, 2 <= 2
R2 = fibonacci(2) + fibonacci(3) | 2 <= 2, 3 > 2
─────────────────────────────────────────────────
R = R1 + R2
R1 = 1 + 1
R2 = 1 + R3
R3 = fibonacci(3-2) + fibonacci(3-1)
────────────────────────────────────
R = 2 + R2
R1 = 2
R2 = 1 + R3
R3 = fibonacci(1) + fibonacci(2) | 1 <= 2, 2 <= 2
─────────────────────────────────────────────────
R = 2 + R2
R2 = 1 + R3
R3 = 1 + 1
──────────
R = 2 + R2
R2 = 1 + 2
R3 = 2
─────────
R = 2 + 3
R2 = 3
──────
R = 5
I still don't have clarity on all the properties of this system. The structure of the language does not mandate a direction, yet the type system does not let you trivially construct a deadlock. But the story may be different when you add and use duplication, destruction, or recursion. Most of those answers are probably already written down in a paper somewhere; people have been studying linear logic for a good while.
I'm not planning to implement this at the moment. It looks like a good study vehicle, though, and I may end up having to implement at least the type system in some form.