Session types and computability logic

In 1993, Kohei Honda introduced session types in his paper "Types for Dyadic Interaction".

Languages using session types have appeared recently. Here's a simple example program written in Concurrent C0, from Max Willsey's thesis (2016).

typedef <!int; !bool; ?int;> protocol;

protocol $c server() {
    send($c, 42);        // !int
    send($c, true);      // !bool
    printint(recv($c));  // ?int
    close($c);           // end of the protocol
}

int main() {
    protocol $c = server();  // spawn the server; $c is our endpoint
    int i1 = recv($c);
    bool b = recv($c);
    send($c, i1+1);
    wait($c);                // wait for the server to close
    return 0;
}

C0 is a simplified variant of C used for teaching. Concurrent C0 extends it with session types. You can find the session type in the typedef:

<!int; !bool; ?int;>

This describes a channel that sends you an integer and a boolean; then you have to send it an integer back and wait for the channel to close.

Communication over channels relies on linear typing, because the type of the channel changes as the communication proceeds. Session-typed languages track this change statically and reject programs that violate the protocol.
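For example, seen from the server's side, the type of $c shrinks after each operation (the notation here is informal):

<!int; !bool; ?int;>   before send($c, 42)
<!bool; ?int;>         before send($c, true)
<?int;>                before recv($c)
<>                     before close($c)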

In the example, the server sends 42 and true through the channel; the entry point receives them and stores them in the variables i1 and b. The entry point responds by incrementing i1 and sending it back. The server then prints the value, writing out 43. Finally, the entry point waits for the server to close, and the server closes down, in conformance with the protocol described above.

Channels behave a lot like functions in functional programming, but they expose a shortcoming in intuitionistic type theory.

Lambda calculus and interaction

Lambda calculus is a formal system used for representing computation.

E := λx.E
   | E E
   | x

When variables are given types, this calculus yields a correspondence with mathematical proofs in intuitionistic logic; this is the Curry-Howard correspondence.

The abstraction introduces implication.

    A ⊢ B
------------- implication introduction
    A -> B

x : A ⊢ e : B
------------- implication introduction
λx.e : A -> B

The application corresponds to the cut rule, here in the form of modus ponens.

A -> B             A
-------------------- cut
          B

λx.e : A -> B  a : A
-------------------- cut
    (λx.e) a : B

Reduction operations correspond to the reduction of the proof.
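On the term side, this is beta reduction:

(λx.e) a  →  e[x:=a]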

If you look at the above rules, the model does not describe any interaction with the outside world.

We can introduce interaction by adding constructs for sending and receiving:

E := $ E     (send)
   | ?x. E   (receive as x)
   | done    (end the session)

It remains a question what type these expressions should have, and why interaction should be introduced in exactly this way. These are good questions, but I can't answer them yet.

Bringing an expression down to weak-head normal form is sufficient to reveal either a sending or a receiving operation in it.

Here's the "server" represented using this extension of lambda calculus:

($ ($ ?x. done) true) 42

We can use another expression to respond to it:

?i1. ?b. ($ done) (i1+1)

The reduction path could be:

($($?x. done) true) 42
?i1. ?b. ($ done) (i1+1)
------------------------ transfer 42
($?x. done) true
?b. ($ done) (42+1)
------------------------ reduce (42+1)
($?x. done) true
?b. ($ done) 43
------------------------ transfer true
?x. done
($ done) 43
------------------------ transfer 43
done
done
------------------------ close session
done

Of course we could also interact with the program directly, or have the operating system interact with it.

IO monad in Haskell

There are ways to deal with the problem of interaction without abandoning intuitionistic logic or intuitionistic type theory.

The IO monad in Haskell introduces interaction with the environment without extending the calculus.

getLine >>= putStrLn :: IO ()

Substituting getLine and putStrLn with interaction expressions and then giving an implementation for >>= allows us to convert this program into an interactive form.

getLine >>= putStrLn
---------------------------------------- substitute all
(λa. λb. λc. a (λn. b n c))
(λcont1. ($?y. cont1 y) getLine)
(λx.(λcont2. ($?z. cont2 z) putStrLn x))
---------------------------------------- reduce λa, λb
λc.
  (λcont1. ($?y. cont1 y) getLine)
  (λn. (λx.(λcont2. ($?z. cont2 z) putStrLn x)) n c)
---------------------------------------- reduce λx
λc.
  (λcont1. ($?y. cont1 y) getLine)
  (λn. (λcont2. ($?z. cont2 z) putStrLn n) c)
---------------------------------------- reduce λcont1
λc.
  ($?y. (λn. (λcont2. ($?z. cont2 z) putStrLn n) c) y) getLine
---------------------------------------- reduce λcont2
λc.
  ($?y. (λn. ($?z. c z) putStrLn n) y) getLine
---------------------------------------- eta reduce λn
λc. ($?y. ($?z. c z) putStrLn y) getLine

Now if we call the function with λa. done, we get the resulting program:

($?y. ($?z. (λa. done) z) putStrLn y) getLine
--------------------------------------------- reduce λa
($?y. ($?z. done) putStrLn y) getLine

I replaced >>= with (λa. λb. λc. a (λn. b n c)) out of necessity. It has the following type:

((t2 -> t1) -> t) -> (t2 -> t3 -> t1) -> t3 -> t

For comparison this is the abstract type for >>= operator:

m a -> (a -> m b) -> m b

We can transform one into the other. It may seem obvious, but it's a good idea to check that it works:

((t2 -> t1) -> t) -> (t2 -> t3 -> t1) -> t3 -> t
-------------------------------------------------- rename t2 to a
((a -> t1) -> t) -> (a -> (t3 -> t1)) -> t3 -> t
-------------------------------------------------- reparenthesize
((a -> t1) -> t) -> (a -> (t3 -> t1)) -> (t3 -> t)
-------------------------------------------------- rename t3 to c
((a -> t1) -> t) -> (a -> (c -> t1)) -> (c -> t)
-------------------------------------------------- rename t1 to e
((a -> e) -> t) -> (a -> (c -> e)) -> (c -> t)
-------------------------------------------------- replace c with b -> c
((a -> e) -> t) -> (a -> ((b -> c) -> e)) -> ((b -> c) -> t)
-------------------------------------------------- abstract out
m a -> (a -> m b) -> m b
where
m a = (a -> m x) -> y
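Incidentally, the replacement I used for >>= is exactly the bind of the continuation monad, written without its newtype wrapper. Here is a minimal Haskell sketch; the Cont name and the instances are my own framing, not part of the program above:

newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont m) = Cont (\c -> m (c . f))

instance Applicative (Cont r) where
  pure a = Cont (\c -> c a)
  Cont mf <*> Cont ma = Cont (\c -> mf (\f -> ma (c . f)))

instance Monad (Cont r) where
  -- the same shape as (λa. λb. λc. a (λn. b n c))
  Cont m >>= f = Cont (\c -> m (\n -> runCont (f n) c))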

Let's imagine our IO monad were a data structure, defined in the following way:

data Action a result = Command (a -> result) Symbol [Anything]
type IO a = IO b -> Action a (IO b)

Now if you unroll the type declaration, you get this:

type IO a = IO b -> Action a (IO b)
type IO a = (IO c -> Action b (IO c)) -> Action a (IO c -> Action b (IO c))

This form has recursion all over the place, conveniently abstracted behind the monad.

Haskell can be thought of as having two reduction engines. The primary engine takes care of beta reduction. The secondary engine takes care of the IO monad and steps the interaction forward.
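To make the idea concrete, here is a toy model of such a secondary engine, using a simplified Action type with just two commands. This is my own sketch, not GHC's actual implementation:

-- A simplified Action: Done, plus one constructor per command.
data Action a
  = Done a
  | GetLine (String -> Action a)
  | PutStrLn String (Action a)

-- What the primary engine evaluates: bind grafts a continuation
-- onto every Done leaf of the tree.
bind :: Action a -> (a -> Action b) -> Action b
bind (Done a)       f = f a
bind (GetLine k)    f = GetLine (\s -> bind (k s) f)
bind (PutStrLn s k) f = PutStrLn s (bind k f)

-- The secondary engine: it steps the interaction forward, feeding
-- input lines in and collecting output lines.
run :: [String] -> Action a -> ([String], a)
run _       (Done a)       = ([], a)
run (i:ins) (GetLine k)    = run ins (k i)
run []      (GetLine _)    = error "ran out of input"
run ins     (PutStrLn s k) = let (out, a) = run ins k in (s : out, a)

-- getLine >>= putStrLn in this toy model:
echo :: Action ()
echo = GetLine (\s -> PutStrLn s (Done ()))

Here run ["hello"] echo evaluates to (["hello"], ()).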

Several variations of session haskell

There is a collection that lists several session type systems for Haskell.

I didn't try any of them, because the ones sticking to plain Haskell seem to add their machinery on top of the IO monad, and the IO monad doesn't go away. The last one in the list seems to use Haskell as a meta-language for a linearly typed Haskell, and then provides session types in that language.

Types and type systems are a reasoning tool. The IO monad allows interaction with the environment by moving the interaction outside these reasoning tools, because the underlying logic does not have the means to deal with it.

So, for example, in a board game the IO monad lets us pretend that every possible move is always available. We therefore cannot use the type system to reason about whether the program obeys the rules.
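As a tiny Haskell illustration, with a hypothetical Move type and makeMove action of my own invention:

data Move = MovePawn | Castle deriving Show

-- In IO, nothing in the types constrains when a move may be played.
makeMove :: Move -> IO ()
makeMove m = putStrLn ("playing " ++ show m)

game :: IO ()
game = do
  makeMove Castle
  makeMove Castle  -- castling twice is illegal, yet this typechecks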

Session types allow such reasoning within a type system.

Correspondences with computability logic

Session typing is founded on linear logic.

The interaction expressions, or "interaction combinators", that I provided were inspired by computability logic. They resemble its "chall" and "chexist" quantifiers.

I like computability logic because it gives clean semantics for logical formulas and their operators, whereas linear logic starts from the deduction rules and derives the logic from there. This means that I can understand the formulas presented in computability logic, and every formula ends up having an interesting meaning even if it turns out not to fit into some particular proof system.

Computability logic treats logical formulas as games between a computer and its environment. A true formula means that the computer wins; a false formula means that it loses.

The "True" -formula remains informationless in the sense that computer doesn't need to do anything as it has already won. The "False" -formula remains representing contradiction in the sense that computer cannot do anything as it has already lost. Likewise a proof for a formula presents as a strategy to win the game it represents.

Negation of a formula flips the roles of the players in the formula.

Classical logic formulas are upgraded into computability logic formulas by introducing the notion of choice as an operator. A conjunctive choice is made by the environment, whereas a disjunctive choice is made by the computer.

For example, consider the following formula with a disjunctive choice in it:

a ⊔ b

It means that you've got a choice between continuing the game as a or as b; in (0 = 1) ⊔ (0 = 0), for instance, the computer wins by choosing the right side. The formula represents what you know and remember about the game. The semantics also recognize that you can lose the game by not moving, or by making a move that's not permitted in the current state.

It seems that computability logic merges classical and intuitionistic logic by distinguishing between occurrences of types and the types themselves. The material on computability logic also introduces cirquents: if you think of formulas as trees, then cirquents are directed acyclic graphs.

Computability logic introduces several other new operators, though it keeps the usual meanings for negation, quantification, conjunction, and disjunction.

The existential quantifier for choice is derived like this:

∐ x. a  ≅  a[x:=0] ⊔ a[x:=1] ⊔ a[x:=2] ⊔ ...
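Its dual, the universal quantifier for choice, lets the environment pick the value instead:

∏ x. a  ≅  a[x:=0] ⊓ a[x:=1] ⊓ a[x:=2] ⊓ ...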

I enjoy this because it even seems to bring some more sense into System F and intuitionistic type theory.

Overall, computability logic seems to be a great grab bag of ideas that makes everything else make more sense.

It's not there yet

Computability logic presents a way to bring interaction under the tools we use to formally reason about computing. It relies on ideas that are quite distant from the current foundations of Prolog and Haskell. Not everything has been solved, though, and it seems there's something even better left to discover.

Despite what remains to be found, everything presented here has immense value. Complex interactions are ubiquitous in computing.

It looks like I will have to study a bit of proof theory and deep inference. There's a paper, "Deep Inference and Symmetry in Classical Proofs", that points out how propositional and predicate logic can be condensed into a very small set of deep inference rules that are neatly symmetric.

We are still studying this subject at r/proglangdesign. I have to thank everyone there for helping make sense of this and for digging up great papers. Let's keep going.
