Game semantics & cartesian closed categories
The last blog post fell short by not yielding a correct normalization algorithm, although we modelled a lot of interesting things with natural transformations; enough to form the basis of a programming language experiment. So I explored papers about term rewriting, normalization and normal forms.
Optimal reduction and affine linear logic seem to expose the challenges well. You either share computation and pay in bookkeeping, or you copy terms along with their computation and have less bookkeeping. There are also cases where no bookkeeping is needed at all, and some of that is captured by affine linear logic. That seems to correspond to how my attempt at constructing a normalization algorithm failed.
It seems it's time to dust off game semantics. I got motivated by the paper "Normalisation by Traversals", which seems to be related to "normalization by evaluation" schemes.
Let's start by describing game semantics for natural transformations and functors in cartesian closed categories. Functors come first, as they correspond to types and form arenas.
1 a*b a^b
These have the following semantics as arenas:
1 : Trivial victory for the player.
a*b : The player must present a choice between game a and b. The play in either a or b is justified after the opponent has chosen.
a^b : The player must present a game a as the "function" but can trigger a game b as the opponent. A choice of the game a justifies the game b.
Here the types form a gameplay between two players. The idea of justification is important for describing arenas: when a player makes a move, he must justify it.
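As a side note, the three type formers could be written down as a small Haskell datatype, roughly like this:

-- A rough sketch of the type grammar; the names are just placeholders.
data Ty = Unit          -- 1
        | Prod Ty Ty    -- a*b
        | Exp Ty Ty     -- a^b, read as functions from b to a
        deriving (Show, Eq)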
Recall the natural transformations we had:
const a : a → 1
pair a : a → a*a
cur a b : a → (a*b)^b
ap a b : (a^b)*b → a
fst a b : a*b → a
snd a b : a*b → b
a*b : a*b → a*b
a^b : a^b → a^b
Natural transformations form valid strategies that always win the game.
const a : the player has won.
pair a : ignore the choice and play game a anyway.
cur a b : present a choice between games a and b. On the first choice play a. On the second choice play b, but trigger a game against the opponent.
ap a b : choose both of the games presented. Play a, then play against a^b. Prepare to play b and glue it to the second game chosen.
fst a b : choose the first game and play a.
snd a b : choose the second game and play b.
a*b : present and get a choice game; choose accordingly and play the corresponding game on the choice.
a^b : present and get a function game. Play as a on a request to play; play as b on a response to play.
Composition combines two strategies: choices in one strategy drive the choices in the other.
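For example, composing pair a : a → a*a with fst a a : a*a → a gives back the identity on a:

fst a a ∘ pair a = id : a → a

The fst strategy chooses the first game, and the pair strategy ignores the choice and plays a anyway, so externally the composite just plays a.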
An interesting detail: ap presents a duplication of a game. Duplication may manifest at pair, but it's brought in by ap.
When we compose two strategies, half of the play becomes internal to the overall strategy. Strategies are equal when they are externally equal; therefore normal forms arise as minimal representations of a strategy.
Just like before, we won't get anything particularly "new" here. The resulting language is a variation of the lambda calculus. Valid strategies are formed from justified moves and responses, and minimal descriptions likely form some kind of tree structure.
Normal forms are set up around the descriptions of arenas, and every strategy conveys the shape of the arenas given to it. Composition allows strategies to be merged.
The best approach may be to enumerate the available strategies; after all, the available strategies revolve around justifications of play. We get 6 classes of strategy constructions, because there are 3 ways to compose arenas and each may end up being played on either side.
(1,a): const
(a,1): Ø
(a*b,c): opt (a,c) (b,c)
(c,a*b): fst (c,a)
(c,a*b): snd (c,b)
(a,_): var n (a,b)
(a^b,c): cur (a,c) [b]
(c,a^b): ap (c,a) (b,1)
(c,a): dup (c,a) [a]
Now consider justifications and arenas. It appears that const, cur and opt are justified before the other operations, because a move is not justified before the environment takes an action.
After that, retrieving a variable is justified, because it destroys the current element in the antecedent anyway. Alternatively, duplication is justified, but only once; it essentially works like a backtracking point, though backtracking is not affected by operations on the left side.
The operations fst, snd and ap are justified last.
Additionally we have an id operation that just passes a variable around.
They seem to form normal/neutral forms separated by variables and duplication.
data Norm = Const
          | Opt Norm Norm
          | Lam Norm
          | Var Integer Neut
          | Dup Neut
          deriving (Show, Eq)

data Neut = Ap Neut Norm
          | Fst Neut
          | Snd Neut
          | Ident
          deriving (Show, Eq)
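To get a feel for the representation, a few familiar lambda terms would be written roughly like this (these encodings are guesses at how the constructors are meant to be used):

-- Rough encodings of a few familiar lambda terms; treat these as a sketch.
identity :: Norm
identity = Lam (Var 0 Ident)                        -- λx. x

constant :: Norm
constant = Lam (Lam (Var 1 Ident))                  -- λx. λy. x

apply :: Norm
apply = Lam (Lam (Var 1 (Ap Ident (Var 0 Ident))))  -- λf. λx. f x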
This is obtained by examining natural transformations as justified strategies. It remotely looks like lambda calculus.
Composition is difficult to implement because you have to verify that scopes won't get scrambled. But let's see if we can describe the rules in Haskell first.
When combining normal forms together:
comp :: Norm -> Norm -> Norm
comp Const _ = Const
comp (Opt f g) h = Opt (comp f h) (comp g h)
comp (Lam f) h = Lam (comp f (carve h))
comp (Var i x) _ = Var i x
comp (Dup x) h = ??
Duplication is very difficult to describe, so let's examine it separately. Note that when a scope is crossed at a "lambda" or a duplication, we need to carve a hole for a variable so that scopes won't get tangled.
comp' :: Neut -> Norm -> Norm  -- every clause below produces a normal form
comp' (Ap x f) (Lam g) = comp' x (subs g 0 f)
comp' (Fst x) (Opt f _) = comp' x f
comp' (Snd x) (Opt _ g) = comp' x g
comp' x (Var n y) = Var n (comp'' x y)
comp' x (Dup y) = Dup (comp'' (carve x) y)
comp'' :: Neut -> Neut -> Neut
comp'' a (Ap y g) = Ap (comp'' a y) g
comp'' a (Fst x) = Fst (comp'' a x)
comp'' a (Snd y) = Snd (comp'' a y)
comp'' a Ident = a
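I left carve and subs undefined above. One way to read carve (this is a guess at what it should be, not something derived) is as a de Bruijn shift that bumps free indices to make room for a fresh binder:

-- A possible reading of 'carve' as a de Bruijn shift; an assumption, not a given.
carve :: Norm -> Norm
carve = go 0
  where
    go c Const     = Const
    go c (Opt f g) = Opt (go c f) (go c g)
    go c (Lam f)   = Lam (go (c + 1) f)        -- indices bound below the cutoff stay put
    go c (Var i x) = Var (shift c i) (goE c x)
    go c (Dup x)   = Dup (goE c x)
    goE c (Ap y g) = Ap (goE c y) (go c g)
    goE c (Fst y)  = Fst (goE c y)
    goE c (Snd y)  = Snd (goE c y)
    goE c Ident    = Ident
    shift c i      = if i >= c then i + 1 else i

Note that carve is also applied to neutral spines above, where the analogous shift is goE; and if Dup binds a variable of its own, as the duplication rule below suggests, its case would need the same treatment as Lam. The matching subs g 0 f would substitute f for index 0 in g, which is harder because it has to re-normalize any redexes it creates.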
In conclusion, neutral forms stack on top of neutral forms, while neutral and normal forms eliminate each other.
Now we only need to describe the duplication rule that I left out.
comp (Dup x) h = ??
Obviously 'h' is going to be substituted into 'x'. Perhaps we'll do:
comp (Dup x) h =
    let h' = carve h
        y  = subs' (carve x) 1 (Var 0 h')
    in Dup (comp' y h')
It's horribly convoluted: the duplication in 'x' is removed, but we have to immediately carve a hole for another duplication.
It also reveals something I didn't even discuss yet: 'Dup' and 'Var' may drop high into a stack of neutral forms. When that happens, 'Var' destroys the stack of neutral forms.
But what about when 'Var' drops onto 'Dup'? A simple example would be:
Lam (Dup (Var 1 (Ap y (Var 0 Ident))))
It seems like there should be a cleaner representation for this expression. It's not quite right yet.
Maybe the issue with duplication is that the actual duplication should be closer to exponentials in linear logic:
dup : !a → !a ⊗ !a
This means that ap and cur should also be treated as if they were (a^b ⊗ b) → a and a → (a ⊗ b)^b during normalization.
Alternatively something in this model is just plain wrong.
Conclusion
It's neat how the product and terminal type end up aligning in semantics with how they'd be defined for monoidal closed categories.
The lambda calculus is an internal language of cartesian closed categories. You could always readback/writeback into it to obtain normal forms for cartesian closed categories.
I'd have liked to find new structures that convey the normal forms of cartesian closed categories somehow.
Maybe I still have to read papers about the categorical abstract machine and see if there's anything I missed.