# Geometry of interaction and optimal reduction

"The geometry of optimal lambda reduction" presents a nice view into Lamping's algorithm that implements optimal reduction.

"The geometry of implementation" shows a way to interpret interaction nets as program flow graphs.

The papers described yield relatively interesting machinery that is simple to explain.

## Bus operations

The first element in the system is the bus. It's an interface between the ports.

```
------------------------------------------
-------------)              (----|--------
-------------|-------------------|
             |----------------------------
```

The bus has two ends and it connects wires in an orderly manner. I use a symmetric presentation where a half bus is represented as a list.

Within that list,

• '0' represents a terminal point,
• '1' represents a wire through,
• '2' represents a join of wires.

Buses can be catenated and intersected. On intersection, terminal points and joins are eliminated; catenation just joins two buses end to end.
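The half-bus encoding can be sketched in a few lines of Python. This is a minimal sketch under my own reading of the text: the function names (`wires_near`, `wires_far`, `catenate`, `intersect`) are illustrative, and the element-by-element intersection rule is one possible interpretation of "terminal points and joins eliminate", not a definition taken from the papers.

```python
def wires_near(bus):
    """Count wires on the half bus's own side: a terminal (0) and a
    plain wire (1) each carry one wire, a join (2) merges two."""
    return sum(2 if s == 2 else 1 for s in bus)

def wires_far(bus):
    """Count wires facing the other half bus: a terminal carries
    nothing across, a wire and a join each carry one."""
    return sum(0 if s == 0 else 1 for s in bus)

def catenate(a, b):
    """Catenation just joins two buses end to end."""
    return a + b

def intersect(a, b):
    """One possible reading of intersection: line the halves up
    element by element; terminal points and joins eliminate, and
    only plain wires through (1 against 1) survive."""
    assert len(a) == len(b)
    return [s for s, t in zip(a, b) if s == 1 and t == 1]
```

For example, `intersect([1, 0, 2], [1, 1, 1])` keeps only the first wire, since the terminal and the join eliminate against their counterparts.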

The buses form "croissants" and "brackets" mentioned in the paper.

## Node operations

Nodes are ports chained together into "fans". A node may have one principal port, distinguished with a depth mark. When two principal ports connect, the nodes interact.

```
--[]---\        /---[]--
        ---[]---
--[]---/        \---[]--
```

During interaction, the bus between the two fans is first flushed out, so the fans end up interacting with its ends.

```
--[][--\        /--][]--
        --------
--[][--/        \--][]--
```

We end up with a graph where there's nothing between the fans.

```
--[]---\        /---[]--
        --------
--[]---/        \---[]--
```

If the two nodes carry the same mark, they annihilate each other.

```
---[][]---

---[][]---
```

Otherwise the nodes slide past each other and "commute".

```
       /-------\
--[]---         ---[]--
       \--\ /--/
           X
       /--/ \--\
--[]---         ---[]--
       \-------/
```

Only the fans forming principal-port pairs progress the computation, so the croissants and brackets in the buses are there solely for keeping track of how fans should connect.
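The two interaction rules can be sketched as a single dispatch on the marks. This is a hedged sketch, not an implementation from the papers: a fan is represented as a `(mark, aux_ports)` pair, and for the commuting case the result only reports the duplicated fans symbolically rather than wiring up the full crossing.

```python
def interact(a, b):
    """Rewrite one pair of fans meeting at their principal ports.

    a and b are (mark, aux_ports) pairs. Returns the pieces the
    pair rewrites into.
    """
    mark_a, aux_a = a
    mark_b, aux_b = b
    if mark_a == mark_b:
        # Annihilation: both fans vanish and their auxiliary
        # ports are wired straight across.
        return [("wire", p, q) for p, q in zip(aux_a, aux_b)]
    # Commutation: a copy of the opposite fan slides onto each
    # auxiliary port; the copies get cross-wired in the middle
    # (the 'X' in the picture above).
    return ([("fan", mark_b, p) for p in aux_a] +
            [("fan", mark_a, q) for q in aux_b])
```

With equal marks the pair rewrites to two plain wires; with distinct marks it rewrites to four fans, which is where the graph can grow.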

## Token passing interpretation

There is also a way to "evaluate" the graph without rewriting it: pass a token through the network. The token has as many stacks as there are lines in the buses. When the bus changes, the stacks are either eliminated, introduced, folded or unfolded. When the token passes through a principal port, it selects an ancillary port by popping a record from one of the stacks; when it passes through an ancillary port, it pushes a record onto a stack.

The idea is that once these kind of trace tokens exit the graph, they contain the same information as what the normalized version of the graph would contain.
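The stack discipline for a single fan can be sketched as follows. This is a minimal sketch under the simplifying assumption of one stack per fan mark; `cross_fan` is an illustrative name, not from the papers.

```python
def cross_fan(stacks, mark, port):
    """Move a token across one fan with the given mark.

    port is "principal" when the token enters through the principal
    port, or an auxiliary port index otherwise. Returns the port the
    token exits from.
    """
    if port == "principal":
        # Popping a record selects which auxiliary port to leave by.
        return stacks[mark].pop()
    # Entering an auxiliary port records its index on the stack,
    # and the token leaves through the principal port.
    stacks[mark].append(port)
    return "principal"

# Crossing a facing pair of fans with the same mark behaves like
# annihilation: the token comes out of the matching auxiliary port.
stacks = {"f": []}
out = cross_fan(stacks, "f", 1)             # enter aux 1, exit principal
port = cross_fan(stacks, "f", "principal")  # enter the facing fan
# port == 1, and the stack is empty again
```

The push on entry and the pop on exit cancel, which is why the token ends up carrying the same information as the normalized graph would.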

## Translation of lambda calculus

Asperti's "The optimal implementation of functional programming languages" contains a nice explanation for how lambda calculus translates into the above interaction nets.

Although graph reducers are simple to explain, the implementation can be quite difficult and bug-prone.

## The use-cases

Although optimal reduction is not optimal in practice for reducing lambda calculus, it has other uses that I'm interested in.

The main thing I'm interested in is the interaction part of interaction nets. I also like that the whole thing resembles logic programming, in that connecting things up resembles forming constraints.

## "Eta-conversion" issue

There's one problem that I'm not quite sure about. We may get issues similar to the ones eta reduction solves:

``λy.λx.y x``

Here is an alternative presentation that might correspond better with interaction nets:

``((x,r),(x,r))``

If we'd like to know whether this is equivalent to a similar but simpler program:

``(y,y)``

It may be possible to do this by comparing the two with the use of token passing:

```
L↑(x,r) + R↑(x,r)
= L↑y + R↑y
```

We go to the left and right branches simultaneously and see what is in each. Are we stuck at this point? No, because we can decompose 'y'.

```
LL↑x + LR↑r + RL↑x + RR↑r
= LL↑y + LR↑y + RL↑y + RR↑y
```

Now we can eliminate the variables:

```
LLLR + LRRR + RLLL + RRRL
= LLLR + LRRR + RLLL + RRRL
```

Since these programs would have the exact same behavior, we can conclude they are equivalent.
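The decomposition steps above can be sketched directly: represent each program as a nested pair of variable names, enumerate its L/R traces, and decompose variables along the way. The names (`traces`, `env`) are mine; the sketch stops at the named-variable stage (``LL↑x + LR↑r + …``) rather than eliminating the variables into pure paths.

```python
def traces(prog, env, path=""):
    """Map each L/R path through a program to the leaf it reaches,
    decomposing variables bound in env along the way."""
    while not isinstance(prog, tuple) and prog in env:
        prog = env[prog]                      # e.g. decompose y -> (x, r)
    if isinstance(prog, tuple):
        left, right = prog
        return {**traces(left, env, path + "L"),
                **traces(right, env, path + "R")}
    return {path: prog}

env = {"y": ("x", "r")}
a = traces((("x", "r"), ("x", "r")), {})   # the explicit presentation
b = traces(("y", "y"), env)                # the simpler program
# a == b == {"LL": "x", "LR": "r", "RL": "x", "RR": "r"}
```

Both programs yield the same trace map, which is the sense in which they have the same behavior.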