Horn-clause-based VM and Frontend for Lever 0.10.0
The paper by Jorge Navas, "Horn-Clauses as an Intermediate Representation for Program Analysis", referred to in a previous post, turned out to be a gold nugget.
I adjusted Navas' design for a virtual machine and wrote a bytecode compiler for it. The results are great so far. Here's some actual output from the frontend:
proc1(v4; v5) <-
global("=="; v6), call(v6, v4, 1; v7), is_true(v7;)? proc2(;v5).
global("=="; v6), call(v6, v4, 1; v7), ¬is_true(v7;)? global("=="; v8), call(v8, v4, 2; v9), is_true(v9;)? proc2(;v5).
global("=="; v6), call(v6, v4, 1; v7), ¬is_true(v7;)? global("=="; v8), call(v8, v4, 2; v9), ¬is_true(v9;)? global("+"; v10), global("fibonacci"; v11), global("-"; v12), call(v12, v4, 2; v13), call(v11, v13; v14), global("fibonacci"; v15), global("-"; v16), call(v16, v4, 1; v17), call(v15, v17; v18), call(v10, v14, v18; v5).
This is the code for the fibonacci function. It is quite long because it needs to fetch several global variables. Here's the same code cleaned up a bit and transformed into a form equivalent to what Navas provides:
fibonacci(v4; v5) <-
v4 == 1, v5 = 1.
v4 != 1, v4 == 2, v5 = 1.
v4 != 1, v4 != 2, fibonacci(v4-2; v14), fibonacci(v4-1; v18), v5 = v14 + v18.
Every clause is a possible scenario that the program can take.
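The clause semantics can be sketched directly in Python: try each clause in order, and the first one whose guards all hold produces the output. The encoding below is my own illustration, not how the VM actually stores clauses:

```python
def fibonacci(v4):
    # Each clause is a guard plus a body; the first clause whose guard
    # holds decides the result, mirroring the three IR clauses above.
    clauses = [
        (lambda n: n == 1,            lambda n: 1),
        (lambda n: n != 1 and n == 2, lambda n: 1),
        (lambda n: n != 1 and n != 2,
         lambda n: fibonacci(n - 2) + fibonacci(n - 1)),
    ]
    for guard, body in clauses:
        if guard(v4):
            return body(v4)
    raise ValueError("no clause matched")

print([fibonacci(n) for n in range(1, 8)])  # [1, 1, 2, 3, 5, 8, 13]
```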
The virtual machine runs off a list of procedures. Each procedure has inputs and outputs, and procedures can call other procedures in the same program. Loops are implemented with tail calls, so tail calls have to be optimized here.
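Tail calls as a looping mechanism can be illustrated with a trampoline. This is a hypothetical Python rendition of the idea, not the VM's actual mechanism:

```python
def countdown(n, acc=0):
    # A loop expressed as a tail call: instead of recursing directly,
    # return a thunk describing the next call.
    if n == 0:
        return acc
    return lambda: countdown(n - 1, acc + n)

def trampoline(result):
    # Run tail calls iteratively so the stack stays flat.
    while callable(result):
        result = result()
    return result

print(trampoline(countdown(100000)))  # sums 1..100000 without overflowing the stack
```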
Most instructions have a list of inputs and outputs. The clauses are encoded with instructions that jump forward in the procedure when they trigger. At the target of every jump there is a terminator instruction that may supply additional parameters for the guard.
Iterators, pattern matching, control flow and exceptions are all implemented with these guards.
For example, the fibonacci above would be encoded like this:
inputs=[v4], outputs=[v5] tmp=[v14, v18]
o_is_true(v4 == 1) output= guard=L2
v5 = 1
guard L2 output=
o_is_true(v4 == 2) output= guard=L3
v5 = 1
guard L3 output=
fibonacci(v4 - 2) output=[v14]
fibonacci(v4 - 1) output=[v18]
v5 = v14 + v18
guard (end of procedure)
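How such guarded forward jumps execute can be sketched with a tiny interpreter. The instruction tuples and opcode names here are my own simplifications, not the actual VM encoding:

```python
def run(procedure, inputs):
    # Each instruction is (op, args, guard_target); when a guarded
    # check fails, control jumps forward to the next clause.
    env = dict(inputs)
    pc = 0
    while pc < len(procedure):
        op, args, guard = procedure[pc]
        if op == "is_true":
            pc = pc + 1 if env[args[0]] else guard
            continue
        if op == "const":
            env[args[0]] = args[1]
        elif op == "ret":
            return env[args[0]]
        pc += 1
    raise RuntimeError("fell off the end of the procedure")

# v5 = 1 if c else 2, encoded with a forward guard jump.
proc = [
    ("is_true", ["c"], 3),        # on false, jump to index 3
    ("const",  ["v5", 1], None),
    ("ret",    ["v5"], None),
    ("const",  ["v5", 2], None),  # guard target: the second clause
    ("ret",    ["v5"], None),
]
print(run(proc, {"c": True}), run(proc, {"c": False}))  # 1 2
```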
If there is a raise instruction, it sits right before its guard.
The IR clauses are required to be in SSA form before translation to VM bytecode. The frontend builds the bytecode in reverse, simultaneously using a linear register allocation scheme to fit the variable flow into the VM and calculating the jumps the guards have to make.
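Building in reverse makes a linear allocation scheme natural: scanning backwards, a variable's last use is seen first, so a register can be grabbed there and released again at the variable's definition. A rough sketch under those assumptions (this is not the frontend's actual allocator):

```python
def allocate(instructions):
    # instructions: list of (defined_vars, used_vars) in forward order,
    # assumed SSA (each variable defined exactly once). Inputs are read
    # before outputs are written, so a dying input may share a register
    # with an output of the same instruction.
    assignment, free, next_reg = {}, [], 0
    for defs, uses in reversed(instructions):
        for v in defs:
            if v in assignment:       # the definition ends the live range
                free.append(assignment[v])
        for v in uses:
            if v not in assignment:   # last use, seen first in reverse
                assignment[v] = free.pop() if free else next_reg
                next_reg += 0 if free or v in assignment else 0
                if assignment[v] == next_reg:
                    next_reg += 1
    return assignment, next_reg

# v1 = ...; v2 = v1 + 1; v3 = v2 * 2; return v3
code = [(["v1"], []), (["v2"], ["v1"]), (["v3"], ["v2"]), ([], ["v3"])]
regs, count = allocate(code)
print(regs, count)  # the chain fits in a single register
```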
Each instruction is annotated with source location information of the form:
[byte_count, col0, lno0, col1, lno1, sources_index]
The VM encoding is entirely decodable back to the IR that produced it.
Variable reflow in the IR
The whole translation unit contains every function and their closures. The program that produces the IR can annotate the code with flow variables instead of variables that satisfy the constraints required by the IR. The flow variables are then converted into a constrained variable flow.
The flow-variable conversion takes place in two passes. The first pass computes define-sets for each procedure by flowing the defined variables forwards until a fixed point is reached and none of the define-sets change any longer.
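The first pass might look roughly like this; the call-graph representation and names are my own simplification of the idea:

```python
def define_sets(procedures, calls):
    # procedures: {name: set of variables defined locally}
    # calls: (caller, callee) edges; variables defined in the caller
    # flow forward into the callee's define-set.
    defs = {name: set(local) for name, local in procedures.items()}
    changed = True
    while changed:                    # iterate until a fixed point
        changed = False
        for caller, callee in calls:
            before = len(defs[callee])
            defs[callee] |= defs[caller]
            changed |= len(defs[callee]) != before
    return defs

procs = {"entry": {"x"}, "loop": {"i"}, "body": set()}
edges = [("entry", "loop"), ("loop", "body"), ("body", "loop")]
print(define_sets(procs, edges))  # 'x' and 'i' reach 'body' through the loop
```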
The second pass inserts variables into procedures based on where they originate. This pass also works across closure frame boundaries and therefore solves the lexical-scope variable flow as well.
At this point the flow variables are rewritten out of the program. Then redundant move instructions and redundant procedures are removed.
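Redundant move removal can be sketched as a simple copy-propagation pass over SSA-style instructions. This is an illustrative simplification, not the actual pass:

```python
def remove_moves(instructions):
    # instructions: ("move", dst, src) or (op, dst, *args), SSA assumed.
    # Rewrite uses of each move's destination to its source, then drop
    # the move itself.
    alias, out = {}, []
    resolve = lambda v: alias.get(v, v)
    for ins in instructions:
        if ins[0] == "move":
            alias[ins[1]] = resolve(ins[2])
        else:
            out.append(ins[:2] + tuple(resolve(a) for a in ins[2:]))
    return out

code = [("const", "a", 1), ("move", "b", "a"), ("add", "c", "b", "b")]
print(remove_moves(code))  # [('const', 'a', 1), ('add', 'c', 'a', 'a')]
```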
Desugaring and control flow construction
Since we do not need to worry about variable scope, and for control flow we basically piece together procedure blocks, it would theoretically be possible to encode the program directly into the necessary form. However, I prefer an intermediate step and first produce these kinds of constructs from the parse tree. It allows me to desugar some of them:
BranchExcept(statements, exception_var, handler_statements)
RelationalCall(callee, input_exprs, output_slots)
InplaceAssign(slot, op_expr, value_expr)
Cond(cond, statements, statements)
ForHeader(slot, rest, value, cond)
Match(value, pattern_expr, match_slots)
Function(slots, default_exprs, return_flowvars, statements)
Let([(flow-variable, expr)], expr)
match_slot, slot, expr:
match_slot, slot, expr, cond:
These combinators cover large parts of the language, although I may add some smaller combinators into the mix.
If BranchExcept and return/continue/break are used together, the BranchExcept has to flag every control flow path and "lift" the flags back into control flow outside it. This operation creates additional flow variables.
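The flag-lifting can be illustrated at the source level; this is my own Python rendition of the idea, where a break inside a protected block becomes a flow variable that is set inside and branched on outside:

```python
def consume(items):
    # Conceptually: 'for x in items: try: if x < 0: break; ... finally: ...'
    # The break cannot leave the protected region directly, so it sets a
    # flag inside, and the flag is lifted back into control flow outside.
    seen = []
    for x in items:
        break_flag = False
        try:
            if x < 0:
                break_flag = True   # stands in for 'break'
            else:
                seen.append(x)
        finally:
            pass                    # the handler still runs on every path
        if break_flag:              # lifted flag rejoins the control flow
            break
    return seen

print(consume([1, 2, -1, 3]))  # [1, 2]
```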
Scope does not need special handling. Flow variables are retrieved from a simple table. If the table does not have a flow variable for the symbol, one is created, provided the symbol is allowed to define a variable. Otherwise a fetch from the global variables is generated.
To allow recursive definitions, the pattern Assign(Symbol(), Function|Generator) must be treated specially. This ensures the closure frames are merged into a recursive shape when necessary.
Continuations and the context are the two main elements in building the IR from the combinators. The context takes care of scope and state. The continuations point out places where instructions can be inserted. A jump to a continuation generates a procedure if the continuation was not already pointing to one.
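A rough Python rendition of that idea, with all names invented for illustration: a continuation collects instructions, and jumping to it forces it into a labeled procedure on first use, so later jumps share the same procedure.

```python
class Continuation:
    _counter = 0

    def __init__(self):
        self.instructions = []
        self.label = None             # materialized lazily on first jump

    def emit(self, ins):
        self.instructions.append(ins)

    def jump(self):
        # Jumping generates a procedure (a label here) if the
        # continuation was not already pointing at one.
        if self.label is None:
            Continuation._counter += 1
            self.label = "proc%d" % Continuation._counter
        return ("jump", self.label)

exit_cont = Continuation()
exit_cont.emit(("ret", "v5"))
j1 = exit_cont.jump()
j2 = exit_cont.jump()
print(j1, j2)  # both branches share the same generated procedure
```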
The combinators roughly form classes by what is required from them.
- The statements take in a continuation and return a continuation.
- Loop headers take in an entry continuation and a leave continuation; they return an inner loop continuation.
- Return, break and continue capture the continuation and add it to the context stack. They return a fake, non-generating continuation.
- BranchExcept behaves in an ordinary manner, but it needs to generate flags and portals if the try block added items to the context stack, in order to route those through itself.
- Conditionals take in a continuation and produce one success and one failure continuation. If they are used as expressions, the failure is directed to raising a type error.
- Expressions take in a continuation, and produce a continuation plus a flow variable.
- Slots take in a continuation, and return a continuation and a slot object. The slot object has three functions.
read(cont) -> val, cont
write(cont, val) -> cont
match(cont, val) -> cont, cont
- If the slot is a match slot, it defines 'match' and derives 'write'. Otherwise it does it the other way around: defines 'write' and derives 'match'.
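The write/match derivation can be sketched as follows; this is a Python rendition with the continuation threading elided for brevity, and the class names are my own:

```python
class Slot:
    # An ordinary slot defines 'write'; 'match' is derived from it and
    # always succeeds, since plain assignment cannot fail.
    def write(self, val):
        self.value = val

    def match(self, val):
        self.write(val)
        return True

class EqMatchSlot(Slot):
    # A hypothetical match slot: matches only a fixed constant. It
    # defines 'match' and derives 'write' as "match or raise".
    def __init__(self, expected):
        self.expected = expected

    def match(self, val):
        if val != self.expected:
            return False
        self.value = val
        return True

    def write(self, val):
        if not self.match(val):
            raise TypeError("no match")

s = Slot()
s.write(42)
m = EqMatchSlot(1)
print(s.value, m.match(1), m.match(2))  # 42 True False
```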
Since a tuple is a slot, there are cases where it can be multiplexed or hidden so that no tuple is generated at all. Furthermore, a function that returns only obvious tuples returns multiple variables instead of tuples. And it is checked that when a function returns tuples, it returns them in only one size.
In any case, I am glad that I learned about Navas' work. There are probably still tons of things I could learn from it.