How many languages am I going to write?
You've seen me open up several more repositories this year:
- suoml - a small subtyping type inferencer prototype, based on Stephen Dolan's thesis.
- suchlog - a Prolog spinoff.
- lpirk - a horn-clause based IR.
The last of these mentions yet another language, coyote.
The reason for most of this is that I took a horn clause to the knee. Now programming with Python and Lever feels like limping, and I have had to change my plans.
When I wrote the horn-clause based IR for Lever, it started catching variable-flow bugs for me. In addition, the IR dump suggested it could be easier to read than the original language. The IR is also very simple, and it would be even simpler without my input language's antics. I concluded that the source language gets something wrong.
Logic programming implementations and type inferencers
have a lot in common with constraint solvers.
I wrote suchlog to explore these ideas at the implementation level.
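To make that connection concrete, here is a minimal sketch of the shared core: a unifier that solves equality constraints. This is my own Haskell illustration, not suchlog code, and it omits the occurs check:

```haskell
import Control.Monad (foldM)
import qualified Data.Map as M

-- Terms: logic variables and constructors applied to arguments.
data Term = Var String | App String [Term]
  deriving (Eq, Show)

-- A substitution maps variable names to terms.
type Subst = M.Map String Term

-- Follow variable bindings until we hit a non-variable or a free variable.
walk :: Subst -> Term -> Term
walk s t@(Var v) = maybe t (walk s) (M.lookup v s)
walk _ t         = t

-- Solve the constraint t1 = t2, extending the substitution or failing.
-- Both a Prolog engine and a Hindley-Milner style inferencer spend
-- most of their time in a loop like this one.
unify :: Term -> Term -> Subst -> Maybe Subst
unify t1 t2 s = case (walk s t1, walk s t2) of
  (Var a, Var b) | a == b -> Just s
  (Var a, t)              -> Just (M.insert a t s)   -- no occurs check here
  (t, Var a)              -> Just (M.insert a t s)
  (App f as, App g bs)
    | f == g && length as == length bs
                          -> foldM (\s' (a, b) -> unify a b s') s (zip as bs)
  _                       -> Nothing
```

For example, `unify (App "f" [Var "x"]) (App "f" [App "zero" []]) M.empty` succeeds and binds x to zero.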
lpirk started out after I realised that delimited continuations
could dramatically simplify the Lever language implementation.
All of the more complex control flow constructs could collapse
into a much simpler set of constructs.
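To sketch the idea, here is an exception-like early exit built from nothing but shift/reset. This is my own illustration in Haskell using the transformers package, not Lever or lpirk code:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)

-- An "exception" built from delimited continuations:
-- shiftT captures the rest of the resetT block, and aborting
-- is simply never calling that captured continuation.
safeDiv :: Int -> Int -> IO Int
safeDiv x y = evalContT $ resetT $ do
    q <- if y == 0
           then shiftT $ \_abort -> return 0  -- drop the continuation
           else return (x `div` y)
    lift $ putStrLn ("quotient: " ++ show q)
    return q
```

When y is zero, the captured continuation (the print and the final return) is simply dropped, which is exactly what an exception handler does at its delimiter.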
I also realised it could be fun to help out a friend with his language project,
so instead of writing a frontend strictly for my own language,
I decided to allow multiple frontends and
build the implementation from interchangeable elements that can be swapped out
to get slightly different programming languages from a single system.
lpirk's implementation is going to differ from the Lever implementation
in the sense that it is intended to be type-inferred and type-checked.
I'm planning to use structural typing and to separate types from memory
layout, such that memory representation is no longer
determined strictly by types.
The reason for this is that programs set out as a linear sequence of actions
seem more like assembly code to me now.
It is no longer something you would be expected to write by hand.
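To illustrate what separating types from memory layout can look like, here is a loose sketch in Haskell. The names (Point, BoxedPoint, PackedPoint) are mine and this is not lpirk's actual design:

```haskell
-- One structural interface, two memory layouts behind it.
class Point p where
  getX :: p -> Double
  getY :: p -> Double

-- Layout 1: a plain boxed record.
data BoxedPoint = BoxedPoint { bx :: Double, by :: Double }
instance Point BoxedPoint where
  getX = bx
  getY = by

-- Layout 2: strict fields, unboxed in place.
data PackedPoint = PackedPoint {-# UNPACK #-} !Double {-# UNPACK #-} !Double
instance Point PackedPoint where
  getX (PackedPoint x _) = x
  getY (PackedPoint x y) = y

-- Code written against the interface is layout-agnostic:
norm :: Point p => p -> Double
norm p = sqrt (getX p * getX p + getY p * getY p)
```

Code written against the structural interface stays layout-agnostic, so the representation can change without touching it.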
But there is also the Curry-Howard correspondence,
and the desire to study it.
The isomorphism between intuitionistic logic proofs and lambda calculus suggests that computer programs are proofs and that types are propositions. To someone who hasn't thoroughly understood what a formal proof is in logic, this may sound difficult to grasp. It turns out to be an incredibly useful property in practice. To understand it in the large, you have to first understand it in a simple context. Therefore let's examine some trivial properties of list data structures and natural numbers through logic.
Peano arithmetic is formed by the axioms that we have a zero and that every number has a successor. We can represent these axioms like this:
             nat x ├ x
─────     ───────────────
nat 0     nat x ├ succ(x)
For lists we can take the axioms that we have an empty list, and a way to produce a new list by inserting an item into an existing one.
             list x ├ x   some a ├ a
──────────   ───────────────────────
list empty   list x ├ cons(a, x)
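These axioms transcribe directly into algebraic datatypes. As a sketch in Haskell (the names Nat, Zero, Succ, List, Empty and Cons are my choice; the length program below uses these definitions):

```haskell
-- nat 0, and nat x ├ succ(x):
data Nat = Zero | Succ Nat
  deriving Show

-- list empty, and list x ├ cons(a, x):
data List a = Empty | Cons a (List a)
  deriving Show
```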
Does every list have a length? It is such a trivial question that you already know the answer. Yet it produces this kind of proof:
list empty                      list x, nat y ├ length(x, y)
─────────────────────────────   ───────────────────────────────────────────
list empty ├ length(empty, 0)   list x, nat y ├ length(cons(_, x), succ(y))
────────────────────────────────────────────────────────────────────────────
                    list x, nat n ├ length(x, n)
This proof can be encoded as a program:
import Prelude hiding (length)   -- avoid clashing with Prelude's length

length :: List a -> Nat
length = \a -> case a of
    Empty    -> Zero             -- the left branch: length(empty, 0)
    Cons _ x -> Succ (length x)  -- the right branch: length(cons(_, x), succ(y))
The way this proof is constructed will likely not satisfy everyone. It may not be closed as cleanly as people would hope. The paper "A taste of linear logic" by Philip Wadler has a nice explanation of this.
How about types as propositions? We can examine that through a simpler example:
water, powder, powder ∧ water ⊃ coffee
──────────────────────────────────────
coffee
This proposes that if you have a supply of water and coffee powder,
and you have a method to create coffee from water and powder,
then you can produce coffee.
People doing linear logic may have only one filter in their coffee machine,
so the rule powder ∧ water ⊃ coffee
is consumed in the process and you
get coffee only once.
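Here is the same proposition written as types, in a Haskell sketch of my own (all the names are made up for illustration):

```haskell
data Water  = Water
data Powder = Powder
data Coffee = Coffee

-- The implication powder ∧ water ⊃ coffee becomes a function type,
-- and the conjunction becomes a pair:
type Brew = (Powder, Water) -> Coffee

-- The proof of the sequent is just function application:
makeCoffee :: Water -> Powder -> Brew -> Coffee
makeCoffee water powder brew = brew (powder, water)
```

Haskell's types are not linear, so nothing stops makeCoffee from calling brew twice; in a linear system the Brew value would be consumed by its first use.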
When types are used in enough different ways,
it doesn't really make sense to always give them a predefined,
unchangeable memory representation.
These ideas have been known for much longer than they have been used for programming.
If programming is hand-writing complex proofs, is it any surprise that people make so many mistakes while programming? Shouldn't the computer generate the proofs?
And if you feel that I didn't answer the question at the beginning, how many languages am I going to write? Maybe hundreds of them, or maybe just a few. Here I am only explaining why it's happening at this time.