Sketch of a language

I have been thinking about writing a typed language for some time.

Dependent typing

A common way to introduce dependent typing is to let a function's variable appear in its type.

A function can be written as λx. value, where x is a variable that gets bound in the body. The same binding extends over the function's type.

To illustrate how this works, it's best to show how a call is evaluated.

(λx. value : ∏x:a. type) argument

The function type also states the type of the argument being passed in. The argument is substituted into both value and type. Writing substitution with brackets, we get:

value {x:=(argument:a)} : type {x:=(argument:a)}
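
For a concrete instance, suppose we had a length-indexed vector type and a replicate function (both hypothetical here, purely for illustration):

(λx. replicate x zero : ∏x:nat. vector nat x) 3

Substituting the argument gives:

replicate 3 zero : vector nat 3

so the type of the result records the length of the vector that was produced.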

My problem with this approach is that it doesn't translate well to linear typing.

There's a type-indexing scheme that is a variation of dependent typing. The idea is to create types that mirror values into type space. We can then give some type variables quantifiers so we don't need to lug the mirrored values around. For example, with natural numbers we could have:

data nat¹ : nat → type where
    zero¹ : nat¹ zero
    succ¹ : nat¹ n → nat¹ (succ n)

Now we have a way to get dependent typing without duplicating values. Here's how you could ensure that predecessor is only taken from a value that has a predecessor.

predecessor : ∀x. nat¹ x → is_succ x → ∃y. nat¹ y
predecessor (succ¹ x) refl = x

Now if we were to call this:

predecessor 3 3_is_succ

We would infer that the 3 passed in is a nat¹ 3, since there is only one mirror value it can inhabit, so the x in the pattern ends up being 2. And we also have a simple proof for the is_succ statement.
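
As a rough sketch of how this mirroring could look in an existing language, here is a Haskell rendition using GADTs. The names Nat1, IsSucc and SomeNat1 are mine, and the encoding only approximates the notation above.

{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- nat is mirrored into the type level; Nat1 n ties a runtime value to
-- the type-level index n, and IsSucc n is the proof that n has a
-- predecessor.
data Nat = Zero | Succ Nat

data Nat1 (n :: Nat) where
  Zero1 :: Nat1 'Zero
  Succ1 :: Nat1 n -> Nat1 ('Succ n)

data IsSucc (n :: Nat) where
  ItIsSucc :: IsSucc ('Succ n)

-- Existential wrapper standing in for ∃y. nat¹ y.
data SomeNat1 where
  SomeNat1 :: Nat1 n -> SomeNat1

-- The proof argument rules out the Zero1 case, so the match is total.
predecessor :: Nat1 n -> IsSucc n -> SomeNat1
predecessor (Succ1 x) ItIsSucc = SomeNat1 x

-- Roughly the call "predecessor 3 3_is_succ" from the text.
three :: Nat1 ('Succ ('Succ ('Succ 'Zero)))
three = Succ1 (Succ1 (Succ1 Zero1))

two :: SomeNat1
two = predecessor three ItIsSucc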

If we abstract this out, the mirroring doesn't need to be exposed. This means we get to represent dependent types the way they were already represented, though we have a different viewpoint on them that also relaxes the notation.

predecessor : ∀x. is_succ x → (x : nat) → nat

Type checking happens through constraint solving. Basically, it could be done by augmenting bidirectional type checking with logic-programming techniques.
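
To make that a little more concrete, here is a minimal Haskell sketch of the flavor I have in mind: bidirectional infer/check functions that emit equality constraints for a separate solver instead of comparing types on the spot. The term language is only the simply typed fragment and all names are hypothetical.

-- Types include metavariables (TMeta) that the solver will instantiate.
data Ty = TInt | TFun Ty Ty | TMeta Int deriving (Eq, Show)
data Tm = Var String | Lam String Tm | App Tm Tm | Lit Int | Ann Tm Ty

type Ctx = [(String, Ty)]
type Constraint = (Ty, Ty)        -- an equation handed off to the solver

-- infer returns a type plus the constraints it gathered along the way.
infer :: Ctx -> Tm -> Maybe (Ty, [Constraint])
infer ctx (Var x)   = do t <- lookup x ctx; pure (t, [])
infer _   (Lit _)   = pure (TInt, [])
infer ctx (Ann e t) = do cs <- check ctx e t; pure (t, cs)
infer ctx (App f a) = do
  (tf, cs1) <- infer ctx f
  case tf of
    TFun dom cod -> do cs2 <- check ctx a dom
                       pure (cod, cs1 ++ cs2)
    _            -> Nothing      -- a fuller version would split a TMeta here
infer _ (Lam _ _)   = Nothing    -- lambdas are checked, not inferred

-- check pushes a known (or partially known) type inward.
check :: Ctx -> Tm -> Ty -> Maybe [Constraint]
check ctx (Lam x b) (TFun dom cod) = check ((x, dom) : ctx) b cod
check ctx e t = do
  (t', cs) <- infer ctx e
  pure ((t, t') : cs)            -- defer the comparison to the solver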

Normal form expressions:

Expr+ → Expr+           constructs a function type
x ↦ Expr+               constructs a function
∀x. Expr+               constructs a quantifier
Var : Expr+             constructs a type mirror
case x block            constructs a case elimination
                          it's a normal form because types flow into block.
Enum [...]              constructs a tagged value.

Neutral form expressions:

Expr Expr+              function call, eliminates function
Variable                variable from the environment
EnumType [...]          enumeration type, we know what it is.

Normal-form subexpressions are marked with a + suffix.
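
Written out as a pair of mutually recursive datatypes (a Haskell sketch, with constructor names of my own choosing), the split looks roughly like this:

data Normal
  = Pi Normal Normal        -- Expr+ → Expr+, a function type
  | Lam String Normal       -- x ↦ Expr+, a function
  | Forall String Normal    -- ∀x. Expr+, a quantifier
  | Mirror String Normal    -- Var : Expr+, a type mirror
  | Case String [Normal]    -- case x block, a case elimination
  | Enum [String]           -- Enum [...], a tagged value
  | Neu Neutral             -- neutral expressions embed into normal forms

data Neutral
  = App Neutral Normal      -- Expr Expr+, a function call
  | Var String              -- variable from the environment
  | EnumType [String]       -- enumeration type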

When a value with a quantifier is typechecked, the expression is constrained in the scope.

When type inference provides a quantified expression, it adds a free variable to the scope.

The mirrors introduce constraints whenever they appear as types. The mirrored type may differ from the actual one, and in some cases it may even have to differ.

Bidirectional type checking cues how the variables and structures in the language should behave. If we reach a variable in a normal form, we can likely infer what that variable means from the context; otherwise we have to rely on the scope.

Type declarations as structures

I think I'd like type declarations as expressions. I would likely do it this way:

maybe : type → type
maybe a = {nothing | just a}

If structures match, they're the same type.
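
Haskell has no anonymous structural sums, but type synonyms give a rough sketch of the "structures match, same type" behaviour, since synonyms are compared by what they expand to. The names below are mine.

type Maybe' a = Either () a    -- roughly {nothing | just a}
type Option a = Either () a    -- a structurally identical declaration

just3 :: Maybe' Int
just3 = Right 3

alsoJust3 :: Option Int
alsoJust3 = just3              -- accepted: both expand to Either () Int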

Structured recursion

The language would not come with general recursion. Instead, structured recursion schemes would be used.

nat : type
nat = inductive (nat ↦ {zero | succ nat})

list : type → type
list t = inductive (list ↦ {empty | cons t list})

inductive would require that the given type is strictly positive, though it would obtain the proof through an implicit variable. Types could declare patterns to fill themselves in implicitly.
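
For comparison, here is roughly the same construction in Haskell's recursion-scheme style: inductive corresponds to taking the fixed point of a base functor, and explicit recursion is replaced by a fold. The names Fix, cata, NatF and ListF follow common usage rather than the sketch above.

{-# LANGUAGE DeriveFunctor #-}

-- Fixed point of a (strictly positive) base functor.
newtype Fix f = In (f (Fix f))

-- The structured recursion scheme: fold an algebra over the structure.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (In x) = alg (fmap (cata alg) x)

data NatF r = ZeroF | SuccF r deriving Functor
type Nat = Fix NatF

data ListF t r = EmptyF | ConsF t r deriving Functor
type List t = Fix (ListF t)

-- Example: length written as a fold rather than as explicit recursion.
len :: List t -> Int
len = cata step
  where
    step EmptyF      = 0
    step (ConsF _ n) = n + 1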

Abstract datatypes based on signing

Eventually you'd like to declare something as its own type. This would be done through signing.

abstract door : {locked|unlocked|open} → type
  lock : door unlocked → door locked
  unlock : door locked → door unlocked
  open : door unlocked → door open
  close : door open → door unlocked

door = state ↦ unit
lock   nil = nil
unlock nil = nil
open   nil = nil
close  nil = nil

At the moment this seems to fit well with the rest.
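
As a point of comparison, the same signature can be sketched in Haskell with a phantom state index; hiding the Door constructor behind a module boundary then plays the role of signing. The names below are mine.

{-# LANGUAGE DataKinds, KindSignatures #-}

data DoorState = Locked | Unlocked | Open

-- The representation is just unit; the state lives only in the type index.
-- Keeping the Door constructor out of the export list makes the type
-- abstract, so these four functions are the only way to change the state.
newtype Door (s :: DoorState) = Door ()

lock :: Door 'Unlocked -> Door 'Locked
lock (Door u) = Door u

unlock :: Door 'Locked -> Door 'Unlocked
unlock (Door u) = Door u

open :: Door 'Unlocked -> Door 'Open
open (Door u) = Door u

close :: Door 'Open -> Door 'Unlocked
close (Door u) = Door u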

Tagless-final encoding & types

At first it would seem that type declarations could not be deconstructed within the language once they are constructed. The problem is that 'type' cannot be a closed type. Well, there's a trick around this that seems to manifest in tagless-final style.

So you have some type constructor, and want to construct things out of it.

tcon : type → type

Can't deconstruct a function? Wrong! You deconstruct it by calling it. But you can't deconstruct a type, because you don't have an eliminator for it.

Well, how about:

tcon : type → type
sp : strictly_positive tcon

Now this is a bit different, because we know what strict positivity means, and the proof describes how tcon must be structured. If strict positivity is all you need from the type, then you can use the proof to deconstruct the type deeply enough to provide the function you intended to provide.
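
For the tagless-final half of this, here is the standard shape of the trick in Haskell: instead of a closed datatype that you eliminate, you expose a class of constructors, and "deconstructing" a term means giving it another interpretation. This is a generic illustration of the style, not the strict-positivity machinery above.

-- A signature of constructors; repr is the interpretation.
class ExprSym repr where
  lit :: Int -> repr Int
  add :: repr Int -> repr Int -> repr Int

-- One interpreter evaluates...
newtype Eval a = Eval { runEval :: a }
instance ExprSym Eval where
  lit n   = Eval n
  add x y = Eval (runEval x + runEval y)

-- ...another pretty-prints the very same terms.
newtype Pretty a = Pretty { runPretty :: String }
instance ExprSym Pretty where
  lit n   = Pretty (show n)
  add x y = Pretty ("(" ++ runPretty x ++ " + " ++ runPretty y ++ ")")

-- A term built only from the signature can be run through any interpreter:
-- runEval term == 6, runPretty term == "(1 + (2 + 3))".
term :: ExprSym repr => repr Int
term = add (lit 1) (add (lit 2) (lit 3))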

Ahmad Salim Al-Sibahi, in his thesis "The Practical Guide to Levitation", attempts to provide a practical implementation of described types in Idris. If you've ever tried to implement decidable equality for a datatype in Idris, you probably appreciate that thesis.
