Grabbag of Idris tricks

The need to work with set structures has been holding me back.

I read Denis Firsov's paper "Dependently Typed Programming with Finite Sets". Most of the tricks here are from there. Not all though.

Injectivity and Surjectivity

Knowing whether your function is injective or surjective can be very useful. These properties describe how a function relates its domain and codomain.

Injectivity means that different inputs always produce different outputs.

Injective : {t,u:Type} -> (t -> u) -> Type
Injective f {t} = (x,y:t) -> (f x = f y) -> (x = y)

Surjectivity means that the function completely covers its codomain. For every value in the codomain, there's some input to the function that produces it.

Surjective : {t,u:Type} -> (t -> u) -> Type
Surjective f {t} {u} = (y:u) -> (x:t ** f x = y)

If some function is both surjective and injective, you get an inverse function for free: you can extract it from the surjectivity proof.
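Concretely, the extraction could be sketched like this, assuming the Surjective definition above (the names 'inverse' and 'inverse_ok' are mine):

-- Sketch: the inverse just projects the witness out of the
-- dependent pair produced by the surjectivity proof.
inverse : {t,u:Type} -> (f : t -> u) -> Surjective f -> (u -> t)
inverse f surj y = fst (surj y)

-- The second component of the pair shows it really is a right inverse.
inverse_ok : {t,u:Type} -> (f : t -> u) -> (surj : Surjective f)
          -> (y : u) -> f (inverse f surj y) = y
inverse_ok f surj y = snd (surj y)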

Note that if producing dependent pairs feels inconvenient, you can also split the sigma-producing function in two. To make the inverse function explicitly visible, surjectivity can also be represented with:

f : a -> b
g : b -> a
f_inj : (x,y:a) -> f x = f y -> x = y
f_surj : (y:b) -> f (g y) = y

And if you have both, you can derive injectivity and surjectivity for the inverse as well:

g_inj : (x,y:b) -> g x = g y -> x = y
g_inj x y prf = rewrite sym (f_surj x)  -- x       = y
             in rewrite sym (f_surj y)  -- x       = f (g y)
             in cong prf                -- f (g x) = f (g y)

g_surj : (x:a) -> g (f x) = x
g_surj x = f_inj (g (f x)) x            -- g (f x) = x
                 (f_surj (f x))         -- f (g (f x)) = f x

Free stuff! It just drops into your hand!


Contraposition applies to any function: every function has a contrapositive version:

contrapositive : (a -> b) -> Not b -> Not a
contrapositive f nb a = nb (f a)

Contraposition and function properties are worth exploring, as they reveal how things relate.

For example, say we had one relation stating that a house is crazy, and another stating that there's a toilet in a house's kitchen. We could relate them with a function:

has_toilet_in_kitchen(house) -> crazy(house)

If we have the converse, then we have a function like this:

crazy(house) -> has_toilet_in_kitchen(house)

The contraposition of the first function would be:

¬crazy(house) -> ¬has_toilet_in_kitchen(house)

And for the second:

¬has_toilet_in_kitchen(house) -> ¬crazy(house)

The converse would mean a kitchen toilet is standard equipment, required in every crazy house!
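The example could be sketched in Idris with the house predicates postulated (all the names here are hypothetical, not from any library):

postulate House : Type
postulate ToiletInKitchen : House -> Type
postulate Crazy : House -> Type

-- The first function: a kitchen toilet makes the house crazy.
postulate toilet_implies_crazy : {h:House} -> ToiletInKitchen h -> Crazy h

-- Its contraposition: a sane house has no toilet in the kitchen.
sane_house_no_toilet : {h:House} -> Not (Crazy h) -> Not (ToiletInKitchen h)
sane_house_no_toilet = contrapositive toilet_implies_crazy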

Expression of not having duplicates

If you wanted to state that there are no two distinct proofs of the same thing, you could do it by stating that the type is propositional:

Prop : Type -> Type
Prop t = (p1:t) -> (p2:t) -> p1 = p2

The type means that if you take any two inhabitants of the same type, they always turn out to be the same. You already know at least one such type:

unit_is_proposition : Prop ()
unit_is_proposition () () = Refl
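Void is another such type: with no inhabitants at all, any two are vacuously equal.

void_is_proposition : Prop Void
void_is_proposition p1 p2 = absurd p1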

You can use it just about everywhere. For example, the following means exactly what you think it does:

NoDup : {t:Type} -> List t -> Type
NoDup {t} xs = (x:t) -> Prop (x ∈ xs)

This NoDup xs would mean there are no duplicates in the list.

If we construct a set out of a list, we usually want it to contain no duplicates; that makes it possible to count how many items the set holds. So this kind of property is quite useful.

There's another way to state that a list contains no duplicates:

indexer : (xs:List t) -> Index xs -> t
Injective (indexer xs)

Injective functions with an enumerable domain are fine representations of finite sets.
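A concrete sketch of such an indexer, using Fin (length xs) as the index type (this choice is my assumption, not necessarily the paper's):

Index : List t -> Type
Index xs = Fin (length xs)

indexer : (xs : List t) -> Index xs -> t
indexer (x :: _)  FZ     = x
indexer (_ :: xs) (FS k) = indexer xs k

-- A duplicate-free list is then one whose indexer is injective:
-- no two distinct indices point at the same element.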


It's a bit silly to note at this point, but the paper also presents a neat way to name things. Say you have some function 'f' which is supposed to give you a 'b'.

You may then have more functions that prove properties of this function:


Though if it doesn't quite match what you're doing, don't use this terminology before looking up what it means. I understand these terms have precise meanings and I wouldn't want to dilute them.

Subset relation

One way to represent the subset relation is through two functions:

complete: (x:t) -> p x -> (x `In` xs)
sound:    (x:t) -> (x `In` xs) -> p x

The first function proves that every item of type 't' with property 'p' is in the list xs.

The second function proves that every item that appears in the list has the property 'p', meaning nobody has added anything exciting into it while we were looking elsewhere.

Replace p x with x `In` ys and you have a list whose items form a subset of some other list ys.

There is a slightly more convenient way to describe a subset relation. We can simply state that every item of the list appears in the list it is a subset of:

(x `In` xs) -> (x `In` ys)

Then if we want the items in list 'xs' to have some property, we can describe that separately:

(x `In` xs) -> p x

We could state that bikes are vehicles:

vehicle `In` bikes -> vehicle `In` vehicles

And then state that such bikes have two wheels:

vehicle `In` bikes -> has_two_wheels(vehicle)

There's one problem though: we would still need:

vehicle `In` vehicles -> has_two_wheels(vehicle) -> vehicle `In` bikes

Otherwise not all of our bikes would necessarily be in the bikes list.
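In Idris this shape of subset relation could be sketched with the standard Elem relation standing in for `In`:

Subset : List t -> List t -> Type
Subset {t} xs ys = {x:t} -> Elem x xs -> Elem x ys

-- The empty list is a subset of anything: membership in [] is absurd.
nil_subset : Subset [] ys
nil_subset prf = absurd prf

-- Subsets compose: it is just function composition on the proofs.
subset_trans : Subset xs ys -> Subset ys zs -> Subset xs zs
subset_trans f g prf = g (f prf)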

Uninhabitable structures and contradictions

Idris has a nice system for giving common interfaces to things. It works quite well with uninhabited structures:

interface Uninhabited t where
  total uninhabited : t -> Void

absurd : Uninhabited t => t -> a

It's convenient to state which structures you know to be uninhabited; for example, empty lists cannot contain anything:

Uninhabited (In x []) where
  uninhabited Here impossible
  uninhabited (There _) impossible

And a number's successor can never be less than or equal to the number itself. The presentation is quite surprising, because Idris concludes on its own that the base case is impossible and only asks you to fill in the induction case:

Uninhabited (LTE (S n) n) where
  uninhabited (LTESucc n) = uninhabited n

Quite often the interface system feels inappropriate in Idris, but here it works fine.

Auto-checking of decidable properties in Idris

I'm a bit surprised that Idris doesn't already have something resembling an auto-checking routine, but here are instructions for building one.

You need to be able to define a squished version of a property:

squish : (Dec p) -> Type
squish (Yes d) = ()
squish (No _) = Void

I call it 'squish' instead of 'quot' because it actually seems to mess things up quite badly. In the paper it was used for forcing proofs to be equivalent with each other. Whenever I tried to use it in Idris for that purpose, it got messy and I had a lot of difficulty working with anything that contained squished types.

They don't seem nice because the expression that was "decided" ends up hanging in the type declaration and expands in the output: e.g. squish (case (decideThisThat x) etc...).

But with these we can describe an auto-checker:

auto_check : (d:Dec p) -> {auto q:(squish d)} -> p
auto_check (Yes prf) {q = ()} = prf
auto_check (No contra) {q = q} = absurd q

Now we have a function to check properties of the constants we give it.

some_number : Nat
some_number = 59

some_number_is_large : LTE 50 Quot.some_number
some_number_is_large = auto_check $ isLTE 50 some_number

It's also possible to check a property as a requirement for constructing an object:

larger_than : (n : Nat) -> (m : Nat)
  -> {default (auto_check$isLTE n m) lte:LTE n m}
  -> (k:Nat ** LTE n k)
larger_than n m {lte} = (m ** lte)

large_num : (n:Nat ** LTE 10 n)
large_num = larger_than 10 2500

Though this should be reserved for cases where Idris itself cannot do the checking via the 'auto' keyword, usually when the proof search is hard:

b_larger_than : (n : Nat) -> (m : Nat)
  -> {auto lte:LTE n m}
  -> (k:Nat ** LTE n k)
b_larger_than n m {lte} = (m ** lte)

some_large_num : (n:Nat ** LTE 12 n)
some_large_num = b_larger_than 12 204

There are also some tools for extending the behavior of Idris itself. I haven't explored them much because I haven't found an explanation of them that I understand well.

The default-argument trick also appears not to do the computation under ideal conditions, so use it sparingly.

Extra: A little proof in cubicaltt

I've been excited about cubical type theory, enough to read through papers and ask questions about it.

Cubical type theory treats equality proofs as if they were geometric shapes, accessible through coordinates on an interval. If you have a (p : Path A x y), you can get x with p 0 and y with p 1.

With Idris you can already prove, for example, that every contradiction is the same:

lemma_prop_neg : (f,g:Not p) -> (x:p) -> (g x = f x)
lemma_prop_neg f g x = absurd (f x)

In cubical type theory you can take the equivalence further and conclude g = f. This is how you do it in cubicaltt:

module experiment1 where

data N0 =

efq (A : U) : N0 -> A = split {}
neg (A : U) : U = A -> N0

Path (A : U) (a b : A) : U = PathP (<_> A) a b
refl (A : U) (a : A) : Path A a a = <i> a
sym (A : U) (a b : A) (p : Path A a b) : Path A b a = <i> p @ -i

funExt (A B : U) (f g : A -> B)
       (p : (x : A) -> Path B (f x) (g x)) :
       Path (A -> B) f g = <i> \(a : A) -> (p a) @ i

lemma_prop_neg (A : U) (p q : neg A) (x : A) : (Path N0 (p x) (q x)) =
    efq (Path N0 (p x) (q x)) (p x)

prop_neg (A : U) (p q : neg A) : (Path (A -> N0) p q) =
    funExt A N0 p q (lemma_prop_neg A p q)

funExt does the trick here; it's from the "Hands-on introduction to cubicaltt".

For a beginner, it's exciting to prove that two different proofs of "there is no matching pair of socks in my sock box" are interchangeable. But in retrospect, being able to prove function extensionality may be much more interesting.