# Bad things I have to say about ML/Haskell

I dunno exactly what I should call this post today.

There's a pretense that everything is a total function, or can be represented as one. There's hype around dependent typing because it lets you represent more functions as total.

```haskell
Prelude> let greet 1 = "hello"
Prelude> greet 1
"hello"
Prelude> greet 2
"*** Exception: <interactive>:2:5-21: Non-exhaustive patterns in function greet
Prelude> :t greet
greet :: (Eq a, Num a) => a -> [Char]
```

Above we have defined a partial function at a `ghci` prompt and asked for its type. We get back the type `a -> [Char]`. It doesn't tell me that the function is partial.

If we think about this hard, we notice that we could actually present this program as a partial map followed by a total action:

```haskell
greet_partial :: (Eq a, Num a) => a -> Maybe ()
greet_total :: () -> [Char]
```

This way we have narrowed the input down to a set of choices. `greet_partial` is now total as well, because the possibility of failure is included in its result. Haskell could do the same, but it doesn't. Maybe that is because memory allocation can also fail when the system runs out of memory: if we went this route, every function's result would have to be wrapped in `Maybe`.
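To make the decomposition concrete, here is one way to fill in the two signatures from the text. The bodies are my own sketch (the text only gives the types); the point is that composing them yields a total function whose type admits failure:

```haskell
-- Hypothetical decomposition of `greet`: a partial map that narrows
-- the input, followed by a total action.
greet_partial :: (Eq a, Num a) => a -> Maybe ()
greet_partial 1 = Just ()
greet_partial _ = Nothing

greet_total :: () -> [Char]
greet_total () = "hello"

-- The composition is total: failure is part of the result type.
greet :: (Eq a, Num a) => a -> Maybe [Char]
greet = fmap greet_total . greet_partial

main :: IO ()
main = do
  print (greet (1 :: Int))  -- Just "hello"
  print (greet (2 :: Int))  -- Nothing
```

Unlike the original `greet`, calling this version with `2` produces an ordinary value instead of a runtime exception.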

In practice we have a lot of data structures that cannot represent every constraint in their types, so we always have this kind of type discrepancy. It is not practical to expect to catch every case before the program has run, because in many cases the information needed to make the decision simply doesn't exist at compile time. It goes far enough that in some cases it's even acceptable for data to get lost or corrupted, as long as we can find out afterwards that it happened.

We need proper ways to deal with failure, not a pretense that everything is a total function. I think Prolog does better in this respect, but it's not perfect either; perhaps these problems simply have not been solved yet?

Types correspond to propositions and programs correspond to proofs. In logic, propositions and proofs share a language and can mix freely. Haskell splits them into three sub-languages that do not mix well:

- Type declarations have assertive value.
- Programs are evaluable thunks.
- Case patterns form a pattern-matching language.

Multiple such sub-languages strung together create friction and do not make programming easier or nicer. To be fair, these are not ad-hoc features: they derive from simply typed lambda calculus.
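The three sub-languages can be seen side by side even in a tiny definition. `safeHead` is my own illustrative example, not from the text:

```haskell
safeHead :: [a] -> Maybe a   -- type declaration: an assertion about safeHead
safeHead xs =                -- program: an evaluable thunk
  case xs of                 -- case patterns: a separate matching language
    []      -> Nothing
    (x : _) -> Just x

main :: IO ()
main = print (safeHead [1, 2, 3 :: Int])  -- Just 1
```

Each of the three fragments has its own syntax and its own rules, and none of them can appear where another is expected.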

Haskell sells off the strong normalization property of simply typed lambda calculus and buys expressiveness in return. It trades the integrity of the language for things that do not matter.

If Haskell did not make this bad trade, the lazy-versus-eager evaluation question would be a red herring. If Haskell had been designed right, it would terminate under any evaluation strategy.

Here's a small function that never terminates, together with a valid Haskell type annotation for it:

```haskell
f :: (Num a, Ord a) => a -> a
f x = if f (x + 1) >= 0 then x else 0
```

Recursion botches up the soundness of the type system. This function cannot return anything because it never returns, yet its type still claims that it returns a value similar to the one passed into it.

At this point any rigour the language had is gone, because infinite and finite data structures are not interchangeable. They should not be the same structures.

This is bad, because these kinds of termination proofs could be useful, especially since a large group of termination proofs seems to divide into either inductive or coinductive proofs. Having this distinction in the types would help ensure termination in software that depends on infinite data structures.
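A sketch of what the distinction would look like. In a language that separates the two, a list type would be inductive (always finite) and a stream type coinductive (always infinite, but productive); in Haskell the single `data` declaration below admits both finite and infinite values, so the distinction exists only in the programmer's head:

```haskell
-- Intended as coinductive: a value always has a head and a tail.
data Stream a = Cons a (Stream a)

ones :: Stream Integer
ones = Cons 1 ones           -- productive corecursion: each step yields a head

-- Consuming a stream inductively, with a finite fuel counter.
takeS :: Int -> Stream a -> [a]
takeS n _ | n <= 0 = []
takeS n (Cons x s) = x : takeS (n - 1) s

main :: IO ()
main = print (takeS 3 ones)  -- [1,1,1]
```

`ones` terminates under lazy evaluation only because consumers like `takeS` demand finitely much of it; nothing in the type records that guarantee.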

In higher-order logic, the whole point of type systems is to block paradoxes that would allow anything to be proven from no assumptions. This correlates with termination proofs, and it is why type systems are useful.

Traditionally type systems have been used to move type checking to compile time, or to introduce it into the language in the first place. But the primary purpose of types isn't optimization or the selection of a memory layout for a value. The point of type systems is exactly the part that Haskell defenestrates for practical reasons. This would be fine if, as a result, we didn't believe that Haskell should be held to higher standards than other languages.

The Curry-Howard correspondence doesn't favor the lambda-calculus or category-theoretic side; if anything, it favors the intuitionistic logic. Why should we come up with all sorts of contortionistic alternative views if the original is just as good?

Nor should we deny Haskell's problems with side effects and mutable structures. Haskell is a low-level language in the sense that it can be remotely thought of as an encoding of proofs in lambda calculus, but it's not low-level enough.

Maybe the key observation is that we will always have partial functions. A type system should not try to erase them; a good type system would prevent the situations we do not want. With this viewpoint we could perhaps remove the possibility of faulty heap management from the C language and get a typed low-level language that is worthy of use.