This is not a monad tutorial

Occasionally, when discussing functional programming on reddit, I find entertainment.

Most of these quotes are from a recent reddit discussion.

Monads as a mathematical circlejerk

Somebody confesses they don't know what a Monad is, claiming that it has never been explained well enough.

I still have not heard a single coherent explanation of what a monad is.

People proceed to explain what a Monad is to this person, but he hasn't educated himself at all. It's easy to not hear any coherent explanations if you're not looking for any. And it's impossible if you reject every valid explanation as "academic circlejerk" or "FP elitism".

The problem is that Monad is an abstract structure from category theory. A valid explanation of what it is has to describe it abstractly, without the concrete details of any specific Monad. And reading that description is not enough, because you may have to see it alongside some specific examples of monads to understand it.
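For illustration, here is roughly what that two-sided explanation looks like in Haskell. MyMonad, unit, and bind are renamed stand-ins for the real Monad class, so this is a sketch rather than the Prelude definition; the class gives the abstract description, the instance gives one concrete example:

-- The abstract structure: a class with two operations
-- (the real Prelude class also involves Applicative).
class MyMonad m where
    unit :: a -> m a
    bind :: m a -> (a -> m b) -> m b

-- One concrete instance among many: Maybe sequences steps that may fail.
instance MyMonad Maybe where
    unit = Just
    bind Nothing  _ = Nothing
    bind (Just x) f = f x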

Try explaining that to somebody who perceives the concept of an identity element for a mathematical operation as unlearnable.

It is an abstract sequencing object that guarantees value equality and associativity of sequenced operations over an encapsulated value.

I'm not sure that's correct, or even wrong.

Brent Yorgey writes about the inverse problem in his blog post: Abstraction, intuition, and the "monad tutorial fallacy". After you've understood monads, before teaching them to others, think about how you learned them as a whole, and try to verify you've learned something useful. Pick up a pedagogical toolkit of some kind and smash your tutorial with it. This is something I could work on myself as well sometimes.

Elm makes it easier for newcomers

People are directed toward Elm, and while that's not a bad thing, I'm not sure that a style-sensitive parser/compiler that presents only one error at a time is a feature.

For those interested in functional programming, but maybe intimidated by the legacy accumulated by languages like Haskell, languages such as Elm are making everything they can to make it easier to newcomers.

Yes, I'm aware of that. It's such a wonderful thing to not have typeclasses or any alternative to them.

In Elm 0.19 you can no longer create tuples with more than three elements. The reasoning behind this is good: tuples do little to tell you what each element means, so one should prefer records to tuples when you have a bunch of fields.

A nice feature to discover when using tuples to represent homogeneous coordinates.

I only accept tuples with two or three items. This has too many:

7| type alias Vector4 = (Float, Float, Float, Float)

During the short time I used Elm for something, I wrote an error-correcting Earley parser in it as an experiment, and I knew certain things simply could not happen. I stumbled upon this kind of a problem: let's say you have some piece of code and you know that a certain case is impossible, so handling it would be incorrect.

-- Both Array.get lookups are in range by construction,
-- yet Elm still demands the Nothing branches be "handled".
rule : EIM -> Parsing -> Rule
rule (_, pos) pars = case Array.get pos pars.dotrules of
    Just (index, off) -> case Array.get index pars.grammar of
        Just a -> a
        Nothing -> unreachable_at_rule
    Nothing -> unreachable_at_rule

-- A dummy value standing in for the impossible case.
unreachable_at_rule : Rule
unreachable_at_rule = ("", [])

Elm forces you to "handle" it anyway, because hand-holding. You could do this properly in Idris, but in non-dependently typed languages you need something to plug those holes.

In Haskell you have undefined for this purpose.
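For example, here is a sketch of the same trick in Haskell, with a hypothetical lookupRule standing in for the parser code above:

type Rule = (String, [String])

-- The caller guarantees the index is in range, so the impossible
-- branch is plugged with error (a variant of undefined).
lookupRule :: Int -> [Rule] -> Rule
lookupRule i rules = case drop i rules of
    (r:_) -> r
    []    -> error "unreachable at lookupRule"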

Elm has a construct for this purpose as well: Debug.todo. I could not use Debug.todo "unreachable" because it forces a Debug build, and a Debug build prevents publishing.

One of the nice things in functional programming is that you aren't limited to Javascript. In a production-quality language you'd like to be able to stretch your legs.

Elm apps are fun to toy with as long as you don't get frustrated by the shortcomings.

Other languages "consuming" FP features

We have plenty of pragmatists bringing the "FP paradigm" to their mainstream language.

These days significant portions of my C# code is functional, and this will only become easier in C# 9 and, presumably, 10. On one hand this is bringing the paradigm into the mainstream, but on the other hand, as I said earlier, this kills the momentum of challenger functional languages.

To these people functional programming is like car paint or an art style. Languages also seem to be fragile living beings that need the manure of an audience.

I'd rather be pragmatic, not dogmatic. If C# has a more idiomatic way of writing that code, I'd choose that over functional. And that's fine and I won't lose sleep over it. Who cares that a functional kitten will be killed over my blasphemy.

This viewpoint is common, and it's also prevalent among fresh Haskell programmers, despite Haskell having been around for three decades already, resembling a thorny bush sprouting roots that will keep it in the ground for at least another three.

The long history of programming languages shows that a language can't really survive without the industrial adoption. All the languages which were only academic toys became completely abandoned after a decade or maybe two. The languages which are refusing to interact with the mainstream are doomed to die slowly and inevitably.

Haskell programmers likely don't need to worry about this. Haskell will still be around when it is hopelessly outmoded and compared to perforated punch cards.

Already established languages just dying is rare. I think there are only a few such events, but let's examine a few instances that I happen to remember well.

Languages built on the implementation details of a particular platform, company, or specific piece of hardware seem quite perishable. I wonder why that is? Shouldn't programming be quite a lot like mathematics and not need weekly remote updates?

Mathematically inclined languages are quite robust because their semantics are more abstract and do not depend on a particular framework or on specific hardware.

Let's return to functional programming being a fashion statement. I'd say this is true to an extent, because it's not the things you copy from functional programming that really matter.

Here's the thing you should start copying into your language

Pure typed functional and logic programming languages have some extremely solid mathematical foundations, and future revisions of them are only going to improve on this front. This means that they're relatively simple to implement, and algorithms related to them have the edge of being more or less formally verifiable.

The big thing in FP is the good connection to proof theory through the Curry–Howard–Lambek correspondence. It is present in every programming language, but in pure typed functional programming languages this correspondence is recognized and put to use. Proofs are functions and types are propositions.

Programming through proof-theoretic concepts is easy to learn, much easier than the imperative programming style.

It starts by being able to read types as propositions. For example the following expressions:

a -> b
(a, b)

The first one says "after a proof is presented for a, this is a proof for b".

The second one says "this is a proof for a and b".

From there you can start understanding the content of the propositions formed by types. For example:

first  : (a, b) -> a
second : (a, b) -> b

"assuming a proof for a and b, first is a proof for a."

New proofs are constructed by composing existing proofs. For example, if you have a pair of proofs items : (a, b), you can apply first to it.

first items : a

And you can compose functions together:

form_pair : something -> (a, b)

first ∘ form_pair : something -> a
second ∘ form_pair : something -> b

Programming turns out to be this kind of composition of things, but note one crucial detail: these types do not necessarily refer to actual implementation details. For all you know, the (a, b) might not appear in the compiled program at all. Likewise, such a general concept of a pair might map to multiple representations on the hardware.
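To make the composition concrete, here is a minimal runnable Haskell sketch; formPair and its Int argument are hypothetical stand-ins for something:

-- first and second as proofs of the propositions above.
first :: (a, b) -> a
first (x, _) = x

second :: (a, b) -> b
second (_, y) = y

-- A hypothetical stand-in for form_pair : something -> (a, b).
formPair :: Int -> (String, Int)
formPair n = (show n, n)

-- Composed proofs, corresponding to first ∘ form_pair and second ∘ form_pair.
label :: Int -> String
label = first . formPair

value :: Int -> Int
value = second . formPair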

Once you've copied these ideas into your programming language, your language is a pure typed functional language.

All is not lost for a C# programmer struggling to stay relevant, though. You could translate Haskell programs into C# programs and keep using C# as infrastructure. In fact, more or less any existing infrastructure with garbage collection can easily be repurposed this way.

You're already seeing this, as PureScript implements a Haskell-like language on top of Javascript. Write a program carefully and it keeps running on GHC while it still runs on PureScript as well.

FP only for Bottom-up design

I think the problem with OOP is that it naturally lends itself towards top down design while FP languages go the other way.

If you have a typed pure functional language, you can freely propose structure before you implement any of it. For example:

type Set a
insert : a -> Set a -> Set a
isEmpty : Set a -> Bool
length : Set a -> Nat

To run the program you need to implement these interfaces. But the types themselves already tell a lot about the program, while the implementations carry the formal proof content. For quick thought experiments plain type declarations can suffice, especially if you know that some of the implementations are going to be straightforward.
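In Haskell, for instance, you could sketch the same interface top-down with placeholder bodies; the Set representation here is a hypothetical stub, not Data.Set:

import Numeric.Natural (Natural)

-- Propose the structure first; the representation is a stub for now.
data Set a = SetTodo

insert :: a -> Set a -> Set a
insert = undefined   -- implementation comes later

isEmpty :: Set a -> Bool
isEmpty = undefined

size :: Set a -> Natural
size = undefined     -- named size to avoid clashing with Prelude.length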

Functional programs are harder to debug

I don't know much about the difficulty of debugging, and I wouldn't be ready to say whether it's more difficult with functional programming or not.

I am used to printing expressions to debug a program, but in Haskell I usually just throw things into the REPL to see whether they do what I expect. Most commonly I check the type of an expression to figure out what that piece of the program is doing.

The Haskell REPL refuses to print everything you throw at it. There's a technical challenge behind this, one that is not absent in other languages but simply ignored.

Printing out records and plain variants is trivial, but when functions get involved, it becomes considerably harder to turn the value back into an expression. This can be done, but it's a specialized operation and the evaluation environment must be prepared for it.

If you're trying to turn a → b back into an expression, it can be done if the parameters are first turned into continuations: ((a → r) → r) → ((b → r) → r). Make r your printout and give the function an argument that produces a case expression whenever the expression is accessed.
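A minimal sketch of that transformation in Haskell, assuming nothing beyond the Prelude; toCont is a hypothetical name:

-- Lift a function a -> b into continuation form,
-- ((a -> r) -> r) -> ((b -> r) -> r), so that r can be
-- instantiated with a printout or a probing case expression.
toCont :: (a -> b) -> ((a -> r) -> r) -> ((b -> r) -> r)
toCont f ca kb = ca (\a -> kb (f a))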

But note that Haskell also has infinitely recursive expressions, and functions aren't necessarily the expressions they were originally written as. These details may complicate printouts.

Every type is a potential user interface. You could capture a value and probe it interactively. But then it's more like debugging, and it might form a simpler basis for debugging than is traditionally possible.

In some sense traditional programming languages are "easier to debug" because they conform to the evaluation order of some abstract machine that's described either by hardware or by Guido. Functional programming doesn't prescribe an evaluation order, and you can select an arbitrary one, given that normalisation in the calculus of your language is confluent.

It's a dumb thing to assume an evaluation order during programming, and I think Haskell defaults to lazy evaluation just because it gives "infinite data structures" without having to notate them. That doesn't gain much, because the notation for "coinductive" structures isn't difficult and doesn't introduce any shortcomings to their use.
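For instance, here is a sketch of an explicitly notated stream in Haskell, with the tail delayed behind a function so the definition would make sense even under strict evaluation; Stream, ones, and takeS are hypothetical names:

-- The tail is delayed explicitly instead of relying on pervasive laziness.
data Stream a = Cons a (() -> Stream a)

ones :: Stream Int
ones = Cons 1 (\() -> ones)

-- Observe finitely many elements of the infinite structure.
takeS :: Int -> Stream a -> [a]
takeS n (Cons x rest)
    | n <= 0    = []
    | otherwise = x : takeS (n - 1) (rest ())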

Functional is harder to debug. It's been around for 60 or so years and has yet to catch on mainstream because of this limit. Imperative code allows splitting up parts into a fractal-like pattern where each fractal element is (more) examine-able on it's own, due to step-wise refinement

Step-wise refinement assumes that your program "steps through", and that these "steps" would be easy to examine individually, except that they're tied to global state somehow and their types do not reveal how they tie together.

In pure typed functional programming, correctness composes. That's the whole point of grounding a language in mathematical logic!

h ∘ g ∘ f

If you have a program composed of three functions like this, the following holds: given that h, g, and f are correct, the whole program is correct. Conversely, if the whole program isn't correct, then at least one of the smaller programs is incorrect.

How do you debug that? You make a guess about which function is wrong, then examine whether that part of the program is wrong and how its pieces evaluate. This is a relatively easy thing to do, but it doesn't rely on stepping through the running program.
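As a sketch, with hypothetical stages f, g, and h: if program misbehaves, you evaluate each stage, or prefix of stages, in the REPL and bisect to the first wrong output:

f :: Int -> Int
f = (+ 1)

g :: Int -> Int
g = (* 2)

h :: Int -> Int
h = subtract 3

program :: Int -> Int
program = h . g . f

-- In the REPL: try f 5, (g . f) 5, and program 5 to find
-- the first stage whose output looks wrong.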

The point of pure typed functional programming is that you are able to predict what a program is going to do, because it can be read from the type, and that type is considerably easier to interpret than the program.

FP adoption/public acceptance problem

In "Why no one uses functional languages" Wadler is hopeful that people do not reject functional languages because they're stupid. Your mileage may vary.

I rejected pure functional programming for a while because I thought that typed programs require you to compile, fight the typechecker, evaluate the program, and then repeat. I also kept the program code itself in the main role: the code was the thing that describes what the program is doing.

Then I got interested in formal verification because it's tied to optimizing programs. I learned that types behave like specifications and that you can describe a lot of different correctness conditions with a type. Also, it's not as hard as designing a less surprising version of Python that can be optimized.

That was stupid of me.
