Case against OOP is understated, not overstated

Here's something for nobody. We've been past this for a long while now. OOP is one of those substances that sticks to the wall when you throw it. Writing about it is almost always pointless: people who should learn refuse to learn, and everybody else has learnt their lesson.

I review and criticize "The Case Against OOP is Wildly Overstated" by Matthew MacDonald. The article itself references a few other posts and I give those the same treatment. You may have seen these before:

  1. Object-Oriented Programming --- The Trillion Dollar Disaster by Ilya Suzdalnitski
  2. Goodbye, Object Oriented Programming by Charles Scalfani
  3. Why OOP is bad by Konrad Musial
  4. OOP is dead by Karsten Wagner

I go through the main points of these posts so that you don't need to read them. Additionally, we'll have a look at Grady Booch's book, where the four pillars of OOP are claimed to originate.

It's not every weekend that you get to see such a jewel in the sewer that is the Internet. Let's peek in!

Micro-summaries/reviews of OOP posts

For each post I'll go through the main points it had to say. Ilya's post was the largest of them all at a 27-minute read; the second largest was Karsten's.

My opinions are inserted along the way, and the things I pick up form the basis for the subjects the rest of this post covers.

The trillion dollar disaster

Ilya Suzdalnitski makes a lot of claims but doesn't bother to present evidence. He makes up for that with sheer volume of claims. There are plenty of references to popular anti-OOP material, so it's not a total loss. It's also a structured post that's easy to skim, unlike the others in this bunch.

The high point of this post is Edsger W. Dijkstra's quote "Object oriented programs are offered as alternatives to correct ones..." I chuckled at that one; yeah, Dijkstra obsessed over correctness, and I guess I've ended up doing that as well.

All the claims:

  1. There's no evidence that OOP is better than plain procedural programming.
  2. Programming paradigms should constrain bad programmers from doing too much damage.
  3. Object oriented programming was supposed to be about messages and actors, rather than about objects and methods.
  4. OOP fails to contain complexity because of shared mutable state, erroneous abstractions and a low signal-to-noise ratio.
  5. Shared mutable state is hard to track and causes concurrency issues.
  6. Encapsulation is a trojan horse hiding mutable state.
  7. OOP tries to model the real world as objects and class trees through inheritance.
  8. OOP is difficult to unit test.
  9. It forms heavy dependencies between classes unless you create interfaces everywhere and then mock them.
  10. Difficult to refactor without tools.
  11. Mentions design patterns, SOLID, dependency injection as band-aids to OOP.
  12. Mentions abstraction, inheritance, encapsulation, polymorphism as four pillars of OOP with intent to refute these.
  13. OOP is popular due to Java.
  14. It's time to move on.
  15. You're already a functional programmer and learning functional programming makes you better.
  16. The usual defensive arguments are weak, and those making them have probably never met a true functional language.
  17. Talks about the Law of Demeter as a useless under-the-rug sweep.
  18. People try to discredit anything that claims OOP sucks.

Notable references:

  1. Mentions functional programming and Linus Torvalds hating on C++ programming.
  2. Alan Kay's famous quote and refers to Erlang as a "pure form" implementation of OOP.
  3. Steve Yegge's blog post "Execution in the Kingdom of Nouns". I prefer the PDF version of Yegge's post. It criticizes the Java programming language for sticking to OOP. I somehow remembered you could find it on the old WikiWikiWeb, but I didn't find it there. Just thought it might be fun to remember that site.
  4. Reference to a "problem factory". I'm sure this is a reference, I just don't know to what. Help welcome! Remind me where the term "problem factory" originated? Yes, there's a design pattern called 'factory'; I'm not asking about that.
  5. Reference to Joe Armstrong's "banana, gorilla, jungle" quote.
  6. The transition from horses to automobiles is used as an argumentation device.

First of all, Alan Kay's remark about messages cannot be used against OOP. Classes and objects were featured in the Simula programming language, and OOP went along from there. That it was inspired by something potentially better doesn't demerit it. OOP is a datatype customization feature with an overinflated ego. It exposes decades-old implementation details and makes them the principal model in which you do programming.

The notion that you discard shared mutable state when you stop doing OOP is part of what keeps people in. You can do "immutable" OOP and it's not any better!

Taxonomies and attempts to understand the world through them aren't OOP's problem. Besides, when you pick this up, people point out that you're not supposed to write classes such as "Car" or "Banana", and they're just as wrong as Ilya is when he claims the opposite.

OOP was verbose from the beginning and that didn't prevent it from taking off. You're supposed to buy something with that verbosity, so bickering about a "low signal-to-noise ratio" receives only eye rolls.

They're going to tell you that they use tools for refactoring and just declare interfaces everywhere so that everything can be unit tested. The IDE refactors the boilerplate code, so they don't worry about it. Interfaces and mocks everywhere, and it is not a problem.

On the claims about unit testing and OOP, I'm not going to go there much because I still don't unit test my stuff. I'm currently not against it; I just don't know much about it. I find it much easier to formally verify things correct than to test that they're correct.

OOP precedes Java. C++ was the raging hot, popular object-oriented language before Java became popular.

Goodbye, Object Oriented Programming

Charles Scalfani was "gung-ho to leverage the benefits of Inheritance, Encapsulation and Polymorphism". He was disappointed that the planes didn't land in his backyard.

  1. Half the post is about the banana-monkey-jungle problem.
  2. Other half is about the fragile base class and contain-and-delegate as a solution to it.
  3. Categorical hierarchies (taxonomies) don't work for programming?
  4. Encapsulation doesn't work because it hides stateful variables.
  5. You don't need object-oriented programming for polymorphism. Presents interface-based polymorphism as an alternative.
  6. Shills Elm. Lol. (Scalfani's post is from 2016.)

These guys attack the pillars of OOP a lot. This is why I looked into Grady Booch's book.

I guess it'll also be time to talk about these taxonomies and maps. Is this going to be the programmer equivalent of the birds-and-bees talk?

The fragile base class problem seems to be well covered and settled, and it hasn't affected the remaining discussion, so I don't cover it here. What it's about: seemingly safe modifications to a base class may cause the derived classes to malfunction. The programmer cannot determine whether a base class modification is safe simply by examining the base class in isolation.

He mentions interface-based polymorphism as an alternative but doesn't say what it is or link to anything!

Elm has become a laughing stock. They skipped typeclasses in the hope that something better would appear. So far they're still waiting for it, and they've got full libraries in the meantime. They've done plenty of things to make the language simple for a beginner, but when it comes to retaining users, LOL. The language enforces its own best practices and gets in your way by limiting the size of tuples you can make, tripping up your homogeneous-coordinate construction, and banning undefined/absurd in release builds.

Here's absurd from Idris, so you get some idea of what they prevent in release builds.

absurd : Uninhabited t => t -> a

Non-dependent languages don't have this, but they have undefined for the same purpose. Elm has it too, but it's in the Debug module and its use is prevented in release builds. It's great fun to wrap the result into a Maybe and handle it without the maybe monad, or to come up with a placeholder value, when you know for certain that reaching that case means something has gone very badly wrong in the program. Nah, it's Nothing, says Elm!
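
For contrast, here's a minimal Haskell sketch of the kind of "impossible" branch in question; the non-empty guarantee is a hypothetical invariant the surrounding program would maintain:

firstItem :: [a] -> a
firstItem (x:_) = x
firstItem []    = error "impossible: callers guarantee a non-empty list"

Elm makes you return a Maybe here instead, inventing handling for a case you know cannot happen.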

Why OOP is bad

Konrad Musial tells how he used to be excited about OOP. He learned about circles and ellipses as objects with properties. This is a story about struggling to understand OOP. As an example he throws in a bit of C#. Glancing at the code, it's not particularly complex, but it is sprinkled with attributes like the [Serializable] here.

[Serializable]
public class SampleClass {}

These aren't technically even OOP. They're C#'s way to give code additional declarations and structure that can be used for metaprogramming. It's one of the flagship features of C#, something you should definitely know how to use if you're going to use that language.

The post is filled with popular anti-OOP quotes, and in the end he tells us he figured it out and went back to doing OOP. A week or two later he wrote "Why OOP is Awesome". That you don't understand something doesn't mean it's flawed or bad.

This one is prose-formed text, and it's not the only one. I struggled to get through these despite them being the shortest in the bunch. They require that I read the whole text through in one go. I just recently figured out the pieces I was missing for writing effortlessly skimmable text. I won't say I've perfected it, but you're reading a text that's been written in more or less that style.

OOP is dead

Karsten Wagner thinks OOP has reached its peak and is on the decline. Interest is increasing toward functional programming languages and concepts such as closures and continuations. In response, languages that used to be "OO" have begun to integrate new features.

  1. States new features do not necessarily ease software development.
  2. There will be too many features, and mixing them up will be worse than using a handful of them consistently.
  3. Functional programming does it better; there, pattern matching replaces multiple dispatch.
  4. Thinks OOP failed to live up to its promise and lists a few reasons.
  5. Shills multiple dispatch as a solution to float~int conversion in addition.
  6. You can use relational data models instead of wrapping things into objects.
  7. Believes people will start using OOP languages in a non-OOP way.
  8. It's possible to write interactive programs without mutation of data. Mentions monads.
  9. Boasts referential transparency as the important thing about functional programming.

List of reasons why he thinks OOP failed to live up to its promise:

  1. "this" -parameter in the method call is too special. Mentions problems that arise when you have to act on multiple parameters.
  2. Points out you can't add your own .trimLeft to the String class when you didn't implement the String class; you have to write a separate trimLeft function instead.
  3. Tells about monkey-patching in Python; adding things into a class as an afterthought brings its own problems.
  4. Picks up mutable state issues. Points out that mishandling of mutable state doesn't happen often in OOP, but when it does, it makes up for the rarity in how wrecking it is.
  5. Optimization of OOP code increases its complexity a lot.
  6. Object hierarchies may end up being cyclic, forming structures that are very difficult to maintain. States you can handle this with tooling, but questions whether the complexity is necessary.

I think in certain groups the use of OOP has declined. There are a lot more people who understand type theory and formal verification than there were 14 years ago. Haskell finally ranks #40 on the TIOBE index!

At large, OOP is doing just great because there are more programmers than ever! They're going through Uncle Bob's night reading, learning about design patterns, SOLID and everything else OOP that sounds great. It is also a great time for OOP in programming languages. Popular languages such as Javascript and Python are steered by their communities in a democratic process that relies on dialogue.

I also believed people would start using OOP languages in a non-OOP way, but that hasn't entirely happened yet. Here we are, still discussing OOP; we haven't gotten over it yet.

The rigidity of the methods given to a class is a real problem, but it's usually ignored. Maybe it's not seen as a big problem because you have to import your own trimLeft from a module anyway.

When you write interactive programs with monads, it's not as if mutation disappears. Monadic IO pushes the mutable structures to the edges of the program, but you still have them or something like them. I've explained this in "Understand IO Monad and implement it yourself in Haskell".
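
To illustrate with a small Haskell sketch of my own (not from Karsten's post or mine): the mutable cell is still there, it's just explicit and lives inside IO at the edge of the program.

import Data.IORef

main :: IO ()
main = do
    counter <- newIORef (0 :: Int)   -- the mutable structure, pushed to the edge
    modifyIORef counter (+1)         -- mutation happens inside IO
    readIORef counter >>= print      -- prints 1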

He seems to think that pattern matching replaces multiple dispatch, and it doesn't actually work that way. Multiple dispatch doesn't work either, and the worst part only becomes apparent, and gets worse, after you rely on it more. I tried it in my Lever programming language and it went badly.

At least we had already figured out 14 years ago that referential transparency is important! I'm glad about that. Now we just need to get somebody to chant "mathematical expressions!" in front of developers.

The Case Against OOP is Wildly Overstated

Matthew MacDonald's take is that you can't rule without attracting enemies. Just look at all these blog posts by various people, and more behind the curtains.

  1. Doubts that sloppy design practices and fuzzy architectural thinking would be unavoidable parts of OOP.
  2. States, correctly, that OOP isn't supposed to model the real world.
  3. Object-relational mapping is exemplified as an antipattern.
  4. Eloquent JavaScript advice: pick the simplest approach that meets the need.
  5. States that software design is hard to do right, no matter the tools.
  6. Design patterns can result in a mess; instead, focus on Don't Repeat Yourself and You Ain't Gonna Need It, the Law of Demeter (restrict what classes must know about each other), and valuing simplicity and readability above all else.
  7. Points out that OOP inheritance is the weakest link and attacked often. They're right; be careful with using it.
  8. OOP doesn't prevent you from applying the wrong solution to a problem.
  9. We'll see if Go and Rust steal the crown in the next decade.
  10. Agrees that OOP is indeed fading in domination, probably.

I have relatively little to say about this post itself. The title has been chosen fairly well, as it accurately presents the author's opinion, sans "wildly". The author actually seems to agree that there's a case against OOP, although he says it's overstated.

I'm glad people finally figured out that ORM sucks.

There's the usual claim that OOP isn't supposed to model the real world. This came up in the OOP/anti-OOP discussion when it became apparent that people followed what they were taught in a rather strict manner. The pretense is that you weren't supposed to follow what you were taught. I still remember my own principled OOP calculator written in C++. Hah. I remember how somebody commended it on IRC. They're all wrong about it either way. Forming a taxonomical model is OK if it participates in solving the problem at hand.

The advice about not blaming your tools is good advice. Don't blame your tools... leave that to me. I am a professional tool-blamer!

Grady Booch's book: Object-oriented Analysis and Design with Applications

According to Quora, the 4 pillars of OOP are claimed to originate from the book "Object-Oriented Analysis and Design with Applications" by Grady Booch, published by Addison-Wesley Professional in 1990.

Highlights:

  1. There are well-done comical illustrations sprinkled throughout.
  2. He already addresses the thing about categorical hierarchies in this book. The book talks about identifying key abstractions. It was already recognized here that plain taxonomies don't work for abstraction, simply because there are multiple of them that are all valid.
  3. Probably something else interesting would be in there if I bothered to read deeper.
  4. It's better than many later OOP books. It comes with a long references section and a large glossary.

I got my hands on the second edition, published in 1994, and looked up what Booch means by abstraction, encapsulation, inheritance and polymorphism.

I'm also interested in how the book treats class/object structures and programming languages. If I were smarter than I am, I might have gone deeper in this regard.

4 pillars of OOP

I won't search deep into this book, but at least there's a glossary, and it explains these terms! We can't treat any book as a foundation anyway, but we get some reference points.

abstraction "The essential characteristics of an object that distinguish it from all other kinds of objects and thus provide crisply defined conceptual boundaries relative to the perspective of the viewer; the process of focusing upon the essential characteristics of an object. Abstraction is one of the fundamental elements of the object model."

encapsulation "The process of compartmentalizing the elements of an abstraction that constitute its structure and behavior; encapsulation serves to separate the contractual interface of an abstraction and its implementation."

inheritance "A relationship among classes, wherein one class shares the structure or behavior defined in one (single inheritance) or more (multiple inheritance) other classes. Inheritance defines an "is-a" hierarchy among classes in which a subclass inherits from one or more generalized superclasses; a subclass typically specializes its superclasses by augmenting or redefining existing structure and behavior."

polymorphism "A concept in type theory, according to which a name (such as a variable declaration) may denote objects of many different classes that are related by some common superclass; thus, any object denoted by this name is able to respond to some common set of operations in different ways."

What does Booch think of OOP these days? There's an interview with Booch from 2009. Back then the guy still admitted to using Java and PHP with Eclipse.

Booch's treatment of programming languages

There's a list in the book: Wegner's classification of the more popular high-order programming languages into generations arranged according to the language features they first introduced:

First-generation languages:

  1. FORTRAN I (mathematical expressions)
  2. ALGOL 58 (mathematical expressions)
  3. Flowmatic (mathematical expressions)
  4. IPL V (mathematical expressions)

Second-generation languages:

  1. FORTRAN II (subroutines, separate compilation)
  2. ALGOL 60 (Block structure, data types)
  3. COBOL (Data description, file handling)
  4. Lisp (List processing, pointers, garbage collection)

Third-generation languages:

  1. PL/1 (FORTRAN + ALGOL + COBOL)
  2. ALGOL 68 (Rigorous successor to ALGOL 60)
  3. Pascal (Simple successor to ALGOL 60)
  4. Simula (Classes, data abstraction)

The generation gap (1970-1980)

  1. Many different languages were invented, but few endured. [2]

I find the first-generation languages hilarious. They're right on the money, if I were to believe this list is accurate. The positioning of Lisp is pretty funny as well.

I'm not sure, but perhaps Booch took the shape of the programming language for granted? "These are the means of abstraction I get, and I'd better make the best use of them." Unfortunately I didn't find any support for this idea; otherwise it'd settle the whole debate around OOP.

Otherwise I really like how this book is structured. It's a great example of a book where the glossary and references section don't look like they were a caecum. I'm likely to return to take more notes on how it delivers its content.

The delusion is a strong binding force in Nounland

The whole point of this post is that, hey:

  1. Inheritance was supposed to be an important pillar, but now it's rolling on the floor?
  2. Are you sure about polymorphism? First of all, you took it from type theory, which is itself getting popular featuring stable forms of parametric polymorphism, while your version of polymorphism is shifting shape like crazy.
  3. With only two pillars standing, OOP seems more of a balancing act than the architectural wonder it was supposed to be.
  4. There's a whole mobile phone software industry standing on Java, which has heavy object-oriented foundations.

When people start to move the poles, you might mistake the whole of object-oriented programming for a circus performance. It's like musical chairs, but played with foundational pillars.

It'd be a bit of irony to show the problems with examples from Booch's book; therefore all of the OOP examples here are from that book.

2020-08-03 Addendum to the above: [hwayne][hwayne] pointed out that ML cited CLU's type system as inspiration, which cited Simula as inspiration. From a historical perspective, polymorphism has migrated from OOP to FP.

[hwayne] : https://lobste.rs/s/bmzgvz/caseagainstoopiswildlyoverstated#cf7arfr

A record with an overinflated ego

Object-oriented programming started from the need for greater customization of datatypes. The only form of customization used to come in the form of a record datatype.

struct PersonnelRecord
{
    char  name[100];
    int   socialSecurityNumber;
    char  department[10];
    float salary;
};

When it was recognized that you would want more abstraction and customization in datatypes, classes were born. Classes extend records by letting you define methods and structures that are shared between every object.

class PersonnelRecord {
public:
  char* employeeName() const;
  int   employeeSocialSecurityNumber() const;
  char* employeeDepartment() const;
protected:
  char  name[100];
  int   socialSecurityNumber;
  char  department[10];
  float salary;
};

It was considered good engineering practice to encapsulate the state of an object like this. When you separate access to the fields like this, you can change the implementation of the record into something else, and nobody using the object needs to know how it's implemented.

The implementation of the feature was easy: karen.employeeName() just calls some function instead of really accessing a field in a record. It was easy and much cheaper than other things you could do.

Very early on this also gave some namespacing around the methods. When you really had nothing else, all of this must have looked very great and minimal.

Today it's possible to put far more distance between you and the hardware than 30 years ago. Is there any reason why you should build abstractions over flat record structures now?

Inheritance & Polymorphism

I was going to write about inheritance and polymorphism entirely separately, but they're actually provided by the same structure. Inheritance enables the polymorphism.

A common use is to describe variation between different forms of records, as presented in this example of a base class.

class TelemetryData {
public:
  TelemetryData();
  virtual ~TelemetryData();
  virtual void transmit();
  Time currentTime() const;

protected:
  int id;
  Time timeStamp;
};

The base class describes what you can do with the structure, as well as identifying things that every structure shares. The structure is then extended to contain more information specific to a certain class of structures:

class ElectricalData : public TelemetryData {
public:
  ElectricalData(float v1, float v2, float a1, float a2);
  virtual ~ElectricalData();

  virtual void transmit();

  float currentPower() const;

protected:
  float fuelCell1Voltage, fuelCell2Voltage;
  float fuelCell1Amperes, fuelCell2Amperes;
};

The "virtual" methods are accessed from a virtual method table associated to each class. This results in a pointer that could be used to identify class of an object so that it can be promoted. Using the approach is considered a bad style because classes are supposed to be extensible. The pointer just cannot be used to identify a class directly because you may subclass any class and extend it, receiving a different vtable pointer for it. To identify vtables you have to be able to chain them.

What does it translate to?

Out of curiosity somebody may ask: how were classes simple to implement? There are several ways to implement classes/objects. You can implement them as a very thin layer above records.

Classes translate down into structures that each have a virtual table pointer in front of them. Note that the pointer is needed because a class extending a structure may declare its own virtual methods.

struct EmptyClass {
    void *vtable;
};

struct TelemetryData {
    struct EmptyClass super;
    int id;
    Time timeStamp;
};

struct ElectricalData {
    struct TelemetryData super;
    float fuelCell1Voltage, fuelCell2Voltage;
    float fuelCell1Amperes, fuelCell2Amperes;
};

The non-virtual methods are referenced directly and translate to plain procedures like these.

void TelemetryData_TelemetryData(TelemetryData*);
Time TelemetryData_currentTime(TelemetryData*);

void  ElectricalData_ElectricalData(ElectricalData*, float v1, float v2, float a1, float a2);
float ElectricalData_currentPower(ElectricalData*);

If something's declared virtual, it goes into a virtual method table.

struct EmptyClass_vtable {
    // void *vtableParent; /* If the dreaded 'instanceof' is implemented. */
};

struct TelemetryData_vtable {
    struct EmptyClass_vtable super;
    void (*deconstruct)(TelemetryData*);
    void (*transmit)(TelemetryData*);
};

struct ElectricalData_vtable {
    struct TelemetryData_vtable super;
};

static TelemetryData_vtable  vtable_TelemetryData;
static ElectricalData_vtable vtable_ElectricalData;

It's easy to confuse the type of a vtable with the actual vtable, though this itself is not a flaw of any kind, and you don't need to worry about how classes and objects are implemented if they've been implemented correctly. Whenever an ElectricalData is constructed, its vtable pointer is set to point at (&vtable_ElectricalData).

Closed/Open-definition structures and "instanceof"

Inheritance allows you to build both closed-definition and open-definition structures.

  1. Open-definition structures are structures that you can extend by deriving from them.
  2. Closed-definition structures are defined as a fixed bunch, and you assume what you receive is only one of the possible options. No further extension of the base class is expected.

These things should be kept separate because otherwise they tangle together. To keep them separate, early OOP languages didn't have "instanceof", although they could have had it through vtables.

You create a closed structure by tagging it.

typedef enum {
    t_ElectricalData = 0,
    t_LightTracking,
    t_DoorLockData
} TelemetryTag;

Then you can require that when the telemetry tag is t_ElectricalData, the structure is either an ElectricalData or some subclass of it.

if (telem->tag == t_ElectricalData) {
    ElectricalData* elec = (ElectricalData*)telem;
    /* Do something with it.. */
}

This changed when Java introduced instanceof. It lets you be convenient and do it like this:

if (telem instanceof ElectricalData) {
    ElectricalData elec = (ElectricalData)telem;
    /* access elec */
}

instanceof immediately became a dreaded and abused feature of object-oriented programming. I guess they added it because Java also introduced garbage collection, and instanceof came along as an ancillary detail of otherwise safer memory management, or as a newly available object-introspection tool. Ignorance of the problems introduced by this feature took care of the rest.

If you look this up and inform me why Java introduced instanceof, I could link it here.

Fake abstraction

That we're slapping features together like this results in fragility on its own. The way these structures are used is tightly wound to how they're implemented by the compiler. This is not how abstraction is supposed to work, but I'll let you pretend it's intact as a courtesy.

This tradition of fake abstraction is followed through in Java and C#. They come with their own virtual machines, and instead of translating across multiple platforms like a well-typed, compiled language otherwise could, they refuse to work on anything other than the virtual machines provided along with them. In this regard you have a typed, compiled language that behaves just like an untyped language such as Python or Javascript.

Uncontrolled polymorphism

Virtual class methods provide polymorphism and allow you to select behavior at runtime. There's a bit of a problem because this form of polymorphism is arbitrary. It means that you can do about anything without constraints. That would otherwise be a good thing, but you won't know which choices result in good behavior of the program. And if you don't know that, you might as well not have the feature.

Besides, the rules for building well-formed polymorphic programs in object-oriented languages are complex, involving ideas such as covariance and contravariance. It turns out that often neither you nor your superiors know exactly how an OO program should use polymorphism. You still use the feature though!

Covariance and Contravariance

Object-oriented programming builds on subtyping. Subtyping means that when somebody asks for a Cat, you can give them a CatDog and they get to interface with the Cat part. You can pass in more information than is exactly required; likewise, you may be answered with more information than you requested.

Types get an ordering based on how much "information" they contain. When this ordering is preserved, things are said to be covariant: e.g. somebody provides a Cat where an Animal is wanted. When the ordering is reversed, such as when a consumer of Animals is used where a consumer of Cats is wanted, it's contravariant. It's bivariant if both directions are allowed, and invariant if neither is: it has to be exactly a Cat.

These things are very easy to confuse, to the point that I'm not sure if I just did. If you get them wrong, your polymorphism just blows up.
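
Since Haskell has no subtyping, here's a rough sketch of the same idea using an explicit Cat-to-Animal coercion; the types and names are made up for illustration. Producers of values are covariant, consumers are contravariant:

newtype Animal = Animal String
newtype Cat    = Cat { asAnimal :: Animal }

-- Covariant position: a producer of Cats can stand in for a producer of Animals.
produceAnimal :: IO Cat -> IO Animal
produceAnimal = fmap asAnimal

-- Contravariant position: a consumer of Animals can stand in for a consumer of Cats.
consumeCat :: (Animal -> IO ()) -> (Cat -> IO ())
consumeCat feed = feed . asAnimal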

Parametric polymorphism

There's a fairly simple way to write well-behaving polymorphic programs. The trick is to enforce that polymorphic programs treat their polymorphic parts uniformly. When polymorphic programs aren't allowed to look inside the structures they manipulate, it's well defined how they manipulate those structures. This is known as parametric polymorphism, and it's the common style of polymorphism in functional programming languages. For example, when you meet a function such as:

a → a

you know that the function cannot access the insides of a in any way; the argument has to pass through intact. However, when you give it an additional function like this:

(a → a) → (a → a)

you know that a function of this type may send the a through the function it was given zero or more times. It's much easier to operate on and reason about values that are consistently what they need to be.
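
As a small Haskell illustration, parametricity pins down what such functions can do:

-- The only total function of type a -> a is the identity:
identity :: a -> a
identity x = x

-- A function of type (a -> a) -> (a -> a) can only apply its argument
-- some fixed number of times; this one applies it twice:
twice :: (a -> a) -> (a -> a)
twice f = f . f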

Features that break parametric polymorphism

The neat thing about parametric polymorphism is that if it ends up broken, it's the programming language designer's fault. It's no longer the programmer's fault.

The easiest way to break parametric polymorphism is to introduce an implicit monad "join"; this also destroys the monad part of the construct.

maybe (maybe a)     → maybe a
promise (promise a) → promise a
array (array a)     → array a

The first one is often broken by introducing a Nothing constant and leaving out the Just(a) for convenience, or by giving an implicit Null to every structure constructed through a pointer so that it's easy to initialize. This results in being unable to distinguish between Nothing and Just Nothing, which breaks parametric polymorphism on the variables wrapped in these structures. If maybe a or a? receives a "null" and a happens to be a maybe itself, then the higher-up structure catches the null. This is akin to the problems of instanceof, as the information in a is suddenly identified and interpreted in an unpredictable way.
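
A small Haskell sketch of the distinction that implicit nulls destroy; the explicit join is from Control.Monad:

import Control.Monad (join)

nested :: Maybe (Maybe Int)
nested = Just Nothing            -- "a value that says there is no value"

flattened :: Maybe Int
flattened = join nested          -- Nothing, but only because we asked for a join

-- With an implicit null, Just Nothing and Nothing collapse into one value,
-- so code that is generic in a misbehaves exactly when a = Maybe b.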

It's a lot more uncommon to see the latter two broken. You might see arrays being broken in some early programming languages. Promises were broken in Javascript, and there's a whole issue about it, the dreaded issue 94.

Why pattern matching doesn't do dynamic dispatch

Since polymorphic programs aren't allowed to look inside their parameters in functional programming, it also means that pattern matching cannot be used to dispatch on polymorphic structures.

Patterns in a functional programming language are structures that have a closed definition. This means that their definition completely determines how many ways there are to construct them.

data Form = A | B | C

When the structure is examined, a well-formed program is required to handle every possible case that arises.

case form of
    A -> 1
    B -> 2
    C -> 3

This separation gives you a much simpler model for building programs. It partially replaces the inheritance of object-oriented programming though: you can create those closed-definition structures in this way.

But how do they dynamic dispatch then?

Dynamic dispatch actually resembles passing modules or functions along with the arguments. Technically it's not any different from OOP, but the virtual table is an explicitly described construct.

You might have a record that provides a variety of ways to manipulate structures; the a is a parameter that can be chosen by the user:

record Arithmetic a = {
    show    : a → string,
    (+)     : a → a → a,
    (-)     : a → a → a,
    literal : Integer → a }

These records can then be passed into a function that is abstracted over the arithmetic it does.

Arithmetic a → a → a

You may notice how it resembles the virtual table example earlier.
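
Here's the same record spelled out as runnable Haskell; the field names (showIt, plus, minus) are renamed from the pseudo-notation above to avoid clashing with the Prelude, so treat them as illustrative:

data Arithmetic a = Arithmetic
    { showIt  :: a -> String
    , plus    :: a -> a -> a
    , minus   :: a -> a -> a
    , literal :: Integer -> a
    }

-- One possible dictionary, chosen by the caller:
intArithmetic :: Arithmetic Int
intArithmetic = Arithmetic show (+) (-) fromInteger

-- A function abstracted over whatever arithmetic it's handed;
-- the record plays the role the virtual table played earlier:
double :: Arithmetic a -> a -> a
double ar x = plus ar x x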

Better ways to do it

Functional programming is itself getting hijacked by the same consultants who rode the wave and pushed OOP. The things I present here should instead be taken as promotion, or shilling, of dependent type theory and formal-logic-based programming.

I left mutability out of the discussion because there are no additional problems with mutability when it comes to functional programming. The reason it's perceived as a problem is the precision functional programming requires from the programmer; Jean-Yves Girard and many other mathematicians took care of that a long while ago. Besides, if you immediately need what you already get from OOP, you can make a similar mess with mutable references, for instance in Haskell.

(Mis?)conception about types

There's an old conception around types that I was reminded of while reading Grady Booch's book, when I saw a picture of a dude trying to plug a statue of the number 3 into a hole. The statue was labeled with what it signified, and the hole signified something else.

The idea is that the use of types is to ensure you don't mix up things such as the "number of chickens" and "how much a chicken costs". That's almost a type-theoretic idea though. A better example would be mixing up 5 dollars and 5 euros.

The point of types, according to this explanation, would be that they prevent you from mixing things up. That's almost correct but slightly wrong. It also drives you to think of subtyping hierarchies like this:

dollar extends money
euro extends money

The example isn't giving types the attention they deserve though. It's an awful lot of effort to build separate number types just to verify that we don't mix up money units. Very few people do that.
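
For the record, here's roughly what those separate number types would look like in Haskell; a sketch, not a recommendation:

newtype Dollars = Dollars Double deriving Show
newtype Euros   = Euros   Double deriving Show

addDollars :: Dollars -> Dollars -> Dollars
addDollars (Dollars a) (Dollars b) = Dollars (a + b)

-- addDollars (Dollars 5) (Euros 5) no longer type-checks.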

Instead, there's another thing you might do with types. A type verifies that if you need some structure, then the given structure is indeed what you're expecting. We can exploit this property by asking for very fancy things and then demonstrating that we have them.

For example, you can construct a type that states that the sum of the angles of a triangle is 180 degrees. The structure would be a proof that proves the proposition. Therefore, when you have such a structure, you know that in your model of a triangle the angles sum to 180 degrees.

Both procedural and functional programming languages allow some logical reasoning based on types. The difference is that from functional programming languages, the step up to type theory is as easy as abc.

Referential transparency

Referential transparency is a potential property of a computer program. A referentially transparent program can be replaced by its value without changing the program's behavior. Let's say that x and y are programs, and you know they're equal; this is written as x = y. The meaning is the same as in mathematics: x and y have the same value. If both of them are referentially transparent, then you can rewrite x to y anywhere in the program, and vice versa.

For a programmer this mainly means that you need to separate side effects, or "behavior", from the reduction rules. In return it enables equational reasoning! Equational reasoning is the thing where you chain equalities together, walking a trail in order to verify something; basically the thing everybody learnt at school.
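
A tiny Haskell example of such a trail:

double :: Int -> Int
double x = x + x

-- Because double is referentially transparent, each step below is a valid
-- rewrite anywhere in the program:
--   double (3 * 2)  =  double 6  =  6 + 6  =  12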

The multiple dispatch issues

Karsten proposed you'd do multiple dispatch. This used to be a popular idea. I think somebody figured out it was bad, but nobody listened to that guy. Anyway, if a programming language has multiple dispatch, it's used something like this:

add(int, int)
add(float, int)
add(int, float)
add(float, float)
...

I'd advise you to stay far away from it for your own good, unless you really know that it is extensible for real. This kind of dispatch is too limited and becomes difficult with the parametric types that are inevitable if you do physics computations. It's very likely it won't support the needs of a computer algebra system, and it won't provide the interoperability you need.

To see where the problem is, just think about this: if int and float were separate modules that did not depend on each other, where should the (int, float) and (float, int) pairs be defined? Nowhere? Somewhere? In either one of the modules, but why?
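
Haskell's multi-parameter type classes run into the same question in the shape of orphan instances. A sketch, with made-up Add/add names:

{-# LANGUAGE MultiParamTypeClasses #-}

class Add a b where
    add :: a -> b -> Double

instance Add Double Double where add x y = x + y
instance Add Int    Double where add n y = fromIntegral n + y
instance Add Double Int    where add x n = x + fromIntegral n

-- If Int and Double lived in modules that don't depend on each other,
-- the mixed instances would have no natural home: they'd be orphans.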

Taxonomies, categorical hierarchies and maps

Modern OOP texts demonize these categorical hierarchies because they make the most embarrassing and entertaining counterexamples for object-oriented programming. Taxonomies themselves aren't too bad. They only become problematic when you pretend there is only one valid way to group things together. That's an unusual problem to have unless you do OOP.

It's really similar to mappings or projections in this sense. A lot of effort has been spent finding different ways to flatten the globe so that you can make maps of it. Some projections preserve areas, others preserve distances or angles. People generally, except for very few of us, have no trouble interpreting maps.

Proper application of type theory doesn't force you to pick just one taxonomy of some kind. If a representation of something becomes inconvenient, you can switch to an isomorphic representation.

Usually an isomorphism relates two functions like this:

f : a → b
g : b → a

They're made isomorphic by verifying that their compositions form functions that do nothing: g.f is the identity on a, and f.g is the identity on b.
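
A concrete Haskell pair of such functions, relating a made-up Point type to a plain pair:

data Point = Point Double Double

toPair :: Point -> (Double, Double)     -- this is f
toPair (Point x y) = (x, y)

fromPair :: (Double, Double) -> Point   -- this is g
fromPair (x, y) = Point x y

-- fromPair . toPair = id  and  toPair . fromPair = id,
-- so either representation can stand in for the other.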

It turns out isomorphisms allow you to switch between equivalent definitions. This means you don't need to stick to any specific categorization of things and treat it as an absolute representation. Putting a function in between that preserves the shape keeps it the same. A type-checked value is like a cat: it sits if it fits.

Logic programming hidden in plain sight

This is here in case you still need some convincing that it's the EOL for the OOP paradigm. If you stop subtyping, you lose the convenience of subtyping entirely. Though in return you get something even more convenient back.

The types-as-propositions correspondence means that types are valid terms in a logic-programming environment. This is already used in Haskell with typeclasses. Instance declarations like the ones below can be interpreted as logic programs; the Prolog program corresponds to the Haskell one, except that in Haskell there's a construction tied to it. First come the Haskell instance declarations, and below them is the closest corresponding Prolog program.

instance Show String
instance Show Int
instance (Show a, Show b) => Show (a,b)
instance Show a => Show [a]

show(string).
show(int).
show(pair(A,B)) :- show(A), show(B).
show(list(A)) :- show(A).

When something queries a type constraint such as (Show [(Int, String)]), the GHC compiler can be interpreted as running a proof search where the returned "proof" is a fully constructed instance satisfying the constraint. The requirement for this kind of system to work well is that any result produced by the inference is as acceptable as any other result it could produce. To enforce this, Haskell limits the functionality to something you can expect to produce a unique result. Still, there you see a computer building parts of the program for you because they're obvious.
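
For instance, printing a list of pairs makes GHC chain instance declarations like the ones above into a Show [(Int, String)] dictionary:

-- Resolution: Show [(Int, String)] <- Show (Int, String) <- Show Int, Show String
main :: IO ()
main = putStrLn (show [(1 :: Int, "one"), (2, "two")])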

The similarity between Prolog and Haskell's type checker is not a new observation either. Thomas Hallgren wrote a paper about it, "Fun with Functional Dependencies" [pdf], 20 years ago. The paper illustrates how Haskell's type class system can be used to express decidable computations at compile time; the most elaborate example given there is a static implementation of insertion sort.

These features aren't easily "ported" to procedural or object-oriented programming environments because they rely on the consistency that comes with a stricter application of type theory.

Mathematical explanation for why OOP sucks big time

There's a potential mathematical reason for why OOP gives us such a bad time and why we write about it every once in a while. It has to do with the open-ended rules left in popular languages. When OOP languages come with a type system, they prevent you from doing some dumb things but still let you do a whole lot of idiotic things. It eventually results in code breaking when it's combined in different ways. This elicits a response from the programmer: to cope, he writes code with the strictest possible interfaces he can come up with.

You'll see that Java and C# even support this: they make it inconvenient to write abstract variables and convenient to throw in a few ints and floats, although these are quite close to the machine implementation. 32-bit IEEE 754 floating-point numbers do not satisfy common algebraic laws you'd expect from real numbers, for instance. Integers are usually machine integers that are limited in range and behave like modular arithmetic instead of the usual arithmetic. When you select a type like this, you often close off many other possible representations early on.

In functional programming you just say "a" if it's something that goes through your system intact. That's as abstract as it can be, and it allows many different variations of the same program to be constructed with very little effort.
