Software complexity

I had to improve some things in my mail server, and that reminded me that software tends to be complex.

I picked my mail server based on how hard it was to work with. I ended up with Haraka because I could reprogram it in CoffeeScript to do my errands. It was a good choice, but at times I forget how it works.

I had dropped the popular Linux MTA because it contained a countless number of half-documented configuration options, to the point that I couldn't comprehend it. It's something I don't understand about stuff like the LAMP stack. What's the fucking point of open source software if you end up treating it like a magic box anyway? Shouldn't you strive to keep things in your own hands, rather than let the rope steer your ship?

In the operating system I am using now, you can see a countless number of half-documented configuration options. There are probably more configuration options in it than you'd find in an industrial steel-melting plant. It bothers me that what we do with computers tends to be much simpler than the computers themselves. It makes computing a constant balancing act on a tightrope of panacea, with the Soviet dream of an endless supply of employment waiting below in case we fall off that line.

I don't like that this is the default state of software, but I don't see it being fixed, because the cause is partially unknown to me.

It's got to have something to do with static vs. dynamic typing. It might be a matter of ideology.

To compare the two ideologies, imagine some work gets interrupted by a sudden power loss. Proponents of the perfection ideology would ensure that the power loss never happens. Proponents of the risk-management ideology would ensure that the system recovers and resumes what it was doing once power is restored.
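
Here's a minimal sketch of the risk-management side, written in Python just to have something concrete; the job, the checkpoint file name and the process() function are all made up for the illustration. The point is that the work checkpoints itself, so after a power cut it resumes instead of starting over.

    import json, os

    CHECKPOINT = "job.checkpoint"   # hypothetical checkpoint file name

    def process(item):
        print("processing", item)   # stand-in for the real unit of work

    def load_progress():
        # Resume from the last saved position, or start from zero.
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)["next_item"]
        return 0

    def save_progress(next_item):
        # Write the checkpoint atomically so a power cut mid-write
        # can't corrupt it.
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "w") as f:
            json.dump({"next_item": next_item}, f)
        os.replace(tmp, CHECKPOINT)

    def run_job(items):
        for i in range(load_progress(), len(items)):
            process(items[i])
            save_progress(i + 1)

    run_job(["a", "b", "c"])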

Computer memory is a sequence of numbers, like it always was. The processor still addresses that memory and performs operations on it. It's not fundamentally different from what computers did in 1980.

Modern computing assigns type labels to chunks of memory. Programmers see files, numbers, lookup tables, lists, text strings, network addresses, database connections, database records and so on. Much of this penetrates into the user interfaces, so users see the same concepts programmers see.

Type labels aren't provided by the operating system; they have to be provided by the programmer. Today it's common to provide runtime typing, where the final program retains some idea of its types, partly because user interfaces tend to expect something like it.
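
To make that concrete, a small Python sketch (Python just because it keeps the example self-contained, not because it's special here): every value carries its type label at runtime, and the running program can inspect and branch on it.

    values = [42, "haraka", [1, 2, 3], {"host": "example.com"}]

    for value in values:
        # type() reads the label the runtime keeps attached to each value.
        print(type(value).__name__, "->", value)

    def describe(value):
        # A user interface can branch on the labels while the program runs.
        if isinstance(value, str):
            return "text: " + value
        if isinstance(value, (int, float)):
            return "number: " + str(value)
        return "something else: " + repr(value)

    print(describe(values[0]))   # number: 42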

There was a period when runtime typing was too expensive to include in every program. Types were erased when the program was compiled into machine code, which popularized the concept of compile-time typing.

Compile-time type systems are complex to write and complex to work with. They are often designed to prevent you from running the program at all if it contains a type error.

Compile-time typing reveals the type labels and the relations between pieces of code ahead of time. That makes it easier for the editor to provide automatic renaming and restructuring across the whole project, and to link to the documentation relevant to each type.
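
One way this looks in practice, sketched with Python's optional annotations and an external checker such as mypy; the function here is made up, but the mechanism is the familiar one: the labels are written down ahead of time, so a checker or an editor can use them before the program ever runs.

    def average(numbers: list[float]) -> float:
        return sum(numbers) / len(numbers)

    # A checker run before the program starts (for example `mypy example.py`)
    # flags the call below, because "12.5" is a str and not a float.
    # Run directly with plain `python`, the same mistake only shows up
    # at runtime, as a TypeError inside sum().
    average([1.0, "12.5", 3.0])

    # The same ahead-of-time labels are what let an editor rename average()
    # across the whole project and show documentation for list[float].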

Realising that compile-time typing helps keep the program more correct, many people prefer mandatory compile-time typing on top of runtime typing. If they get an error, they run a debugger to figure out where it is, then fix it and recompile the whole program.

This makes errors expensive. The ideology is that if a rocket crashes because of a computing error, it's horrible. Rockets have crashed because of such errors anyway. There's a belief that mathematics can be used to prove the program correct, and people subscribing to this ideology are ready to add a lot of complexity to eliminate every kind of error.

Mathematics can be used to prove that something is true, or to predict things, as long as you know enough about the matter. You can't prove who murdered Lady Adelitono if all you know is that she is a woman and she does not breathe.

So you introduce complexity into all of your programming to prevent errors, but you can't get rid of all of them, because not everything can be predicted. The complexity makes every remaining error more expensive to correct.

Sure, if you can eliminate an error by doing something differently, without ill effects, by all means do so. Often, though, making errors cheap is even cheaper.

In real life, making errors cheap is called "risk management", and every large company does it to keep its business afloat. The benefit is that you can make a lot more errors before you fail.

The way you handle errors massively affects the complexity of your software, and I think that's how dynamically typed languages can be used to write much shorter and simpler software than you're used to.

Why would you prevent a program from running if it doesn't compile? Running programs are ALWAYS better than type-correct programs, because you can run them even when they don't entirely work.

Also, cute interfaces beat reliability. Say you get an error. You would much rather have your nice interface tell you which specific kind of error occurred than face a kernel panic. Nice interfaces reduce the stress of dealing with errors and allow the operator to correct them as they appear.
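
A tiny sketch of what I mean, with a made-up command loop: the error is caught at the top, named for the operator, and the program carries on instead of falling over.

    def handle_command(command):
        # Hypothetical command handler; anything could go wrong in here.
        if command == "boom":
            raise ValueError("no such mailbox")
        return "ok: " + command

    def main_loop(commands):
        for command in commands:
            try:
                print(handle_command(command))
            except Exception as error:
                # Name the specific error for the operator and carry on,
                # instead of taking the whole program down.
                print(f"error ({type(error).__name__}): {error} -- command skipped")

    main_loop(["list", "boom", "quit"])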

Not having to follow strict typing rules frees the programmer to concentrate on more important tasks. It's also much easier to reuse code, because you can do it without having to refactor anything to allow the reuse.

The error-coping strategy in your programming language is important for keeping errors cheap, and I don't think any programming language gets it right yet. The problem is that the action taken on an error should be consistent with the action that was requested. For example, addition with the wrong types shouldn't result in concatenation.
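
Python happens to behave the way I want on this particular point, so a short sketch: the mismatched addition stops with an error right where it happens, instead of quietly turning into concatenation the way 1 + "2" does in JavaScript.

    quantity = 1
    user_input = "2"              # arrived as text from some interface

    # Consistent with the requested action: addition either adds or fails.
    try:
        total = quantity + user_input
    except TypeError as error:
        print("cannot add:", error)   # the error surfaces here, where it's cheap

    # The fix stays an addition, made explicit:
    total = quantity + int(user_input)
    print(total)                  # 3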

Overall, the occurrence of an error should stop the programs that aren't prepared for it, until it reaches a program that is. Such a layered design allows reliability in the presence of errors, with human operator intervention at the bottom.
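
A rough sketch of that layering, again in Python with made-up names: the worker is prepared for nothing, the middle layer handles the errors it knows about, and anything else stops and waits for the human operator at the bottom.

    def worker(message):
        # Unprepared layer: does the work, handles nothing.
        if "@" not in message["to"]:
            raise ValueError("malformed address")
        print("delivered to", message["to"])

    def supervisor(message):
        # Prepared layer: handles the errors it was written for,
        # lets everything else fall through to the layer below.
        try:
            worker(message)
        except ValueError as error:
            print("rejecting message:", error)

    def operator_console(messages):
        # Bottom layer: a human decides about the errors nobody prepared for.
        for message in messages:
            try:
                supervisor(message)
            except Exception as error:
                print(f"unhandled {type(error).__name__}: {error}")
                input("operator: press enter to continue ")

    operator_console([{"to": "a@example.com"}, {"to": "nowhere"}, {}])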
