# Flaws in structured programming

Structured programming is an established paradigm. All popular programming languages provide structured control flow constructs.

There's a recurring problem in writing real-world software. Many believe that any kind of concurrency is inherently difficult, but few realise that structured programming makes it even more difficult than it would have to be.

Structured programming is meant to improve the clarity and quality of software, and in many respects it has been successful. However, there are now program representations that are easier to reason about, and their use would result in more reliable software.

## Where did structured programming come from, and why?

The origins of structured programming can be traced back to a discovery known as the structured program theorem. Its conclusion is that control-flow graphs consisting of subroutines, condition blocks, and loops can compute any computable function. This means the unstructured control flow used prior to structured programming is not necessary for representing programs.

Structured programming presents an improvement over unstructured programming due to its clear identification of entry points in routines.

Identifying the entry points in software shows you what stays the same across repeated runs of the program. It helps you conclude whether or not the program is correct.

This is something so basic that we take it for granted today, so I'll give an example to show what it means. Here is a program intended to enumerate the numbers from 5 to 9. First I present it without structure.

```
A1:
   i := 5
   goto A2
A2:
   enumerate i
   increment i by 1
   if i = 10
   then goto A2
   else goto A3
A3:
   program completed
```

(The program is actually not correct. Although I paid attention when writing it, I still mixed up the labels in the condition block. I decided to leave that in because it's another thing you usually get less wrong with structured programming.)

If `A1` is the only entry point, then this program is correct. However, if this were part of a larger program, you would not know whether it would be entered from `A2` or `A3`, skipping `A1`.

On the other hand, if you have the following program:

```
i := 5
do
    enumerate i
    increment i by 1
until i = 10
```

You immediately know that there is no way to enter the loop without setting `i` to 5. Knowing this helps you conclude that the program does what it is intended to do.

## Controversy over exit points

Early on, academics argued that structured programming should also allow quick identification of exit points. The Pascal programming language follows this discipline.

The opposition to multiple exit points centered on statements such as `break` and `return`. `break` stops the currently running loop and drops out of it; `return` stops the current procedure and returns from it immediately.

A loosened discipline is easier to learn, and occasionally it means fewer duplicated program lines. However, when somebody tells you that things aren't going to work, you'd better listen to why.

Recovery and resumption after errors is an important part of correctly written software. To do it, you have to track the exit points in the software.

Thanks to structured programming, it is very unusual to forget the initialization of variables, but it has become a common issue to corrupt the data under processing. Failing to reason about exit points guarantees that software doesn't recover properly from errors.
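As a sketch of the problem, here is a hypothetical Python routine (`apply_discount` is invented for illustration) where a hidden exit point leaves the data half-processed:

```python
def apply_discount(prices, discount):
    # Mutates prices in place; the raise is a hidden exit point.
    for i, price in enumerate(prices):
        if discount > price:
            raise ValueError("discount exceeds price")
        prices[i] = price - discount

prices = [10, 20, 3]
try:
    apply_discount(prices, 5)
except ValueError:
    pass
print(prices)  # [5, 15, 3] -- the first two entries were mutated, the third was not
```

Nothing in the signature tells the caller that `prices` may come back half-updated.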

## Exception handling ≃ Data corruption

Exception handling takes structured programming to an extreme where every function call is a potential exit point. Ironically, the goal there is to make error recovery convenient. There is some motivation behind this practice though:

1. Things that go wrong keep accumulating as the program keeps running.
2. Having to account for errors produces clutter in the code.
3. Error values tend to be just as rich and informative as ordinary values. This symmetry is ignored by simply returning an error code.
4. Stack traces help with debugging.

The only problem with exception handling is that when the exit points are no longer identifiable, there are fewer places where you can restore the state to what it was before the subroutine ran. Exception handling provides the `finally` block for this, but its use is inconvenient, so it is rarely used.
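A minimal Python sketch of such rollback, with an invented `transfer` routine: the `finally` block restores a snapshot whenever the subroutine exits through an error.

```python
def transfer(account, amounts):
    # Hypothetical routine: apply a batch of changes, or none at all.
    backup = dict(account)
    ok = False
    try:
        for name, amount in amounts:
            if account[name] + amount < 0:
                raise ValueError("balance went negative")
            account[name] += amount
        ok = True
    finally:
        if not ok:
            account.clear()
            account.update(backup)  # roll the state back on any error exit

account = {"alice": 10, "bob": 5}
try:
    transfer(account, [("alice", -3), ("bob", -9)])
except ValueError:
    pass
print(account)  # unchanged: {'alice': 10, 'bob': 5}
```

The inconvenience is visible: the snapshot, the `ok` flag, and the restore logic all have to be written by hand at every state-sensitive site.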

Generally people try to solve these kinds of issues through immutability. They insist that every value should be read-only unless it needs to be mutable. While this helps, it isn't a solution for state that has to change during the program's evaluation.

## Stack traces, or, what should be returned depends on context

The best thing about exceptions is the stack traces. When an error occurs it's helpful to know what happened, and stack traces can completely eliminate the guesswork.

Stack traces record every subroutine call made between the point where an error occurred and the routine that responded to the error. Usually a trace is an array of numeric values recorded while unwinding the call stack in response to the error. This way the trace reveals every entry point that was required to reproduce the error, and it is an invaluable help in finding out why the program failed.

Despite their simplicity, stack traces involve some performance questions that have caused bickering and complications. Tail call optimization complicates the collection of stack traces and raises the question of which one to trade off for the other.

If we represent errors as return values, people usually assume it means we don't get any backtraces on errors. But should backtraces be tied to error handling at all? The only reason they aren't always recorded is performance.

The problem is that when the program runs we don't know whether the stack trace will be needed, yet the trace has to be collected while the program is running.

There is a way to switch stack tracing on when it's needed: do some inference on the context to decide whether to compute the stack trace, computing it only when something is expecting it. You may even deduce the conditions required for a stack trace to be read.
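A sketch of such context-dependent tracing in Python: the (invented) error constructor consults a flag and attaches a trace only when something downstream wants one. In a real system the flag would be inferred from the calling context rather than set by hand.

```python
import traceback

TRACE_WANTED = False  # imagine this inferred from the calling context

def make_error(message):
    # Collect the stack only when a consumer is expecting a trace.
    trace = traceback.format_stack() if TRACE_WANTED else None
    return {"error": message, "trace": trace}

err = make_error("disk full")
print(err["trace"])        # None: no collection cost was paid

TRACE_WANTED = True
err = make_error("disk full")
print(err["trace"][-1])    # the innermost frame of the recorded stack
```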

When you liberate stack traces from exception handling in this way, you may wonder what other uses there could be for determining what a procedure returns from the context where it's called. Could it be good for performance in general?

## Structured error handling

If you go back to representing errors as return values, something that may seem like a step backwards, you recover an important part of structured programming: identifiable exit points in a procedure.

Being able to describe errors as ordinary values can be beneficial. In well-written software you'd expect to have a structure that describes and documents every possible error the system may produce, if only so you can produce localizations for error messages.

Having a list of the errors the program may produce can also reassure you if you ever have concerns about the program's function. For instance, here's what your coffee machine's firmware might contain:

```
data BrewingError:
    CleaningDue
    GrindingUnitBroken
    HeaterBroken
    NoWater
    OutOfBeans
    ReplaceClockBattery
    ThermistorDisconnected
```

A structured approach to error handling could be a good thing. We already practice it with control flow and ordinary data structures.

But if you transmit errors as return values, does that mean you end up where the Go programming language is? Error handling galore, sprinkled around the code like little pellets. Hamster wheel programming at its finest.
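The Go style can be mimicked in Python with value/error pairs; the check functions here are invented for illustration:

```python
def check_water():
    return None, "NoWater"        # (value, error) pair, Go style

def check_beans():
    return "beans ok", None

def brew():
    water, err = check_water()
    if err is not None:
        return None, err          # the hamster wheel: repeated at every call
    beans, err = check_beans()
    if err is not None:
        return None, err
    return "coffee", None

result, err = brew()
print(result, err)  # None NoWater
```

Every call site repeats the same three-line ritual, and forgetting one check silently drops an error.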

We can recover the convenience of exception handling, without reintroducing its problems, by using a syntactic solution that abstracts over structured programming. That solution is referred to as monads. A monad can be thought of as a form of abstraction for sequencing.

Monads introduce two functions that we can collect into an interface:

```
Monad M
    unit : a → M a
    bind : M a → (a → M b) → M b
```

The arrow symbols here should be understood as functions. `unit` is a function that takes some value and produces a monad. `bind` is a function that takes a monad and a function producing a monad, and produces another monad.

`bind` is aliased to `>>=` for convenience. For the structure to be a monad, `unit` must be a left and right identity for `bind`, and `bind` itself must be associative. Roughly this means that `unit` must not do anything interesting and that chaining `bind` must form a proper sequence. Mathematically you can describe the rules like this:

```
left identity:      (unit x >>= f) = f x

right identity:     (a >>= unit) = a

associativity:     ( a >>= x ↦ f x >>= g)
                 = ((a >>=     f)  >>= g)
```
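The three laws can be checked on a concrete instance. Here is a sketch in Python using a tagged-tuple encoding of an Either-style monad (the encoding is my own, purely for illustration):

```python
def unit(x):
    return ("right", x)

def bind(m, f):
    # a ("left", e) value short-circuits; ("right", x) feeds x to f
    return f(m[1]) if m[0] == "right" else m

f = lambda x: unit(x + 1)
g = lambda x: unit(x * 2)
a = unit(10)

assert bind(unit(3), f) == f(3)                                  # left identity
assert bind(a, unit) == a                                        # right identity
assert bind(a, lambda x: bind(f(x), g)) == bind(bind(a, f), g)   # associativity
print("laws hold")
```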

These structures let you vary the meaning a procedure carries. You can build your own convention for how the procedure should operate. It's still possible to mess it up, but at least now the language doesn't have a systematically flawed way to carry out error handling.

For example, the following implementation would let you attach an error handling routine:

```
data Either a b:
    Left a
    Right b

Monad (Either e):
  unit = Right
  bind (Right x) f = f x
  bind (Left y)  f = Left y
```

This routine keeps working until some flaw is detected. You can use the type to identify which convention a procedure is using.

```
isOperational : Either BrewingError SystemParameters
procedure isOperational()
    checkMaintenanceLog;
    testGrindingUnit;
    testMoisture;
    etc...
end procedure
```

And the use of these structures wouldn't be too bad; you could distinguish the places where you actually have to do something state-sensitive or care about error handling:

```
case isOperational() of
  Left error:
    reportError error
  Right parameters:
    runMenu parameters
```

Supporting monads in a language is quite simple, especially if you keep control flow constructs that leave the single-entry/single-exit principle untampered. The procedure is treated like one big equation and the sequence is formed by constructing it from `bind` operations.

## A cousin of structured programming

So far I've presented ways structured programming languages could be improved to be easier to use without discarding the qualities they have when it comes to reasoning about programs.

There is a thing I stumbled upon when studying intermediate representations meant to make inference over programs easier. It appeared in Jorge Navas' paper "Horn-Clauses as an Intermediate Representation for Program Analysis and Transformation".

The paper presents a horn-clause-based program representation as a way to analyse and transform programs. The representation makes many forms of errors obvious outright.

The idea is that you'd have subroutines with inputs and outputs. Both inputs and outputs are represented with variables so that the representation stays symmetric. Sequencing stays, but condition blocks and loops are removed.

Loops are replaced with subroutine calls, and condition blocks are replaced with guard clauses. The guard clauses match such that the program is completely defined.

Consider the structured programming fizz buzz:

```
fizz_buzz(count):
    i := 1;
    while i < count do
        by_3 := (i % 3 == 0);
        by_5 := (i % 5 == 0);
        if by_3 and by_5 then
            print "fizz buzz"
        else if by_3 then
            print "fizz"
        else if by_5 then
            print "buzz"
        else
            print "%s" i;
        i := i + 1
```

The horn-clause-based low-level representation for this would be:

```
fizz_buzz(count):
    fizz_buzz(1, count).

fizz_buzz(index, count):
    index < count;
    index % 3 = 0;
    index % 5 = 0;
    print "fizz buzz";
    fizz_buzz(index+1, count)

fizz_buzz(index, count):
    index < count;
    index % 3 = 0;
    index % 5 ≠ 0;
    print "fizz";
    fizz_buzz(index+1, count)

fizz_buzz(index, count):
    index < count;
    index % 3 ≠ 0;
    index % 5 = 0;
    print "buzz";
    fizz_buzz(index+1, count)

fizz_buzz(index, count):
    index < count;
    index % 3 ≠ 0;
    index % 5 ≠ 0;
    print "%s" index;
    fizz_buzz(index+1, count)

fizz_buzz(index, count):
    ! index < count.
```

At first sight the representation is so redundant that it seems it can't be any better. But if you translate a lot of programs into this form and read them, you will eventually notice that you're able to tell whether a program has something wrong just by glancing at it.
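As a sketch, the clause structure can be mimicked in Python: the loop becomes a recursive call and each clause is a guarded branch (collecting output into a list instead of printing, so the result can be inspected):

```python
def fizz_buzz(index, count, out):
    if not (index < count):          # the terminating clause
        return out
    by_3 = index % 3 == 0
    by_5 = index % 5 == 0
    if by_3 and by_5:
        out.append("fizz buzz")
    elif by_3:
        out.append("fizz")
    elif by_5:
        out.append("buzz")
    else:
        out.append(str(index))
    return fizz_buzz(index + 1, count, out)   # the loop as a subroutine call

print(fizz_buzz(1, 16, []))
```

Python still forces the guards back into an if/elif chain; the horn-clause form keeps each case as a separate, fully-guarded clause.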

I think what is happening is that this representation provides you with more static information about the program state and clearly associates the actions in the program with that state.

Although this is not a big enough difference to motivate you to switch away from structured programming outright, it hints that logic programming could yield better programs.

## Program counter is an implementation detail

Sequencing is a complicated thing to emulate if the hardware doesn't exactly fit what the programming environment strives to provide.

It is starting to stand in the way of writing software. For example, graphics hardware today is different enough that it doesn't run JavaScript or WebAssembly, and you need some slight variant language to program it. Languages such as Rust or JavaScript simply assume too much about the platform to run on a graphics card.

I think it's the program counter, and the assumptions about program state, that are catching up with us here.

## Async/Sync should not be exposed to a programmer

Structured programming is a poor fit for modeling software whose responsibilities diverge.

Let's examine a program that waits for a second and then writes a message, this time written in JavaScript:

```
setTimeout(function(){ console.log("hello") }, 1000)
```

This is an asynchronous way to get a program to wait a second before it acts. It is achieved by running an event loop that is entered when the script exits.

There is also a synchronous way to wait a second. We can illustrate it in the C programming language with the POSIX interface, which provides a synchronous sleep function:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
  sleep(1);
  printf("hello\n");
  return 0;
}
```

As an interface designer for a structured programming language, you have to decide whether to provide synchronous or asynchronous I/O. JavaScript picked asynchronous I/O because it is easier to implement in a way that works for writing interactive programs.

Neither choice has been satisfying, and you may have stumbled across problems that arise because one application interface resorts to synchronous behavior while another tries to be asynchronous.
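Python exposes the same split inside one language, which makes the divergence easy to see; this sketch shows both flavors side by side:

```python
import asyncio
import time

def sync_hello():
    time.sleep(0.01)           # synchronous: blocks the whole thread
    return "hello"

async def async_hello():
    await asyncio.sleep(0.01)  # asynchronous: yields to the event loop
    return "hello"

print(sync_hello())
print(asyncio.run(async_hello()))
```

An ordinary function cannot simply call `async_hello`; the async/sync choice infects every interface above it.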

Technically it's possible to make asynchrony indistinguishable through first-class continuations, but then we lose all guarantees about single entry/single exit points.

How about we ignore the whole async/sync problem like we did with multiple exit points? Ok... then I could tell you about message boxes, semaphores, mutexes, locks, write and read barriers, process spawning and forking, and single-program-multiple-data. Once we distinguish programs by the evaluation order applied to them, we get all of these details, and all of them are very system-specific.

If we treat structured programming as a monad, annotate the side effects into the types, and then examine the intuitionistic type signatures, we see the following difference between the sync and async functions:

```
timeout : Integer → IO () → IO ()
sleep   : Integer → IO ()
```

The second argument to `timeout` is significant here and cannot be left out. This suggests the async/sync difference cannot be abstracted away.

We can change our logic though. We can rewrite the type signatures in linear logic and then compare them:

```
timeout : Integer ⊸ (1 ⊸ 1) ⊸ 1
sleep   : Integer ⊸ 1
```

The "lollipop" is shorthand (`a ⊸ b` unfolds to `~a ⅋ b`), so we can rewrite the signatures as:

```
timeout : ~Integer ⅋ (~1 ⊗ 1) ⅋ 1
sleep   : ~Integer ⅋ 1
```

In linear logic, 1 is the unit of ⊗ and ~1 is the unit of ⅋. These units can be described like this:

```
unit_conj :  1 ⊗ a = a
unit_disj : ~1 ⅋ a = a
```

By applying these rules we can rewrite `timeout` to `sleep` and vice versa. This means they are equivalent under linear logic.
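Written out, the rewrite proceeds step by step (here ⅋ is rendered with `\parr` from the `cmll` LaTeX package):

```latex
\begin{align*}
\mathit{timeout}
  &: \;{\sim}\mathrm{Integer} \parr ({\sim}1 \otimes 1) \parr 1 \\
  &= \;{\sim}\mathrm{Integer} \parr {\sim}1 \parr 1
     && \text{by } 1 \otimes a = a \\
  &= \;{\sim}\mathrm{Integer} \parr 1
     && \text{by } {\sim}1 \parr a = a \\
  &= \;\mathit{sleep}
\end{align*}
```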

However, this cannot be done if we stick to structured programming. It cannot abstract away asynchrony without losing properties.

## Modeling behaviors with linear logic

Software designers do not care about sequencing most of the time. They care about how their program behaves.

For example consider a web server that implements a chat. We'd like things to happen when:

1. There's a request to set up the server.
2. There's a request to listen at a port.
3. A connection request is made to a port; it then needs to be either rejected or accepted, and on acceptance we get a connection we have to do something with.
4. A message is received through a connection.
5. A connection disconnects.
6. There's a request to shut down the server.

The chat program would have to relay each message to every connection. Connections would have to be collected into a channel list, and on disconnection a connection should be removed from the list.

The behaviors described here are related to each other, and this can be troublesome to model. For example, the server must shut down all the connections it has, and it cannot be shut down before it has been started.

Structured programming makes it a complicated task to implement such a server. The programming task usually reduces to building some sort of state machine. You also get the question of whether one process should handle every connection or whether there should be a process for each.

It is possible to model all of this behavior in linear logic. In this kind of logic, you can receive values that you are obliged to apply.

For example, the request to disconnect a client could be represented with a type such as `Connection client ⊸ 1`. You cannot discard this value in any way other than applying it to a connection that matches the specific client. In other words, the program doesn't type check if it doesn't disconnect a client when demanded.

If you're interested, here are some other potential modelings for concepts that might appear in a chat program:

```
start      : 1 ⊸ Server s
listen     : Server s ⊸ Server s ⊗ Listen port s
connection : Listen port s
  ⊸ Listen port s
    ⊗ Request k
    ⊗ (Reject k & Accept k ⊗ Connection k s)
receive    : Connection k s ⊸ Connection k s ⊗ Message k
disconnect : Connection k s ⊸ 1
stop       : Server s ⊸ 1
```

These do not model the interactions between the events, though the logic is expressive enough to do that.

To manage a connection list and handle the logic of the chat program, we might model a connection list that consumes messages:

```
List (∃k. Connection k s) ⊗ Message j
    ⊸ List (∃k. Connection k s)
```

On a disconnection it would consume a disconnection request:

```
(Connection j s ⊸ 1)
    ⊸ List (∃k. Connection k s)
    ⊸ List (∃k. Connection k s)
```

There are ways to model program interactions without the concepts of a process and sequencing that would otherwise form the foundation of structured programming.

Linear logic forms a calculus a bit like those used as foundations for ML, Haskell, or Idris. It also provides a normalization step that can be used to evaluate it. It's not the only form of logic capable of modeling interactions, but it can represent these kinds of ideas efficiently.

## A little crash course in linear logic

Consider that you have a list of values, each labeled by its type. This may be a familiar setting if you've studied typed lambda calculus. We display such lists like this and call them contexts:

```
⊢ a,b,c...
```

For example, you might have a context with two numbers and a string:

```
⊢ int, int, string
```

This system becomes capable of modeling interactions if we require that you cannot outright add and remove values from a context. For example, if you want to get to `⊢ int, string`, you have to explicitly say that you can erase integers and tell which integer you erased.

You can pair items within a context using the disjunctive operation:

```
⊢ a, b

⊢ a ⅋ b
```

If you have two different contexts, you can combine them with the conjunctive operation:

```
⊢ a       ⊢ b

⊢ a ⊗ b
```

If you keep track of what you've combined and how, you can avoid, or at least isolate, scenarios that would lead to deadlocks. This in turn means that values and demands for values can be treated similarly.

Now, obviously there are some values you can always destroy or create; integers and strings are like that. If you can produce a value starting from an empty context, then you can always create such a value, and therefore you're allowed to copy it. That a value can be created from an empty context is denoted with an exclamation mark:

```
⊢ !a
```

You can erase and duplicate such values for free. There's a dual for this concept, marked `?a`, that works in inverse: if a value can be reduced into a context with just ~1 in it, then it can be marked with a question mark:

```
⊢ ?a
```

These rules form classical linear logic. The first rules were the multiplicative operators and the latter were the exponentials. I still have the additives to introduce.

Now, just like in typed lambda calculus, there's a sum type here. It can be used to make two different contexts look the same:

```
⊢ a         ⊢ b
⊢ a ⊕ b     ⊢ a ⊕ b
```

Likewise, we can combine two contexts when they share something:

```
⊢ a,c       ⊢ b,c
⊢ a&b, c
```

To show that this forms a logic, I'll present how it can be used to refute a proposition.

## A little example

Say we have a string that consists of pairs of two different letters. I claim such a string cannot be a palindrome. We would refute this:

```
⊢ (a ≠ b) ⊗ paired s a b ⊗ palindrome s
```

What is a palindrome?

```
palindrome ""
palindrome s ⊸ palindrome (char a ++ s ++ char a)
```

We define the empty string as a palindrome, and any palindrome with a character prepended and appended is also a palindrome. For example "", "bb", and "abba" are palindromes.

We miss palindromes with an odd number of letters though :) I made this mistake myself without unfolding what the expression means. The missing base case doesn't change the conclusion, it only adds one more case, so I won't correct it here, though our definition of 'palindrome' excludes words such as "madam".

You may wonder how formal logic deals with this kind of mistake. One way is to verbally explain what you mean, as I did there. You'd likely notice such mistakes when you attempt to prove properties about palindromes and work with the definition. It would be important to fix it if we were doing that.

Ok. Palindromes are defined; how about our paired string?

```
paired (char a ++ char b) a b
paired s a b ⊸ paired (s ++ char a ++ char b)
```

Likewise, we take a base case of two letters together, and then treat any paired string with a pair of characters appended as a paired string as well. E.g. "xy", "xyxy", "xyxyxy"... are paired strings here. The empty string is not treated as a paired string though.

Now that we know what palindrome and paired mean here, the base cases and induction cases can be glued together with the `⊕` operator:

```
⊢ (a ≠ b)
⊗ ((s = "") ⊕ ∃q.(s = q ++ char a ++ char b) ⊗ paired q)
⊗ ((s = "") ⊕ ∃q.(∃a. s = char a ++ q ++ char a) ⊗ palindrome q)
```

The symbol `∃` stands for "there is". We can break the proposition down into:

```
⊢ (a ≠ b) ⊗ (s = char a ++ char b) ⊗ (s = "")
⊢ (a ≠ b) ⊗ (s = char a ++ char b) ⊗ ∃q.(∃a. s = char a ++ q ++ char a) ⊗ palindrome q
⊢ (a ≠ b) ⊗ (∃q.(s = q ++ char a ++ char b) ⊗ paired q) ⊗ (s = "")
⊢ (a ≠ b) ⊗ (s = q ++ char a ++ char b) ⊗ paired q
⊗ ∃q.(∃a. s = char a ++ q ++ char a) ⊗ palindrome q
```

Once we refute every one of these expressions, we've shown that the proposition has no proof.

The first expression is immediately refuted:

```
⊢ (a ≠ b) ⊗ 0
─────────────
```

Expand the palindrome in the second expression and we learn that q must be "". This is a long step, but you can see it can be done.

```
⊢ (a ≠ b) ⊗ (s = char a ++ char b) ⊗ (∃a. s = char a ++ char a)
```

From here we can conclude:

```
⊢ (a ≠ b) ⊗ char a = char a ⊗ char b = char a
```

Char is injective, meaning that `char a = char b` entails `a = b`, so we can refute the second expression:

```
⊢ (a ≠ b) ⊗ a = a ⊗ (b = a)
───────────────────────────
⊢ 1 ⊗ ~1
────────
```

The third expression is, again, immediately refuted:

```
⊢ (a ≠ b) ⊗ ∃q.paired q ⊗ 0
───────────────────────────
```

Here we can use the rule that the length of `s` is nonzero if it is constructed by concatenating non-empty strings.

The fourth expression requires something else. We have to expand the definition of palindrome:

```
⊢ (a ≠ b) ⊗ (∃q.(s = q ++ char a ++ char b) ⊗ paired q)
⊗ (s = char a ++ char a)

⊢ (a ≠ b) ⊗ (∃q.(s = q ++ char a ++ char b) ⊗ paired q)
⊗ ∃q.(∃a,b. s = char a ++ char b ++ q ++ char b ++ char a) ⊗ palindrome q
```

To both of these we can apply a theorem: if we know the suffix of a string, then we know what its last character is. We can conclude that `char b = char a`, and again use the injectivity of char to conclude `b = a`.

The proposition has been refuted, because in every way it can be interpreted it either leads to a contradiction or is outright false.

In classical linear logic every proposition has a dual. We can invert the proposition and state that the following has been proven:

```
⊢ (a = b) ⅋ ~paired s a b ⅋ ~palindrome s
```

It roughly means the same as all of these together:

```
⊢ (a ≠ b) ⊸ paired s a b ⊸ ~palindrome s
⊢ (a ≠ b) ⊸ palindrome s ⊸ ~paired s a b
⊢ paired s a b ⊸ palindrome s ⊸ (a = b)
```

It would be possible to do this kind of refutation with type theory as well, and it wouldn't be too different from this.

## Conclusion

Here I've tried to replicate the thoughts I went through a year and a half ago when I stopped working on my own programming language. It slowly occurred to me that structured programming has this flaw. I wanted to present these ideas thinking they may influence people, for example to give Idris a try.

I've been programming in Idris lately, but I'm not sure if I'll stick with it. Still, in that language you're able to construct proofs similar to the palindrome proof above.

It needs a change of viewpoint to work for you though. In Idris you can't think of types as hardware representations of values or memory layouts. Even if you construct a natural number as `(S (S (S (S Z))))`, that's not the intended memory representation of the value; it's a mathematical representation of a natural number. You work with those mathematical representations rather than machine numbers.
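As a rough analogy in Python (my own encoding, nothing to do with how Idris actually compiles values), the Peano constructors can be modeled as data, with the numeric reading computed separately:

```python
from dataclasses import dataclass

@dataclass
class Z:          # zero
    pass

@dataclass
class S:          # successor of another natural
    pred: object

def to_int(n):
    # Interpret the mathematical representation as a machine integer.
    count = 0
    while isinstance(n, S):
        n = n.pred
        count += 1
    return count

four = S(S(S(S(Z()))))
print(to_int(four))  # 4
```

The structure `S(S(S(S(Z()))))` is the mathematical object you reason about; `to_int` is just one possible reading of it.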