Into Logic Programming

There is no end to the correspondences and coincidences that lead one to think we should be doing programming in logic.

Type systems

The inferencers used with the Hindley-Milner type system consist of rewriting rules that use the most general unifier and generalization to deduce types for untyped programs.

The rewriting rules translate into Horn clauses in a straightforward manner. For example, here's a Horn clause for function application:

∀ e a u t. TypeOf(e, (->)(u, t)) ∧ TypeOf(a, u) ⊃ TypeOf(app(e, a), t)

Types themselves can be represented as terms or propositions.
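To make the application rule above concrete, here is a minimal sketch in Python. The representation and helper names are my own: type variables are strings starting with `?`, the arrow type `u -> t` is the tuple `("->", u, t)`, and a small unifier plays the role of the most general unifier in the rule.

```python
# A sketch of the application rule:
# TypeOf(e, u -> t) ∧ TypeOf(a, u) ⊃ TypeOf(app(e, a), t)
# Types are nested tuples; type variables are strings starting with "?".

def walk(t, s):
    # Follow substitution bindings until a non-variable or an unbound variable.
    while isinstance(t, str) and t.startswith("?") and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Return an extended substitution unifying a and b, or None on failure.
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a.startswith("?"):
        return {**s, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def resolve(t, s):
    # Substitute bindings all the way through a term.
    t = walk(t, s)
    if isinstance(t, tuple):
        return tuple(resolve(x, s) for x in t)
    return t

def type_of_app(fun_type, arg_type):
    # Unify the function's type with arg_type -> ?t and read off ?t.
    s = unify(fun_type, ("->", arg_type, "?t"), {})
    return resolve("?t", s) if s is not None else None

print(type_of_app(("->", "int", "bool"), "int"))  # prints: bool
```

A real inferencer would also rename variables apart and handle generalization; this only shows the unification step of the application rule.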

Grammars

Horn clauses can be used to represent context-free grammars. When grammars are treated this way, SLD-resolution behaves like a top-down parser.
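As a sketch of that correspondence (the toy grammar and names are my own), take S → "a" S "b" | ε. The Horn-clause reading is roughly `s(X0, X) :- X0 = [a|X1], s(X1, X2), X2 = [b|X]`, with each nonterminal a predicate over input positions; trying the clauses in order is exactly a top-down parse:

```python
# Toy grammar: S -> "a" S "b" | ε.
# Each nonterminal becomes a predicate over input positions; the
# generator yields every position where a derivation of S can end,
# mirroring SLD-resolution trying each clause with backtracking.

def s(tokens, i):
    # Clause 1: S -> "a" S "b"
    if i < len(tokens) and tokens[i] == "a":
        for j in s(tokens, i + 1):
            if j < len(tokens) and tokens[j] == "b":
                yield j + 1
    # Clause 2: S -> ε
    yield i

def parses(string):
    # Accept iff some derivation consumes the whole input.
    return any(j == len(string) for j in s(string, 0))

print(parses("aabb"))  # prints: True
print(parses("aab"))   # prints: False
```

Prolog's DCG notation automates exactly this translation from grammar rules to clauses over difference lists.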

Pattern matching

Pattern matching seems to originate directly from the use of the most general unifier to make logic clauses match.

Most functional programming languages also provide pattern-matching features, but the representation is usually compiled away before the program runs.
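Seen this way, a pattern match is one-way unification: the pattern may contain variables, the value may not. A minimal sketch (representation and names are my own; real compilers lower this to nested tests instead):

```python
# Pattern matching as one-way unification: patterns are nested tuples,
# pattern variables are strings starting with "?", values are concrete.

def match(pattern, value, bindings=None):
    # Return the variable bindings if the value fits the pattern, else None.
    b = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in b and b[pattern] != value:
            return None  # a repeated variable must match the same value
        b[pattern] = value
        return b
    if isinstance(pattern, tuple) and isinstance(value, tuple) \
            and len(pattern) == len(value):
        for p, v in zip(pattern, value):
            b = match(p, v, b)
            if b is None:
                return None
        return b
    return b if pattern == value else None

# Binds ?head to 1 and ?tail to ('cons', 2, 'nil'):
print(match(("cons", "?head", "?tail"), ("cons", 1, ("cons", 2, "nil"))))
```

Full unification differs only in allowing variables on both sides, which is what lets Prolog run the same clause "forwards" and "backwards".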

Functions returning multiple values

In languages such as Icon, functions can return multiple results. This eventually ended up in Python in a restricted form (generators), and then spread into other languages.

If you combine generators with logic variables and unification, you've got yourself an implementation of Prolog.
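That claim can be sketched in surprisingly little code. The following is my own toy illustration, not a serious implementation: terms are tuples, logic variables are strings starting with `?`, clauses are (head, body) pairs, and Python generators supply the backtracking of SLD-resolution.

```python
# A tiny Prolog core: generators for backtracking, plus logic
# variables and unification. Clauses are (head, [body goals]) pairs.
import itertools

def walk(t, s):
    while isinstance(t, str) and t.startswith("?") and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a.startswith("?"):
        return {**s, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

counter = itertools.count()

def rename(term, mapping):
    # Give each clause instance fresh variables so uses don't interfere.
    if isinstance(term, str) and term.startswith("?"):
        return mapping.setdefault(term, "?_%d" % next(counter))
    if isinstance(term, tuple):
        return tuple(rename(t, mapping) for t in term)
    return term

def solve(goal, db, s):
    # SLD-resolution: try each clause; each generator is a choice point.
    for head, body in db:
        m = {}
        s1 = unify(goal, rename(head, m), s)
        if s1 is not None:
            yield from solve_all([rename(g, m) for g in body], db, s1)

def solve_all(goals, db, s):
    if not goals:
        yield s
    else:
        for s1 in solve(goals[0], db, s):
            yield from solve_all(goals[1:], db, s1)

def resolve(t, s):
    t = walk(t, s)
    if isinstance(t, tuple):
        return tuple(resolve(x, s) for x in t)
    return t

# append/3, exactly as it would be written in Prolog:
db = [
    (("append", "nil", "?ys", "?ys"), []),
    (("append", ("cons", "?x", "?xs"), "?ys", ("cons", "?x", "?zs")),
     [("append", "?xs", "?ys", "?zs")]),
]

goal = ("append", ("cons", 1, "nil"), ("cons", 2, "nil"), "?result")
s = next(solve(goal, db, {}))
print(resolve("?result", s))  # prints: ('cons', 1, ('cons', 2, 'nil'))
```

Everything missing from a real Prolog (cut, negation, arithmetic, an efficient term representation) is control machinery layered on top of this core.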

Horn clauses as IR

Horn clauses seem to provide a good foundation for representing programs, even when those programs are procedural and deterministic. There's a separate blog post about that subject for those who want to read more.

Existential quantifier

There's an old paper, "Abstract Types Have Existential Type". The idea is that existential quantifiers give rise to abstract data types.

The connection is still somewhat loose in my mind, but atoms and compound terms could be interpreted as Skolemized existential quantifiers.

Final push over

Clauses written in plain first-order logic would seem to form a foundation for programming that can cover the strongest concepts present in existing programming languages with a set of relatively simple ideas.

When you start learning a language such as Prolog, the first impression is that you need to learn a completely different way to program in order to use it effectively.

The main concept in Prolog is that you describe the logic of your program and then run a query; a theorem-proving algorithm drives the evaluation of the program. This separates the logic in the program from the control.

There is a learning curve to this, but it is not particularly steep for people who already know how to program in several languages. You can start by writing the program like you'd write it in your favorite language, just using Prolog constructs. Don't worry that it could fail, backtrack, and try another combination. Once you have it down, study how it behaves and what it can and can't do, what kinds of errors it produces, and then refine it until you're satisfied.

Once I tried this and saw the results, it confirmed my belief that this is the way to write good computer programs.
