Contradiction in divide by zero

When you multiply, you are implicitly working with an equation such as:

a*b = c

To multiply, we know the values a and b and want to find c.

There are two ways to construct a division, but they collapse into one if multiplication is commutative. With left division we know the values c and a and need to find b. With right division we know the values c and b and need to find a.

Assume this is case (a):

a * 0 = 0

We won't learn anything new about a by computing the division 0 / 0. The equation stays satisfiable, but the domain of a does not narrow.

Assume this is case (b):

a * 0 = 5

The division 5 / 0 is a contradiction. Observe that the equation above is unsatisfiable in any algebra with the axiom x * 0 = 0.

Programming treats cases (a) and (b) the same: you can't compute the result either way.
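For instance, in Python both divisions fail with the very same exception (a small sketch; the helper name divide is mine):

```python
# Python raises the identical exception for both cases, so a program
# cannot tell "any a satisfies a*0 = 0" apart from "no a satisfies a*0 = 5".
def divide(c, b):
    try:
        return c / b
    except ZeroDivisionError as e:
        return type(e).__name__

print(divide(0, 0))  # ZeroDivisionError  (case a)
print(divide(5, 0))  # ZeroDivisionError  (case b)
```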

In case (a), the solution space for a covers the whole domain of a. This can be denoted a ∈ A. In case (b), the solution space for a is empty. This can be denoted a ∈ {}, which reduces to falsity.
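The two solution sets can be checked by brute force over a small sample domain (a Python sketch; the choice of domain A is arbitrary):

```python
# Brute-force the two solution sets over a small sample domain A.
A = set(range(-3, 4))
case_a = {a for a in A if a * 0 == 0}  # equation a*0 = 0
case_b = {a for a in A if a * 0 == 5}  # equation a*0 = 5

print(case_a == A)  # True: the solution space covers the whole domain
print(case_b)       # set(): the solution space is empty
```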

These are totally opposite ideas, yet programmers often treat them as the same.

It may also be interesting to consider what happens if we introduce division as multiplication by an inverse.

a*b = c
a*b/b = c / b   || divide both sides by 'b'
a = c * (1/b)   || simplify and apply identity rule (n = 1n)
a = c * inv(b)  || declare inversion function

The inverse allows precision when multiplication is not commutative (ab ≠ ba). At the same time, we seem to learn something new.
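A sketch of this with 2x2 matrices, where multiplication is not commutative; the helper names mul and inv and the matrix values are my own choices:

```python
# Minimal 2x2 matrix helpers; matrices are pairs of row tuples.
def mul(p, q):
    (a, b), (c, d) = p
    (e, f), (g, h) = q
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def inv(m):
    (a, b), (c, d) = m
    det = a * d - b * c  # assumed nonzero here
    return ((d / det, -b / det), (-c / det, a / det))

a = ((1.0, 2.0), (3.0, 4.0))
b = ((0.0, 1.0), (1.0, 1.0))
c = mul(a, b)

# Because ab != ba, the side on which inv(b) is applied matters:
print(mul(c, inv(b)) == a)  # True: multiplying by inv(b) on the right recovers a
print(mul(inv(b), c) == a)  # False: multiplying on the left does not
```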

inv(0) is a strange sort of number. 0*inv(0) collapses. It probably amounts to introducing a fresh variable; otherwise any use of inv(0) is a contradiction.

Oh. I think I've seen this before.

A long time ago I remember reading about an academic who was ridiculed for his idea of a "nullity". I too thought the guy was ridiculous and laughed at the thing. The Slashdot story I read is still up, and I managed to find the Wikipedia page. James A.D.W. Anderson's transreal numbers are described there and seem to provide some useful, related ideas. It turns out nullity is a quantity like 0*inv(0): a variable that is always fresh, a bit like Prolog's underscore.

From a contradiction, anything follows; if you get a contradiction, you should not trust the results you get. However, if there is no contradiction, then you potentially have results you can use for something.


To see what I mean when I say that these two forms of division by zero are very different, we can illustrate it with the computation of a line-line intersection point on a plane.

ax = x1-x3,     ay = y1-y3
bx = x3-x4,     by = y3-y4
cx = x1-x2,     cy = y1-y2
n = ax*by - ay*bx
d = cx*by - cy*bx
t = n / d
px = x1 + t*(x2-x1)
py = y1 + t*(y2-y1)

d depends on the directions of the two lines. People who understand linear algebra will recognize that we're calculating outer products. d is the sine of the angle between the lines, multiplied by the lengths of the two direction vectors. n is similar, but tells how far the defining point of the other line lies from our line. By looking at the equation you can see that d = 0 when the lines are either degenerate or parallel.

Degeneracy means that a line is compressed into a point: one of the lines has no length. sin(angle) = 0 means that the lines are parallel.

OK, so if d = 0 and we have proper lines, then n = 0 tells us that the lines overlap, and n ≠ 0 means that there is no intersection point.

Why do we get a case with degenerate lines and n = 0? I suppose it means something, but I don't know what.
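The case analysis above can be sketched in Python (the function name and the string return values are my choices):

```python
def intersect(x1, y1, x2, y2, x3, y3, x4, y4):
    # Follows the equations given earlier.
    ax, ay = x1 - x3, y1 - y3
    bx, by = x3 - x4, y3 - y4
    cx, cy = x1 - x2, y1 - y2
    n = ax * by - ay * bx
    d = cx * by - cy * bx
    if d != 0:                      # proper intersection point
        t = n / d
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    if n == 0:
        return "coincident"         # the 0/0 case: every point satisfies the equations
    return "no intersection"        # the n/0 case: the equations are unsatisfiable

print(intersect(0, 0, 2, 2, 0, 2, 2, 0))  # (1.0, 1.0)
print(intersect(0, 0, 1, 1, 2, 2, 3, 3))  # coincident
print(intersect(0, 0, 1, 1, 0, 1, 1, 2))  # no intersection
```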

Floating point difficulties with zero

With real numbers we have a problem with inv(0), because floating point gives us this:

a*b = c+ε

Here ε represents the error that occurs because floating point computations produce approximations. I'm not actually sure this is the correct way to represent the concept. But the point is, we do not get exact results.

I'm not sure what this means or how it should be treated with respect to this new concept. I should probably find out the proper way to describe floating point mathematically.
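A small Python illustration of the problem (the coordinates are arbitrary): two lines that are exactly parallel as real numbers, where the rounded subtractions nonetheless make the denominator d a tiny nonzero value, so the parallel case is silently misclassified as an intersection:

```python
# Lines through (0,0)-(3,0.1) and (0,1)-(3,1.1) have the same direction
# (3, 0.1) in exact arithmetic, so d should be zero. But 0.0 - 0.1 and
# 1.0 - 1.1 round to slightly different doubles, so d comes out as a
# tiny nonzero number and t = n/d lands absurdly far away instead of
# being reported as nonexistent.
x1, y1, x2, y2 = 0.0, 0.0, 3.0, 0.1
x3, y3, x4, y4 = 0.0, 1.0, 3.0, 1.1

d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
print(d == 0.0)  # False: d is tiny but not zero
```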


The consequences of understanding this concept seem interesting mathematically. Computationally it is hairier.

It is common to implement division with types that do not worry about the zero case. Detecting a division by zero before the program runs is one of those hard things to achieve.

If division by zero actually occurs, we tend to assume we have three choices: a default value, nontermination, or failure.
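Two of these choices can be sketched in Python (the helper names are mine; the "default" variant follows IEEE-754-style semantics, which at least distinguish the two cases from earlier):

```python
import math

# "Failure": Python's own / raises for both zero cases.
def failing(c, b):
    return c / b  # raises ZeroDivisionError when b == 0

# "Default": return a default value instead of failing, in the style
# of IEEE-754 floats: c/0 defaults to ±inf, while 0/0 defaults to nan.
def defaulting(c, b):
    if b == 0:
        return math.nan if c == 0 else math.copysign(math.inf, c)
    return c / b

print(defaulting(5.0, 0.0))  # inf
print(defaulting(0.0, 0.0))  # nan
```

The third choice, nontermination, is not worth demonstrating.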

With any of these three options we seem to throw a huge issue down to the user. It could be that we have another billion-dollar mistake right here.