Geometric algebra (duals) (cont)
Last time I wondered about what a multivector should mean.
Well... the productive way to examine this is likely through understanding what kinds of transformations the geometric product and addition represent.
If you have a multivector that is x units apart from being a result you can use, then the multivector is (a ± x).
I've read about this algebra from many sources, and they differ enough in representation that I'm not sure they all describe the same algebra.
Geometric algebra comes with the idea that every n-vector has a dual, e.g. in R(3,0,0) it'd be:
x <-> y∧z
y <-> x∧z
z <-> x∧y
The dual vector is treated as a vector that is a wedge product of everything but the vector it's a dual of. The dual vectors also have dual notions of operations: we have a dual for the wedge product, called the regressive product.
There would seem to be an easy, metric-independent way to obtain the duals. This comes up if you read up on projective geometry in Charles G. Gunn's papers.
Find pairs of elements that wedge together to the pseudoscalar, e.g. with 2D coordinates they are:
1 ∧ I = I
e0 ∧ e1 = I
e1 ∧ -e0 = I
I ∧ 1 = I
To do the transform, substitute every element on the left with the corresponding element on the right.
dual (a + b*e0 + c*e1 + dI)
= d - c*e0 + b*e1 + aI
If you do the dual transform twice, you get back the same or the negated multivector, depending on the coordinates you picked.
dual (dual a) = ±a
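Here's a minimal Python sketch of this substitution dual (the representation is my own choice, not from any library): a 2D multivector is a tuple (a, b, c, d) meaning a + b*e0 + c*e1 + d*e01.

def dual(m):
    # substitute 1 -> e01, e0 -> e1, e1 -> -e0, e01 -> 1
    a, b, c, d = m
    return (d, -c, b, a)

m = (1.0, 2.0, 3.0, 4.0)
print(dual(dual(m)))  # (1.0, -2.0, -3.0, 4.0)

In these coordinates the double dual negates the e0 and e1 components and leaves the rest alone, so it's ±a element by element.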
I find this a bit dissatisfying, because this way of defining duals clearly depends on the chosen coordinates.
I mentioned the regressive product. It seems to be an operation that behaves like the wedge product, but with duals. E.g. if the wedge product does this:
e0 ∧ e01 = 0
e0 ∧ e1 = e01
e1 ∧ e0 = -e01
Then the regressive product does this:
E0 ∨ E01 = 0
E0 ∨ E1 = E01
E1 ∨ E0 = -E01
So in short, if the same elements are missing, the regressive product is zero. Otherwise we produce an object that combines the missing elements.
E01 would be an object that combines everything else except e0 or e1.
Let's consider taking our coordinate-space-sized fragment of the antivectors E0, E1 in 2D space. The highest-grade element in the space is e0∧e1. If our antivector E0 doesn't contain e0, then it must project down to e1, and E1 must project down to e0. E01 can't contain either the e0 or the e1 vector, so it must map down to 1.
Hmm..
1 ∧ 1 = 1
e0 ∧ e0 = 0
e1 ∧ e1 = 0
e0 ∧ e1 = e01
e1 ∧ e0 = -e01
e01 ∧ e01 = 0
dual 1 ∨ dual 1 = dual 1
E0 ∨ E0 = 0
E1 ∨ E1 = 0
E0 ∨ E1 = E01
E1 ∨ E0 = -E01
E01 ∨ E01 = 0
Ok, what happens if E0 is replaced with e1, and so on?
e01 ∨ e01 = e01
e1 ∨ e1 = 0
e0 ∨ e0 = 0
e1 ∨ e0 = 1
e0 ∨ e1 = -1
1 ∨ 1 = 0
Let's represent this with multiplication tables to be clear.
∧ 1 e0 e1 e01
+----------------
1 | 1 e0 e1 e01
e0 | e0 0 e01 0
e1 | e1 -e01 0 0
e01| e01 0 0 0
∨ 1 E0 E1 E01
+----------------
1 | 1 E0 E1 E01
E0 | E0 0 E01 0
E1 | E1 -E01 0 0
E01| E01 0 0 0
∨ e01 e1 e0 1
+----------------
e01| e01 e1 e0 1
e1 | e1 0 1 0
e0 | e0 -1 0 0
1 | 1 0 0 0
Now I reorder the table for the regressive product:
∨ 1 e0 e1 e01
+-----------------
1 | 0 0 0 1
e0 | 0 0 -1 e0
e1 | 0 1 0 e1
e01| 1 e0 e1 e01
The regressive product is defined as a wedge done in the dual space:
a ∨ b = dual(dual a ∧ dual b)
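To check this definition concretely, here's a sketch that writes out the 2D wedge product by hand and builds the regressive product from it, reusing the dual sketched earlier. Note that with this particular dual the signs come out differently than in the reordered table above, which is exactly the puzzle of the next few paragraphs.

def wedge(p, q):
    a0, a1, a2, a3 = p  # coefficients of 1, e0, e1, e01
    b0, b1, b2, b3 = q
    return (a0*b0,
            a0*b1 + a1*b0,
            a0*b2 + a2*b0,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def regressive(p, q):
    return dual(wedge(dual(p), dual(q)))

basis = {'1': (1,0,0,0), 'e0': (0,1,0,0), 'e1': (0,0,1,0), 'e01': (0,0,0,1)}
for name, p in basis.items():
    print(name, [regressive(p, q) for q in basis.values()])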
I've seen two ways to compute the duals. One is the one I described earlier; the other is multiplication by the pseudoscalar.
dual (e0∧e1 = I)
---+-----------
1 | e01
e0 | e1
e1 |-e0
e01| 1
*I
---+-----------
1 | e01
e0 | e1
e1 |-e0
e01|-1
But in the second case we "backpedal" from the dual space: the dual-transform back from the dual space is a division by the pseudoscalar. The table to do this is easy to derive:
I*(I^-1) = 1, e01*e10 = 1
*(I^-1)
---+-----------
1 |-e01
e0 |-e1
e1 | e0
e01| 1
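Both pseudoscalar tables are mechanical enough to write down as code. A sketch on the same (a, b, c, d) tuples:

def mul_I(m):
    # right-multiply by I = e01: 1 -> e01, e0 -> e1, e1 -> -e0, e01 -> -1
    a, b, c, d = m
    return (-d, -c, b, a)

def mul_I_inv(m):
    # right-multiply by I^-1 = e10 = -e01: 1 -> -e01, e0 -> -e1, e1 -> e0, e01 -> 1
    a, b, c, d = m
    return (d, c, -b, -a)

m = (1.0, 2.0, 3.0, 4.0)
print(mul_I_inv(mul_I(m)))  # (1.0, 2.0, 3.0, 4.0): *I followed by *(I^-1) round-trips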
Now I build the same regressive product by transforming the wedge product through the dual space and then "backpedaling":
dual_ (dual a ∧ dual b) (With I,(I^-1))
∨ 1 e0 e1 e01
+-----------------
1| 0 0 0 -1
e0| 0 0 1 -e0
e1| 0 -1 0 -e1
e01|-1 -e0 -e1 -e01
I'm getting different behavior on the duals; let's try the same without backpedaling.
dual (dual a ∧ dual b) (With I)
∨ 1 e0 e1 e01
+-----------------
1 | 0 0 0 1
e0 | 0 0 -1 e0
e1 | 0 1 0 e1
e01| 1 e0 e1 e01
This turns out to be the one we got by examining the rules the regressive product should follow.
Ok, how about we use the dual I described earlier? No backpedaling.
dual (a + b*e0 + c*e1 + dI)
= d - c*e0 + b*e1 + aI
∨ 1 e0 e1 e01
+----------------
1 |0 0 0 1
e0 |0 0 1 -e0
e1 |0 -1 0 -e1
e01|1 -e0 -e1 e01
There's still a possibility that I mixed things up; the Wikipedia page proposes that the regressive product is set up in a different way.
dual_ (dual a ∧ dual b) (With (I^-1),I)
∨ 1 e0 e1 e01
+----------------
1 | 0 0 0 1
e0 | 0 0 -1 e0
e1 | 0 1 0 e1
e01| 1 e0 e1 e01
Same table as the one without going backwards, and it matches the table I got by reasoning.
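To keep the four constructions straight, here's a sketch that builds all four tables from the helpers above and reports which ones coincide:

def reg_table(fwd, back):
    return {(p, q): back(wedge(fwd(basis[p]), fwd(basis[q])))
            for p in basis for q in basis}

variants = {
    'I, back I^-1': reg_table(mul_I, mul_I_inv),
    'I, no back':   reg_table(mul_I, mul_I),
    'subst only':   reg_table(dual, dual),
    'I^-1, back I': reg_table(mul_I_inv, mul_I),
}
names = list(variants)
for i, a in enumerate(names):
    for b in names[i+1:]:
        if variants[a] == variants[b]:
            print(a, 'matches', b)  # prints: I, no back matches I^-1, back I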
So this is something I haven't understood yet. What's the correct dual, and does it really change when doing projective geometry? Or did I understand Gunn's papers wrong?
Motivation
If you get the regressive product right, it's likely just as interesting as the wedge product.
The wedge can be understood as a combination of geometric objects: vector∧vector produces the oriented area of the parallelogram spanned by those vectors, and vector∧area produces the oriented volume of a parallelepiped.
The regressive product, on the other hand, produces intersections of objects. Say you want the intersection between two planes:
0.2x + 0.4y + 0.5z = 0
0.3x + 0.9y + 0.1z = 0
You recognize the dot product there. The plane can be thought of as a geometric object that is the dual of a vector perpendicular to the plane.
So we could think that we have:
plane1: 0.2X + 0.4Y + 0.5Z
plane2: 0.3X + 0.9Y + 0.1Z
And if we take the regressive product:
(0.2X + 0.4Y + 0.5Z) ∨ (0.3X + 0.9Y + 0.1Z)
(0.2X ∨ 0.3X + 0.2X ∨ 0.9Y + 0.2X ∨ 0.1Z)
(0.4Y ∨ 0.3X + 0.4Y ∨ 0.9Y + 0.4Y ∨ 0.1Z)
(0.5Z ∨ 0.3X + 0.5Z ∨ 0.9Y + 0.5Z ∨ 0.1Z)
( + 0.2 * 0.9XY - 0.2 * 0.1ZX)
(- 0.4 * 0.3XY + 0.4 * 0.1YZ)
(+ 0.5 * 0.3ZX - 0.5 * 0.9YZ )
+(0.2 * 0.9 - 0.4 * 0.3)XY
+(0.4 * 0.1 - 0.5 * 0.9)YZ
+(0.5 * 0.3 - 0.2 * 0.1)ZX
Recall the rule that we're working with inverse concepts. The "antiplane" XY is a geometric object missing the x and y directions.
+(0.4 * 0.1 - 0.5 * 0.9)YZ ===> x
+(0.5 * 0.3 - 0.2 * 0.1)ZX ===> y
+(0.2 * 0.9 - 0.4 * 0.3)XY ===> z
Plugging the values in, you get the intersection between the planes. You can also reason about this graphically, because we just did a cross product between the plane normal vectors, and you know what that means.
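This is small enough to check numerically. A sketch in plain vector terms, where the cross product of the normals plays the role of the regressive product of the planes:

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

n1 = (0.2, 0.4, 0.5)
n2 = (0.3, 0.9, 0.1)
d = cross(n1, n2)
print(d)                       # roughly (-0.41, 0.13, 0.06)
print(dot(d, n1), dot(d, n2))  # both ~0, so the direction lies in both planes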
Another example comes from scaling a plane. Let's say you construct a plane from two vectors:
a, b, a∧b
Now you'd like to scale the vectors in some direction. Whatever that operation is, the transform should keep the vectors in the plane, and we can represent it like this:
(s a) ∧ (s b) = s (a∧b)
We'd like this kind of an object, then:
s (a*x) = s_x * a * x
s (a*y) = s_y * a * y
s (a*z) = s_z * a * z
s (a*(x∧y)) = s_x * s_y * a * (x ∧ y)
...
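This is easy to verify in code: with per-axis scaling defined that way, scaling the vectors and scaling the plane agree. A sketch in the 2D algebra, reusing wedge from earlier (names again my own):

def scale_mv(m, sx, sy):
    # scale e0 by sx and e1 by sy; e01 picks up sx*sy, scalars stay put
    a, b, c, d = m
    return (a, sx*b, sy*c, sx*sy*d)

a = (0.0, 1.0, 2.0, 0.0)  # the vector 1*e0 + 2*e1
b = (0.0, 3.0, 4.0, 0.0)  # the vector 3*e0 + 4*e1
lhs = wedge(scale_mv(a, 2.0, 5.0), scale_mv(b, 2.0, 5.0))
rhs = scale_mv(wedge(a, b), 2.0, 5.0)
print(lhs == rhs)  # True: (s a) ∧ (s b) = s (a∧b)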
That obviously works and was quite easy to retrieve. Another option is to scale the parallel component and combine it with the orthogonal component:
(k(dot a n) + (a ∧ n)) * (inv n)
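Read in plain vector terms, and assuming n is a unit vector so that inv n = n, this scales the component of a along n by k and leaves the orthogonal part alone. A sketch of that reading (the geometric product itself isn't spelled out here):

def scale_along(a, n, k):
    # (a.n)n is the part of a parallel to n; scale it by k,
    # keep the orthogonal remainder a - (a.n)n as it is
    d = sum(x*y for x, y in zip(a, n))
    return tuple(k*d*ni + (ai - d*ni) for ai, ni in zip(a, n))

print(scale_along((1.0, 2.0, 3.0), (1.0, 0.0, 0.0), 2.0))  # (2.0, 2.0, 3.0)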
The inner products in geometric algebra seem to have something else going on with them. I mostly don't understand them yet.
You've been able to do this kind of reasoning with vector algebra already, so that's nothing new to geometric algebra. But all of this is polymorphic: the scaling is clearly defined for all multivectors, the wedge and regressive products seem to do the same things with all multivectors, and reflections, transforms, and inverses seem to be polymorphic.
That has been interesting enough to distract me. In a few days there's the GAME2020 event in Kortrijk, Belgium. I won't be there, but eventually I can watch the videos and hopefully that clears things up.