
Geometric Interpretation of a 3D Matrix Inverse


I work a lot with 3D calculations, and every so often a non-trivial 3D tidbit comes along. Some of these might be of use to others – and so, by the power vested in me as absolute monarch of this blog, I hereby extend its scope to some light 3D geometry. I’ll try to keep posts in this category less rigorous and yet more verbose than regular math write-ups.

Take a 3×3 matrix that you wish to invert, say M. Think of its columns as 3-dimensional vectors, A, B & C:

$$M = \begin{pmatrix} | & | & | \\ A & B & C \\ | & | & | \end{pmatrix}$$

Now take its sought-after inverse, $M^{-1}$, and think of its rows as 3D vectors, say v, u & w. That means, essentially:

$$M^{-1}M = \begin{pmatrix} -\,v\,- \\ -\,u\,- \\ -\,w\,- \end{pmatrix} \begin{pmatrix} | & | & | \\ A & B & C \\ | & | & | \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Next, focus just on v, $M^{-1}$’s first row. What can be said of it, in terms of A, B & C? Looking at the first row of the multiplication result – the identity matrix – we see:

$$v \cdot A = 1, \qquad v \cdot B = 0, \qquad v \cdot C = 0$$

Which means, in particular, that v is orthogonal to both B and C. Assuming B and C aren’t co-linear (otherwise M wouldn’t be invertible in the first place), there is but a single direction in 3D space that is perpendicular to both, and it can be written as B×C – the vector product, or cross product, of B and C. Hence v must be a multiple of this direction by some scalar – say α:

$$v = \alpha\,(B \times C)$$

To deduce α, remember that v must be scaled so that its dot product with A gives 1. And so:

$$\alpha\,(B \times C) \cdot A = 1 \quad \Longrightarrow \quad v = \frac{B \times C}{A \cdot (B \times C)}$$

The triple product in the denominator, A∙(B×C), should look familiar: it is in fact det(M) – the determinant of the original matrix. Had we inverted M with a more traditional apparatus, say Cramer’s rule, we would have divided by this determinant directly.
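Written out in coordinates (a standard identity, spelled out here just to make the link explicit):

$$A \cdot (B \times C) = \det\begin{pmatrix} A_1 & B_1 & C_1 \\ A_2 & B_2 & C_2 \\ A_3 & B_3 & C_3 \end{pmatrix} = \det(M)$$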

Naturally similar expressions are obtained for the other rows, u & w :

$$u = \frac{C \times A}{B \cdot (C \times A)}, \qquad w = \frac{A \times B}{C \cdot (A \times B)}$$

All the denominators are in fact equal – both to each other and to det(M).
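For concreteness, here is a minimal C++ sketch of the whole recipe. The Vec3 type and the cross/dot/scale helpers are hypothetical stand-ins for whatever vector math you already have – not any particular library’s API:

```cpp
#include <array>

// Hypothetical minimal 3D vector type and helpers (illustrative names).
struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 scale(const Vec3& a, double s)
{
    return { a.x * s, a.y * s, a.z * s };
}

// Invert the 3x3 matrix whose columns are A, B and C.
// Returns the rows v, u, w of the inverse.
// Assumes the matrix is invertible, i.e. A . (B x C) != 0.
std::array<Vec3, 3> invert3x3(const Vec3& A, const Vec3& B, const Vec3& C)
{
    const Vec3 BxC = cross(B, C);
    const Vec3 CxA = cross(C, A);
    const Vec3 AxB = cross(A, B);

    // A single dot product yields the shared denominator, det(M).
    const double invDet = 1.0 / dot(A, BxC);

    return { scale(BxC, invDet),   // v
             scale(CxA, invDet),   // u
             scale(AxB, invDet) }; // w
}
```

Note, in passing, that the three cross products are exactly the rows of the classical adjugate matrix; the geometric derivation just makes them fall out naturally.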

Why all the hassle?

First, for the fun of it. Personally I find it much easier to understand – and thus remember – geometric stories than algebraic ones.

Second, this formulation exposes several optimization opportunities.

  1. After computing B×C, you can obtain the first of the denominators (and so all of them) by simply taking a dot product with A.
  2. If you need just a single row of the inverse matrix, you can calculate it directly – without having to invert the entire matrix.
    This is not as far-fetched as it might seem: say you formulate a 3×3 linear equation set, but you’re actually interested only in the 1st solution coordinate:

    $$Mx = b, \qquad x = (x_1, x_2, x_3)^T$$

    Just take the 1st row of M’s inverse, as outlined above, and dot-product it with b (see the sketch right after this list):

    $$x_1 = v \cdot b = \frac{(B \times C) \cdot b}{A \cdot (B \times C)}$$
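Here is a sketch of that single-coordinate shortcut, reusing the hypothetical Vec3 helpers from the snippet above:

```cpp
// Solve M x = b for the 1st coordinate only, where A, B and C are
// M's columns. Assumes A . (B x C) != 0.
double solveFirstCoord(const Vec3& A, const Vec3& B, const Vec3& C,
                       const Vec3& b)
{
    const Vec3 BxC = cross(B, C);
    // x1 = v . b = (B x C) . b / (A . (B x C))
    return dot(BxC, b) / dot(A, BxC);
}
```

Readers who remember Cramer’s rule will recognize this as the determinant of M with its 1st column replaced by b, divided by det(M) – the two derivations meet at the same expression.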

Third, using analytical expressions like the ones above for solving linear equations is generally preferable to numeric solvers. For matrices as small as 3×3, solving numerically would probably be a bad idea anyway – even the traditional, tedious inverse (with adjoint matrices and all) would be preferable to a numeric solution.

BTW, higher-dimensional analogues do exist – and are just as easy to derive – but the main added value, namely the direct geometric insight, is lost beyond three dimensions.


Filed under: 3D Geometry, Algorithms
