I work a lot with 3D calculations, and every so often a non-trivial 3D tidbit comes along. Some of these might be of use to others – and so, by the power vested in me as absolute monarch of this blog, I hereby extend its scope to some light 3D geometry. I’ll try to keep posts in this category less rigorous, yet more verbose, than regular math write-ups.
Take a 3×3 matrix that you wish to invert, say M. Think of its columns as 3-dimensional vectors, A, B & C:

$$M = \begin{pmatrix} | & | & | \\ A & B & C \\ | & | & | \end{pmatrix}$$
Now take its sought-after inverse, $M^{-1}$, and think of its rows as 3D vectors, say v, u & w. That means essentially:

$$M^{-1}M = \begin{pmatrix} v \\ u \\ w \end{pmatrix} \begin{pmatrix} | & | & | \\ A & B & C \\ | & | & | \end{pmatrix} = I$$
Next, focus just on v, $M^{-1}$’s first row. What can be said of it, in terms of A, B & C? Looking at the first row of the multiplication result – the identity matrix – we see:

$$v \cdot A = 1, \qquad v \cdot B = 0, \qquad v \cdot C = 0$$
Which means in particular that v is orthogonal to both B and C. Assuming B and C aren’t collinear (otherwise M wouldn’t be invertible in the first place), there is but a single direction in 3D space which is perpendicular to both, and it can be written as B×C – the vector product, or cross product, of B and C. Hence v must be some scalar multiple – say α – of this direction:

$$v = \alpha \,(B \times C)$$
To deduce α, remember that v must be scaled so that its dot product with A gives 1. And so:

$$\alpha\,(B \times C) \cdot A = 1 \quad\Longrightarrow\quad v = \frac{B \times C}{A \cdot (B \times C)}$$
The triple product in the denominator, A∙(B×C), should look familiar: it is in fact det(M) – the determinant of the original matrix. Had we inverted M with a more traditional apparatus, say Cramer’s rule, we would have divided by this determinant directly.
Naturally, similar expressions are obtained for the other rows, u & w:

$$u = \frac{C \times A}{B \cdot (C \times A)}, \qquad w = \frac{A \times B}{C \cdot (A \times B)}$$
All the denominators are in fact equal, both to each other and to det(M): a scalar triple product is invariant under cyclic permutation of its arguments, so A∙(B×C) = B∙(C×A) = C∙(A×B).
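Here is a minimal sketch of the whole construction in Python with NumPy (the function name and the singularity tolerance are mine, not part of the derivation):

```python
import numpy as np

def inverse_3x3(M, eps=1e-12):
    """Invert a 3x3 matrix via cross products: the rows of the inverse
    are B x C, C x A and A x B, each divided by det(M) = A . (B x C)."""
    A, B, C = M[:, 0], M[:, 1], M[:, 2]   # columns of M
    BxC = np.cross(B, C)
    det = A.dot(BxC)                      # the shared denominator, det(M)
    if abs(det) < eps:
        raise ValueError("matrix is singular (or nearly so)")
    # stack the three cross products as rows of the inverse
    return np.array([BxC, np.cross(C, A), np.cross(A, B)]) / det

M = np.array([[2., 0., 1.],
              [1., 3., 0.],
              [0., 1., 1.]])
print(np.allclose(inverse_3x3(M) @ M, np.eye(3)))  # True
```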
Why all the hassle?
First, for the fun of it. Personally I find it much easier to understand – and thus remember – geometric stories than algebraic ones.
Second, this formulation exposes several optimization opportunities:
- After computing B×C, you can obtain the first of the denominators (and so all of them) by simply taking a dot product with A.
- If you need just a single row of the inverse matrix, you can calculate it directly – without having to invert the entire matrix.
This is not as far-fetched as it might seem: say you formulate a 3×3 linear equation set, Mx = b, but you’re actually interested only in the 1st solution coordinate. Just take the 1st row of M’s inverse, as outlined above, and dot-product it with b:

$$x_1 = v \cdot b = \frac{(B \times C) \cdot b}{A \cdot (B \times C)}$$
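A sketch of this shortcut, in the same Python/NumPy spirit as above (the function name is again mine): one cross product and two dot products yield the first coordinate, with no full inverse in sight.

```python
import numpy as np

def solve_first_coordinate(M, b):
    """Return x[0] of the solution to M x = b, without inverting M:
    x1 = (B x C) . b / (A . (B x C)), where A, B, C are M's columns."""
    A, B, C = M[:, 0], M[:, 1], M[:, 2]
    BxC = np.cross(B, C)
    return BxC.dot(b) / A.dot(BxC)

M = np.array([[2., 0., 1.],
              [1., 3., 0.],
              [0., 1., 1.]])
b = np.array([1., 2., 3.])
print(np.isclose(solve_first_coordinate(M, b),
                 np.linalg.solve(M, b)[0]))  # True
```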
Third, using analytic expressions like the above to solve linear equations is generally preferable to numeric solvers. For matrices as small as 3×3, solving numerically would probably be a bad idea anyway – even the traditional, tedious inverse (with adjugate matrices and all) would be preferable to a numeric solution.
BTW, higher-dimensional analogues do exist – and are as easy to derive – but the main added value, namely direct geometric insight, is lost beyond three dimensions.