I’d only ever seen these applied when one wants to grow or shrink the matrix by one dimension… very eye-opening. I’ll look for applications in my own work now.
Also appreciated the informal derivation of the formula, starting from the case where the main matrix is the identity.
One suggestion: when experimenting, monitor numerical precision. This is important if matrices are poorly conditioned or if the inverse is updated over many iterations.
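A minimal sketch of what that monitoring could look like in NumPy (the rank-one perturbations, thresholds, and recompute-on-drift policy here are my own illustrative assumptions, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A_inv = np.linalg.inv(A)

for step in range(1000):
    # Rank-one change to A: A <- A + u v^T
    u = 0.01 * rng.standard_normal(n)
    v = 0.01 * rng.standard_normal(n)
    A += np.outer(u, v)

    # Sherman-Morrison update of the cached inverse:
    # (A + u v^T)^-1 = A^-1 - (A^-1 u)(v^T A^-1) / (1 + v^T A^-1 u)
    Au = A_inv @ u
    denom = 1.0 + v @ Au
    A_inv -= np.outer(Au, v @ A_inv) / denom

    # Monitor drift: ||A @ A_inv - I|| should stay near machine epsilon.
    residual = np.linalg.norm(A @ A_inv - np.eye(n))
    if residual > 1e-8:
        # Rounding error has accumulated; fall back to a fresh inverse.
        A_inv = np.linalg.inv(A)
```

The residual check is O(n^2)–O(n^3), so in practice you might only run it every k-th iteration, but the idea is the same: updated inverses drift, and a cheap consistency check tells you when to refactorize.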
Also possibly of interest: SuiteSparse, and a Python-based guide that uses it [1,2].
Really nice article!
I recently made a fun discovery: it’s possible to implement a reasonably efficient GPU sprite renderer using linear algebra with sparse arrays + jit in JAX. A quick writeup: https://pwhiddy.github.io/more-writing/2022/07/20/Jax-Sprite...
A more general formula is known as the Woodbury matrix identity (https://en.wikipedia.org/wiki/Woodbury_matrix_identity), which comes up quite a lot in numerical algorithms.
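For reference, the Woodbury identity generalizes the rank-one case to a rank-k correction: (A + UCV)^{-1} = A^{-1} - A^{-1}U(C^{-1} + V A^{-1} U)^{-1}V A^{-1}, so you only ever invert k-by-k matrices to update an n-by-n inverse. A quick NumPy sanity check (the random test matrices are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2  # rank-k correction with k << n

A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
U = rng.standard_normal((n, k))
C = np.eye(k)
V = rng.standard_normal((k, n))

A_inv = np.linalg.inv(A)

# Woodbury: (A + U C V)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1
# Note the inner inverse is only k x k.
inner = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)
woodbury = A_inv - A_inv @ U @ inner @ V @ A_inv

# Compare against inverting the updated matrix directly.
direct = np.linalg.inv(A + U @ C @ V)
print(np.allclose(woodbury, direct))
```

With k = 1 and C = 1 this reduces to the Sherman–Morrison formula from the article.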