Speeding Up Backpropagation of Gradients Through the Kalman Filter via Closed-Form Expressions
Abstract
In this article, we provide novel closed-form expressions enabling differentiation of any scalar function of the Kalman filter's outputs with respect to all of its tuning parameters and to the measurements. The approach differs from the well-known sensitivity equations in that it is based on a backward (matrix) gradient calculation, which leads to drastic reductions in the overall computational cost. It is our hope that practitioners seeking numerical efficiency and reliability will benefit from the concise and exact equations derived in this article and the methods that build upon them. In particular, they may lead to speed-ups when interfacing a neural network with a Kalman filter.
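To make the setting concrete, the following is a minimal sketch, not taken from the paper, of the gradient being targeted: a scalar loss of the Kalman filter's outputs differentiated with respect to the noise covariances Q and R in a single reverse pass, here using JAX autodiff as a generic stand-in. All function and variable names are illustrative assumptions; the article's contribution is closed-form backward expressions for such gradients, whereas the autodiff route below must record the entire forward recursion.

```python
# Hypothetical sketch (not the paper's method): reverse-mode differentiation of
# a scalar loss of Kalman filter outputs w.r.t. the noise covariances Q and R,
# using JAX autodiff as a generic baseline.
import jax
import jax.numpy as jnp

def kalman_filter_loss(Q, R, F, H, x0, P0, ys):
    """Run a linear Kalman filter over measurements ys and return a scalar
    loss (sum of squared innovations), a common tuning criterion."""
    def step(carry, y):
        x, P = carry
        # Prediction step
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update step
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = P_pred @ H.T @ jnp.linalg.inv(S)  # Kalman gain
        innov = y - H @ x_pred
        x_new = x_pred + K @ innov
        P_new = (jnp.eye(P.shape[0]) - K @ H) @ P_pred
        return (x_new, P_new), innov @ innov
    _, costs = jax.lax.scan(step, (x0, P0), ys)
    return jnp.sum(costs)

# All gradients w.r.t. Q and R obtained in one backward pass:
grad_fn = jax.grad(kalman_filter_loss, argnums=(0, 1))
```

Forward sensitivity equations would instead propagate one derivative recursion per scalar parameter alongside the filter, so their cost grows with the number of parameters; a backward pass, whether taped autodiff as above or the closed-form expressions derived in the article, yields all parameter gradients at roughly the cost of one extra filter sweep.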