Kalman filtering algorithm¶
The Kalman filter is an algorithm for exact Bayesian filtering in linear-Gaussian state space models, where we are interested in the marginal posterior at time \(t\): \(p(z_t \mid y_{1:t}, u_{1:t})\).
Since everything is Gaussian, we can perform the predict and update steps in closed form, and the resulting algorithm is the continuous-state analog of the HMM filter.
Prediction step¶
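Assuming the standard linear dynamics \(z_t = A_t z_{t-1} + B_t u_t + \epsilon_t\) with process noise \(\epsilon_t \sim \mathcal{N}(0, Q_t)\) (the symbols \(A_t\), \(B_t\), \(Q_t\) follow the usual convention and are not defined elsewhere in this section), the prediction step marginalizes out \(z_{t-1}\) and stays Gaussian:

$$
\begin{aligned}
p(z_t \mid y_{1:t-1}, u_{1:t}) &= \mathcal{N}(z_t \mid \mu_{t|t-1}, \Sigma_{t|t-1}) \\
\mu_{t|t-1} &= A_t \mu_{t-1} + B_t u_t \\
\Sigma_{t|t-1} &= A_t \Sigma_{t-1} A_t^{\mathsf{T}} + Q_t
\end{aligned}
$$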
Measurement step¶
We compute the new posterior using Bayes' rule: it is proportional to the product of the one-step-ahead prediction (acting as the prior) and the likelihood of the actual observation,

$$ p(z_t \mid y_{1:t}, u_{1:t}) \propto p(y_t \mid z_t, u_t)\, p(z_t \mid y_{1:t-1}, u_{1:t}) $$
This is given by:

$$
\begin{aligned}
p(z_t \mid y_{1:t}, u_{1:t}) &= \mathcal{N}(z_t \mid \mu_t, \Sigma_t) \\
\mu_t &= \mu_{t|t-1} + K_t r_t \\
\Sigma_t &= (I - K_t C_t)\, \Sigma_{t|t-1}
\end{aligned}
$$
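As a concrete sketch, the measurement update can be written in a few lines of NumPy. The function name and argument layout are illustrative, not from any particular library:

```python
import numpy as np

def kalman_update(mu_pred, Sigma_pred, y, C, R):
    """Measurement update: condition the Gaussian prediction on observation y.

    mu_pred, Sigma_pred : predicted mean/covariance (mu_{t|t-1}, Sigma_{t|t-1})
    y : observation, C : observation matrix C_t, R : observation noise covariance R_t
    """
    r = y - C @ mu_pred                      # residual (innovation) r_t
    S = C @ Sigma_pred @ C.T + R             # innovation covariance S_t
    K = Sigma_pred @ C.T @ np.linalg.inv(S)  # Kalman gain K_t
    mu = mu_pred + K @ r                     # mu_t = mu_{t|t-1} + K_t r_t
    Sigma = (np.eye(len(mu_pred)) - K @ C) @ Sigma_pred  # (I - K_t C_t) Sigma_{t|t-1}
    return mu, Sigma
```

In production code the explicit inverse of \(S\) would normally be replaced by a linear solve, but the direct transcription above matches the equations one-to-one.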
\(r_t\) is the residual or innovation, given by the difference between the actual observation and our predicted observation:

$$ r_t = y_t - \hat{y}_t, \qquad \hat{y}_t = C_t \mu_{t|t-1} $$
\(K_t\) is the Kalman gain matrix, given by:

$$ K_t = \Sigma_{t|t-1} C_t^{\mathsf{T}} S_t^{-1}, \qquad S_t = C_t \Sigma_{t|t-1} C_t^{\mathsf{T}} + R_t $$

where \(S_t\) is the covariance of the innovation.
\(\delta_t \sim \mathcal{N}(0, R_t)\) is the observation noise term, which is independent of all other noise sources.
Using the matrix inversion lemma, we can also express the Kalman gain matrix as:

$$ K_t = \left(\Sigma_{t|t-1}^{-1} + C_t^{\mathsf{T}} R_t^{-1} C_t\right)^{-1} C_t^{\mathsf{T}} R_t^{-1} $$
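A quick numerical check that the two gain expressions agree, on small random (hypothetical) matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Sigma_pred = A @ A.T + np.eye(3)   # prior covariance Sigma_{t|t-1} (positive definite)
C = rng.standard_normal((2, 3))    # observation matrix C_t
R = np.diag([0.5, 2.0])            # observation noise covariance R_t

# Standard form: K_t = Sigma_{t|t-1} C_t^T (C_t Sigma_{t|t-1} C_t^T + R_t)^{-1}
S = C @ Sigma_pred @ C.T + R
K1 = Sigma_pred @ C.T @ np.linalg.inv(S)

# Matrix-inversion-lemma form: K_t = (Sigma^{-1} + C^T R^{-1} C)^{-1} C^T R^{-1}
K2 = np.linalg.inv(np.linalg.inv(Sigma_pred) + C.T @ np.linalg.inv(R) @ C) @ C.T @ np.linalg.inv(R)

assert np.allclose(K1, K2)  # the two expressions give the same gain
```

The lemma form is useful when the latent dimension is small relative to the observation dimension, since it inverts matrices of size \(|z_t|\) instead of \(|y_t|\).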
Let's dig into the mean update:

$$ \mu_t = \mu_{t|t-1} + K_t r_t $$
This says that the new mean is the old mean plus a correction factor (\(K_t\) times the error signal \(r_t\)). The amount of weight placed on the error signal depends on the Kalman gain matrix. If \(C_t = I\), then \(K_t = \Sigma_{t|t-1} S_t^{-1}\), which is the ratio between the covariance of the prior and the covariance of the measurement error. If we have a strong prior, \(|K_t|\) will be small and we will place little weight on the correction term, and vice versa.
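A minimal scalar illustration of this trade-off (1-D state with \(C_t = 1\), so the gain reduces to a ratio of variances):

```python
import numpy as np

def scalar_gain(prior_var, obs_var):
    # 1-D case with C_t = 1: K_t = prior_var / (prior_var + obs_var)
    return prior_var / (prior_var + obs_var)

# Strong prior (small prior variance): gain is small, the correction is mostly ignored.
print(scalar_gain(0.01, 1.0))   # ~0.0099
# Weak prior (large prior variance): gain approaches 1, the mean tracks the observation.
print(scalar_gain(100.0, 1.0))  # ~0.9901
```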
Posterior predictive¶
The one-step-ahead posterior predictive density for the observations can be computed as follows:

$$ p(y_t \mid y_{1:t-1}, u_{1:t}) = \mathcal{N}(y_t \mid C_t \mu_{t|t-1}, S_t) $$
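A sketch of the predictive mean and covariance in NumPy (the function name and argument names are illustrative), using \(S_t = C_t \Sigma_{t|t-1} C_t^{\mathsf{T}} + R_t\):

```python
import numpy as np

def posterior_predictive(mu_pred, Sigma_pred, C, R):
    """One-step-ahead predictive N(y_t | C mu_{t|t-1}, C Sigma_{t|t-1} C^T + R)."""
    y_mean = C @ mu_pred
    y_cov = C @ Sigma_pred @ C.T + R   # this is exactly the innovation covariance S_t
    return y_mean, y_cov
```

Note that the predictive covariance is the same \(S_t\) that appears in the Kalman gain, so it comes for free during filtering.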
Computational issues¶
There is a matrix inversion (of \(S_t\)) which takes \(O(|y_t|^3)\) time, and a matrix-matrix multiplication to compute \(\Sigma_t\) which takes \(O(|z_t|^2)\).
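In practice one usually avoids forming \(S_t^{-1}\) explicitly: a linear solve is cheaper and more numerically stable than an explicit inverse. A small sketch (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
Sigma_pred = np.eye(3)               # Sigma_{t|t-1}
C = rng.standard_normal((2, 3))      # C_t
R = np.eye(2)                        # R_t
S = C @ Sigma_pred @ C.T + R         # innovation covariance S_t

# Explicit inverse (what the equations literally say):
K_inv = Sigma_pred @ C.T @ np.linalg.inv(S)

# Linear solve: K_t satisfies K_t S_t = Sigma_{t|t-1} C_t^T, so solve
# S_t^T X = C_t Sigma_{t|t-1} for X = K_t^T.
K_solve = np.linalg.solve(S.T, C @ Sigma_pred).T

assert np.allclose(K_inv, K_solve)
```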