    Regular Kalman Filter (almost) super-quick Reference

      To make a long story short, here's the complete MATLAB code.

      % State estimations
      x       state vector                  (M x 1)
      A       state transition matrix       (M x M)
      P       state covariance matrix       (M x M)
      
      % Input / control data
      u       input vector                  (N x 1)
      B       input transition matrix       (M x N)
      Q       input noise covariance matrix (N x N)
      
      % Observations
      z       observation vector            (Z x 1)
      H       state-to-observation matrix   (Z x M)
      R       observation noise covariance  (Z x Z)
      
      % Tuning
      lambda  tuning parameter              (scalar)
      
      function [x, P] = kf_predict(x, A, P, lambda, u, B, Q)
        x = A*x + B*u;                  % a priori state prediction
        P = A*P*A' / lambda^2 + B*Q*B'; % a priori covariance prediction
      end
      
      function [x, P] = kf_update (x, z, P, H, R)
        y = z - H*x; % measurement residuals ("innovation")
        S = H*P*H' + R; % residual (innovation) covariance
        K = P*H' / S; % Kalman gain
        x = x + K*y; % a posteriori state estimate
        P = (eye(size(P)) - K*H)*P; % a posteriori covariance matrix
      end
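
      A minimal usage sketch follows (the model, noise values, and measurements are made up for illustration; only kf_predict and kf_update come from the listing above):

      % Hypothetical 1D constant-velocity tracker fed with noisy position readings
      dt = 0.1;
      A  = [1 dt; 0 1];  B = [dt^2/2; dt];  H = [1 0];
      Q  = 0.1;          R = 0.5;           lambda = 1; % lambda = 1: no inflation
      x  = [0; 0];       P = eye(2) * 100;  u = 1.0;    % assumed acceleration input
      
      for z = [0.9, 2.1, 2.9, 4.2] % made-up position measurements
        [x, P] = kf_predict(x, A, P, lambda, u, B, Q); % a priori estimate
        [x, P] = kf_update(x, z, P, H, R);             % a posteriori correction
      end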
      

      State-space system description

      A discrete state-space system with the state vector \(\vec{x}_k\) and the control input \(\vec{u}_k\) at time index \(k\), which together determine both the current output \(\vec{y}_k\) at time index \(k\) and the next state \(\vec{x}_{k+1}\) at time index \(k+1\), can be written as follows:


      \begin{align}
      \vec{x}_{k+1} &= \underline{A}_{k} \cdot \vec{x}_k + \underline{B}_k \cdot \vec{u}_k \\
      \vec{y}_k &= \underline{C}_{k} \cdot \vec{x}_k + \underline{D}_{k} \cdot \vec{u}_k
      \end{align}

      Here, \(\underline{A}\) is the state transition matrix that maps the current state vector \(\vec{x}_k\) to the next state vector \(\vec{x}_{k+1}\), and \(\underline{B}\) is a matrix that transforms the control input \(\vec{u}\) onto the state vector \(\vec{x}\). In the second equation, \(\underline{C}\) is a matrix that transforms the internal state \(\vec{x}_k\) into an output vector \(\vec{y}_k\), and \(\underline{D}\) is a matrix that describes an additive feedthrough effect of the control vector \(\vec{u}_k\) onto the output vector \(\vec{y}_k\).

      x_next = A*x + B*u; % state propagation
      y      = C*x + D*u; % system output
      
      % then step forward in time
      k = k + 1; x = x_next;
      

      Given that the linear system described above has a zero feedthrough term \(\underline{D} = \underline{0}\) (i.e. \(\underline{D}\) doesn't exist), the regular Kalman filter's equations are given as follows.

      Kalman prediction equations (time update)

      Let \(\tilde{\vec{x}}_{k|k}\) (read: the corrected \(\vec{x}\) at time \(k\) given information from time \(k\)) be the current time-step state vector after any corrections using the last measurements (or the best guess we have of that state vector), and let \(\tilde{\underline{P}}_{k|k}\) be its covariance (or the best guess we have of it).

      Also let \(\underline{Q}_{i,k}\) be the matrix of input covariances (i.e. the covariances in the input vector) and \(\underline{Q}_{p,k}\) the process noise covariance matrix.
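
      As a minimal setup sketch (all dimensions and values below are hypothetical; the variable names match the snippets that follow):

      % Hypothetical system with M = 2 states and N = 1 input
      corrected_current_x = [0; 0];       % best guess of the current state
      corrected_current_P = eye(2) * 100; % large covariance: little trust in that guess
      Q_input             = 0.1;          % input noise covariance Q_i   (N x N)
      Q_process           = eye(2)*1e-4;  % process noise covariance Q_p (M x M)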

      If the control input vector is known

      The next state \(\vec{x}_{k+1|k}\) (at time index \(k+1\) given information of time \(k\)) is estimated using the current (corrected) state \(\vec{x}_{k|k}\) and the current control input. The next state covariance \(\underline{P}_{k+1|k}\) is estimated using the current (corrected) covariance, the input covariance and the process noise, using:


      \begin{align}
      \hat{\vec{x}}_{k+1|k} &= \underline{A}_k \cdot \tilde{\vec{x}}_{k|k} + \underline{B}_k \cdot \vec{u}_{k} \\
      \hat{\underline{P}}_{k+1|k} &= \underline{A}_k \cdot \tilde{\underline{P}}_{k|k} \cdot \underline{A}^T_k \cdot \frac{1}{\lambda^2_k} + \underline{B}_k \cdot \underline{Q}_{i,k} \cdot \underline{B}^T_k + \underline{Q}_{p,k}
      \end{align}
      % a priori x and P
      estimated_next_x = A * corrected_current_x + B * current_u;
      estimated_next_P = A * corrected_current_P * A' / lambda^2 + ...
                         B * Q_input * B' + Q_process;
      

      If the control input vector is unknown

      The next state \(\vec{x}_{k+1|k}\) (at time index \(k+1\) given information of time \(k\)) is estimated using only the current (corrected) state \(\vec{x}_{k|k}\), while the next state covariance \(\underline{P}_{k+1|k}\) is estimated using only the current (corrected) covariance and the process noise, using:


      \begin{align}
      \hat{\vec{x}}_{k+1|k} &= \underline{A}_k \cdot \tilde{\vec{x}}_{k|k} \\
      \hat{\underline{P}}_{k+1|k} &= \underline{A}_k \cdot \tilde{\underline{P}}_{k|k} \cdot \underline{A}^T_k \cdot \frac{1}{\lambda^2_k} + \underline{Q}_{p,k}
      \end{align}
      % a priori x and P
      estimated_next_x = A * corrected_current_x;
      estimated_next_P = A * corrected_current_P * A' / lambda^2 + Q_process;
      

      In both cases, \(\lambda\) (introduced by Prof. Dr.-Ing. Sommer) with \(0 < \lambda \leq 1\) is an empirical factor that forcibly increases the state covariance and thus the filter's uncertainty, in order to prevent convergence or to recover from convergence in situations where external state changes are known to have happened after the filter converged (when \(\underline{K}_{k+1} \cdot \tilde{\vec{y}}_{k+1} = 0\), see below). In the presence of an input and/or process noise covariance matrix \(\underline{Q}_{k}\), a value of \(\lambda \simeq 1\) (i.e. \(0 \lll \lambda \leq 1\)) is recommended in order to prevent singularities or instabilities.
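
      A quick standalone illustration of the effect (all values here are made up): without any noise terms, \(\lambda < 1\) alone keeps the covariance from collapsing, since every prediction step scales it by \(1/\lambda^2 > 1\).

      % Covariance inflation through lambda alone (static system, no Q terms)
      P      = eye(2);
      A      = eye(2);
      lambda = 0.95;
      for k = 1:10
        P = A*P*A' / lambda^2; % each step scales P by 1/lambda^2, about 1.108
      end
      % after 10 steps, P has grown by (1/lambda^2)^10, roughly a factor of 2.8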

      Kalman correction equations (measurement update)

      Let \(\vec{z}\) be a vector of observations of the system and \(\underline{H}\) be a matrix that maps the state vector \(\vec{x}\) onto the observation vector \(\vec{z}\) under the influence of an error as described by the measurement noise covariance matrix \(\underline{R}\).
      Using the residual covariance \(\underline{S}\) (i.e. the error in the estimated covariances) and the innovation term \(\tilde{\vec{y}}\) (a measure of the state estimation inaccuracy)


      \begin{align}
      \underline{S}_{k+1} &= \underline{H}_{k+1} \cdot \hat{\underline{P}}_{k+1|k} \cdot \underline{H}^T_{k+1} + \underline{R}_{k+1} \\
      \tilde{\vec{y}}_{k+1} &= \vec{z}_{k+1} - \left( \underline{H}_{k+1} \cdot \hat{\vec{x}}_{k+1|k} \right)
      \end{align}
      S          = H * estimated_next_P * H' + R;
      innovation = current_z - (H * estimated_next_x);
      

      the eponymous Kalman gain \(\underline{K}\) can be calculated as


      \begin{align}
      \underline{K}_{k+1} &= \hat{\underline{P}}_{k+1|k} \cdot \underline{H}^T_{k+1} \cdot \underline{S}^{-1}_{k+1}
      \end{align}
      K = estimated_next_P * H' / S; % matrix right division performs the inversion of S
      

      With that, the previously determined estimates \(\hat{\vec{x}}_{k+1|k}\) and \(\hat{\underline{P}}_{k+1|k}\) can be corrected using


      \begin{align}
      \tilde{\vec{x}}_{k+1|k+1} &= \hat{\vec{x}}_{k+1|k} + \underline{K}_{k+1} \cdot \tilde{\vec{y}}_{k+1} \\
      \tilde{\underline{P}}_{k+1|k+1} &= \left( \underline{1} - \underline{K}_{k+1} \cdot \underline{H}_{k+1} \right) \cdot \hat{\underline{P}}_{k+1|k}
      \end{align}

      Here, \(\underline{1}\) is the identity matrix of the same dimensions as \(\underline{P}\) (i.e. the number of states × the number of states).

      % a posteriori x and P
      corrected_next_x = estimated_next_x + K * innovation;
      corrected_next_P = (eye(size(estimated_next_P)) - K * H) * estimated_next_P;
      

      An alternative (and more complex) way to calculate the a posteriori covariance matrix in the presence of a non-optimal Kalman gain is known as the Joseph form and is given as


      \begin{align}
      \underline{J}_{k+1} &= \left( \underline{1} - \underline{K}_{k+1} \cdot \underline{H}_{k+1} \right) \\
      \tilde{\underline{P}}_{k+1|k+1} &= \underline{J}_{k+1} \cdot \hat{\underline{P}}_{k+1|k} \cdot \underline{J}^T_{k+1} + \underline{K}_{k+1} \cdot \underline{R}_{k+1} \cdot \underline{K}^T_{k+1}
      \end{align}
      % a posteriori P
      J                = eye(size(estimated_next_P)) - K * H;
      corrected_next_P = J * estimated_next_P * J' + K*R*K';
      

      However, assuming that the Kalman gain is actually optimal is often a reasonably good approach. After that, we move forward in time.

      k = k + 1;
      corrected_current_x = corrected_next_x;
      corrected_current_P = corrected_next_P;
      

      As can be seen in the Kalman gain equation, for any reasonably complex system with more than three states, the inversion of the residual covariance matrix \(\underline{S}\) becomes computationally expensive, since a direct inversion scales cubically with the matrix dimension. This matrix is symmetric positive-definite though, so the Cholesky decomposition can be used to efficiently invert the matrix. Code for that can be found e.g. in the Efficient Java Matrix Library or as a C++ port in my own Kalman filter library kalman-clib (see cholesky_decompose_lower() for decomposition into a lower triangular matrix and matrix_invert_lower() for inversion of a lower-triangularized matrix).
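
      As a sketch of the same idea in MATLAB (using the built-in chol; variable names match the snippets above), the explicit inversion of \(\underline{S}\) can be replaced by two triangular solves:

      % Kalman gain via Cholesky decomposition instead of explicitly inverting S;
      % solves S * K' = H * P for K', exploiting S = L*L' (symmetric positive-definite)
      L = chol(S, 'lower');                     % lower-triangular factor of S
      K = (L' \ (L \ (H * estimated_next_P)))'; % forward and backward substitution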

      August 27th, 2014, by Markus
