## Category Archive

You are viewing the archive for the category Signal Processing.

• ## Regular Kalman Filter (almost) super-quick Reference

To make a long story short, here is the complete MATLAB code.

% State estimations
x       state vector                  (M x 1)
A       state transition matrix       (M x M)
P       state covariance matrix       (M x M)

% Input / control data
u       input vector                  (N x 1)
B       input transition matrix       (M x N)
Q       input noise covariance matrix (N x N)

% Observations
z       observation vector            (Z x 1)
H       state-to-observation matrix   (Z x M)
R       observation noise covariance  (Z x Z)

% Tuning
lambda  tuning parameter              (scalar)

function [x, P] = kf_predict(x, A, P, lambda, u, B, Q)
    x = A*x + B*u;                    % a priori state prediction
    P = A*P*A' / (lambda^2) + B*Q*B'; % a priori covariance (fading memory)
end

function [x, P] = kf_update(x, z, P, H, R)
    y = z - H*x;                % measurement residual ("innovation")
    S = H*P*H' + R;             % residual (innovation) covariance
    K = P*H' / S;               % Kalman gain
    x = x + K*y;                % a posteriori state estimate
    P = (eye(size(P)) - K*H)*P; % a posteriori covariance matrix
end
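The same two steps can be sketched in a few lines of pure Python for a scalar (1×1) system. All numeric values below (A, B, Q, H, R, the lambda fading factor, and the measurements) are made-up illustration values, not part of the reference above.

```python
def kf_predict(x, A, P, lam, u, B, Q):
    x = A * x + B * u                       # a priori state prediction
    P = A * P * A / (lam ** 2) + B * Q * B  # a priori covariance (fading memory)
    return x, P

def kf_update(x, z, P, H, R):
    y = z - H * x        # measurement residual ("innovation")
    S = H * P * H + R    # innovation covariance
    K = P * H / S        # Kalman gain
    x = x + K * y        # a posteriori state estimate
    P = (1 - K * H) * P  # a posteriori covariance
    return x, P

# Track a constant value observed through noise.
x, P = 0.0, 1.0
for z in (1.1, 0.9, 1.05, 0.98):
    x, P = kf_predict(x, A=1.0, P=P, lam=1.0, u=0.0, B=0.0, Q=0.0)
    x, P = kf_update(x, z, P, H=1.0, R=0.1)
print(x, P)
```

After four measurements the estimate has converged close to the true value 1.0 and the covariance P has shrunk well below its initial value.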

• ## On integration drift

While implementing a sensor fusion algorithm I stumbled across the problem that my well-calibrated gyroscope would yield slowly drifting readings for the integrated angles.
There are at least two reasons for this behaviour: the gyro bias may not have been removed exactly – not so much because it is a stochastic quantity, but because removing it is, in the end, a machine-precision problem – and rounding errors during numerical integration induce additional drift.

Fourier's theorem states that every (infinite, periodic) signal can be assembled from nothing but sine and cosine functions. White Gaussian noise carries, on average, equal power at every frequency, yet still has a Gaussian amplitude distribution. In other words: Gaussian noise contains sine and cosine components of varying strength at every single frequency.

Now the integral of the sine and cosine functions is defined as follows:

\begin{align} \int \cos(2 \pi f t)\,\mathrm{d}t &= \quad \frac{1}{2 \pi f} \sin(2 \pi f t) \\ \int \sin(2 \pi f t)\,\mathrm{d}t &= -\frac{1}{2 \pi f} \cos(2 \pi f t) \end{align}

What that means is that every high-frequency sine-like component (i.e. $f > \frac{1}{2 \pi}\,\mathrm{Hz} \approx 0.16\,\mathrm{Hz}$) has its amplitude attenuated by the factor $2 \pi f$, while every low-frequency component (i.e. $f < \frac{1}{2 \pi}\,\mathrm{Hz}$) has its amplitude amplified by the factor $\frac{1}{2 \pi f}$.
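A quick numeric sanity check of that scaling (a pure-Python sketch; the frequencies, step size and duration are arbitrary picks): summing up $\sin(2 \pi f t)$ with a rectangle rule and measuring the amplitude of the oscillating result should reproduce $\frac{1}{2 \pi f}$.

```python
import math

def integrated_amplitude(f, dt=1e-4, duration=20.0):
    acc, lo, hi = 0.0, 0.0, 0.0
    for i in range(int(duration / dt)):
        acc += math.sin(2 * math.pi * f * i * dt) * dt  # rectangle-rule integral
        lo, hi = min(lo, acc), max(hi, acc)
    return (hi - lo) / 2  # amplitude of the oscillating part

results = {f: integrated_amplitude(f) for f in (0.1, 1.0, 10.0)}
for f, amp in results.items():
    print(f, amp, 1 / (2 * math.pi * f))
```

Each printed pair agrees: 0.1 Hz comes out roughly 1.59, 1 Hz roughly 0.16, 10 Hz roughly 0.016 – amplified below, attenuated above the $\frac{1}{2 \pi}$ Hz boundary.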

Now it’s just a question of whether your signal contains Gaussian noise or whether your system oscillates. Either way, if there is a low-frequency component, integration will turn it into a strong, slow sine-like shape – drift.
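The noise case is easy to reproduce: integrating a zero-mean, bias-free Gaussian rate signal yields a random walk whose excursions grow over time. The sample rate, noise level and seed below are arbitrary illustration values.

```python
import random

random.seed(1)
dt = 0.01                          # 100 Hz "gyro" sample rate
angle = 0.0
farthest = 0.0
for _ in range(100_000):           # 1000 s of samples
    rate = random.gauss(0.0, 1.0)  # pure noise, zero mean, zero bias
    angle += rate * dt             # rectangle-rule integration
    farthest = max(farthest, abs(angle))
print(angle, farthest)
```

Even though the rate signal averages to zero, the integrated angle wanders far from zero and does not come back on its own – exactly the drift a gyro integration loop exhibits.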

• ## To tilt compensate, or not to tilt compensate

When diving into the field of attitude detection from sensor readings a couple of answers seem omnipresent:

• You need an accelerometer and a magnetometer.
• You need to tilt compensate.
• You need to low-pass filter.
• Or use the complementary filter.
• Or better still, use the mystical Kalman filter (but it’s too complicated to explain).

Starting from the above and having an accelerometer and a magnetometer at hand, the old question a wise man once asked is:

To tilt compensate, or not to tilt compensate, that is the question—
Whether 'tis Nobler in the mind to suffer
The Slings and Arrows of outrageous StackOverflow answers,
Or to take Arms against a Sea of troubles,
And by using the attitude matrix, end them? To tilt: to roll.

In other words: is tilt compensation required? Read more »
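To make the question concrete, here is a sketch of what tilt compensation actually computes: roll and pitch from the accelerometer, then the magnetometer reading de-rotated into the horizontal plane. Axis conventions differ between sensors; this pure-Python sketch assumes NED-style axes (x forward, y right, z down, accelerometer reading (0, 0, 1) when flat), and the function names are made up for illustration.

```python
import math

def rot_x(v, a):
    x, y, z = v; c, s = math.cos(a), math.sin(a)
    return (x, c * y - s * z, s * y + c * z)

def rot_y(v, a):
    x, y, z = v; c, s = math.cos(a), math.sin(a)
    return (c * x + s * z, y, -s * x + c * z)

def rot_z(v, a):
    x, y, z = v; c, s = math.cos(a), math.sin(a)
    return (c * x - s * y, s * x + c * y, z)

def attitude_from_accel(ax, ay, az):
    roll  = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

def tilt_compensated_heading(mx, my, mz, roll, pitch):
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    mx2 = mx * cp + (my * sr + mz * cr) * sp  # mag de-rotated into horizontal plane
    my2 = my * cr - mz * sr
    return math.atan2(-my2, mx2)

# Synthetic check: rotate known world vectors into a body frame with known
# roll/pitch/yaw, then recover the yaw (heading) from the "sensor" readings.
roll_t, pitch_t, yaw_t = (math.radians(d) for d in (20.0, -10.0, 35.0))
down  = (0.0, 0.0, 1.0)                                                # gravity
north = (math.cos(math.radians(60)), 0.0, math.sin(math.radians(60)))  # 60 deg incl.
a_b = rot_x(rot_y(rot_z(down,  -yaw_t), -pitch_t), -roll_t)
m_b = rot_x(rot_y(rot_z(north, -yaw_t), -pitch_t), -roll_t)
roll_e, pitch_e = attitude_from_accel(*a_b)
roll_deg, pitch_deg = math.degrees(roll_e), math.degrees(pitch_e)
heading_deg = math.degrees(tilt_compensated_heading(*m_b, roll_e, pitch_e))
print(roll_deg, pitch_deg, heading_deg)
```

The catch, of course, is that this only works while the accelerometer measures gravity alone; any linear acceleration corrupts the roll/pitch estimate, which is where the low-pass, complementary and Kalman filters from the list above enter the picture.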

• ## libfixkalman: Fixed-Point Kalman Filter in C

In need of a Kalman filter on an embedded system, I went looking for a linear algebra library. It turned out that there are quite a few libraries written in C++, mostly template-based, yet nothing lean and mean written in ANSI C. I ended up porting parts of EJML to C, picking only what I needed for the Kalman filter (see the result, kalman-clib, if you are interested). What came out of it was a working, reasonably fast library using floats – or doubles, if the user prefers.

However, a big problem is that many embedded targets (say, an ARM Cortex-M0 or Cortex-M3) don't sport a floating-point unit, so the search went on for a fixed-point linear algebra library. Thanks to a hint on StackOverflow it ended at libfixmath, which implements Q16.16 numbers (i.e. 16 integer and 16 fractional bits in a 32-bit integer) and is released under the MIT license.
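For the unfamiliar, the Q16.16 idea is simple enough to sketch in a few lines. This is a pure-Python illustration of the number format only; `to_fix16`, `from_fix16` and `fix16_mul` are made-up names, not the libfixmath API, and a real implementation does the multiply in a 64-bit intermediate with explicit rounding and overflow handling.

```python
ONE = 1 << 16  # Q16.16 representation of 1.0

def to_fix16(x):
    return int(round(x * ONE))   # real value -> fixed-point integer

def from_fix16(q):
    return q / ONE               # fixed-point integer -> real value

def fix16_mul(a, b):
    # The raw product carries 32 fractional bits; shift back down to 16.
    # (Python's >> floors toward -inf; rounding is where implementations differ.)
    return (a * b) >> 16

a = to_fix16(1.5)
b = to_fix16(-2.25)
print(from_fix16(fix16_mul(a, b)))  # -> -3.375
```

Every add, subtract and compare then works on plain 32-bit integers, which is exactly why the format suits FPU-less Cortex-M cores.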

Luckily, someone else had built a linear algebra library on top of libfixmath, called libfixmatrix (MIT-licensed, too). I started porting my Kalman library to libfixmatrix and, voilà, libfixkalman was born (you can find it in the libfixkalman GitHub repository).