## Results of the day archive search

You searched the blog for February 15, 2014. Here is what turned up.

## Kalman filter: Modeling integration drift

One interesting observation when working with the standard model for constant acceleration in the Kalman filter is that the results tend to drift over time, even if the input to the system is zero and unbiased. I stumbled across this recently when integrating angular velocities measured using a gyroscope. Obviously, calibrating the gyroscope is the first step to take, but even then, after a while, the estimation will be off.

So the equations of motion for constant acceleration are given as

\begin{align}
x(t) &= x_0 + v_0\,t + \frac{1}{2}a\,t^2 \\
v(t) &= v_0 + a\,t \\
a(t) &= \mathrm{const}
\end{align}

The continuous-time state-space representation of this is

\begin{align}
\dot{\vec{x}}(t) = \underbrace{\begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{bmatrix}}_{\underline{A}} \cdot \underbrace{\begin{bmatrix}
x \\
v \\
a
\end{bmatrix}}_{\vec{x}}
\end{align}

since $\dot{x} = v$, $\dot{v} = a$ and $\dot{a} = 0$.

where the state vector $\vec{x}$ would be initialized with $\left[x_0, v_0, a_0\right]^T$. Modeled as a discrete-time system, we then have

\begin{align}
\vec{x}_{k+1} = \begin{bmatrix}
1 & T & \frac{1}{2}\,T^2 \\
0 & 1 & T \\
0 & 0 & 1
\end{bmatrix} \cdot \begin{bmatrix}
x \\
v \\
a
\end{bmatrix}_k
\end{align}

with $T$ being the sampling interval, i.e. the time between two discrete steps.
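As a minimal sketch of this discrete-time model (in Python/NumPy rather than Matlab; the sample time $T = 0.1\,\mathrm{s}$ and the initial state are made-up values), a single prediction step looks like this:

```python
import numpy as np

T = 0.1  # assumed sample time in seconds

# Discrete-time state transition matrix for the constant-acceleration model
F = np.array([
    [1.0,   T, 0.5 * T**2],
    [0.0, 1.0,          T],
    [0.0, 0.0,        1.0],
])

x = np.array([0.0, 1.0, 2.0])  # made-up state: position, velocity, acceleration

# One prediction step: x_{k+1} = F · x_k
x_next = F @ x
```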

Now due to machine precision and rounding issues we’ll end up with an error in every time step that is propagated from the acceleration to the position through the double integration. Even if we could rule out these problems, we still would have to handle the case of drift caused by noise.
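That noise-driven drift is easy to reproduce: double-integrating zero-mean acceleration noise yields a position error whose spread keeps growing with time. A quick simulation sketch (plain Euler integration; the noise level, sample time and trial count are made-up values):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 0.01        # assumed sample time in seconds
steps = 1000    # 10 s of simulated time
trials = 200

# Zero-mean, unbiased acceleration noise for every trial and time step
a = rng.normal(0.0, 0.1, size=(trials, steps))

# Double integration (simple Euler): velocity first, then position
v = np.cumsum(a * T, axis=1)
x = np.cumsum(v * T, axis=1)

# The position error is zero on average across trials,
# but its spread grows as the integration runs longer
spread_early = np.std(x[:, 99])   # after 1 s
spread_late = np.std(x[:, -1])    # after 10 s
```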

According to *Position Recovery from Accelerometric Sensors* (Antonio Filieri, Rossella Melchiotti) and *Error Reduction Techniques for a MEMS Accelerometer-based Digital Input Device* (Tsang Chi Chiu), the integration drift can be modeled as process noise in the Kalman filter.

Tsang (appendix B, eq. 7) shows that the drift noise is given as

\begin{align}
\underline{Q}_a = \begin{bmatrix}
\frac{1}{20} q_a \,T^5 & \frac{1}{8} q_a \,T^4 & \frac{1}{6} q_a \,T^3 \\
\frac{1}{8} q_a \,T^4 & \frac{1}{3} q_a \,T^3 & \frac{1}{2} q_a \,T^2 \\
\frac{1}{6} q_a \,T^3 & \frac{1}{2} q_a \,T^2 & q_a \,T
\end{bmatrix}
\end{align}

with $q_a$ being the acceleration process noise (note that Tsang models this as $q_c$ in continuous time).
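A sketch of how this $\underline{Q}_a$ could be built and used in the covariance prediction step of the filter (Python/NumPy; $q_a$, $T$ and the initial covariance are made-up values):

```python
import numpy as np

def process_noise(q_a, T):
    """Integration-drift process noise for the constant-acceleration
    model (Tsang, appendix B, eq. 7)."""
    return q_a * np.array([
        [T**5 / 20, T**4 / 8, T**3 / 6],
        [T**4 / 8,  T**3 / 3, T**2 / 2],
        [T**3 / 6,  T**2 / 2, T       ],
    ])

T = 0.1  # assumed sample time in seconds
F = np.array([[1.0,   T, 0.5 * T**2],
              [0.0, 1.0,          T],
              [0.0, 0.0,        1.0]])

Q = process_noise(q_a=0.5, T=T)

# Covariance prediction: P_{k+1} = F · P_k · Fᵀ + Q
P = np.eye(3)  # made-up initial state covariance
P_pred = F @ P @ F.T + Q
```

Being the integral of an outer product, $\underline{Q}_a$ is symmetric and positive semi-definite, as a valid covariance matrix must be.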

## Linear (binary) integer programming in Matlab

So, suppose you’re in university, it’s that time of the year again (i.e. exams) and you have already written some of them. Some are still left though, and you wonder: how badly can you fail (or: how good do you have to be) in the following exams, given that you do not want your mean grade to be worse than a given value?

### Linear programming

Say you’re in Germany and the possible grades are [1, 1.3, 1.7, 2, …, 4] (i.e. a discrete set) with 1 being the best grade and 4 being only a minor step to a major fuckup. Given that you’ve already written four exams with the grades 1, 1, 1.3 and 1, and that you do not want to have a mean grade worse than 1.49 in the end (because 1.5 would be rounded to 2 on your diploma), but there still are 9 more exams to write, the question is: which worst-case grades can you have in the following exams, and what would that imply for the others?

This is what’s known as a linear programming or linear optimization problem, and since the values (here: the number of exams per grade) are constrained to be discrete, it’s integer programming.
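To make that concrete, here is a sketch of the grade problem as an integer program, written in Python with `scipy.optimize.milp` rather than Matlab (the grade set and numbers are the ones from the paragraph above): one counting variable per possible grade, the counts must sum to the 9 remaining exams, and the total grade sum is bounded so that the final mean stays at or below 1.49.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# The possible German grades
grades = np.array([1.0, 1.3, 1.7, 2.0, 2.3, 2.7, 3.0, 3.3, 3.7, 4.0])

written_sum = 1 + 1 + 1.3 + 1  # the four exams already written
remaining = 9
total_exams = 13

# n[i] = number of remaining exams written with grade grades[i].
# Worst case: maximize the total grade sum, i.e. minimize its negative.
c = -grades

constraints = [
    # exactly 9 more exams
    LinearConstraint(np.ones((1, grades.size)), lb=remaining, ub=remaining),
    # the final mean must not exceed 1.49
    LinearConstraint(grades.reshape(1, -1), ub=1.49 * total_exams - written_sum),
]

res = milp(c, constraints=constraints,
           integrality=np.ones_like(grades),  # all variables are integers
           bounds=Bounds(0, remaining))

worst_total = grades @ res.x  # worst admissible grade sum for the 9 exams
```

With these numbers the optimizer finds a worst-case grade sum of 15.0 for the nine remaining exams (e.g. four 2.0s, two 1.7s, two 1.3s and one 1.0), i.e. a final mean of (4.3 + 15.0)/13 ≈ 1.48.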

The goal of linear programming is to find the arguments $\vec{x}$ of the objective function $f(\vec{x})$ such that $f(\vec{x})$ is maximized (or minimized), given some constraints on $\vec{x}$. In Matlab, all linear programming functions try to minimize the cost function, so the problem is formulated as

\begin{align}
\underset{\vec{x}}{\min} \; f(\vec{x}) \quad \text{such that} \quad \left\{\begin{matrix} \underline{A}\cdot \vec{x} \leq \vec{b} \\ \underline{A}_{eq} \cdot \vec{x} = \vec{b}_{eq} \end{matrix}\right.
\end{align}

Obviously, maximizing an objective function is the same as minimizing its negative, so $\max f(\vec{x}) = -\min\left(-f(\vec{x})\right)$. In Matlab, these kinds of problems can be solved with the `linprog` function.
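As an illustration of the max-to-min trick on a made-up toy problem (using SciPy's `linprog` here rather than Matlab's, same idea): maximize $x_1 + 2x_2$ subject to $x_1 + x_2 \leq 4$ and $x_1, x_2 \geq 0$ by minimizing the negated objective.

```python
from scipy.optimize import linprog

# maximize x1 + 2·x2  ⇔  minimize -(x1 + 2·x2)
res = linprog(
    c=[-1.0, -2.0],     # negated objective coefficients
    A_ub=[[1.0, 1.0]],  # x1 + x2 ≤ 4
    b_ub=[4.0],
    bounds=(0, None),   # x1, x2 ≥ 0
)

maximum = -res.fun  # undo the negation: optimum is 8 at (x1, x2) = (0, 4)
```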