• Pipistrello rev. 2.0

I just received my Pipistrello board and it’s damn beautiful! (Unfortunately the site seems to be down sometimes; it looks like a DNS problem to me. Just try again later if the link does not work.)

On the front side, there’s the Spartan-6 LX45 FPGA (XC6SLX45-2CSG324), an FT2232H FTDI chip (one channel wired for custom use), the HDMI, USB, MicroSD and headphone connectors, a button and six LEDs, as well as a whole bunch of GPIOs.

On the bottom, the DRAM chip can be found.

Shipment to Germany took less than a week (apart from being held back by customs), and in addition to a tracking number, Magnus, the creator of the Pipistrello board, sent a mail with additional information about the board, as well as sources and example code.

The board comes pre-configured with a MicroBlaze processor running a Linux system that identifies itself as Linux Pipistrello-LX45 3.6.0-11207-ga0d271c #12 Sun Dec 9 11:56:59 EST 2012 microblaze GNU/Linux. CPU and architecture information can be grabbed with cat /proc/cpuinfo, as can be seen in the following PuTTY screenshot.

All in all a very nice experience.

• Kalman filter: Modeling integration drift

One interesting observation when working with the standard model for constant acceleration in the Kalman filter is that the results tend to drift over time, even if the input to the system is zero and unbiased. I stumbled across this recently when integrating angular velocities measured using a gyroscope. Obviously, calibrating the gyroscope is the first step to take, but even then, after a while, the estimation will be off.

So the kinematic equations describing motion with constant acceleration are given as

\begin{align}
x(t) &= x_0 + v_0\,t + \frac{1}{2}\,a\,t^2 \\
v(t) &= v_0 + a\,t \\
a(t) &= \mathrm{const}
\end{align}

The continuous-time state-space representation of this system is

\begin{align}
\dot{\vec{x}}(t) = \underbrace{\begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{bmatrix}}_{\underline{A}} \cdot \underbrace{\begin{bmatrix}
x \\
v \\
a
\end{bmatrix}}_{\vec{x}}
\end{align}

where the state vector $\vec{x}$ would be initialized with $\left[x_0, v_0, a_0\right]^T$. Modeled as a discrete-time system with sampling period $T$ (i.e. using the transition matrix $e^{\underline{A}T}$), we then have

\begin{align}
\vec{x}_{k+1} = \begin{bmatrix}
1 & T & \frac{1}{2}\,T^2 \\
0 & 1 & T \\
0 & 0 & 1
\end{bmatrix} \cdot \begin{bmatrix}
x \\
v \\
a
\end{bmatrix}_k
\end{align}

with $T$ being the sampling period.
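As a quick sanity check, the discrete-time transition can be propagated in a few lines of Python (a minimal sketch using NumPy; the sampling period and initial state are made up for illustration):

```python
import numpy as np

T = 0.01  # sampling period in seconds (arbitrary choice)

# Discrete-time state transition matrix for the constant-acceleration model
F = np.array([[1.0, T, 0.5 * T**2],
              [0.0, 1.0, T],
              [0.0, 0.0, 1.0]])

# Initial state [x0, v0, a0]: at rest, constant acceleration of 1 m/s^2
x = np.array([0.0, 0.0, 1.0])

# Propagate for one second (100 steps of 10 ms)
for _ in range(100):
    x = F @ x

# After t = 1 s with a = 1 m/s^2: v = a*t = 1 m/s, x = a*t^2/2 = 0.5 m
print(x)  # -> approximately [0.5, 1.0, 1.0]
```

Since $\underline{A}$ is nilpotent, $e^{\underline{A}T}$ is exact here, so the propagation reproduces the kinematic equations without discretization error.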

Now, due to machine precision and rounding issues, we’ll end up with an error in every time step that is propagated from the acceleration to the position through the double integration. Even if we could rule out these problems, we still would have to handle the case of drift caused by noise.

According to Position Recovery from Accelerometric Sensors (Antonio Filieri, Rossella Melchiotti) and Error Reduction Techniques for a MEMS Accelerometer-based Digital Input Device (Tsang Chi Chiu), the integration drift can be modeled as process noise in the Kalman filter.

Tsang (appendix B, eq. 7) shows that the drift noise is given as

\begin{align}
\underline{Q}_a = \begin{bmatrix}
\frac{1}{20} q_a \,T^5 & \frac{1}{8} q_a \,T^4 & \frac{1}{6} q_a \,T^3 \\
\frac{1}{8} q_a \,T^4 & \frac{1}{3} q_a \,T^3 & \frac{1}{2} q_a \,T^2 \\
\frac{1}{6} q_a \,T^3 & \frac{1}{2} q_a \,T^2 & q_a \,T
\end{bmatrix}
\end{align}

with $q_a$ being the acceleration process noise (note that Tsang models this as $q_c$ in continuous time).
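This closed form can be checked numerically: for the constant-acceleration model, the discrete process noise follows from integrating $\underline{F}(\tau)\,\vec{G}\,q_a\,\vec{G}^T\,\underline{F}(\tau)^T$ over one sampling period, with $\vec{G} = [0, 0, 1]^T$ injecting the noise on the acceleration. A minimal sketch in Python (the values for $q_a$ and $T$ are arbitrary):

```python
import numpy as np

q_a = 0.5   # acceleration process noise density (arbitrary)
T = 0.1     # sampling period (arbitrary)

def F(tau):
    """State transition matrix of the constant-acceleration model."""
    return np.array([[1.0, tau, 0.5 * tau**2],
                     [0.0, 1.0, tau],
                     [0.0, 0.0, 1.0]])

# Closed form from Tsang, appendix B, eq. 7
Q_closed = q_a * np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                           [T**4 / 8,  T**3 / 3, T**2 / 2],
                           [T**3 / 6,  T**2 / 2, T]])

# Midpoint-rule integration of F(tau) G G^T F(tau)^T over [0, T]
G = np.array([[0.0], [0.0], [1.0]])
n = 10000
dt = T / n
taus = (np.arange(n) + 0.5) * dt
Q_numeric = sum(F(t) @ G @ G.T @ F(t).T for t in taus) * q_a * dt

print(np.max(np.abs(Q_closed - Q_numeric)))  # should be tiny
```

Both routes agree to numerical precision, which is a good way to catch typos when transcribing such matrices from a paper.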

• Linear (binary) integer programming in Matlab

So, suppose you’re in university, it’s that time of the year again (i.e. exams) and you have already written some of them. Some are still left, though, and you wonder: how hard can you fail (or: how good do you have to be) in the following exams, given that you do not want your mean grade to be worse than a given value?

Linear programming

Say you’re in Germany and the possible grades are [1, 1.3, 1.7, 2, …, 4] (i.e. a discrete set of steps) with 1 being the best grade and 4 being only a minor step to a major fuckup. Given that you’ve already written four exams with the grades 1, 1, 1.3 and 1, and that you do not want to have a mean grade worse than 1.49 in the end (because 1.5 would be rounded to 2 on your diploma), but there still are 9 more exams to write, the question is: which worst-case grades can you have in the following exams, and what would that imply for the others?

This is what’s known as a linear programming or linear optimization problem, and since the values (here: the number of grades) are constrained to be discrete, it’s integer programming.

The goal of linear programming is to find the arguments $\vec{x}$ of the objective function $f(\vec{x})$ such that $f(\vec{x})$ is optimized, given some constraints on $\vec{x}$. In Matlab, all linear programming functions try to minimize the cost function, so the problem is formulated as

\begin{align}
\underset{x}{\mathrm{min}} \, f(\vec{x}) \, \mathrm{such}\,\mathrm{that} \, \left\{\begin{matrix} \underline{A}\cdot \vec{x} \leq \vec{b} \\ \underline{A}_{eq} \cdot \vec{x} = \vec{b}_{eq} \end{matrix}\right.
\end{align}

Obviously, maximizing an objective function is the same as minimizing its negative, so $\mathrm{max} \, f(\vec{x}) = -\mathrm{min} \left( -f(\vec{x}) \right)$. In Matlab, this kind of problem can be solved with the linprog function.
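For readers without Matlab, the same worst-case grade problem can be sketched with SciPy, whose scipy.optimize.linprog plays the role of Matlab’s linprog and additionally accepts integrality constraints (SciPy ≥ 1.9, 'highs' method). Here the variable n_i counts how often grade g_i appears among the 9 remaining exams; the numbers are the ones from the example above:

```python
import numpy as np
from scipy.optimize import linprog

# Possible grades in the German system
grades = np.array([1.0, 1.3, 1.7, 2.0, 2.3, 2.7, 3.0, 3.3, 3.7, 4.0])

# Already written: 1, 1, 1.3, 1
existing_sum = 1.0 + 1.0 + 1.3 + 1.0
remaining = 9          # exams still to write
max_mean = 1.49        # target mean over all 13 exams

# Variable n_i = number of remaining exams written with grade g_i.
# Worst case: maximize the total of the remaining grades,
# i.e. minimize -grades @ n.
c = -grades

# Mean constraint: (existing_sum + grades @ n) / 13 <= max_mean
A_ub = grades.reshape(1, -1)
b_ub = np.array([max_mean * 13 - existing_sum])

# Exactly 9 exams must be distributed over the grade counts
A_eq = np.ones((1, grades.size))
b_eq = np.array([remaining])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, remaining)] * grades.size,
              integrality=np.ones(grades.size), method='highs')

counts = np.round(res.x).astype(int)
print(dict(zip(grades, counts)))  # one worst-case grade distribution
```

With these numbers the worst-case total of the nine remaining grades works out to 15.0 (for instance two 4.0s and seven 1.0s), which keeps the overall mean at (4.3 + 15.0)/13 ≈ 1.48.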

• Hosting the .NET 4 runtime in a native process

I recently remembered a fellow student who, a couple of years ago, told me that he injected a (native) DLL into a foreign process and started the .NET runtime there, that is, had C# code run in a foreign native process. Well, bright eyes on my side, and then I forgot about it.

Anyway, some days ago I gave it a try and found out that this isn’t too hard to achieve. Running your .NET assembly in a native application basically requires four steps using the .NET 4 Hosting Interfaces:

1. Retrieving an interface to the CLR (meta) host.
2. Requesting an interface to the required runtime.
3. Retrieving the actual interface of the runtime.
4. Executing the assembly in the default application domain.

So here we go, ignoring all return codes on the way.

• On integration drift

While implementing a sensor fusion algorithm I stumbled across the problem that my well-calibrated gyroscope would yield slowly drifting readings for the integrated angles.
There are at least two reasons for this behaviour: it is possible that the gyro bias was not removed exactly (not so much because it’s a stochastic quantity, but more because it’s a machine precision problem after all), and drift is induced during numeric integration due to rounding errors.

Fourier states that every (infinite and periodic) signal can be assembled using only cosine and sine functions. Gaussian white noise has the same mean amplitude at all frequencies, but still a Gaussian amplitude distribution. In other words: Gaussian noise contains differently strong sine and cosine signals at every frequency.

Now the integrals of the sine and cosine functions are given as follows:

\begin{align}
\int \cos(2 \pi f t)\,\mathrm dt &= \quad \frac{1}{2 \pi f} \sin(2 \pi f t) + C \\
\int \sin(2 \pi f t)\,\mathrm dt &= -\frac{1}{2 \pi f} \cos(2 \pi f t) + C
\end{align}

What that means is that for every high-frequency sine-like signal (i.e. $f \geq 1\,\mathrm{Hz}$), the amplitude of that signal will be lowered by the factor $2 \pi f$. For every low-frequency signal, the amplitude will be amplified accordingly (strictly speaking, amplification sets in below $f = \frac{1}{2\pi}\,\mathrm{Hz}$, where $\frac{1}{2 \pi f} > 1$).

Now it’s just a question of whether your signal contains Gaussian noise or whether your system oscillates. Either way, if there is a low-frequency component, integration will turn it into a strong, slow sine-wave shape: drift.
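The effect is easy to reproduce: numerically integrating zero-mean Gaussian noise yields a random walk whose spread grows over time, which is exactly the drift described above. A minimal sketch in Python (noise level, step size and counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

trials = 1000   # independent noise sequences
steps = 2000    # integration steps per sequence
dt = 1.0        # integration step (arbitrary units)

# Zero-mean, unit-variance Gaussian noise, e.g. a perfectly
# calibrated but noisy gyro reading
noise = rng.standard_normal((trials, steps))

# Naive rectangular integration: cumulative sum of the samples
integrated = np.cumsum(noise * dt, axis=1)

# The ensemble standard deviation grows like sqrt(t), even though
# the input is zero-mean: that is the integration drift
std_start = integrated[:, 10].std()
std_end = integrated[:, -1].std()
print(std_start, std_end)  # the spread at the end is far larger
```

For unit-variance noise, the standard deviation after $n$ steps is roughly $\sqrt{n}$, so the integrated signal wanders far from zero although the input never had a bias.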