# Pressure sensors: absolute, differential and gauge

What’s the difference between absolute, differential and gauge pressure sensors?

• An absolute pressure sensor measures a pressure zero-referenced against a perfect vacuum; it is equal to gauge pressure plus atmospheric pressure.

• A differential pressure sensor measures the difference in pressure between two points.

• A gauge pressure sensor measures a pressure zero-referenced against ambient air pressure; it is equal to absolute pressure minus atmospheric pressure.

# Linear regression

The aim of this post is to explain how a linear regression is calculated with a very simple example. Assume we want to approximate a cloud of points with the line $y=ax$, as illustrated below:

We want to minimize the error given by the sum of the squared differences between the points and the line $y=ax$:

$E=(y_1-a.x_1)^2+(y_2-a.x_2)^2+...+(y_n-a.x_n)^2$

By expanding each term, the equation becomes:

$E=y_1^2+a^2.x_1^2-2a.y_1.x_1 + y_2^2+a^2.x_2^2-2a.y_2.x_2 + ... + y_n^2+a^2.x_n^2-2a.y_n.x_n$
$E=y_1^2 + y_2^2 + ... + y_n^2 + a^2(x_1^2 +x_2^2 +...+x_n^2) - 2a(y_1.x_1 + y_2.x_2 + ... + y_n.x_n)$

As we want to minimize the error, the value of $a$ we are looking for is necessarily a zero of the derivative of the error. Let’s calculate the derivative:

$\frac{\partial E}{\partial a} = 2a(x_1^2 +x_2^2 +...+x_n^2) - 2(y_1.x_1 + y_2.x_2 + ... + y_n.x_n)$

The error is minimized when $\frac{\partial E}{\partial a} =0$:

$2a(x_1^2 +x_2^2 +...+x_n^2) - 2(y_1.x_1 + y_2.x_2 + ... + y_n.x_n)=0$

It is now easy to calculate $a$:

$a=\frac{2(y_1.x_1 + y_2.x_2 + ... + y_n.x_n)}{ 2(x_1^2 +x_2^2 +...+x_n^2) }$
$a=\frac{ y_1.x_1 + y_2.x_2 + ... + y_n.x_n}{ x_1^2 +x_2^2 +...+x_n^2 }$
$a=\frac { \sum\limits_{i=1}^n x_i.y_i } {\sum\limits_{i=1}^n x_i^2}$
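The closed-form slope can be checked in a few lines; a minimal Python sketch (the original examples on this blog use Matlab, and the function name `fit_slope` is mine):

```python
def fit_slope(xs, ys):
    """Least-squares slope of the line y = a*x through the origin:
    a = sum(x_i * y_i) / sum(x_i ** 2)."""
    num = sum(x * y for x, y in zip(xs, ys))   # sum of x_i * y_i
    den = sum(x * x for x in xs)               # sum of x_i^2
    return num / den

# Points lying exactly on y = 2x recover a = 2
a = fit_slope([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```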

# Quaternion normalization

Let’s consider the following quaternion:

$Q=a+b.i+c.j+d.k = \left[ \begin{matrix}a & b & c & d\end{matrix}\right]$

The normalization is given by the following formula:

$Q_{normalized}=\frac{Q}{\| Q \| }$

Where:

$\| Q \| = \sqrt{ Q.\bar{Q} } = \sqrt{ \bar{Q}.Q }=\sqrt {a^2 + b^2 + c^2 + d^2}$

General representation:

$Q_{normalized}=\left[\begin{matrix}\frac{a}{\sqrt {a^2 + b^2 + c^2 + d^2}} & \frac{b}{\sqrt {a^2 + b^2 + c^2 + d^2}} & \frac{c}{\sqrt {a^2 + b^2 + c^2 + d^2}} & \frac{d}{\sqrt {a^2 + b^2 + c^2 + d^2}} \end{matrix}\right]$
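Normalization is straightforward to implement; a minimal Python sketch (the helper name is mine):

```python
import math

def normalize(q):
    """Normalize a quaternion q = (a, b, c, d) to unit norm."""
    norm = math.sqrt(sum(component ** 2 for component in q))
    return tuple(component / norm for component in q)

# The quaternion (1, 2, 2, 4) has norm 5
q = normalize((1.0, 2.0, 2.0, 4.0))
```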

# Rotation matrices

## Rotation around the X-axis

$R_x(\alpha)=\left[\begin{matrix}1 & 0 & 0 & 0 \\0 & cos(\alpha) & -sin(\alpha) & 0 \\0 & sin(\alpha) & cos(\alpha) & 0 \\0 & 0 & 0 & 1\end{matrix}\right]$

## Rotation around the Y-axis

$R_y(\beta)=\left[\begin{matrix} cos(\beta) & 0 & sin(\beta) & 0 \\0 & 1 & 0 & 0 \\-sin(\beta) & 0 & cos(\beta) & 0 \\0 & 0 & 0 & 1\end{matrix}\right]$

## Rotation around the Z-axis

$R_z(\gamma)=\left[\begin{matrix} cos(\gamma) & -sin(\gamma) & 0 & 0 \\sin(\gamma) & cos(\gamma) & 0 & 0 \\0 & 0 & 1 & 0 \\0 & 0 & 0 & 1\end{matrix}\right]$
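The matrices above are written in their 4×4 homogeneous form. As a quick numerical check, a minimal Python sketch builds $R_z$ and applies it to a homogeneous point (the helper names are mine):

```python
import math

def rot_z(gamma):
    """4x4 homogeneous rotation matrix around the Z-axis."""
    c, s = math.cos(gamma), math.sin(gamma)
    return [[c, -s, 0.0, 0.0],
            [s,  c, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def apply(m, v):
    """Multiply a 4x4 matrix by a homogeneous point (x, y, z, 1)."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Rotating (1, 0, 0) by 90 degrees around Z gives (0, 1, 0)
p = apply(rot_z(math.pi / 2.0), [1.0, 0.0, 0.0, 1.0])
```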

# Quaternion conjugate

The quaternion conjugate is denoted by $^*$. Consider the quaternion $Q$ defined by:

$Q = \left[ \begin{matrix} a & b & c & d \end{matrix} \right]$

The conjugate of $Q$ is defined by:

$Q^* = \left[ \begin{matrix} a & -b & -c & -d \end{matrix} \right]$

The conjugate can be used to swap the relative frames described by an orientation. For example, if ${}^BQ_A$ describes the orientation of frame $B$ relative to frame $A$, then its conjugate ${}^BQ_A^*={}^AQ_B$ describes the orientation of frame $A$ relative to frame $B$.

# Quaternion product

The quaternion product is denoted by $\otimes$. Consider two quaternions $Q_1$ and $Q_2$:

$\begin{matrix}Q_1 &=& \left[ \begin{matrix} a_1 & b_1 & c_1 & d_1 \end{matrix} \right] \\Q_2 &=& \left[ \begin{matrix} a_2 & b_2 & c_2 & d_2 \end{matrix} \right]\end{matrix}$

The product is given by:

$Q_1 \otimes Q_2 = \left[\begin{matrix}a_1a_2 - b_1b_2 - c_1c_2 - d_1d_2 \\a_1b_2 + b_1a_2 + c_1d_2 - d_1c_2 \\a_1c_2 - b_1d_2 + c_1a_2 + d_1b_2 \\a_1d_2 + b_1c_2 - c_1b_2 + d_1a_2 \end{matrix}\right]^\top$

The quaternion product is not commutative:

$Q_1 \otimes Q_2 \neq Q_2 \otimes Q_1$
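The product formula and the non-commutativity can be checked numerically; a minimal Python sketch (the helper `qmul` is mine) using the basis quaternions $i$ and $j$:

```python
def qmul(q1, q2):
    """Quaternion product, components ordered (a, b, c, d)."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# i ⊗ j = k while j ⊗ i = -k: the product is not commutative
i, j = (0, 1, 0, 0), (0, 0, 1, 0)
ij = qmul(i, j)
ji = qmul(j, i)
```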

# Quaternions and rotations

A quaternion is a four-dimensional hypercomplex number that can be used to represent the orientation of a rigid body or coordinate frame in three-dimensional space. The general definition of a quaternion is given by:

$Q=a+b.i+c.j+d.k = \left[ \begin{matrix}a & b & c & d\end{matrix}\right]$

## Representation

Let’s consider a unit vector $\vec{V}$ defined by 3 scalars ($V_x$, $V_y$ and $V_z$) and an angle of rotation $\theta$ around $\vec{V}$:

The quaternion associated with this rotation is given by:

$Q = \left[\begin{matrix}cos \frac{\theta}{2} & -V_x.sin \frac{\theta}{2} & -V_y.sin \frac{\theta}{2} & -V_z.sin \frac{\theta}{2} \end{matrix}\right]$

## Rotation around axes

Based on the previous formula, we can now calculate the quaternion defining a rotation around each axis:

Rotation around X

$Q_X=\left[\begin{matrix}cos \frac{\theta}{2} & -sin \frac{\theta}{2} & 0 & 0\end{matrix}\right]$

Rotation around Y

$Q_Y=\left[\begin{matrix}cos \frac{\theta}{2} & 0 & -sin \frac{\theta}{2} & 0\end{matrix}\right]$

Rotation around Z

$Q_Z=\left[\begin{matrix}cos \frac{\theta}{2} & 0 & 0 & -sin \frac{\theta}{2} \end{matrix}\right]$

## General rotation

A three-dimensional vector (or point) can be rotated by a quaternion using the following relationship, based on the quaternion product and the quaternion conjugate:

$V_B = {}^BQ_A \otimes V_A \otimes {}^BQ_A^*$

$V_A$ and $V_B$ are the quaternions associated with the vectors $\vec{v_A}$ and $\vec{v_B}$: a 0 (the scalar part) is inserted as the first element to make them 4-element row quaternions, and the remaining elements are given by the vector coordinates.

$\vec{v_A}$ and $\vec{v_B}$ are the same vector described in frame $A$ and frame $B$ respectively.
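With the sign convention for $Q$ given in the Representation section, this sandwich product can be checked numerically; a minimal Python sketch (helper names are mine). Transforming $\vec{x}=(1,0,0)$ through a 90° rotation about $Z$ yields $(0,-1,0)$:

```python
import math

def qmul(q1, q2):
    """Quaternion product, components ordered (a, b, c, d)."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(q):
    """Quaternion conjugate: negate the vector part."""
    a, b, c, d = q
    return (a, -b, -c, -d)

def transform(q, v):
    """Apply V_B = Q ⊗ V_A ⊗ Q* to a 3-vector v (0 prepended)."""
    va = (0.0,) + tuple(v)
    vb = qmul(qmul(q, va), qconj(q))
    return vb[1:]

# Quaternion for theta = 90 degrees around Z (post's sign convention)
t = math.pi / 2.0
q = (math.cos(t / 2.0), 0.0, 0.0, -math.sin(t / 2.0))
v = transform(q, (1.0, 0.0, 0.0))
```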

## Example

This simple example shows the rotation of an ellipse defined in the plane {$\vec{X}$,$\vec{Y}$}:

This example is based on Madgwick’s quaternion library for Matlab:

# Extended Kalman filter (EKF)

This post details the extended Kalman filter (EKF) equations.

## Predict

State prediction:

$\hat{x}_{i}^p = f(\hat{x}_{i-1},u_i)$

Where:

• $\hat{x}_i^p$ is the predicted state at time step $i$.
• $\hat{x}_{i}$ is the estimate of state at time step $i$.
• $f$ is a differentiable function that describes how the state evolves from the previous state (prediction) and the system input ($u_i$).
• $u_i$ is the system input at time step $i$.

Uncertainty (or covariance) prediction:

$P_i^p = F_i.P_{i-1}.F_i^\top + Q_i$

Where:

• $P_i^p$ is the error covariance matrix predicted at time step $i$.
• $P_i$ is the estimated error covariance matrix associated with the estimated state $\hat{x}_{i}$.
• $Q_i$ is the system noise covariance matrix.
• $F_i$ is the transition matrix. It is given by the Jacobian of $f$: $F_i=\frac{\partial f}{\partial x}$

## Update

Innovation or measurement residual:

$\widetilde{y}_i = z_i - h(\hat{x}_i^p)$

Where:

• $\widetilde{y}_i$ is the measurement error: the difference between the measurement $z_i$ and the measurement estimated from the state $\hat{x}_i^p$.
• $z_i$ is an observation (or measurement) of the true state $x_i$.
• $h$ is a differentiable function which maps the state space into the observed space.

Innovation (or residual) covariance:

$S_i=H_i.P_i^p.H_i^\top+R_i$

Where:

• $S_i$ is the covariance matrix associated to the measurement error $\widetilde{y}_i$.
• $R_i$ is the covariance matrix for the measurement noise.
• $H_i$ is the observation matrix which maps the state space into the observed space. It is given by the Jacobian $H_i=\frac{\partial h}{\partial x}$

Near-optimal Kalman gain:

$K_i=P_i^p.H_i^\top.S_i^{-1}$

Where

• $K_i$ is the Kalman gain. It balances prediction and observation: it weights how strongly the measurement residual corrects the predicted state.

Updated state estimate:

$\hat{x}_i=\hat{x}_i^p + K_i.\widetilde{y}_i$

Updated estimate covariance

$P_i=(I-K_i.H_i).P_i^p$

Where:

• $I$ is the identity matrix.
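The predict/update cycle above can be condensed into a short scalar example. This is a sketch using an illustrative model chosen by me (not from this post): $f(x,u)=x+dt.u$ and $h(x)=x^2$, so the Jacobians are $F=1$ and $H=2x$:

```python
def ekf_step(x_hat, P, u, z, dt=0.1, Q=1e-4, R=1e-2):
    """One predict/update cycle of a scalar EKF for the illustrative
    model f(x, u) = x + dt*u and h(x) = x**2.
    Jacobians: F = df/dx = 1 and H = dh/dx = 2*x."""
    # Predict
    x_pred = x_hat + dt * u            # state prediction
    P_pred = P + Q                     # covariance prediction (F = 1)
    # Update
    y = z - x_pred ** 2                # innovation (measurement residual)
    H = 2.0 * x_pred                   # Jacobian of h at the predicted state
    S = H * P_pred * H + R             # innovation covariance
    K = P_pred * H / S                 # near-optimal Kalman gain
    return x_pred + K * y, (1.0 - K * H) * P_pred

# Repeatedly observing z = 4 pulls the estimate toward x = 2, since h(2) = 4
x, P = 1.0, 1.0
for _ in range(50):
    x, P = ekf_step(x, P, u=0.0, z=4.0)
```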

# Kalman filter

This post details the Kalman filter equations.

## Predict

State prediction:

$\hat{x}_{i}^p = F_i.\hat{x}_{i-1} + B_i.u_i$

Where:

• $\hat{x}_i^p$ is the predicted state at time step $i$.
• $\hat{x}_{i}$ is the estimate of state at time step $i$.
• $F_i$ is the transition matrix. It describes how the state will change according to the previous state (prediction).
• $B_i$ is a matrix that translates the control input at time step $i$ into a predicted change in state. In other words, it maps the input vector $u_i$ into the state space.
• $u_i$ is the system input at time step $i$.

Uncertainty (or covariance) prediction:

$P_i^p = F_i.P_{i-1}.F_i^\top + Q_i$

Where:

• $P_i^p$ is the error covariance matrix predicted at time step $i$.
• $P_i$ is the estimated error covariance matrix associated with the estimated state $\hat{x}_{i}$.
• $Q_i$ is the system noise covariance matrix.

## Update

Innovation or measurement residual:

$\widetilde{y}_i = z_i - H_i.\hat{x}_i^p$

Where:

• $\widetilde{y}_i$ is the measurement error: the difference between the measurement $z_i$ and the measurement estimated from the state $\hat{x}_i^p$.
• $z_i$ is an observation (or measurement) of the true state $x_i$.
• $H_i$ is the observation matrix which maps the state space into the observed space.

Innovation (or residual) covariance:

$S_i=H_i.P_i^p.H_i^\top+R_i$

Where:

• $S_i$ is the covariance matrix associated to the measurement error $\widetilde{y}_i$.
• $R_i$ is the covariance matrix for the measurement noise.

Optimal Kalman gain:

$K_i=P_i^p.H_i^\top.S_i^{-1}$

Where

• $K_i$ is the Kalman gain. It balances prediction and observation: it weights how strongly the measurement residual corrects the predicted state.

Updated state estimate:

$\hat{x}_i=\hat{x}_i^p + K_i.\widetilde{y}_i$

Updated estimate covariance

$P_i=(I-K_i.H_i).P_i^p$

Where:

• $I$ is the identity matrix.

# Kalman filter example

The Kalman filter is a very useful mathematical tool for merging multi-sensor data. We’ll consider a very simple example to understand how the filter works. Let’s consider a robot that moves along a single axis in front of a wall. Assume that the robot is equipped with two sensors: a speed measurement sensor and a distance measurement sensor (range finder). In the following, we’ll consider that both sensors are noisy.

Our goal is to estimate, as accurately as possible, the position $x$ of the robot:

The inputs of the system are a noisy distance measurement and a noisy velocity measurement:

## Source code

close all;
clear all;
clc;

% Step time
dt=0.1;
% Standard deviation of the range finder
SensorStd=0.08;
% Standard deviation of the speed measurement
SpeedStd=0.5;

%% Compute the robot trajectory and simulate sensors
% Time
t=[0:dt:10];
n=size(t,2);
% Trajectory
x=0.05*t.*cos(2*t);
% Speed measurement (noisy derivative of the position)
u=[0,diff(x)/dt] + SpeedStd*randn(1,n);
% Range finder
z=x-SensorStd*randn(1,n);

% Display position
figure;
plot (t,x);
grid on;
xlabel ('Time [s]');
ylabel ('Position [m]');
title ('Robot real position versus time x=f(t)');

% Display velocity
figure;
plot (t,u,'m');
grid on;
hold on;
plot (t,[0,diff(x)/dt]+3*SpeedStd,'k-.');
plot (t,[0,diff(x)/dt]-3*SpeedStd,'k-.');
xlabel ('Time [s]');
ylabel ('Velocity [m/s]');
title ('Velocity versus time u=f(t)');

% Display range finder
figure;
plot (t,z,'r');
grid on;
hold on;
plot (t,x+3*SensorStd,'k-.');
plot (t,x-3*SensorStd,'k-.');
xlabel ('Time [s]');
ylabel ('Distance [m]');
title ('Distance versus time z=f(t)');

%% Kalman

B=dt;
F=1;
Q=SpeedStd^2;
H=1;
R=SensorStd^2;

x_hat=[0];
y_hat=[0];
P=[0];
S=[0];

for i=2:n

%% Prediction

%State
x_hat(i)=F*x_hat(i-1) + B*u(i-1);

% State uncertainty
P(i)=F*P(i-1)*F' + Q;

%% Update

% Innovation or measurement residual
y_hat(i)=z(i)-H*x_hat(i);

% Innovation (or residual) covariance
S=H*P(i)*H'+R;

% Optimal Kalman gain
K=P(i)*H'/S;

x_hat(i)=x_hat(i)+K*y_hat(i);

P(i)=(1-K*H)*P(i);
end

figure;
plot (t,x,'k');
hold on;
plot (t,x_hat);
plot (t,x_hat+sqrt(P),'r-.');
plot (t,x_hat-sqrt(P),'r-.');
legend('Real position','Estimated position','Uncertainty');

grid on;
xlabel ('Time [s]');
ylabel ('Distance [m]');
title ('Estimated position (speed only)');


This example has been inspired by the very good tutorial by Bradley Hiebert-Treuer, “An Introduction to Robot SLAM (Simultaneous Localization And Mapping)”.

# Normal distribution

The normal (or Gaussian) distribution is a stochastic model commonly used for estimating sensor uncertainty. Its probability density function is given by:

$y(x)=\frac{1}{\sigma.\sqrt{2.\pi}} e^ {-\frac{(x-c)^2}{2\sigma^2} }$

Where:

• $\sigma$ is the standard deviation (square root of the variance)
• $c$ is the center of the Gaussian

The variance is $\sigma^2$.

Here are some examples of normal distributions:

As the sum of probabilities must be equal to one, the area under the curve is equal to one:

About 68%, 95% and 99.7% of the area is included within $\pm\sigma$, $\pm 2.\sigma$ and $\pm 3.\sigma$ of the center, respectively:
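These percentages can be checked with the error function, since the probability mass within $\pm k.\sigma$ of the center is $erf(k/\sqrt{2})$; a quick Python check:

```python
import math

def within_k_sigma(k):
    """Probability mass of a normal distribution lying within k standard
    deviations of its center: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2.0))

# Approximately 0.68, 0.95 and 0.997 for k = 1, 2, 3
probs = [within_k_sigma(k) for k in (1, 2, 3)]
```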

# Singular Value Decomposition of a 2×2 matrix

The following details how to compute the singular value decomposition (SVD) of a 2×2 matrix. As a reminder:

$A=\left[ \begin{matrix} a & b \\ c & d \end{matrix} \right]=U \cdot \Sigma \cdot V^T$

where $U$ and $V$ are orthogonal and $\Sigma$ is a diagonal matrix containing the singular values. In the particular case of a 2×2 matrix, the decomposition is given by:

$A=\left[ \begin{matrix} cos(\theta) & -sin(\theta) \\ sin(\theta) & cos(\theta) \end{matrix} \right] \cdot \left[ \begin{matrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{matrix} \right] \cdot \left[ \begin{matrix} cos(\phi) & -sin(\phi) \\ sin(\phi) & cos(\phi) \end{matrix} \right]$

With:

$V=W \cdot C \textrm{, where } C=\left[ \begin{matrix} \pm{1} & 0 \\ 0 & \pm{1} \end{matrix} \right]$

## Computation of $U$

$\theta$ is given by:

$\theta=\frac{1}{2} atan2 (2ac+2bd , a^2+b^2-c^2-d^2)$

The matrix $U$ is given by:

$U=\left[ \begin{matrix} cos(\theta) & -sin(\theta) \\ sin(\theta) & cos(\theta) \end{matrix} \right]$

## Computation of $\Sigma$

The singular values are given by:

$\begin{matrix} \sigma_1=\sqrt{ \frac {S_1 + S_2}{2} } \\\sigma_2=\sqrt{ \frac {S_1 - S_2}{2} } \\\end{matrix}$

With:

$\begin{matrix} S_1 &=& a^2+b^2+c^2+d^2 \\S_2 &=& \sqrt{(a^2+b^2-c^2-d^2)^2 + 4(ac+bd)^2 } \end{matrix}$

The matrix $\Sigma$ is given by:

$\Sigma=\left[ \begin{matrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{matrix} \right]$

## Computation of $V$

First, let’s compute the intermediate angle $\Phi$ with the following formula:

$\Phi=\frac{1}{2} atan2 (2ab+2cd , a^2-b^2+c^2-d^2)$

Let’s now compute the following scalars:

$\begin{matrix}s_{11} &=& (a.c_\theta+c.s_\theta).c_\Phi &+& (b.c_\theta+d.s_\theta).s_\Phi \\s_{22} &=& (a.s_\theta-c.c_\theta).s_\Phi &+& (-b.s_\theta+d.c_\theta).c_\Phi \\\end{matrix}$

where:

$\begin{matrix}c_\Phi=cos(\Phi) \\s_\Phi=sin(\Phi) \\c_\theta=cos(\theta) \\ s_\theta=sin(\theta)\end{matrix}$

The matrix $V$ is given by:

$V=\left[ \begin{matrix} \frac {s_{11}} {|s_{11}|}cos(\Phi) & -\frac {s_{22}} {|s_{22}|}sin(\Phi) \\ \frac {s_{11}} {|s_{11}|} sin(\Phi) & \frac {s_{22}} {|s_{22}|}cos(\Phi) \end{matrix} \right]$

Note that $\frac {s_{11}} {|s_{11}|}$ and $\frac {s_{22}} {|s_{22}|}$ are respectively the sign of $s_{11}$ and $s_{22}$. If $s_{11}$ or $s_{22}$ is 0, dividing it by its absolute value is a division by zero and will fail in most programming languages. The Matlab sign function returns 0 in this case (thank you Corbie!).

## Matlab function

%% Matlab function for computing the SVD of a 2x2 matrix
%% Written by Randy Ellis

function [U,SIG,V] = svd2x2(A)
% [U,SIG,V] = svd2x2(A) finds the SVD of 2x2 matrix A
% where U and V are orthogonal, SIG is diagonal,
% and A=U*SIG*V'
% Find U such that U*A*A'*U'=diag
Su = A*A';
phi = 0.5*atan2(Su(1,2)+Su(2,1), Su(1,1)-Su(2,2));
Cphi = cos(phi);
Sphi = sin(phi);
U = [Cphi -Sphi ; Sphi Cphi];

% Find W such that W'*A'*A*W=diag
Sw = A'*A;
theta = 0.5*atan2(Sw(1,2)+Sw(2,1), Sw(1,1)-Sw(2,2));
Ctheta = cos(theta);
Stheta = sin(theta);
W = [Ctheta, -Stheta ; Stheta, Ctheta];

% Find the singular values from U
SUsum= Su(1,1)+Su(2,2);
SUdif= sqrt((Su(1,1)-Su(2,2))^2 + 4*Su(1,2)*Su(2,1));
svals= [sqrt((SUsum+SUdif)/2) sqrt((SUsum-SUdif)/2)];
SIG =diag(svals);

% Find the correction matrix for the right side
S = U'*A*W;
C = diag([sign(S(1,1)) sign(S(2,2))]);
V = W*C;


# Finding the transformation between two set of points

The aim of this article is to minimize the error between two clouds of points (minimization of the point-to-point error). Let’s consider the following clouds of points: $A$ in blue and $B$ in red. Our goal is to find the transformation of $B$ (translation and rotation) that minimizes the point-to-point distance (in the least-squares sense).

$A=\left[ \begin{matrix} p_1^a & p_2^a & ... & p_n^a \end{matrix} \right]=\left[ \begin{matrix} x_1^a & x_2^a & ... & x_n^a \\ y_1^a & y_2^a & ... & y_n^a \end{matrix} \right]$
$B=\left[ \begin{matrix} p_1^b & p_2^b & ... & p_n^b \end{matrix} \right]=\left[ \begin{matrix} x_1^b & x_2^b & ... & x_n^b \\ y_1^b & y_2^b & ... & y_n^b \end{matrix} \right]$

We assume that the point matching has already been performed: each point of $A$ is associated with a point of $B$.

## Translation

First, compute the center of gravity of each cloud:

$\begin{matrix}COG^a=\frac{1}{n}. \sum\limits_{i=1}^n p_i^a & COG^b=\frac{1}{n}. \sum\limits_{i=1}^n p_i^b \end{matrix}$

Translate each point by the center of gravity of its cloud (center each cloud on the origin):

$\begin{matrix} A' = \left[ \begin{matrix} p_1^a-COG^a & p_2^a-COG^a & ... & p_n^a-COG^a \end{matrix} \right] & B' = \left[ \begin{matrix} p_1^b-COG^b & p_2^b-COG^b & ... & p_n^b-COG^b \end{matrix} \right] \end{matrix}$

## Rotation

For the rotation, it is a little bit more tricky. First calculate the following matrix from the centered points:

$N= \sum\limits_{i=1}^n {(p_i^a-COG^a)}.{(p_i^b-COG^b)}^\top$

Compute the singular value decomposition of $N$:

$N=U.\Sigma.V^T$

The rotation matrix is given by:

$R=V.U^T$

The rotation angle can now be extracted from the matrix $R$:

$\alpha=atan2(R_{21},R_{11})$

By applying the rotation on the translated clouds, we get the following result:

This article has been inspired by the very good bachelor thesis of Hans Martin Kjer and Jakob Wilm:
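The whole procedure (centroids, centered clouds, rotation) can be sketched compactly; in the 2-D case the SVD step reduces to a single atan2 over cross-covariance terms. A Python sketch on synthetic data (helper names are mine):

```python
import math

def fit_transform(A, B):
    """Rigid 2-D transform aligning cloud B onto cloud A (matched points).
    Returns the rotation angle and the centroid translation; in 2-D the
    SVD of the cross-covariance matrix reduces to a single atan2."""
    n = len(A)
    cax = sum(p[0] for p in A) / n
    cay = sum(p[1] for p in A) / n
    cbx = sum(p[0] for p in B) / n
    cby = sum(p[1] for p in B) / n
    # Cross-covariance terms of the centered clouds
    sxx = sum((ax - cax) * (bx - cbx) + (ay - cay) * (by - cby)
              for (ax, ay), (bx, by) in zip(A, B))
    sxy = sum((ay - cay) * (bx - cbx) - (ax - cax) * (by - cby)
              for (ax, ay), (bx, by) in zip(A, B))
    alpha = math.atan2(sxy, sxx)  # angle rotating centered B onto centered A
    return alpha, (cax - cbx, cay - cby)

# Synthetic data: B is A rotated by 30 degrees and translated,
# so the recovered angle is -30 degrees (it rotates B back onto A)
A = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0), (3.0, 4.0)]
th = math.radians(30.0)
B = [(x * math.cos(th) - y * math.sin(th) + 5.0,
      x * math.sin(th) + y * math.cos(th) - 2.0) for x, y in A]
alpha, shift = fit_transform(A, B)
```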

# Least square approximation with a second degree polynomial

Let’s assume we want to approximate a point cloud with a second degree polynomial: $y(x)=ax^2+bx+c$.

The point cloud is given by $n$ points with coordinates ${x_i,y_i}$. The aim is to estimate $\hat{a}$, $\hat{b}$ and $\hat{c}$ such that $y(x)=\hat{a}x^2+\hat{b}x + \hat{c}$ fits the point cloud as much as possible. For each point $x_i$, we want to minimize the difference between $y_i$ and $y(x_i)$, i.e. we want to minimize $\sum\limits_{i=1}^n{(y_i-y(x_i))^2}$. The matrix form of the system is given by:

$\left[ \begin{matrix} {x_1}^2 & x_1 & 1 \\{x_2}^2 & x_2 & 1 \\... & ... & ... \\{x_n}^2 & x_n & 1 \\\end{matrix} \right].\left[ \begin{matrix} \hat{a} \\\hat{b} \\\hat{c} \end{matrix} \right]=\left[ \begin{matrix} y_1 \\y_2 \\... \\y_n \\\end{matrix} \right]$

Let’s define $A$, $B$ and $\hat{x}$:

$\begin{matrix} A=\left[ \begin{matrix} {x_1}^2 & x_1 & 1 \\{x_2}^2 & x_2 & 1 \\... & ... & ... \\{x_n}^2 & x_n & 1 \\\end{matrix} \right] & B=\left[ \begin{matrix} y_1 \\y_2 \\... \\y_n \\\end{matrix} \right]&\hat{x}=\left[ \begin{matrix} \hat{a} \\ \hat{b} \\ \hat{c} \end{matrix} \right] \end{matrix}$

The system is now given by:

$A.\hat{x}=B$

The optimal solution is given by:

$\hat{x}=A^{+}.B = (A^{T}.A)^{-1}.A^{T}.B$

Where $A^{+}$ is the (left) pseudoinverse of $A$, given by $A^{+}=(A^{T}.A)^{-1}.A^{T}$ since the system is overdetermined ($A$ has more rows than columns).
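The closed-form solution can be checked numerically by solving the normal equations $(A^T.A).\hat{x}=A^T.B$ directly; a pure-Python sketch (the helper name is mine):

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal
    equations (A^T A) p = A^T y, solved by Gaussian elimination."""
    # Build A^T A (3x3) and A^T y (3) from the design-matrix rows [x^2, x, 1]
    rows = [[x * x, x, 1.0] for x in xs]
    M = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        v[k], v[p] = v[p], v[k]
        for i in range(k + 1, 3):
            f = M[i][k] / M[k][k]
            for j in range(k, 3):
                M[i][j] -= f * M[k][j]
            v[i] -= f * v[k]
    # Back substitution on the upper-triangular system
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        sol[i] = (v[i] - sum(M[i][j] * sol[j] for j in range(i + 1, 3))) / M[i][i]
    return sol  # [a, b, c]

# Points on y = 2x^2 - 3x + 1 are recovered exactly
coeffs = fit_quadratic([0.0, 1.0, 2.0, 3.0, 4.0], [1.0, 0.0, 3.0, 10.0, 21.0])
```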

# Least square linear approximation

This post explains how to make a linear approximation with a simple example. Let’s assume we want to approximate a point cloud with a linear function: $y(x)=ax+b$.

The point cloud is given by $n$ points with coordinates ${x_i,y_i}$. The aim is to estimate $\hat{a}$ and $\hat{b}$ such that $y(x)=\hat{a}x+\hat{b}$ fits the point cloud as much as possible. For each point $x_i$, we want to minimize the difference between $y_i$ and $y(x_i)$, i.e. we want to minimize $\sum\limits_{i=1}^n{(y_i-y(x_i))^2}$. The matrix form of the system is given by:

$\left[ \begin{matrix} x_1 & 1 \\x_2 & 1 \\... & ... \\x_n & 1 \\\end{matrix} \right].\left[ \begin{matrix} \hat{a} \\\hat{b}\end{matrix} \right]=\left[ \begin{matrix} y_1 \\y_2 \\... \\y_n \\\end{matrix} \right]$

Let’s define $A$, $B$ and $\hat{x}$:

$\begin{matrix}A=\left[ \begin{matrix} x_1 & 1 \\ x_2 & 1 \\ ... & ... \\ x_n & 1 \\ \end{matrix} \right] &B=\left[ \begin{matrix} y_1 \\y_2 \\... \\y_n \\\end{matrix} \right] &\hat{x}=\left[ \begin{matrix} \hat{a} \\\hat{b}\end{matrix} \right] \end{matrix}$

The system is now given by:

$A.\hat{x}=B$

The optimal solution is given by:

$\hat{x}=A^{+}.B = (A^{T}.A)^{-1}.A^{T}.B$

Where $A^{+}$ is the (left) pseudoinverse of $A$, given by $A^{+}=(A^{T}.A)^{-1}.A^{T}$ since the system is overdetermined ($A$ has more rows than columns).

# Ball joint connecting link

This post deals with a ball-joint connecting link. The following model is considered:

## Kinematic

This part explains how to calculate $\Theta_2=f(\Theta_1)$.

The angle $\Theta_2$ will be calculated with the following relation:

$\Theta_2=\pi- \widehat{P_2MP_1} -\widehat{P_1MO}$

First, let’s calculate the coordinates of $P_1$:

$P_1=\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = \begin{pmatrix} l_1.cos(\Theta_1) \\ l_1.sin(\Theta_1) \end{pmatrix}$

Based on the previous result, let’s calculate the distance between $P_1$ and $M$:

$\| P_1M \| = \sqrt{ (x_1-X)^2 + y_1^2 }$

In the triangle $HP_1M$, right-angled at $H$, it becomes easy to calculate the angle $\widehat{P_1MO}$:

$\widehat{P_1MO}=atan2(\|HP_1\|,\|MH\|)=atan2(y_1,X-x_1)$

Now we can calculate the angle $\widehat{P_2MP_1}$ in the triangle $P_1MP_2$ thanks to the law of cosines (Al-Kashi’s theorem):

$\|P_1P_2\|^2 = \|MP_1\|^2 + \|MP_2\|^2 -2.\|MP_1\|.\|MP_2\|.cos(\widehat{P_2MP_1})$

Denoting by $L=\|P_1P_2\|^2$ the squared length of the connecting link:

$L = \|MP_1\|^2 + {l_2}^2 -2.\|MP_1\|.l_2.cos(\widehat{P_2MP_1})$

and finally:

$cos(\widehat{P_2MP_1}) = \frac{\|MP_1\|^2 + {l_2}^2 - L} {2.\|MP_1\|.l_2}$

In conclusion:

$\Theta_2=\pi \pm acos\left(\frac{ (x_1-X)^2 + y_1^2 + {l_2}^2 - L} {2.l_2.\sqrt{ (x_1-X)^2 + y_1^2 }} \right) - atan2(y_1,X-x_1)$

where:

$\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = \begin{pmatrix} l_1.cos(\Theta_1) \\ l_1.sin(\Theta_1) \end{pmatrix}$
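The closed-form result can be checked on a hand-built configuration; a Python sketch (the numerical values are illustrative, not from the post):

```python
import math

def theta2(theta1, l1, l2, X, L, branch=-1.0):
    """Theta_2 as a function of Theta_1 using the closed-form result above.
    L is the squared length ||P1P2||^2 of the connecting link; the
    +/- branch selects one of the two assembly configurations."""
    x1, y1 = l1 * math.cos(theta1), l1 * math.sin(theta1)
    d2 = (x1 - X) ** 2 + y1 ** 2                        # ||MP1||^2
    c = (d2 + l2 ** 2 - L) / (2.0 * l2 * math.sqrt(d2))
    return math.pi + branch * math.acos(c) - math.atan2(y1, X - x1)

# Hand-built configuration: with l1 = l2 = 1, X = 3 and L = 4,
# theta1 = 60 degrees corresponds to theta2 = 120 degrees
t2 = theta2(math.radians(60.0), l1=1.0, l2=1.0, X=3.0, L=4.0)
```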

## Torque

This second part explains how to estimate the torque transmitted. Let’s assume that the actuated rod is the left one (centered on $O$). The torque provided on this rod is $T_1$. The aim is to calculate the torque transmitted ($T_2$) on the second rod (centered on $M$). The purpose of this second part is to express $T_2=g(T_1)$.

First, let’s calculate the coordinates of $P_2$:

$P_2=\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} = \begin{pmatrix} X+l_2.cos(\Theta_2) \\ l_2.sin(\Theta_2) \end{pmatrix}$

Let’s calculate the amplitude of the force $\vec{F_1}$:

$\| \vec{F_1}\| = \frac{T_1}{l_1}$

Now, calculate the angle $\alpha$ between $\vec{F_1}$ and $\vec{F_b}$, the force transmitted along the bar $P_1P_2$:

$\alpha = ( \widehat{ \vec{F_1} ; \vec{F_b} } )= ( \widehat{ \vec{OM} ; \vec{F_1} } ) - ( \widehat{ \vec{OM} ; \vec{F_2} } ) = ( \widehat{ \vec{OM} ; \vec{F_1} } ) -( \widehat{ \vec{OM} ; \vec{P_1P_2} } )$

Let’s calculate the angles:

$( \widehat{ \vec{OM} ; \vec{F_1} } ) = \Theta_1 - \frac{\pi}{2}$

And :

$( \widehat{ \vec{OM} ; \vec{P_1P_2} } ) = atan2(y_2-y_1,x_2-x_1)$

Now $\alpha$ can be calculated:

$\alpha = \Theta_1 - \frac{\pi}{2} - atan2(y_2-y_1,x_2-x_1) = \Theta_1 - atan2(x_2-x_1,y_1-y_2)$

and $F_b$ can also now be calculated:

$\| \vec{F_b} \| = \| \vec{F_1} \|.cos(\alpha)$

As it was done for $\alpha$, let’s calculate the angle $\beta$ between $\vec{F_b}$ and $\vec{F_2}$:

$\beta = ( \widehat{ \vec{F_2} ; \vec{F_b} } )= \Theta_2 - \frac{\pi}{2} - atan2(y_2-y_1,x_2-x_1) = \Theta_2 - atan2(x_2-x_1,y_1-y_2)$

The force $F_2$ can now also be calculated:

$\| \vec{F_2} \| = \| \vec{F_b} \|.cos(\beta)$

The output torque is:

$T_2=\| \vec{F_2} \|.l_2$

In conclusion:

$T_2= \frac{l_2}{l_1}. T_1.cos(\beta).cos(\alpha)$

where:

$\alpha = \Theta_1 - atan2(x_2-x_1,y_1-y_2)$

and

$\beta = \Theta_2 - atan2(x_2-x_1,y_1-y_2)$

# Ball and beam : model

This post describes the model of the ball and beam system. The system consists of a long beam which can be tilted by an actuator, together with a ball rolling back and forth on top of the beam.

We will use the following notation and assumptions:

• $m$ is the mass of the ball

• $\Theta$ is the angle of the beam

• $x$ is the position of the ball on the beam

• $b$ is the friction coefficient of the ball

Based on the fundamental principle of dynamics and considering that the friction is proportional to the speed, we can write:

$m.a = m.\ddot{x} = \sum{\vec{F}} = m.g.sin(\Theta) - b.\dot{x}$

In conclusion, the differential equation of the system is:

$\ddot{x} = - \frac{b}{m}.\dot{x} + g.sin(\Theta)$
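The differential equation can be integrated numerically; a minimal forward-Euler sketch in Python, with parameter values that are illustrative only:

```python
import math

def simulate(theta, m=0.1, b=0.05, g=9.81, dt=1e-3, t_end=2.0):
    """Integrate x'' = -(b/m)*x' + g*sin(theta) with forward Euler.
    Returns the final position and velocity of the ball."""
    x, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = -(b / m) * v + g * math.sin(theta)  # the model's acceleration
        v += a * dt
        x += v * dt
    return x, v

# A level beam (theta = 0) leaves the ball at rest;
# a tilted beam makes it roll downhill
x0, v0 = simulate(0.0)
x1, v1 = simulate(0.1)
```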

# Inverted pendulum : model without friction

## Introduction

This post presents the model of an inverted pendulum. This device is composed of an actuated trolley and a pendulum which has its center of mass above its pivot point.

In this post, we will assume the following hypotheses:

• $m_1$ is the mass of the trolley.
• $m_2$ is the mass of the pendulum.
• $I_1$ is the moment of inertia of the trolley.
• $I_2$ is the moment of inertia of the pendulum.
• $L$ is the distance between the pivot and the center of mass of the pendulum.
• $\Theta$ is the angle of inclination of the pendulum.
• $x_1$ and $y_1$ are the coordinates of the center of mass $C_1$ of the trolley.
• $\Theta_1$ is the orientation of the trolley.
• $x_2$ and $y_2$ are the coordinates of the center of mass $C_2$ of the pendulum.
• $\Theta_2$ is the orientation of the pendulum.

The following notation will be used in this article to keep the derivatives compact:

$\frac{d\Theta}{dt} = \dot{\Theta}$

And for the second derivative:

$\frac{d^2\Theta}{dt^2} = \ddot{\Theta}$

## Constraints

As the trolley and the pendulum are linked by a pivot, the following equations can be written. These equations express the coordinates and orientation of each body of our system. For the trolley (body 1):

$\begin{array}{r c l} x_1 &=& x_1 \\ y_1 &=& 0 \\ \Theta_1 &=& 0 \\ \end{array}$

For the pendulum (body 2):

$\begin{array}{r c l} x_2 &=& x_1 - L.sin(\Theta)\\ y_2 &=& y_1 + L.cos(\Theta) &=& L.cos(\Theta)\\ \Theta_2 &=& \Theta \end{array}$
Let’s define the vector $\vec{q}$, the position of our system :

$\vec{q}=\begin{pmatrix} x_1 \\ \Theta_2 \end{pmatrix}$

Let’s define the constraints on position:

$\phi = \begin{pmatrix} y_1 \\ \Theta_1 \\ x_2 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ x_1 - L.sin(\Theta) \\ L.cos(\Theta) \end{pmatrix}$

Let’s calculate the constraints on velocity:

$\dot{\phi} = \begin{pmatrix} \dot{y_1} \\ \dot{\Theta_1} \\ \dot{x_2} \\ \dot{y_2} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \dot{x_1} - L.\dot{\Theta_2}.cos(\Theta_2) \\ -L.\dot{\Theta_2}.sin(\Theta_2) \end{pmatrix}$

Let’s calculate the constraints on acceleration:

$\ddot{\phi} = \begin{pmatrix} \ddot{y_1} \\ \ddot{\Theta_1} \\ \ddot{x_2} \\ \ddot{y_2} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \ddot{x_1} - L.\ddot{\Theta_2}.cos(\Theta_2) + L.\dot{\Theta_2}^2.sin(\Theta_2) \\ -L.\ddot{\Theta_2}.sin(\Theta_2) - L.\dot{\Theta_2}^2.cos(\Theta_2) \end{pmatrix}$

## Dynamics

The fundamental principle of dynamics can be applied to our system. For the trolley (body 1):

$\begin{array}{r c l} m_1.{a_1}^x = m_1.\ddot{x_1}&=& \| \vec{F_t} \| + \lambda_1 \\ m_1.{a_1}^y = m_1.\ddot{y_1}&=& 0 + \lambda_2 \\ I_1.\ddot{\Theta_1} &=& 0 + \lambda_3 \\ \end{array}$

For the pendulum (body 2):

$\begin{array}{r c l} m_2.{a_2}^x = m_2.\ddot{x_2} &=& 0 + \lambda_4 \\ m_2.{a_2}^y = m_2.\ddot{y_2} &=& \| \vec{F_g} \| + \lambda_5 = -g.m_2 + \lambda_5\\ I_2.\ddot{\Theta_2} &=& 0 + \lambda_6 \\ \end{array}$
Where $\lambda_1$, $\lambda_2$,$\lambda_3$,$\lambda_4$,$\lambda_5$ and $\lambda_6$ are the Lagrange’s coefficients. These coefficients represent the interaction between bodies. In the present system, this is the interaction between the pendulum and the trolley.

## Interactions

We will now solve the system in order to eliminate the interactions between bodies ($\lambda_i$). It can be shown that:

$M.\ddot{q} = F - \frac{d\phi}{dq}^\top .\Lambda$

Where :

$M=\begin{pmatrix} m_1 & 0 \\ 0 & I_2 \end{pmatrix}$

$\ddot{q}=\begin{pmatrix} \ddot{x_1} \\ \ddot{\Theta_2} \end{pmatrix}$

$F=\begin{pmatrix} \| \vec{F_t} \| \\ 0 \end{pmatrix}$

$\frac{d\phi}{dq} = \begin{pmatrix} \frac{dy_1}{dx_1} & \frac{dy_1}{d\Theta_2} \\ \frac{d\Theta_1}{dx_1} & \frac{d\Theta_1}{d\Theta_2} \\ \frac{dx_2}{dx_1} & \frac{dx_2}{d\Theta_2} \\ \frac{dy_2}{dx_1} & \frac{dy_2}{d\Theta_2} \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 1 & -L.cos(\Theta_2) \\ 0 & -L.sin(\Theta_2) \end{pmatrix}$

$\Lambda=\begin{pmatrix} \lambda_2 \\ \lambda_3 \\ \lambda_4 \\ \lambda_5 \end{pmatrix}$

Applied to our system, this gives:

$\begin{pmatrix} m_1 & 0 \\ 0 & I_2 \end{pmatrix} . \begin{pmatrix} \ddot{x_1} \\ \ddot{\Theta_2} \end{pmatrix} = \begin{pmatrix} \| \vec{F_t} \| \\ 0 \end{pmatrix} - \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & -L.cos(\Theta_2) & -L.sin(\Theta_2) \end{pmatrix} . \begin{pmatrix} \lambda_2 \\ \lambda_3 \\ \lambda_4 \\ \lambda_5 \end{pmatrix}$

and:

$\begin{array}{r c l} m_1.\ddot{x_1} &=& \| \vec{F_t} \| -\lambda_4 \\ I_2.\ddot{\Theta_2} &=& \lambda_4.L.cos(\Theta_2) + \lambda_5.L.sin(\Theta_2) \end{array}$

## Solving the system

Based on the previous equations, we can formulate $\lambda_4$ and $\lambda_5$:

$\begin{array}{r c l} \lambda_4 &=& m_2.\ddot{x_2} \\ \lambda_5 &=& m_2.\ddot{y_2} + g.m_2 \\ \end{array}$

The previous equations become:

$\begin{array}{r c l} m_1.\ddot{x_1} &=& \| \vec{F_t} \| -m_2.\ddot{x_2} \\ I_2.\ddot{\Theta_2} &=& m_2.\ddot{x_2}.L.cos(\Theta_2) + (m_2.\ddot{y_2} + g.m_2).L.sin(\Theta_2) \end{array}$

Thanks to the previous constraints on acceleration:

$\begin{array}{r c l} m_1.\ddot{x_1} &=& \| \vec{F_t} \| -m_2.(\ddot{x_1} - L.\ddot{\Theta_2}.cos(\Theta_2) + L.\dot{\Theta_2}^2.sin(\Theta_2) ) \\ I_2.\ddot{\Theta_2} &=& m_2.(\ddot{x_1} - L.\ddot{\Theta_2}.cos(\Theta_2) + L.\dot{\Theta_2}^2.sin(\Theta_2) ).L.cos(\Theta_2) + (m_2.(-L.\ddot{\Theta_2}.sin(\Theta_2) - L.\dot{\Theta_2}^2.cos(\Theta_2)) + g.m_2).L.sin(\Theta_2) \end{array}$

Rewriting the previous equations gives us:

$\begin{array}{r c l} (m_1+m_2).\ddot{x_1} &-& m_2.L.cos(\Theta_2).\ddot{\Theta_2} &=& \| \vec{F_t} \| - m_2.L.\dot{\Theta_2}^2.sin(\Theta_2) \\ -m_2.L.cos(\Theta_2).\ddot{x_1} &+& (I_2+ m_2.L^2).\ddot{\Theta_2} &=& m_2.g.L.sin(\Theta_2) \end{array}$

And:

$\begin{pmatrix} m_1+m_2 & - m_2.L.cos(\Theta_2) \\ -m_2.L.cos(\Theta_2) & (I_2+ m_2.L^2) \end{pmatrix} . \ddot{q} = \begin{pmatrix} \| \vec{F_t} \| - m_2.L.\dot{\Theta_2}^2.sin(\Theta_2) \\ m_2.g.L.sin(\Theta_2) \end{pmatrix}$

The general equation of the system is:

$\ddot{q}=A^{-1}.B$

Where:

$A=\begin{pmatrix} m_1+m_2 & - m_2.L.cos(\Theta_2) \\ -m_2.L.cos(\Theta_2) & (I_2+ m_2.L^2) \end{pmatrix}$

$B= \begin{pmatrix} \| \vec{F_t} \| - m_2.L.\dot{\Theta_2}^2.sin(\Theta_2) \\ m_2.g.L.sin(\Theta_2) \end{pmatrix}$

## State space representation

Based on the previous result, it is now possible to write the state space model of the system. Let’s define $X$, the state of the system:

$X=\begin{pmatrix} x_1 \\ \Theta_2 \\ \dot{x_1} \\ \dot{\Theta_2} \end{pmatrix}$

The state space representation is:

$\dot{X}=\begin{pmatrix} \dot{x_1} \\ \dot{\Theta_2} \\ \left[A^{-1}.B\right] \end{pmatrix}$
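The model can be simulated directly by inverting the 2×2 matrix $A$ at each step; a forward-Euler Python sketch with illustrative parameter values. A small initial tilt grows, confirming that the upright equilibrium is unstable:

```python
import math

def accel(th, thd, F, m1=1.0, m2=0.2, L=0.5, I2=0.0042, g=9.81):
    """q'' = A^{-1}.B for the cart-pendulum (parameter values illustrative)."""
    c, s = math.cos(th), math.sin(th)
    a12 = -m2 * L * c                                     # off-diagonal of A
    det = (m1 + m2) * (I2 + m2 * L ** 2) - a12 * a12      # determinant of A
    b1 = F - m2 * L * thd ** 2 * s
    b2 = m2 * g * L * s
    x1dd = ((I2 + m2 * L ** 2) * b1 - a12 * b2) / det     # cart acceleration
    thdd = ((m1 + m2) * b2 - a12 * b1) / det              # angular acceleration
    return x1dd, thdd

# Forward-Euler simulation over 1 s with no input force
dt, th, thd, x1d = 1e-3, 0.01, 0.0, 0.0
for _ in range(1000):
    x1dd, thdd = accel(th, thd, F=0.0)
    x1d += x1dd * dt
    thd += thdd * dt
    th += thd * dt
```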

## Acknowledgements

I want to thank Sébastien Lagrange from the University of Angers for his help and explanations.

# Simple pendulum

## Introduction

A simple pendulum is a weight suspended by a non-deformable wire.

In this post, we will assume the following hypotheses:

• The weight can swing freely.

• $L$ is the wire’s length.

• $m$ is the mass of the pendulum’s body.

• The mass of the wire is neglected.

• Friction is neglected.

## Equation of the system

To find the equation of the system, we will apply the fundamental principle of dynamics:

$m.\vec{a} = \sum{\vec{F}}$

Forces acting on the system are the gravity force $\vec{F_g}$ and the wire tension force $\vec{T}$. The body’s trajectory describes an arc of a circle, and the body’s acceleration is tangent to this trajectory. The wire tension force is perpendicular to the body’s trajectory, so its projection along the motion axis is null. Only one force has to be considered in the following: the tangential component of gravity, which acts as a restoring force (it opposes the angular displacement $\Theta$, hence the minus sign):

$m.a = -m.g.sin(\Theta)$

The relation between angular and linear velocity is given by:

$v=L\frac{d \Theta }{dt}$

By differentiating the previous equation, we get:

$a=\frac{dv}{dt}=L. \frac{d^2 \Theta }{dt^2}$

The equation calculated from the fundamental principle of dynamics can be updated:

$m.L. \frac{d^2 \Theta }{dt^2} = -m.g.sin(\Theta)$

Dividing by $m$:

$L. \frac{d^2 \Theta }{dt^2} = -g.sin(\Theta)$

The differential equation of the system is:

$\frac{d^2 \Theta }{dt^2} = -\frac{g}{L}.sin(\Theta)$

## Solving the equation

The previous differential equation is non-linear. A commonly used approximation is to replace $sin(\Theta)$ by $\Theta$ for small values of $\Theta$. The equation becomes linear:

$\frac{d^2 \Theta }{dt^2} = -\frac{g}{L}.\Theta$

The general solution to the previous differential equation is:

$\Theta = A.cos(\omega.t + \phi)$

With
$\omega^2=\frac{g}{L}$

Given the initial conditions $\Theta=\Theta_0$ (initial angle of the pendulum), and assuming that the initial speed is null:

$A.cos(\phi)=\Theta_0$
$-A.\omega.sin(\phi)=0$


The previous system can be solved:

$A=\Theta_0$
$\phi=0$


The general solution of the system is:

$\Theta(t)=\Theta_0.cos(\omega.t)$

Where $\omega = \sqrt { \frac {g}{L} }$. The period of the motion is given by $T=2.\pi.\sqrt { \frac {g}{L}}$.
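The closed-form solution and period are easy to evaluate; a minimal Python sketch with illustrative values (a 1 m wire released from 5 degrees):

```python
import math

def small_angle_solution(theta0, L, g=9.81):
    """Closed-form small-angle solution Theta(t) = Theta_0 * cos(omega * t),
    with omega = sqrt(g / L)."""
    omega = math.sqrt(g / L)
    return omega, (lambda t: theta0 * math.cos(omega * t))

omega, theta = small_angle_solution(math.radians(5.0), L=1.0)
T = 2.0 * math.pi / omega  # period of the motion, about 2 s for L = 1 m
```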

## Acknowledgements

I want to thank Jean-Pierre Pecqueur from the University of Angers for his help.