# Part-3 : Controlling a Mechanical System

This is the third post in my series on control systems. In part-1 we explored what differential equations can do for you and how to use them, in particular the Euler-Lagrange equations, to describe a mechanical system. In part-2, we derived these differential equations for our ‘pendulum on a cart’ system. We also ran computer simulations, whose code is available on [github]. In this part, we are going to push ahead and try to control the cart (with its motors and wheels) so that the pendulum always stays upright.

## Differential Equation Describing the Pendulum Cart System

We recall from part-2 the differential equations describing the pendulum cart system. Also recall, $\theta$ and $x$ were our state variables.

If the damping terms (proportional to the velocities, with coefficients $d_1$ and $d_2$) are included, the above equations become as shown below. If you feel lost with these equations, I highly recommend you follow part-1 and part-2; it is really not as complicated as it looks.

#### WxMaxima

The equations are already getting a bit complicated to manipulate by hand. If you love your brain and its health, I do not recommend manipulating these equations manually. Rather, there are ‘computer algebra systems’ out there which can manipulate equations for you, so that you can involve yourself in real thinking and not lose your focus. I use the Maxima system (wxMaxima for the GUI). It is an open-source (and free) computer algebra system. I highly recommend you do a short tutorial on Maxima (it will take you an hour at most). Here are the above two differential equations in Maxima’s format (these equations can also be found in the github-repo):

```maxima
/* note: 'gamma' is reserved in Maxima, hence the name 'gam' */
alpha : -sin(theta) / ( l * (M + m - m*sin(theta)^2) );
beta  : 1 / ( M + m - m*sin(theta)^2 );
gam   : -m*l*('diff(theta, t))^2 * cos(theta) + m*g*sin(theta)*cos(theta);

/* Without damping terms */
'diff(x, t, 2) = beta*u + beta*gam;
'diff(theta, t, 2) = -g/l*cos(theta) - 1/l*sin(theta) * ( alpha*u + alpha*gam - g/l*cos(theta) );

/* With damping terms */
'diff(x, t, 2) = (beta*u + beta*gam) - d1*'diff(x, t);
'diff(theta, t, 2) = -g/l*cos(theta) - 1/l*sin(theta) * ( alpha*u + alpha*gam - g/l*cos(theta) ) - d2*'diff(theta, t);
```


#### Manipulating these Equations

Although these equations describe our pendulum+cart system fully, they are of no practical value as-is, because they are non-linear 2nd order differential equations in the state variables and we have no general way to solve them. The attentive reader might recall that in the past blog post (part-2), we described a mathematical trick to convert a 2nd order differential equation into a system of first order equations (if you have forgotten this trick, just go back to the previous post, it is simple!). But it is still a non-linear differential equation, whose full understanding remains a mystery to us sapiens; we have only managed to fully understand a few classes of non-linear equations. This wiki page might help you explore more on this point. In our case, we are really looking for 1st order *linear* differential equations, which we know how to deal with.

One possible way to reduce (approximate) this 1st order non-linear differential equation to a 1st order linear one (a linear ODE, or ordinary differential equation) is the use of Taylor series expansion. This step is referred to as ‘linearization’ in the literature. All this just means we are trying to get a linear approximation of a non-linear equation.

#### Taylor Series Expansion

You might have studied Taylor series in high school and roughly know what it is. Let me put it in a different perspective: the Taylor series lets you write (almost) any function as an infinite sum of polynomial terms. For example, here are the Taylor series expansions of some common functions (of course, you do not need to remember these, you can always google them; you just need to remember that such functions can be written as an infinite sum of polynomial terms):

So, what this means is that if you could take the infinite sum, the right-hand-side expression would exactly equal the left-hand side. But if you take only, say, 10 terms from the right-hand side, it will be an approximation of the left-hand side. This 10-term approximation is referred to as the 10th order approximation. You might ask: isn’t 10 far too few, since this is an infinite sum? In general yes, but when x is near zero, often even 2 terms give a good enough approximation.

Let me prove it to you with an example, see the plot below (yes, if you put equations in the google search bar, it plots them). Hopefully, this convinces you that Taylor series expansion can help us get the linear approximation we are after. Note that the approximations to the exponential are only valid near zero on the x-axis.
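If you prefer checking numerically rather than squinting at a plot, here is a minimal sketch comparing $e^x$ against its 2-term (linear) approximation $1 + x$ — good near zero, useless far from it:

```python
import math

# Compare exp(x) with its 2-term (linear) Taylor approximation 1 + x.
# Near x = 0 the approximation is good; far from 0 it falls apart.
for x in [0.01, 0.1, 1.0, 3.0]:
    approx = 1.0 + x
    exact = math.exp(x)
    rel_err = (exact - approx) / exact
    print(f"x={x:5.2f}  exact={exact:8.4f}  approx={approx:5.2f}  rel_err={rel_err:.4f}")
```

At x = 0.01 the relative error is negligible, while at x = 3 the approximation is off by more than half.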

In general, if you can compute derivatives of your function, you can write a Taylor series approximation to it which is valid in a small interval around $x_0$. So for our complicated equations, in theory, we can evaluate the derivatives, but in practice it is too tedious to do by hand. Thankfully, wxMaxima comes to the rescue.

I quickly did these two examples. In the a) first example, I got a Taylor series expansion (only 5 terms) for $\sin(x)$ which is valid around 10. In the b) second example, I got a Taylor series expansion for the rather complicated 2-variable function $\sin(x \cos(y))$ around $\frac{\pi}{2}$ (90 degrees). In general, Maxima can get you the Taylor series approximation of an arbitrarily complicated N-dimensional function in seconds. No more spending painful hours deriving it. I highly recommend you take wxMaxima for a quick spin and verify its result by doing a Taylor series expansion by hand.
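If you would rather stay inside Python, sympy can do the same two expansions (this is my own sketch, not part of the original series; sympy is an assumed extra dependency). The 2-variable case is assembled by hand from the partial derivatives, since that is exactly what a first-order multivariate Taylor expansion is:

```python
import sympy as sp

x, y = sp.symbols('x y')

# a) 5-term Taylor expansion of sin(x) about x = 10
print(sp.series(sp.sin(x), x, 10, 5))

# b) first-order (linear) expansion of sin(x*cos(y)) about (pi/2, pi/2),
#    built by hand from the value and the two partial derivatives
f = sp.sin(x * sp.cos(y))
x0, y0 = sp.pi/2, sp.pi/2
linear = (f.subs({x: x0, y: y0})
          + sp.diff(f, x).subs({x: x0, y: y0}) * (x - x0)
          + sp.diff(f, y).subs({x: x0, y: y0}) * (y - y0))
print(sp.simplify(linear))  # simplifies to -pi/2 * (y - pi/2)
```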

## Back to Square One (Linearization)

So, let’s get back to the original problem. We are after a linear approximation to our complicated non-linear equations. We can accomplish this by writing out just the first-order terms of the Taylor series expansion of our equations around the point $\theta=\frac{\pi}{2}$, which is the pendulum-up position. So, it is important to realize that this approximation is only valid when the pendulum is up, or at most a few degrees away from it. I did precisely that, see my wxMaxima screenshot below:

So, the conclusion was that our original non-linear equation can be approximated by the following linear equations. Notice that the following equations are linear in our 4-dimensional state vector:

A thing to note about such linearization: it faithfully captures the local behaviour of the non-linear system only when the matrix of the linearization (aka. matrix A) has no eigenvalue with zero real part. This is the conclusion of the “Hartman-Grobman theorem“.

## Non-linear 2nd order differential equation → Linear ODE

Using the linear approximation (ie. the above 2 equations) and the trick from part-2 of this series, we can now write these equations in the form: $\dot{ \mathbf{Y} } = \mathbf{A} \mathbf{Y} + \mathbf{B} u$, where $\mathbf{Y} = [ x, \dot{x}, \theta - \frac{\pi}{2}, \dot{\theta} ]^T$. The $\frac{\pi}{2}$ appears because we linearized the non-linear equation at this point. Such a system is often referred to in the literature as a ‘linear dynamic system’. $A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & -d_1 & -\frac{mg}{M} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & \frac{d_1}{L} & \frac{(m+M)g}{ML} & -d_2 \end{pmatrix}$ $B = \begin{pmatrix} 0 \\ \frac{1}{M} \\ 0 \\ \frac{-1}{ML} \end{pmatrix}$

```python
# Numpy: the linearized system matrices
_q = (m + M) * g / (M * L)
self.A = np.array([
    [0.,  1.,    0.,      0.],
    [0., -d1,   -g*m/M,   0.],
    [0.,  0.,    0.,      1.],
    [0.,  d1/L,  _q,     -d2]])
self.B = np.array([0., 1.0/M, 0., -1./(M*L)])
```

So, effectively we have converted (approximated) a 2nd order non-linear differential equation into a first order linear differential equation (ODE). This step is very prone to silly mistakes, so double check, triple check (I got it right on the 3rd attempt). One simple way to know for sure is to evaluate $\dot{ \mathbf{Y} }$ the non-linear way, and also evaluate it with the linear approximation. These two numerical values should match near the linearization point. I have tried to make sure the equations in this post are correct, but if you come across any mistakes, please comment and let me know.
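This check is easy to script. Since the full cart equations are long, here is a minimal sketch of the idea on a simpler system of my own choosing (a plain rigid pendulum, $\ddot{\theta} = -(g/l)\sin\theta - d\,\dot{\theta}$, linearized about the upright point $\theta = \pi$); the exact same recipe applies to our 4-dimensional cart system:

```python
import numpy as np

g, l, d = 9.8, 1.0, 0.1  # made-up parameter values for illustration

def f_nonlinear(y):
    """y = [theta, theta_dot]; returns [theta_dot, theta_ddot]."""
    th, thd = y
    return np.array([thd, -(g/l)*np.sin(th) - d*thd])

# Linearization about the upright point theta = pi:
# sin(theta) ~ -(theta - pi), so theta_ddot ~ (g/l)*(theta - pi) - d*theta_dot
A = np.array([[0.,  1.],
              [g/l, -d]])

def f_linear(y):
    dy = np.array([y[0] - np.pi, y[1]])  # deviation from the linearization point
    return A @ dy

# The two evaluations should agree near theta = pi and diverge far from it.
near = np.array([np.pi + 0.01, 0.02])
far  = np.array([np.pi + 1.5, 0.02])
print(np.abs(f_nonlinear(near) - f_linear(near)).max())  # tiny
print(np.abs(f_nonlinear(far) - f_linear(far)).max())    # large
```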

Pheww…!

## Essential Mathematics from Control Systems Theory

Let’s take the time to review and summarise some results from control systems theory. Some of the essentials of the theory were laid down by the great Maxwell; I highly recommend giving the Wikipedia page a quick read. All of my theory here comes from the youtube series by Steve Brunton (Control Bootcamp). If possible, go through the book on control systems by Ogata. However, if you really want to take up a serious study of control systems, my recommendation is to go through this blog series. To fully comprehend this blog you will need some understanding of eigenvalues and eigenvectors and their properties. If you think you do not have it, I recommend the youtube course on linear algebra by Gilbert Strang. You may then complete the Control Bootcamp by Steve Brunton and finally start reading the Ogata book. Don’t forget to block your calendar for 1 month for this endeavor.

Before I start enumerating some mathematical properties, let me tell you the aim of all this. The core point is: we want to know under what conditions on A and B (our linearized matrices from the previous section) we can control the system, when the system is stable (and when unstable), and, if unstable, how to make it stable. Once we know all this, we can forcefully turn an unstable system (like a pendulum on a cart with the pendulum up) into a stable system by computing “u”. So we are after an algorithm which can give us ‘u(t)’.

- Stability of a linear dynamic system
- Controllability of a linear dynamic system
- Reachability of a linear dynamic system
- How to get ‘u’
- The pole-placement method by Tits and Yang

#### Stability of Linear Dynamic System

Let’s start with a simple differential equation you might have seen in high school: $\dot{x} = a x$, where both ‘a’ and ‘x’ are scalars. Also remember that by $\dot{x}$ I mean $\frac{dx}{dt}$. We know that the function of time $x(t) = x(0) e^{at}$ is the solution to this differential equation. It is easy to conclude that depending on whether ‘a’ is positive or negative, the solution is a rising exponential (in red) or a falling exponential (in blue). See the figure below for the plots. Since we only care about positive time ‘t’, concentrate on the right half of the plot. The unbounded rising exponential (in red) is the unstable case and the falling exponential, which decays towards zero, is the stable case.
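A quick numerical sanity check of this (a forward-Euler integration of $\dot{x} = ax$, compared against the exact solution $x(0)e^{at}$):

```python
import math

def simulate(a, x0=1.0, T=5.0, dt=1e-4):
    """Forward-Euler integration of dx/dt = a*x up to time T."""
    x, t = x0, 0.0
    while t < T:
        x += dt * a * x
        t += dt
    return x

for a in (-1.0, 1.0):
    print(f"a={a:+.0f}: x(5) numeric={simulate(a):.4f}, exact={math.exp(a*5.0):.4f}")
# a = -1 decays towards 0 (stable); a = +1 blows up exponentially (unstable)
```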

This exact concept generalizes to vectors and matrices. So let’s consider the case $\dot{\mathbf{x}} = \mathbf{A} \mathbf{x}$, where x is now a vector and A a matrix; this is just like our simple differential equation in some sense. The solution to this equation is $\mathbf{x}(t) = e^{\mathbf{A}t} \mathbf{x}(0)$. There is a definition of the matrix exponential in linear algebra; check out the wiki page. Similar to the scalar case, there is a notion of stable and unstable here too. The role of the sign of ‘a’ is now played by the eigenvalues of $\mathbf{A}$: the system is stable when all eigenvalues have negative real parts. Note that the eigenvalues can be complex numbers; when A is not a symmetric matrix, they will in general be complex, and it is then the real part that needs to be negative for stability. It is not very hard to conclude this yourself using Euler’s representation of complex numbers. So we can determine the stability of the system just by looking at the eigenvalues. Please watch the first 4 of Steve Brunton’s lectures for a bit more detail on this.
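Here is a small numerical illustration with a 2x2 matrix I made up (its eigenvalues are -1 and -2, both negative, so every trajectory should decay to zero):

```python
import numpy as np

A = np.array([[0., 1.],
              [-2., -3.]])   # characteristic polynomial (s+1)(s+2): eigenvalues -1, -2

print(np.linalg.eigvals(A))  # both real parts negative -> stable

# Integrate x_dot = A x forward in time; the state should shrink to ~0.
x = np.array([1.0, 1.0])
dt = 1e-3
for _ in range(10000):       # 10 seconds of simulated time
    x = x + dt * (A @ x)
print(np.linalg.norm(x))     # close to 0
```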

I quickly computed the eigenvalues of A for our pendulum cart system (in the pendulum-up position). The eigenvalues were: [ 0. , -3.1664429 , -0.82736931, 2.49381221]. Notice the positive eigenvalue. Also, from our daily experience, it is easy to see that this system is unstable. I also computed the eigenvalues for the pendulum-down position (this I did by linearizing the equation at $\theta=-\frac{\pi}{2}$). In this case the eigenvalues were: [ 0.00+0.j , -1.00+0.j , -0.25+2.78881695j, -0.25-2.78881695j]. No eigenvalue in the pendulum-down position has a positive real part, implying pendulum-down is a stable configuration. This is obvious from our daily experience. While our human intuition works well for such simple systems, it is hard to rely on intuition and be correct for moderately complex systems. This mathematical criterion can tell us precisely whether the system is stable or unstable. That is the beauty of it. The code with which I got these eigenvalues is here (github).

#### Controllability of Linear Dynamic Systems

Now that we have concluded that the pendulum-up system is unstable, we wish to know if we can control it. By control I mean: can some ‘u’ as a function of ‘t’ make the entire system stable (ie. all closed-loop eigenvalues with negative real parts)? There are tests which can tell us if a system is controllable. Read up the wiki page on controllability, and watch videos 5 to 9 in Steve Brunton’s Control Bootcamp youtube series for details.

1. Rank Test
2. Gramian Test
3. Popov-Belevitch-Hautus (PBH) Test

Let me try and explain the rank test. For a linear dynamic system (when someone says this, they mean $\dot{\mathbf{x}} = \mathbf{A} \mathbf{x} + \mathbf{B} \mathbf{u}$), the dimensions are: i) x is nx1, ii) A is nxn, iii) B is nxq, iv) u is qx1. In our pendulum cart system, n is 4 and q is 1 (a single force on the cart). The rank test says that the system is controllable if and only if the controllability matrix $\mathcal{C}$ has rank ‘n’ (same as the dimension of x), where $\mathcal{C} = \left[ \mathbf{B} \ \mathbf{A}\mathbf{B} \ \mathbf{A}^2\mathbf{B} \ \cdots \ \mathbf{A}^{n-1}\mathbf{B} \right]$

Let’s give this a quick try for our pendulum-up cart system. We know A and B here. I use python’s control systems package; there are also functions in matlab to compute this.

```python
import numpy as np
import control  # pip install control

# A, B as defined above

print(np.linalg.matrix_rank(control.ctrb(A, B)))
# This prints 4 in my case. So this system is controllable.
```


There are deep mathematical reasons why this matrix governs the controllability of a system. For now, I have just assumed this result; in a future post, I might get into the ‘why’. I am not going to describe the other two tests, viz. the Gramian and PBH tests, here. They are simply more sophisticated ways to know which variables are controllable and to what extent. Notice that the rank test gives only a binary result: true (controllable) or false (not controllable).
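If you do not want a dependency on the `control` package, the controllability matrix is easy to build directly from its definition. A sketch using the A and B we derived; the parameter values m=1, M=5, L=2, g=9.8, d1=1, d2=0.5 are made up for illustration, not necessarily the ones used in the repo:

```python
import numpy as np

m, M, L, g, d1, d2 = 1.0, 5.0, 2.0, 9.8, 1.0, 0.5  # assumed values

_q = (m + M) * g / (M * L)
A = np.array([[0.,  1.,    0.,      0.],
              [0., -d1,   -g*m/M,   0.],
              [0.,  0.,    0.,      1.],
              [0.,  d1/L,  _q,     -d2]])
B = np.array([[0.], [1./M], [0.], [-1./(M*L)]])

# Controllability matrix C = [B, AB, A^2 B, A^3 B]
n = A.shape[0]
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

print(np.linalg.matrix_rank(C))  # 4 -> controllable
```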

In this section, we have concluded that by a simple rank test we can know whether a linear dynamic system is controllable. If the system is not controllable, you would just be wasting your time trying to stabilize it. If the system is controllable, a ‘u’ can be computed which will make the system stable.

#### Reachability

By the Cayley-Hamilton theorem and some analysis (see Steve Brunton’s 11th video), it is possible to conclude that if a system is controllable, then we can reach any desired physical state (ie. any x) with some control input ‘u’. Note, however, that this doesn’t give us an algorithm to compute ‘u’; it merely asserts the existence of one. A minor additional conclusion is that there are infinitely many ‘u’ which will get us to the desired state.

#### Algorithms to get ‘u’

Before we start, let me bring a subtle point to your notice. In the control systems community, eigenvalues are referred to as poles (or pole positions). I find this terminology uncomfortable and prefer to call them eigenvalues. If you are like me and more oriented towards linear algebra, you might also prefer eigenvalues over poles. (The linear algebra buff will also know how eigenvalues and singular values are related through the SVD, or Singular Value Decomposition, though the two are not the same thing in general.) But anyways, what’s in a name….

We are going to look at two common methods:

A) Tits and Yang: Tits AL, Yang Y. Globally convergent algorithms for robust pole assignment by state feedback. IEEE Transactions on Automatic Control. 1996 Oct;41(10):1432-52.

```python
import control
import numpy as np

# Given desired eigenvalues, find a stabilizing 'K'.
# place() throws an error if you ask for repeated eigenvalues.
K = control.place(A, B, [-1, -2, -4, -5])
# K is a 1x4 matrix for our single-input system.

# If you now check the eigenvalues of the matrix A - B K, you should get [-1, -2, -4, -5].

u = -np.matmul(K, x)  # x is the current state. This has to be re-run every few ms.
```



Let’s try to build a proportional controller, ie. $u = -\mathbf{K} \mathbf{x}$. If you substitute this into the linear dynamic system, you get $\dot{x} = A x + B(-Kx)$, ie. $\dot{x} = (A - BK)x$.

We are interested in a K which results in the matrix $A-BK$ having eigenvalues with all-negative real parts (for stability). This is precisely the problem solved by the method described in the 1996 paper by Tits and Yang. The code to achieve the following animation is here:

https://github.com/mpkuse/inverted_pendulum/blob/master/controlled_cart_and_pendulum.py
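If you want to see the feedback loop work without the full animation machinery, here is a minimal closed-loop sketch on the simplest controllable system I can think of (a double integrator, ie. a cart with no pendulum, $\ddot{x} = u$), with the gain placed by hand rather than by `control.place`:

```python
import numpy as np

# Double integrator: x_ddot = u, state [position, velocity]
A = np.array([[0., 1.],
              [0., 0.]])
B = np.array([[0.],
              [1.]])

# u = -K x with K chosen so that A - BK has eigenvalues -1 and -2:
# the characteristic polynomial of A - BK is s^2 + k2*s + k1, so k1 = 2, k2 = 3.
K = np.array([[2., 3.]])
print(np.linalg.eigvals(A - B @ K))  # [-1, -2]

# Closed-loop simulation: the feedback should drive the state to 0.
x = np.array([[1.0], [0.0]])  # start 1 m from the origin, at rest
dt = 1e-3
for _ in range(10000):        # 10 seconds of simulated time
    u = -K @ x
    x = x + dt * (A @ x + B @ u)
print(np.linalg.norm(x))      # close to 0
```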

B) LQR (Linear Quadratic Regulator): With the previous method, we set the desired eigenvalues rather arbitrarily. Oftentimes the eigenvalues you set might not work well (even if they are negative), and some tuning is usually needed to make things work. However, there is a mathematically sound way of computing desired eigenvalues that are optimal in a precise sense.

You need to give as input the cost matrices Q and R. Q controls how fast you want the state to converge and R controls the control expense. For example, if your motors are not very strong, you may want to penalize aggressive motions; this can be achieved by setting R higher. This is just a bird’s-eye view of LQR; there is a lot more to understand in this direction, and in a future post I will definitely explore it. There is an LQR function in python’s control toolbox for this. It gives out the optimal gain K, and from it the closed-loop eigenvalues you should use for stabilization. The effect of using LQR is a better-stabilized system than one obtained by manually choosing eigenvalues.

For my case, LQR gave the desired eigenvalues as: [-3.175091+0.j, -2.4888322+0.j, -0.81949747+0.j, -0.20177409+0.j].
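For completeness, here is a sketch of how the LQR gain can be computed with scipy’s Riccati solver if you do not want the `control` package. As before, the parameter values m=1, M=5, L=2, g=9.8, d1=1, d2=0.5 are my assumptions for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

m, M, L, g, d1, d2 = 1.0, 5.0, 2.0, 9.8, 1.0, 0.5  # assumed values

_q = (m + M) * g / (M * L)
A = np.array([[0.,  1.,    0.,      0.],
              [0., -d1,   -g*m/M,   0.],
              [0.,  0.,    0.,      1.],
              [0.,  d1/L,  _q,     -d2]])
B = np.array([[0.], [1./M], [0.], [-1./(M*L)]])

Q = np.eye(4)          # state cost: how fast to converge
R = np.array([[1.]])   # control cost: penalize aggressive motor effort

# Solve the continuous-time algebraic Riccati equation; then K = R^-1 B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Closed-loop eigenvalues: all real parts should be negative.
print(np.linalg.eigvals(A - B @ K))
```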

## Conclusion

In this post, we learned how to use a computer algebra system (WxMaxima) for algebraic manipulation. Particularly we linearized (using Taylor series expansion) the non-linear equations that describe the motion of pendulum and cart system. We had derived these differential equations in a previous post.

We went through some essentials of the control systems theory. The idea was to get to an algorithm which can stabilize the pendulum in an inverted position. Such an inverted pendulum system is an intrinsically unstable system, but active feedback control can achieve stability.

## Exercises to Strengthen the Concepts

Interesting projects to strengthen your understanding of this material could be to derive the non-linear and linear dynamics of: a) a pendulum on a cart which is on an inclined plane of angle $\phi$; b) a pendulum cart system on a convex surface, where the angle $\phi$ changes as a function of the horizontal position; c) for the more involved folks out there, a double pendulum system, in which one pendulum supports another; d) a pendulum supported by a spring rather than a rigid massless rod; e) a solid mass like a cylinder or a bar, which could be quite a challenge, as you will also need to take the moment of inertia into account. After deriving the equations for any of these, be sure to check their correctness with computer simulations, similar to this blog series. That will be proof that your equations are correct.

For the electronically minded, you can build a physical system out of this. There are countless people on youtube who have built an inverted pendulum system; try to build one yourself.