Iterative Learning Control – Part 2

In the previous post I introduced ILC controllers and compared them to conventional feedback controllers. In this post I work through a simple example that employs ILC.

Consider the following simple linear plant:

y(t+1)= -0.7*y(t) - 0.12*y(t-1) + u(t)
y(0)=2
y(1)=2

In this system, u(t) is the input at time t and y(t) is the output at time t. The last two equations give the initial conditions of the system.

We want to force the system to track the square wave:

y_d(t)=\begin{cases} 2 & 0 \le t \le 10 \\ 4 & 10 < t \le 20 \\ 2 & 20 < t \le 30 \end{cases}

The ILC algorithm works as follows:

  1. Choose the desired output y_d(t) and an initial input u_0(t) (first iteration, k=0).
  2. Run the system with the input u_k(t) and record the output y_k(t).
  3. Compute the error e_k(t) = y_d(t) - y_k(t).
  4. Compute the next input from the previous one: u_{k+1}(t) = u_k(t) + K_p*e_k(t+1).
  5. Increment k and go back to step 2.

Implementing this simple algorithm with K_p = 0.5 gives the following result.
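The steps above can be sketched as follows. This is a minimal Python sketch of the algorithm (not the code that produced the figure); the number of learning iterations, 200, is my choice and is more than enough for convergence here.

```python
import numpy as np

T = 31  # time steps t = 0 .. 30

def simulate(u):
    """Run the plant y(t+1) = -0.7*y(t) - 0.12*y(t-1) + u(t), y(0) = y(1) = 2."""
    y = np.zeros(T)
    y[0] = y[1] = 2.0
    for t in range(1, T - 1):
        y[t + 1] = -0.7 * y[t] - 0.12 * y[t - 1] + u[t]
    return y

# Desired square wave: 2 for t <= 10, 4 for 10 < t <= 20, 2 afterwards
y_d = np.full(T, 2.0)
y_d[11:21] = 4.0

K_p = 0.5
u = np.zeros(T)        # initial input, iteration k = 0
for k in range(200):   # repeat steps 2-5
    e = y_d - simulate(u)      # step 3: error of iteration k
    u[:-1] += K_p * e[1:]      # step 4: u_{k+1}(t) = u_k(t) + K_p*e_k(t+1)

# Remaining tracking error after learning (t >= 2, where the input can act)
print(np.max(np.abs(y_d[2:] - simulate(u)[2:])))
```

After enough iterations the tracking error over the controllable part of the horizon shrinks toward zero, which is the behavior shown in the figure.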

[Figure: the plant output converging to the square-wave reference over successive ILC iterations]


Iterative Learning Control – Part 1

Iterative Learning Control (ILC) is used to improve the transient response of systems that perform the same task repetitively. ILC improves the response by adjusting the input to the plant based on the error observed in the previous iteration.

A conventional controller C can be written as follows:

u_{k+1}(t) = C[e_{k+1}(t-1)]

where k is the iteration number, t denotes time, u is the control input produced by the controller, and e is the error between the desired and the actual output. Note that the error observed at time t-1 is used to compute the controller output applied at time t. The controller can be as simple as a constant gain:

u_{k+1}(t) = K_p*e_{k+1}(t-1)

The ILC controller, on the other hand, uses the error at time t+1 to compute the input at time t. This is possible only through iteration: the error at time t+1 was recorded during the previous iteration, k, and is used to shape the input for the next iteration, k+1.

u_{k+1}(t) = u_{k}(t) + C[e_{k}(t+1)]

Again, with a simple proportional controller we can have:

u_{k+1}(t) = u_{k}(t) + K_p * e_{k}(t+1)
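To make the indexing difference concrete, here is a minimal Python sketch of the two update laws applied to stored error signals (the array values are illustrative numbers I made up, not from any real plant):

```python
import numpy as np

K_p = 0.5
T = 5  # short horizon, t = 0 .. 4

# Conventional feedback: the input at time t uses the error already seen at t-1
e_now = np.array([0.0, 1.0, 0.5, 0.2, 0.1])  # errors of the current run (hypothetical)
u_feedback = np.zeros(T)
u_feedback[1:] = K_p * e_now[:-1]            # u(t) = K_p * e(t-1), causal

# ILC: the input at time t uses the error at t+1, recorded in the previous iteration
u_prev = np.zeros(T)                          # u_k(t)
e_prev = np.array([0.0, 1.0, 0.5, 0.2, 0.1])  # e_k(t), stored from the last run
u_next = u_prev.copy()
u_next[:-1] += K_p * e_prev[1:]               # u_{k+1}(t) = u_k(t) + K_p * e_k(t+1)

print(u_feedback)
print(u_next)
```

The feedback law can only react after an error has occurred, while the ILC law anticipates the error at t+1 because the whole error trajectory from the previous run is already available.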

See an example in the next post.

 


Probability – Basic Lesson 4

In the previous lesson we saw the definition of Expectation. Here I give the definitions of Variance and Standard Deviation.

The variance measures how much variability f(x) has around its mean (expected) value E[f(x)], and it can be written as follows:

var[f] = E[(f(x) - E[f(x)])^2]

and it can be simplified as,

var[f] = E[f(x)^2] - E[f(x)]^2


Example:

Let’s revisit the previous example: you roll a die and win $2 for a 1, win $1 for a 6, and lose $1 otherwise. We learned that the expectation was -$1/6. Now we can calculate the variance as follows:

var[x] = E[x^2] - E[x]^2
= \sum_x x^2 Pr(x) - E[x]^2
= (2^2)(1/6) + (1^2)(1/6) + ((-1)^2)(4/6) - (-1/6)^2
= 9/6 - 1/36 = 53/36

You should also know that the Standard Deviation is defined as

std = \sqrt{var}

So in this example,

std = \sqrt{53}/6
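The calculation can be checked with exact arithmetic. A small Python sketch using the standard fractions module:

```python
from fractions import Fraction
from math import sqrt

# Winnings and their probabilities in the dice game
outcomes = {2: Fraction(1, 6), 1: Fraction(1, 6), -1: Fraction(4, 6)}

E = sum(x * p for x, p in outcomes.items())        # E[x]   = -1/6
E2 = sum(x * x * p for x, p in outcomes.items())   # E[x^2] = 9/6
var = E2 - E ** 2                                  # 53/36
std = sqrt(var)                                    # sqrt(53)/6

print(var)  # 53/36
```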

 

 


Probability – Basic Lesson 3

Here I define Expectation and give an example of calculating it for a discrete random variable.

The Expectation of a function f(x) of a variable x is its average value under a probability distribution Pr(x), and can be written as follows:

E[f(x)] = \sum_x f(x)Pr(x)

In the continuous case the Expectation is written as follows:

E[f(x)] = \int Pr(x)f(x)dx

If we draw a finite number N of samples from the probability density function, the expectation can be approximated as:

E[f(x)] = 1/N \sum_{n=1}^N f(x_n)



Example:

Suppose you roll a die: if the result is 1 you win $2, if it is 6 you win $1, and otherwise you lose $1. What is the expected gain if you play many times?

We can use the first formula, since this is a discrete case:
E[f] = \sum_x x Pr(x) = 2 \cdot (1/6) + 1 \cdot (1/6) - 1 \cdot (4/6) = -1/6

It means that on average you lose $1/6 per game.

Recall that each value is multiplied by its corresponding probability; for instance, we win $2 on a 1, which has probability 1/6.
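The same computation in Python, kept exact with the standard fractions module (a small sketch):

```python
from fractions import Fraction

# Winnings mapped to their probabilities: $2 for a 1, $1 for a 6, -$1 otherwise
outcomes = {2: Fraction(1, 6), 1: Fraction(1, 6), -1: Fraction(4, 6)}

# E[x] = sum over outcomes of value times probability
E = sum(x * p for x, p in outcomes.items())
print(E)  # -1/6
```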

 

Check out the next lesson for Variance and standard deviation.


Probability – Basic Lesson 2

Check out my previous post on the four main rules of probability first, then come back to this post for a nice worked example.

Example:

Consider two boxes, a blue one and a green one, each containing a number of marbles. The green box holds two black marbles and six red marbles; the blue box holds one red marble and three black marbles.

If the probability of selecting the green box is 0.4, calculate all the conditional and marginal probabilities.

Let’s treat the box as a random variable B and the marble as another random variable M.
The probability of selecting the green box is 0.4, which means Pr(B=green)=0.4. We can calculate Pr(B=blue) as follows:
Pr(B=blue) = 1 - Pr(B=green) = 1 - 0.4 = 0.6

If we select the blue box (including one red and three black marbles), the probability of selecting the red marble is:
Pr(M=red | B=blue) = 1/4

And then the probability of selecting a black marble from the blue box is
Pr(M = black | B = blue) = 1 - 1/4 = 3/4

If we select the green box (including two black and six red marbles), the probability of selecting the red marble is:
Pr(M = red | B = green) = 6/8 = 3/4

And then the probability of selecting a black marble from the green box is
Pr(M = black | B = green) = 1 - 3/4 = 1/4

The overall probability of choosing a red marble can be calculated using the sum and the product rules as follows:

Pr(M=red) = \\ Pr(M=red | B=green) Pr(B=green) + Pr(M=red | B=blue) Pr(B=blue) \\ = 3/4 \times 0.4 + 1/4 \times 0.6 = 9/20

Then the probability of choosing a black marble is
Pr(M=black) = 1 - 9/20 = 11/20

Reversing the conditional probability, we can calculate other probabilities as well; for instance, Pr(B = green | M = red) follows from Bayes' theorem:

Pr(B = green | M = red)  Pr( M = red) = Pr(M = red | B = green) Pr(B = green)

Pr(B = green | M = red) = \frac{3/4 \times 4/10}{9/20} = 2/3
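The whole example can be checked with exact arithmetic. A Python sketch (the variable names are mine) using the standard fractions module:

```python
from fractions import Fraction

p_green = Fraction(2, 5)            # Pr(B = green) = 0.4
p_blue = 1 - p_green                # Pr(B = blue)  = 0.6

p_red_given_green = Fraction(6, 8)  # 6 red of 8 marbles in the green box
p_red_given_blue = Fraction(1, 4)   # 1 red of 4 marbles in the blue box

# Sum and product rules: Pr(M = red)
p_red = p_red_given_green * p_green + p_red_given_blue * p_blue
print(p_red)  # 9/20

# Bayes' theorem: Pr(B = green | M = red)
p_green_given_red = p_red_given_green * p_green / p_red
print(p_green_given_red)  # 2/3
```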

 

 


Probability – Basic Lesson 1

Main rules of probability that make life easier.

Consider two random variables X and Y. We denote the probability of X occurring as Pr(X); this is called the marginal probability. Pr(X|Y) denotes the probability of X given that Y has happened, and is called the conditional probability. The probability of X and Y occurring together, Pr(X,Y), is called the joint probability. The four main rules of probability are as follows:

1- The joint probability is symmetric:
Pr(X,Y) = Pr(Y,X)

2- The sum rule says:
Pr(X) = \sum_Y Pr(X,Y)

3- The product rule says:
Pr(X,Y) = Pr(Y|X)Pr(X)

4- Using rule 1 and rule 3 we can derive Bayes’ theorem:
Pr(X|Y)Pr(Y) = Pr(Y|X)Pr(X)
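A tiny numeric check of the four rules on a made-up joint distribution (the numbers are arbitrary; Python's standard fractions module keeps everything exact):

```python
from fractions import Fraction

# A hypothetical joint distribution Pr(X, Y) over X, Y in {0, 1}
P = {(0, 0): Fraction(1, 8), (0, 1): Fraction(3, 8),
     (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 4)}

# Rule 2 (sum rule): marginals are sums of the joint over the other variable
PX = {x: sum(p for (xi, _), p in P.items() if xi == x) for x in (0, 1)}
PY = {y: sum(p for (_, yi), p in P.items() if yi == y) for y in (0, 1)}

# Rule 3 (product rule): conditionals from joint and marginal
PY_given_X = {(y, x): P[(x, y)] / PX[x] for (x, y) in P}
PX_given_Y = {(x, y): P[(x, y)] / PY[y] for (x, y) in P}

# Rule 4 (Bayes): Pr(X|Y) Pr(Y) = Pr(Y|X) Pr(X), for every pair
for (x, y) in P:
    assert PX_given_Y[(x, y)] * PY[y] == PY_given_X[(y, x)] * PX[x]

print(PX[0], PY[1])  # 1/2 5/8
```

Rule 1 (symmetry) holds here by construction, since Pr(X, Y) and Pr(Y, X) describe the same event.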

Check the next post for a simple example that uses all four rules.

 


Working with PATHs in MATLAB

How to find the search path list inside MATLAB?

All the default and saved paths that MATLAB searches to find a function are stored in an m-file called pathdef.m. You can use the which command to locate it:

which pathdef.m -all

The result would be something like this:

C:\Program Files\MATLAB\R2016a\toolbox\local\pathdef.m

If you look inside this file, at the end you can see userpath.
This parameter stores the path to your user workspace, for instance:

C:\Users\UserName\Documents\MATLAB;

How to add a path to the search path temporarily?

To do this, add each folder with the addpath command.
The folder is added to the search list for the current session only and is removed when you close MATLAB.

addpath('/img/new/folders');
addpath('win\def\optim\');

How to add a path to the search path permanently?

You can use the savepath command to save the current search path to the pathdef.m file. This change is permanent and will not be removed from the search list even after you exit MATLAB. First add the folder, then save:

addpath('C:\Program Files\MyToolbox');
savepath;

 

 
