## Argument handling in MATLAB functions – using default values

When writing functions in MATLAB, some arguments often need default values: the defaults are used unless the caller supplies values that override them. Consider the following example:

```matlab
function c = myfcn(a,b)
if nargin < 2
    b = 2;
end
c = a + b;
end
```


Inside the function, nargin tells us how many arguments were passed in. Based on that number, we can decide whether to use the default value for b or the value that was given. In this case, myfcn(2,3) results in 5 but myfcn(2) results in 4.

This works, but the trouble begins when the number of arguments increases. Suppose the function has 8 arguments. To decide between the default and the given values, you have to check which arguments were supplied and which were not. In other words, you might need to write a separate if for each condition, for instance:

```matlab
function k = myfcn(a,b,c,d,e,f,g,h)
if nargin < 8; h = 0.2; end
if nargin < 7; g = 0.3; end
if nargin < 6; f = 1;   end
if nargin < 5; e = 0;   end
...
end
```


You can see this is ugly and hard to maintain. It also prevents you from passing a specific subset of parameters; say you want new values for d and f while keeping the defaults for the rest of the arguments.

The best way I know, and have used for a long time, is passing the optional arguments as a cell list via varargin. Check the following example:

```matlab
function y = myfcn(n,varargin)
defaultValues = {2 20 150 0.85};    % default values {x1 x2 x3 x4}
idx = ~cellfun(@isempty,varargin);  % find which parameters were given
defaultValues(idx) = varargin(idx); % replace only the given ones
[x1,x2,x3,x4] = defaultValues{:};   % convert from cell to variables
y = (x1 + x2*x3 + x4)^(1/n);
end
```


Test:

```
>> myfcn(1)

ans =

   3.0028e+03

>> myfcn(1,2)

ans =

   3.0028e+03

>> myfcn(1,[],2)

ans =

  302.8500

>> myfcn(1,[],[],2)

ans =

   42.8500

>> myfcn(1,[],[],2,2)

ans =

    44
```
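The same placeholder idea translates to other languages as well. As a sketch (my addition, not part of the original post), here is a Python version in which None plays the role of MATLAB's `[]`:

```python
def myfcn(n, *args):
    """Mirror of the MATLAB example: x1..x4 default to 2, 20, 150, 0.85.

    Pass None (the analogue of MATLAB's []) to keep a default."""
    defaults = [2, 20, 150, 0.85]
    for i, v in enumerate(args):
        if v is not None:          # replace only the supplied values
            defaults[i] = v
    x1, x2, x3, x4 = defaults
    return (x1 + x2 * x3 + x4) ** (1 / n)

print(myfcn(1))            # 3002.85, all defaults
print(myfcn(1, None, 2))   # 302.85, only x2 replaced
```

In Python itself, keyword arguments with defaults are the more idiomatic solution; the point here is only to mirror the cell-list trick.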


## Iterative Learning Control – Part 2

In the previous post I explained ILC controllers and compared them with conventional feedback controllers. In this post I show a simple example that employs an ILC controller.

Check the following simple, linear plant:

$y(t+1)= -0.7*y(t) - 0.12*y(t-1) + u(t)$
$y(0)=2$
$y(1)=2$

In this system, $u(t)$ is the input at time $t$, and $y(t)$ is the output of the system at time $t$. The initial conditions of the system are given by the last two equations.
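To make the trajectories concrete, the plant recursion above can be simulated in a few lines. This Python sketch is my addition (the original post works in MATLAB), and the function name and argument layout are mine:

```python
def simulate(u, y0=2.0, y1=2.0):
    """Simulate y(t+1) = -0.7*y(t) - 0.12*y(t-1) + u(t) for t = 1..len(u).

    u is the list of inputs u(1), ..., u(T); returns [y(0), ..., y(T+1)].
    """
    y = [y0, y1]
    for t in range(1, len(u) + 1):
        y.append(-0.7 * y[t] - 0.12 * y[t - 1] + u[t - 1])
    return y

# Free response (zero input): y(2) = -0.7*2 - 0.12*2 = -1.64
print(simulate([0.0, 0.0, 0.0]))
```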

We want to force the system to follow a square wave such as:

$y_d(t)=\begin{cases} 2 & 0 \le t < T/2 \\ -2 & T/2 \le t \le T \end{cases}$

where $T$ is the length of one iteration.

The ILC algorithm works as follows:

1. consider the desired output $y_d$ and an initial input $u_0$ (first iteration, $k=0$)
2. run the system with this input and keep the resulting output, $y_0$
3. compute the error as $e_0(t) = y_d(t) - y_0(t)$
4. compute the next input using the previous result as: $u_1(t) = u_0(t) + K_p*e_0(t+1)$
5. use $u_1(t)$ and jump to step 2

By implementing this simple algorithm we can get the following result. For this result I have chosen $K_p=0.5$.

## Iterative Learning Control – Part 1

Iterative Learning Control (ILC) is used for improving the transient response in systems that perform repetitively. ILC tries to improve the response by adjusting the input to the plant based on the error observed in the previous iteration.

A conventional controller $C$ can be written as follows:

$u_{k+1}(t) = C[e_{k+1}(t-1)]$

where $k$ is the iteration number, $t$ denotes time, $u$ is the output of the controller (the input to the plant), and $e$ is the error between the desired and the actual output. It can be seen that the error from time $t-1$ is used to compute the controller output, which is applied at time $t$. The controller can be simply a constant gain such as:

$u_{k+1}(t) = K_p*e_{k+1}(t-1)$

The ILC controller, on the other hand, uses the error at time $t+1$ to compute the input at time $t$. This is possible only through iteration: the error at time $t+1$ was observed in the previous iteration, $k$, and it affects the next iteration, $k+1$:

$u_{k+1}(t) = u_{k}(t) + C[e_{k}(t+1)]$

Again, with a simple constant-gain controller we can have:

$u_{k+1}(t) = u_{k}(t) + K_p * e_{k}(t+1)$

See an example in the next post.

## Probability – Basic Lesson 4

In the previous lesson, we saw the definition of Expectation. Here the definitions of Variance (and Standard Deviation) are given.

The variance measures how much variability $f(x)$ has around its mean (expected) value $E[f(x)]$, and it can be written as follows:

$var[f] = E[(f(x) - E[f(x)])^2]$

and it can be simplified as

$var[f] = E[f(x)^2] - E[f(x)]^2$

Example: Let's consider the previous example. You are playing a dice game: you roll the die and you earn \$2 for a 1, \$1 for a 6, and you lose \$1 otherwise. We learned that the Expectation was $-1/6$. Now we can calculate the variance as follows:

$var[x] = E[x^2] - E[x]^2$
$= \sum_x x^2 Pr(x) - E[x]^2$
$= (2^2).(1/6) + (1^2).(1/6) + ((-1)^2).(4/6) - (-1/6)^2$
$= 53/36$

You should also know that the Standard Deviation is defined as $std = \sqrt{var}$, so in this example $std = \sqrt{53}/6$.

## Probability – Basic Lesson 3

Here I give the definition of Expectation, together with an example for discrete random variables.

The Expectation of a function $f(x)$ under a probability distribution $Pr(x)$ is its average value and can be written as follows:

$E[f(x)] = \sum_x f(x)Pr(x)$

In a continuous representation the Expectation is written as follows:

$E[f(x)] = \int Pr(x)f(x)dx$

If we sample a finite number $N$ of points from the probability density function, the expectation can be approximated as:

$E[f(x)] \approx 1/N \sum_{n=1}^N f(x_n)$

Example: Consider you roll a die: if the result is 1 you get \$2, if 6 you get \$1, and otherwise you lose \$1. What is the expectation if you play many times?

We can use the first formula, since this is a discrete case:
$E[x] = \sum_x x Pr(x) = 2.(1/6) + 1.(1/6) - 1.(4/6) = -1/6$

It means that on average you lose \$1/6 per game.

Recall that each value has been multiplied by its corresponding probability; for instance, we win \$2 when we roll a 1, which has probability 1/6.
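If you want to double-check the arithmetic, here is a small Python sketch (my addition, not from the original post) that evaluates the expectation formula for this game with exact fractions:

```python
from fractions import Fraction as F

# payoff -> probability for the dice game: $2 for a 1, $1 for a 6, -$1 otherwise
pmf = {F(2): F(1, 6), F(1): F(1, 6), F(-1): F(4, 6)}

# E[x] = sum_x x * Pr(x)
expectation = sum(x * p for x, p in pmf.items())
print(expectation)  # -1/6
```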

Check out the next lesson for Variance and standard deviation.

## Probability – Basic Lesson 2

Check out my previous post on the four main rules of probability first and come back to this post to see a very nice example.

Example:

Consider two boxes, a blue one and a green one, each containing a number of marbles. Let's say the green box contains two black marbles and six red marbles, and the blue box contains one red marble and three black marbles.

If the probability of selecting the green box is 0.4, calculate all the marginal probabilities.

Let’s consider the boxes as a probability variable $B$, and the marbles as another probability variable, $M$.
The probability of selecting the green box is 0.4, which means $Pr(B=green)=0.4$. We can calculate $Pr(B=blue)$ as follows:
$Pr(B=blue) = 1 - Pr(B=green) = 1 - 0.4 = 0.6$

If we select the blue box (containing one red and three black marbles), the probability of selecting a red marble is:
$Pr(M=red | B=blue) = 1/4$

And then the probability of selecting a black marble from the blue box is
$Pr(M = black | B = blue) = 1 - 1/4 = 3/4$

If we select the green box (containing two black and six red marbles), the probability of selecting a red marble is:
$Pr(M = red | B = green) = 6/8 = 3/4$

And then the probability of selecting a black marble from the green box is
$Pr(M = black | B = green) = 1 - 3/4 = 1/4$

The overall probability of choosing a red marble can be calculated using the sum and the product rules as follows:

$Pr(M=red) = \\ Pr(M=red | B=green) Pr(B=green) + Pr(M=red | B=blue) Pr(B=blue) \\ = 3/4 * 0.4 + 1/4 * 0.6 = 9/20$

Then the probability of choosing a black marble is
$Pr(M=black)= 1-9/20 = 11/20$

Reversing the conditional probability, we can calculate other probabilities as well; for instance, $Pr(B = green | M = red)$ can be calculated using Bayes' theorem as follows:

$Pr(B = green | M = red) Pr( M = red) = Pr(M = red | B = green) Pr(B = green)$

$Pr(B = green | M = red) = \frac{Pr(M = red | B = green) Pr(B = green)}{Pr(M = red)} = \frac{3/4 * 4/10}{9/20} = 2/3$
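The numbers in this example are easy to verify programmatically. Here is a Python sketch (my addition, using exact fractions; the variable names are mine):

```python
from fractions import Fraction as F

p_green, p_blue = F(4, 10), F(6, 10)
p_red_g, p_red_b = F(6, 8), F(1, 4)   # Pr(M=red | B=green), Pr(M=red | B=blue)

# sum and product rules: Pr(M=red)
p_red = p_red_g * p_green + p_red_b * p_blue
print(p_red)                          # 9/20

# Bayes' theorem: Pr(B=green | M=red)
p_green_given_red = p_red_g * p_green / p_red
print(p_green_given_red)              # 2/3
```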

## Probability – Basic Lesson 1

Main rules of probability that make life easier.

Consider two random variables $X$ and $Y$. We denote the probability of $X$ occurring as $Pr(X)$; this is called the marginal probability of $X$. $Pr(X|Y)$ is the probability of $X$ given that $Y$ has happened, and it is called the conditional probability. The probability of $X$ and $Y$ occurring together is the joint probability, written $Pr(X,Y)$. Four main rules of probability are as follows:

1- The joint probability is symmetric:
$Pr(X,Y) = Pr(Y,X)$

2- The sum rule says:
$Pr(X) = \sum_Y Pr(X,Y)$

3- The product rule says:
$Pr(X,Y) = Pr(Y|X)Pr(X)$

4- Using rule 1 and rule 3 we can derive Bayes' theorem:
$Pr(X|Y)Pr(Y) = Pr(Y|X)Pr(X)$

Check the next post for a simple example that uses all four rules.