## Probability – Basic Lesson 4

In the previous lesson, we saw the definition of Expectation. Here the definition of Variance (and Standard Deviation) is given.

Variance measures how much variability there is in $f(x)$ around its mean (expected) value $E[f(x)]$, and it can be written as follows:

$var[f] = E[(f(x) - E[f(x)])^2]$

and it can be simplified as,

$var[f] = E[f(x)^2] - E[f(x)]^2$

Example:

Let’s consider the previous example. You are playing a dice game: you roll a die and earn $2 for a 1, $1 for a 6, and lose $1 otherwise. We learned that the Expectation was $-1/6$. Now we can calculate the variance as follows:

$var[x] = E[x^2] - E[x]^2$
$= \sum_x x^2 Pr(x) - E[x]^2$
$= (2^2)(1/6) + (1^2)(1/6) + ((-1)^2)(4/6) - (-1/6)^2$
$= 53/36$

You should also know that the Standard Deviation is defined as

$std = \sqrt{var}$

So in this example,

$std = \sqrt{53}/6$
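To double-check the arithmetic, here is a small Python sketch that evaluates the same dice game with exact fractions:

```python
from fractions import Fraction as F

# Payoffs and probabilities for the dice game:
# win $2 on a 1, win $1 on a 6, lose $1 otherwise.
outcomes = {2: F(1, 6), 1: F(1, 6), -1: F(4, 6)}

mean = sum(x * p for x, p in outcomes.items())                # E[x]
second_moment = sum(x ** 2 * p for x, p in outcomes.items())  # E[x^2]
variance = second_moment - mean ** 2                          # E[x^2] - E[x]^2

print(mean)      # -1/6
print(variance)  # 53/36
print(float(variance) ** 0.5)  # ~1.213, i.e. sqrt(53)/6
```

Working with `Fraction` instead of floats keeps the intermediate results exact, so they match the hand calculation term by term.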

Posted in Machine Learning, Statistics, Thoughts

## Probability – Basic Lesson 3

Here I give the definition of Expectation for discrete and continuous random variables, along with a simple example.

The Expectation of a function $f(x)$ of a variable $x$ is its average value under a probability distribution $Pr(x)$ and can be written as follows:

$E[f(x)] = \sum_x f(x)Pr(x)$

In a continuous representation the Expectation is written as follows:

$E[f(x)] = \int Pr(x)f(x)dx$

If we sample a finite number $N$ of points from the probability distribution, the Expectation can be approximated by the sample average:

$E[f(x)] \approx \frac{1}{N} \sum_{n=1}^N f(x_n)$


Example:

Consider you roll a die: if the result is 1 you get $2, if it is 6 you get $1, and otherwise you lose $1. What is the expectation if you play many times? We can use the first formula, since this is a discrete case.

$E[f] = \sum_x x Pr(x) = 2 \cdot (1/6) + 1 \cdot (1/6) - 1 \cdot (4/6) = -1/6$

It means that on average you lose $1/6$ per game. Recall that each value has been multiplied by its corresponding probability; for instance, we get $2 if we roll a 1, which has probability 1/6.
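The same calculation can be sketched in Python, both exactly and with the finite-sample approximation of the Expectation (the `payoff` function below is just the game described above):

```python
import random
from fractions import Fraction as F

# Payoff for one roll of a fair die: $2 for a 1, $1 for a 6, -$1 otherwise.
def payoff(roll):
    return {1: 2, 6: 1}.get(roll, -1)

# Exact expectation: sum over outcomes of f(x) * Pr(x).
exact = sum(F(1, 6) * payoff(r) for r in range(1, 7))
print(exact)  # -1/6

# Finite-sample approximation: (1/N) * sum of f(x_n) over N random rolls.
N = 200_000
estimate = sum(payoff(random.randint(1, 6)) for _ in range(N)) / N
print(estimate)  # close to -1/6 = -0.1667
```

The sampled estimate wanders around the exact value and gets closer as $N$ grows, which is exactly what the sample-average formula promises.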

Check out the next lesson for Variance and standard deviation.


## Probability – Basic Lesson 2

Check out my previous post on the four main rules of probability first and come back to this post to see a very nice example.

Example:

Consider we have two boxes, a blue one and a green one, each containing a number of marbles. In the green box there are two black marbles and six red marbles; in the blue box there are one red marble and three black marbles.

If the probability of selecting the green box is 0.4, calculate all the marginal and conditional probabilities.

Let’s consider the boxes as a random variable $B$, and the marbles as another random variable, $M$.
The probability of selecting the green box is 0.4, which means $Pr(B=green)=0.4$. We can calculate $Pr(B=blue)$ as follows:
$Pr(B=blue) = 1 - Pr(B=green) = 1 - 0.4 = 0.6$

If we select the blue box (including one red and three black marbles), the probability of selecting the red marble is:
$Pr(M=red | B=blue) = 1/4$

And then the probability of selecting a black marble from the blue box is
$Pr(M = black | B = blue) = 1 - 1/4 = 3/4$

If we select the green box (including two black and six red marbles), the probability of selecting the red marble is:
$Pr(M = red | B = green) = 6/8 = 3/4$

And then the probability of selecting a black marble from the green box is
$Pr(M = black | B = green) = 1 - 3/4 = 1/4$

The overall probability of choosing a red marble can be calculated using the sum and the product rules as follows:

$Pr(M=red) = \\ Pr(M=red| B=green) Pr(B=green) + Pr(M=red | B=blue) Pr(B=blue) \\ = 3/4 \cdot 0.4 + 1/4 \cdot 0.6 = 9/20$

Then the probability of choosing a black marble is
$Pr(M=black)= 1-9/20 = 11/20$

Reversing the conditional probability, we can calculate other probabilities as well; for instance, $Pr(B = green | M = red)$ can be calculated using Bayes’ theorem as follows:

$Pr(B = green | M = red) Pr( M = red) = Pr(M = red | B = green) Pr(B = green)$

$Pr(B = green | M = red) = \frac{Pr(M = red | B = green) Pr(B = green)}{Pr(M = red)} = \frac{3/4 \cdot 4/10}{9/20} = 2/3$
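All of these numbers can be verified with a short Python sketch that applies the sum rule, the product rule, and Bayes’ theorem to the two boxes:

```python
from fractions import Fraction as F

# Pr(B): the green box is chosen with probability 0.4 = 2/5.
p_box = {"green": F(2, 5), "blue": F(3, 5)}

# Pr(M | B): green box has 2 black + 6 red, blue box has 3 black + 1 red.
p_marble_given_box = {
    "green": {"red": F(6, 8), "black": F(2, 8)},
    "blue":  {"red": F(1, 4), "black": F(3, 4)},
}

# Sum + product rules: Pr(M=red) = sum over B of Pr(M=red | B) Pr(B).
p_red = sum(p_marble_given_box[b]["red"] * p_box[b] for b in p_box)
print(p_red)  # 9/20

# Bayes' theorem: Pr(B=green | M=red) = Pr(M=red | B=green) Pr(B=green) / Pr(M=red).
posterior = p_marble_given_box["green"]["red"] * p_box["green"] / p_red
print(posterior)  # 2/3
```

Note how observing a red marble raises the probability of the green box from the prior 2/5 to the posterior 2/3, since the green box holds most of the red marbles.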

## Probability – Basic Lesson 1

Main rules of probability that make life easier.

Consider two random variables $X$ and $Y$. We denote the probability of $X$ occurring as $Pr(X)$; this is called the marginal probability. $Pr(X|Y)$ denotes the probability of $X$ given that $Y$ has happened, and it is called the conditional probability. The probability of $X$ and $Y$ occurring together is the joint probability, written $Pr(X,Y)$. Four main rules of probability are as follows:

1- The joint probability is symmetric:
$Pr(X,Y) = Pr(Y,X)$

2- The sum rule says:
$Pr(X) = \sum_Y Pr(X,Y)$

3- The product rule says:
$Pr(X,Y) = Pr(Y|X)Pr(X)$

4- Using rule 1 and rule 3 we can derive Bayes’ theorem:
$Pr(X|Y)Pr(Y) = Pr(Y|X)Pr(X)$
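As a sanity check, the four rules can be verified numerically on a small made-up joint distribution (the numbers below are hypothetical, chosen only so the table sums to 1):

```python
from fractions import Fraction as F

# A small joint distribution Pr(X, Y) over X in {0, 1} and Y in {0, 1}.
joint = {(0, 0): F(1, 8), (0, 1): F(3, 8), (1, 0): F(1, 4), (1, 1): F(1, 4)}

# Sum rule: Pr(X=x) = sum over y of Pr(X=x, Y=y), and likewise for Y.
p_x = {x: sum(p for (xi, y), p in joint.items() if xi == x) for x in (0, 1)}
p_y = {y: sum(p for (x, yi), p in joint.items() if yi == y) for y in (0, 1)}

# Product rule: Pr(Y=y | X=x) = Pr(X=x, Y=y) / Pr(X=x), and symmetrically.
p_y_given_x = {(y, x): joint[(x, y)] / p_x[x] for x in (0, 1) for y in (0, 1)}
p_x_given_y = {(x, y): joint[(x, y)] / p_y[y] for x in (0, 1) for y in (0, 1)}

# Bayes' theorem: Pr(X|Y) Pr(Y) = Pr(Y|X) Pr(X) for every pair (x, y).
for x in (0, 1):
    for y in (0, 1):
        assert p_x_given_y[(x, y)] * p_y[y] == p_y_given_x[(y, x)] * p_x[x]
print("all four rules check out")
```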

Check the next post for a simple example that uses all four rules.


## How to find the search path list inside MATLAB?

All the saved and default paths that MATLAB searches to find a function are stored in an m-file called pathdef.m. You can use the which command to locate it:

which pathdef.m -all


The result would be something like this:

C:\Program Files\MATLAB\R2016a\toolbox\local\pathdef.m


If you look inside this file, at the end you can see userpath. This parameter stores the path to your workspace, for instance:

C:\Users\UserName\Documents\MATLAB;


## How to add a path to the search path temporarily?

To do this you can add each folder using the addpath command. It adds the folder to the search list, and as soon as you close MATLAB the folder is removed from the list again.

addpath('/img/new/folders');
addpath('win\def\optim\');


## How to add a path to the search path permanently?

You can use the savepath command to save the current search path to the pathdef.m file. This change is permanent and will not be removed from the search list even if you exit MATLAB. Add the folder to the path first, then save:

addpath('C:\Program Files\MyToolbox');
savepath;


Posted in Linux, MATLAB, programming, Software, Ubuntu

## How to #interpolate using #splines: A simple #MATLAB #tutorial for beginners

In this video I will show how you can use curve fitting functions provided by MATLAB to interpolate data.
First, I make some data points and plot them. Then I use the function ‘spapi’ (spline interpolation) with order 2 to make a linear piecewise interpolation. Next I use ‘csapi’ to make a smooth and precise interpolation. Finally, I show that by using ‘csape’ with end conditions we can get even better results.
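For readers without MATLAB, a rough Python analogue of this workflow might look like the sketch below; it uses NumPy/SciPy stand-ins rather than ‘spapi’/‘csapi’/‘csape’ themselves, and the data points are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical data points sampled from sin(x).
x = np.linspace(0, 2 * np.pi, 8)
y = np.sin(x)

xq = 1.0  # a query point between two knots

# Piecewise-linear interpolation (the linear case in the video).
linear_val = np.interp(xq, x, y)

# Smooth cubic spline (roughly analogous to csapi).
smooth = CubicSpline(x, y)

# Cubic spline with clamped end conditions (in the spirit of csape):
# first derivative pinned to cos(0) = cos(2*pi) = 1 at both ends.
clamped = CubicSpline(x, y, bc_type=((1, 1.0), (1, 1.0)))

print(abs(linear_val - np.sin(xq)))          # linear: largest error
print(abs(float(smooth(xq)) - np.sin(xq)))   # cubic: much smaller
print(abs(float(clamped(xq)) - np.sin(xq)))  # clamped: small as well
```

All three interpolants pass through the data points exactly; they differ only in how they behave between the knots and at the ends.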

Posted in Linux, Machine Learning, MATLAB, programming, Ubuntu

## #Optimization with #Simulated #Annealing – A #MATLAB tutorial for beginners

In this tutorial I will show how to use Simulated Annealing to minimize the Booth test function. Simulated Annealing is one of the most famous optimization algorithms and is also implemented in MATLAB as a built-in function. The Booth test function is a standard test function for evaluating single-objective optimization algorithms. It has two inputs and one output, it is bounded in the range [-10, 10], and its minimum is at [1, 3]. Knowing the minimum point lets us verify the algorithm.

In this tutorial, I show an implementation of the Booth single-objective test problem and optimize it using the built-in Simulated Annealing in MATLAB. The given objective (fitness) function is a standard test function that helps a beginner understand the basic concepts of optimization in MATLAB more easily. It takes one vector input of ‘n=2’ variables and returns one output (the objective value). I write two separate functions, one for the fitness function and one for the main algorithm, and I plot the best value found over the obtained solutions. We also try different settings of the algorithm using the ‘optimoptions’ function.
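The MATLAB code is in the video rather than reproduced here, but as a rough cross-check the same Booth problem can be sketched in Python using SciPy’s dual_annealing (a related annealing variant, not the MATLAB implementation):

```python
import numpy as np
from scipy.optimize import dual_annealing

# Booth test function: f(x, y) = (x + 2y - 7)^2 + (2x + y - 5)^2.
# Global minimum f(1, 3) = 0; the usual search box is [-10, 10] per variable.
def booth(v):
    x, y = v
    return (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2

# Run the annealing search within the bounds of the test problem.
result = dual_annealing(booth, bounds=[(-10, 10), (-10, 10)])
print(np.round(result.x, 3))  # close to [1, 3]
print(result.fun)             # close to 0
```

Because the known minimum is at [1, 3] with value 0, checking the returned solution against it is a quick way to confirm the optimizer is configured correctly.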

Related tutorial videos:

- Optimizing the multi-objective ZDT1 test problem with 30 variables using Genetic Algorithm
- Optimizing the multi-objective ZDT1 test problem using Genetic Algorithm
- A simple optimization using Genetic Algorithm
- A simple constrained optimization using Genetic Algorithm
- A simple multi-objective optimization using Genetic Algorithm
- A mixed-integer optimization using Linear Programming
- A simple single-objective optimization using Particle Swarm Optimization Algorithm
- A simple single-objective optimization using Pattern Search
