Regression is a supervised learning problem.


To find the coefficient values that minimize the objective function, we take the partial derivatives of the objective function (the sum of squared errors, SSE) with respect to the coefficients, set them to zero, and solve.

**A Simple Example: Fitting a Polynomial**

The green curve is the true function (which is not a polynomial).

We may use a loss function that measures the squared error in the prediction of y(x) from x.

(Example from Bishop's book on machine learning.)
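The polynomial-fitting setup can be sketched in a few lines. The data below are made up for illustration (Bishop's classic example uses noisy samples of sin(2πx) as the true curve, which is assumed here, not given by the notes):

```python
import numpy as np

# Illustrative sketch: fit polynomials of increasing degree to noisy
# samples of a smooth "true" function, and compare the squared-error loss.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 10)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.shape)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x, t, degree)        # least-squares polynomial fit
    pred = np.polyval(coeffs, x)
    sse = np.sum((pred - t) ** 2)            # sum-of-squares error on the data
    print(f"degree {degree}: SSE = {sse:.4f}")
```

On the training points the SSE can only decrease as the degree grows; a high-degree fit that drives the SSE to zero is exactly the overfitting the green-curve figure illustrates.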

**Types of Regression Models**

**Linear regression**

Given an input x, compute an output y.

For example:

- Predict height from age

- Predict house price from house area

- Predict distance to a wall from sensor readings

**Linear Regression Model**

The relationship between the variables is a linear function: y = w₀ + w₁x.

We look at an example with a training sample of 15 houses from the region.

**The regression line**

The least-squares regression line is the unique line such that the sum of the squared vertical (y) distances between the data points and the line is the smallest possible.
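A minimal sketch of that line for 1-d data (the 15-house sample itself is not reproduced in these notes, so the numbers below are hypothetical):

```python
import numpy as np

# Hypothetical house data: x = area, y = price.
area = np.array([50.0, 70, 80, 100, 120])
price = np.array([150.0, 200, 230, 285, 340])

# Closed form for the least-squares line: slope = cov(x, y) / var(x),
# and the line passes through the point of means.
w1 = np.cov(area, price, bias=True)[0, 1] / np.var(area)
w0 = price.mean() - w1 * area.mean()

residuals = price - (w0 + w1 * area)
print(f"price ≈ {w0:.2f} + {w1:.3f} * area, SSE = {np.sum(residuals**2):.2f}")
```

Any other line through the same points would give a larger sum of squared vertical distances than this one.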

**How do we "learn" parameters**

For the 2-d problem (one input x, one output y), we minimize the SSE: take its partial derivatives with respect to w₀ and w₁, set them to zero, and solve the resulting pair of linear equations.
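Written out, the derivation for the 2-d problem is:

```latex
% Sum-of-squares error for the line y = w_0 + w_1 x:
J(w_0, w_1) = \sum_{i=1}^{n} \left( y_i - w_0 - w_1 x_i \right)^2

% Setting the partial derivatives to zero gives the normal equations:
\frac{\partial J}{\partial w_0} = -2 \sum_{i=1}^{n} \left( y_i - w_0 - w_1 x_i \right) = 0
\qquad
\frac{\partial J}{\partial w_1} = -2 \sum_{i=1}^{n} x_i \left( y_i - w_0 - w_1 x_i \right) = 0

% Solving this pair of linear equations:
w_1 = \frac{\sum_i x_i y_i - n \bar{x} \bar{y}}{\sum_i x_i^2 - n \bar{x}^2},
\qquad
w_0 = \bar{y} - w_1 \bar{x}
```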

**Multiple Linear Regression**

There is a closed-form solution, which requires a matrix inversion.
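A sketch of that closed form: with a design matrix X (one row per example, bias column included) and targets y, the normal equations give θ = (XᵀX)⁻¹Xᵀy. The data below are made up for illustration:

```python
import numpy as np

# Design matrix: bias column, then two features (hypothetical values).
X = np.array([[1.0, 2.0, 1.0],
              [1.0, 3.0, 0.0],
              [1.0, 5.0, 2.0],
              [1.0, 7.0, 1.0]])
y = np.array([5.0, 6.0, 12.0, 15.0])

# Solve the normal equations (X^T X) theta = X^T y directly, rather than
# forming the inverse explicitly.
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)
```

In practice a least-squares solver (e.g. `np.linalg.lstsq`) is preferred over inverting XᵀX, which can be poorly conditioned.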

There are also iterative techniques to find the weights:

- the delta rule (also called the LMS method), which updates the weights towards the objective of minimizing the SSE.

**LMS Algorithm**

Start a search algorithm (e.g. gradient descent) with an initial guess for θ.

Repeatedly update θ to make J(θ) smaller, until it converges to a minimum.

J is a convex quadratic function, so it has a single global minimum; gradient descent therefore converges to the global minimum.

At each iteration, the algorithm takes a step in the direction of steepest descent (the negative direction of the gradient).
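The steps above can be sketched as batch gradient descent on J(θ) = ½‖Xθ − y‖²; the data, learning rate, and iteration count below are illustrative assumptions, not values from the notes:

```python
import numpy as np

def lms_fit(X, y, lr=0.01, iters=5000):
    """LMS via batch gradient descent for the linear model y ≈ X @ theta."""
    theta = np.zeros(X.shape[1])        # initial guess for theta
    for _ in range(iters):
        grad = X.T @ (X @ theta - y)    # gradient of J at theta
        theta -= lr * grad              # step opposite the gradient
    return theta

# Toy data: y = 1 + 2*x, with a bias column prepended to x.
x = np.linspace(0, 1, 20)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
print(lms_fit(X, y))   # converges towards [1, 2]
```

Because J is convex and quadratic, any sufficiently small learning rate drives θ to the same global minimum the closed form gives.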
