Trending Technology: Machine Learning, Artificial Intelligence, Blockchain, IoT, DevOps, Data Science

Supervised Learning | Classification | Regression | Features | Supervised Machine Learning

Given:
- A set of input features X1, ..., Xn
- A target feature Y
- A set of training examples, where the values of the input features and the target feature are given for each example
- A new example, where only the values of the input features are given

Predict the value of the target feature for the new example.
 - Classification when Y is discrete
 - Regression when Y is continuous

In supervised learning you have a set of input features X1, X2, ..., Xn; these are the features with which you describe each instance. You also have a target feature Y.

 Here Y is divided into two cases:
 Discrete — Classification
 Continuous — Regression
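The same split can be sketched in a few lines of Python: a k-nearest-neighbor predictor handles both cases, taking a majority vote when Y is discrete (classification) and an average when Y is continuous (regression). The data and the `knn_predict` helper below are made up for illustration.

```python
from collections import Counter

def knn_predict(train_X, train_y, query, k=3, task="classification"):
    """Predict the target for `query` from labeled training examples."""
    # Sort training examples by squared Euclidean distance to the query
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_X, train_y)
    )
    neighbors = [y for _, y in dists[:k]]
    if task == "classification":            # Y is discrete: majority vote
        return Counter(neighbors).most_common(1)[0][0]
    return sum(neighbors) / len(neighbors)  # Y is continuous: average

X = [(1.0,), (1.2,), (5.0,), (5.2,)]

# Discrete target -> classification
y_cls = ["spam", "spam", "ham", "ham"]
print(knn_predict(X, y_cls, (1.1,), k=3))                    # "spam"

# Continuous target -> regression
y_reg = [2.0, 2.2, 10.0, 10.4]
print(knn_predict(X, y_reg, (5.1,), k=2, task="regression"))  # 10.2
```

The only difference between the two tasks here is how the neighbors' target values are combined, which mirrors the discrete/continuous distinction above.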

Classification:
Some examples of classification problems: deciding whether an email is spam or not spam, or recognizing which flower appears in an image. In each case the target Y is a discrete class label.

Regression:
An example of a regression problem: we want to find out the price of a used car, and we use certain attributes of the car to predict its price.
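As a minimal sketch of that used-car example, ordinary least squares can fit a straight line from one attribute (car age) to price. The ages and prices below are invented illustration data, not real figures.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (a single input feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

ages   = [1, 2, 3, 4, 5]                       # car age in years (made-up)
prices = [18000, 16000, 14000, 12000, 10000]   # resale price (made-up)

a, b = fit_line(ages, prices)
predicted = a * 6 + b   # predicted price of a 6-year-old car: 8000.0
print(a, b, predicted)  # -2000.0 20000.0 8000.0
```

Because Y (price) is continuous, this is regression; the learned line is the model used to predict the price of a previously unseen car.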

Often, the individual observations are analyzed into a set of quantifiable properties, which are called features. A feature may be:
 - Categorical (e.g. "A", "B", "AB" or "O", for blood type)
 - Ordinal (e.g. "large", "medium", or "small")
 - Integer-valued (e.g. the number of words in a text)
 - Real-valued (e.g. height)
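These four feature types are typically encoded differently before learning; one common convention is one-hot indicator vectors for categorical values, integer ranks for ordinal values, and raw numbers for integer- and real-valued features. The `encode` helper below is a hypothetical illustration of that convention, not a fixed API.

```python
def one_hot(value, categories):
    """Categorical feature (no order): one indicator per category."""
    return [1 if value == c else 0 for c in categories]

# Ordinal feature: the order small < medium < large matters, so use ranks
SIZES = {"small": 0, "medium": 1, "large": 2}

def encode(blood_type, size, word_count, height):
    """Combine one of each feature type into a single numeric vector."""
    return (one_hot(blood_type, ["A", "B", "AB", "O"])  # categorical
            + [SIZES[size]]                             # ordinal
            + [word_count]                              # integer-valued
            + [height])                                 # real-valued

print(encode("AB", "medium", 120, 1.75))
# [0, 0, 1, 0, 1, 120, 1.75]
```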

Here is a set of training examples with five features (Action, Author, Thread, Length, and Where); the different rows are the different instances e1, e2, ..., e8.

In supervised learning we have a training set. The learning algorithm uses the training set to come up with a model, or hypothesis (introduced in more detail later). In the testing phase, given a new instance, we use the hypothesis to predict the value of Y.

In the diagram above, in the training phase we take the input together with its label, use a feature extractor to extract the features of the input, and feed them to the machine learning algorithm. Similarly, in the testing phase, given an input, we use the same feature extractor to extract its features and feed them to the classifier model to get a label.
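The two phases can be sketched as code: a training function that turns labeled inputs into a model, and a classify function that runs the same feature extractor and queries the model. The word-count extractor and the nearest-centroid model below are made-up stand-ins for whatever a real system would use.

```python
def extract_features(text):
    # Hypothetical extractor: word count and '!' count stand in for real features
    return (len(text.split()), text.count("!"))

def train(labeled_texts):
    """Training phase: labeled inputs -> features -> model (per-class centroid)."""
    sums, counts = {}, {}
    for text, label in labeled_texts:
        f = extract_features(text)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(v / counts[lbl] for v in s) for lbl, s in sums.items()}

def classify(model, text):
    """Testing phase: extract the same features, return the nearest class."""
    f = extract_features(text)
    return min(model,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(model[lbl], f)))

model = train([("buy now!!!", "spam"), ("act fast!!", "spam"),
               ("see you at the meeting tomorrow", "ham"),
               ("notes from today's lecture attached", "ham")])
print(classify(model, "limited offer!!"))  # "spam"
```

Note that `extract_features` is shared by both phases, which is exactly the point of the diagram: the classifier only ever sees feature vectors, never raw inputs.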

Classification Learning
Task T:
   - input: a set of instances d1, ..., dn
               an instance has a set of features
               we can represent an instance as a vector d = <x1, ..., xn>
   - output: a set of predictions ŷ1, ..., ŷn
          each drawn from a fixed set of constant values:
             - {+1, -1}, or {cancer, healthy}, or {rose, hibiscus, jasmine, ...}, or ...
Performance metric P:
 probability of a wrong prediction on an example drawn from D
Experience E:
    a set of labeled examples (x, y), where y is the true label for x
    ideally, examples should be sampled from some fixed distribution D.
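The performance metric P above is just the fraction of wrong predictions on labeled examples. A direct sketch, using a made-up hypothesis over the label set {+1, -1}:

```python
def error_rate(predict, labeled_examples):
    """P: fraction of wrong predictions on examples (x, y) drawn from D."""
    wrong = sum(1 for x, y in labeled_examples if predict(x) != y)
    return wrong / len(labeled_examples)

# Toy hypothesis: predict +1 when the single feature is positive, else -1
predict = lambda x: +1 if x > 0 else -1

# Labeled examples (x, y); the last one disagrees with the hypothesis
examples = [(0.5, +1), (-1.2, -1), (2.0, +1), (-0.1, +1)]
print(error_rate(predict, examples))  # 0.25
```

On a finite test set this is an estimate of the true probability of error; the "sampled from some fixed distribution D" condition is what makes that estimate meaningful.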

The figure above shows how to get data for the learning problem.

Hypothesis Space
  One way to think about a supervised learning machine is as a device that explores a "hypothesis space".
  - Each setting of the parameters in the machine is a different hypothesis about the function that maps input vectors to output vectors.

  Concept c: a subset of objects from X (c is unknown).
  Target function f: maps each instance x ∈ X to a target label y ∈ Y.
  Example (x, y): instance x with label y = f(x).
  Training data S: the collection of examples observed by the learning algorithm,
     used to discover potentially predictive relationships.
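Treating each parameter setting as a separate hypothesis, a one-parameter example makes the idea concrete: each threshold value defines a different function from x to {+1, -1}, and learning is a search through those settings for one that fits the training data S. The thresholds and data below are invented for illustration.

```python
def make_hypothesis(theta):
    """Each value of theta is a different hypothesis h: x -> {+1, -1}."""
    return lambda x: +1 if x >= theta else -1

def best_in_space(thresholds, training_data):
    """Explore the hypothesis space: pick the setting with fewest training errors."""
    def errors(theta):
        h = make_hypothesis(theta)
        return sum(1 for x, y in training_data if h(x) != y)
    return min(thresholds, key=errors)

# Made-up training data S: negatives below ~1, positives above
S = [(0.2, -1), (0.8, -1), (1.5, +1), (2.3, +1)]
theta = best_in_space([0.0, 0.5, 1.0, 1.5, 2.0], S)
print(theta)  # 1.0 (the first threshold with zero training errors)
```

Real learning machines search vastly larger, continuous parameter spaces, but the picture is the same: one parameter setting, one hypothesis about the input-to-output mapping.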
