# Support vector regression scikit-learn tutorial

1. SVR or Support Vector Regressor
2. Concept
3. Example
4. Video Tutorial

1 Support Vector Regressor or SVR

In this blog, we’ll be looking at one of the algorithms used for regression: support vector regression, or simply SVR. We will discuss what a support vector regressor is and how a support vector machine can be used as a regressor, followed by an example. The main feature of how support vector machines work is that they find a hyperplane with the maximum margin.

2 Concept

Consider a classification problem first: the support vector machine finds a hyperplane, the main line of division, that separates the area above the line from the area below it, with the maximum-margin boundaries shown as the dotted lines on either side. SVR turns this idea around: instead of separating classes, it fits the hyperplane through the data and tries to keep as many points as possible inside the margin. If I have a linear distribution of data, the points are fitted by a straight hyperplane with a margin on each side, which gives the regressor. If I have a nonlinear distribution, the hyperplane changes shape accordingly (via a kernel).
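As a minimal sketch (not from the original post), fitting a linear-kernel SVR to noisy, linearly distributed data recovers the underlying trend and keeps most points inside the epsilon margin; the data here is synthetic and illustrative only:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic, linearly distributed data (illustrative only).
rng = np.random.default_rng(0)
X = np.linspace(0.0, 10.0, 50).reshape(-1, 1)
y = 3.0 * X.ravel() + 2.0 + rng.normal(scale=0.1, size=50)

# epsilon defines the margin (tube) around the hyperplane: points
# inside it incur no loss in SVR's objective.
svr = SVR(kernel="linear", epsilon=0.5, C=100.0)
svr.fit(X, y)

inside = np.abs(svr.predict(X) - y) <= 0.5
print(svr.coef_.ravel()[0], inside.mean())
```

The fitted slope comes out close to the true slope of 3, with nearly all points inside the tube.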

3 Example

Let’s take an example of predicting salaries using SVR. First, import the necessary libraries.
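The post’s own import cell isn’t shown; a typical set for this walkthrough would be (assuming scikit-learn, NumPy, and Matplotlib are installed):

```python
import numpy as np
import matplotlib.pyplot as plt

from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
```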

The position level (the years of experience) is the feature, and it forms the column of X. The salary value that we have to predict is taken as y.
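The original dataset isn’t shown in the post, so here is a hypothetical stand-in with the same shape: position levels 1–10 as the single feature column X and the corresponding salaries as y.

```python
import numpy as np

# Hypothetical salary data (the post's actual CSV is not shown):
# position levels 1-10 as the single feature column.
X = np.arange(1, 11, dtype=float).reshape(-1, 1)
y = np.array([45_000, 50_000, 60_000, 80_000, 110_000,
              150_000, 200_000, 300_000, 500_000, 1_000_000], dtype=float)
```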

We use a scaling function to scale the feature columns of both X and y, since SVR is sensitive to the scale of the data.
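A sketch of this step with `StandardScaler`, using the hypothetical data from above:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical data standing in for the post's salary table.
X = np.arange(1, 11, dtype=float).reshape(-1, 1)
y = np.array([45_000, 50_000, 60_000, 80_000, 110_000,
              150_000, 200_000, 300_000, 500_000, 1_000_000], dtype=float)

# SVR is not scale-invariant, so we standardize the feature and the
# target separately. StandardScaler expects 2-D input, hence the reshape.
sc_X = StandardScaler()
sc_y = StandardScaler()
X_scaled = sc_X.fit_transform(X)
y_scaled = sc_y.fit_transform(y.reshape(-1, 1)).ravel()
```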

Now we import the support vector machine module, and from that module we take the support vector regression class.

We fit X and y to the model. The kernel is ‘rbf’ by default, but there are other kinds of kernels, and we can choose whichever fits the data best; let’s look at those examples a little later. Now we predict the y values.
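The fit-and-predict step can be sketched as follows (hypothetical data as above; predictions are made in scaled space and then inverted back to salary units):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical data and scaling, as in the previous steps.
X = np.arange(1, 11, dtype=float).reshape(-1, 1)
y = np.array([45_000, 50_000, 60_000, 80_000, 110_000,
              150_000, 200_000, 300_000, 500_000, 1_000_000], dtype=float)
sc_X, sc_y = StandardScaler(), StandardScaler()
X_scaled = sc_X.fit_transform(X)
y_scaled = sc_y.fit_transform(y.reshape(-1, 1)).ravel()

# kernel='rbf' is already the default; written out here for clarity.
regressor = SVR(kernel="rbf")
regressor.fit(X_scaled, y_scaled)

# Predict in scaled space, then invert the target scaling to get salaries.
y_pred = sc_y.inverse_transform(
    regressor.predict(X_scaled).reshape(-1, 1)
).ravel()
```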

The above code snippet is used to plot the graph.
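The plotting snippet itself isn’t included in the post; a sketch along these lines (Matplotlib, with the hypothetical data above) would reproduce the figure:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; omit when running interactively
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical data, scaling, and model as in the earlier steps.
X = np.arange(1, 11, dtype=float).reshape(-1, 1)
y = np.array([45_000, 50_000, 60_000, 80_000, 110_000,
              150_000, 200_000, 300_000, 500_000, 1_000_000], dtype=float)
sc_X, sc_y = StandardScaler(), StandardScaler()
regressor = SVR(kernel="rbf")
regressor.fit(sc_X.fit_transform(X),
              sc_y.fit_transform(y.reshape(-1, 1)).ravel())

# Evaluate the fitted curve on a fine grid for a smooth line.
grid = np.linspace(X.min(), X.max(), 100).reshape(-1, 1)
curve = sc_y.inverse_transform(
    regressor.predict(sc_X.transform(grid)).reshape(-1, 1)
).ravel()

plt.scatter(X.ravel(), y, color="red", label="actual salaries")
plt.plot(grid.ravel(), curve, color="blue", label="SVR (rbf) fit")
plt.xlabel("Position level")
plt.ylabel("Salary")
plt.legend()
plt.savefig("svr_rbf.png")
```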

As we can see, this is a good fit for the distribution of values we have, so the default ‘rbf’ kernel suits the given data. Now let’s see what happens when the kernel is different.

1. Let’s take the kernel value as ‘linear’.

The linear kernel doesn’t fit the data we have.

2. Let’s try the ‘poly’ (polynomial) kernel.

From this we can see that ‘rbf’ is the most suitable kernel for our data: its predictions come closest to the actual values.
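The kernel experiments above can be sketched as a small comparison loop over the hypothetical data; a lower training error means the fitted curve passes closer to the actual salaries. Note that scikit-learn’s name for the polynomial kernel is `'poly'`, not `'polynomial'`.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical salary data, scaled as before.
X = np.arange(1, 11, dtype=float).reshape(-1, 1)
y = np.array([45_000, 50_000, 60_000, 80_000, 110_000,
              150_000, 200_000, 300_000, 500_000, 1_000_000], dtype=float)
sc_X, sc_y = StandardScaler(), StandardScaler()
Xs = sc_X.fit_transform(X)
ys = sc_y.fit_transform(y.reshape(-1, 1)).ravel()

# Fit one SVR per kernel and compare mean absolute error on the
# training points. 'poly' is scikit-learn's polynomial kernel.
errors = {}
for kernel in ("linear", "poly", "rbf"):
    model = SVR(kernel=kernel).fit(Xs, ys)
    pred = sc_y.inverse_transform(model.predict(Xs).reshape(-1, 1)).ravel()
    errors[kernel] = float(np.mean(np.abs(pred - y)))
print(errors)
```

On data with a steep, nonlinear trend like this, the rbf curve typically tracks the points most closely, matching the conclusion above.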

So, this is all about Support Vector Regressor.