# How to implement regression using Keras

1. Regression in Deep Learning
2. Code Example
3. Video Tutorial

1. Regression in Deep Learning:

1. In this blog, we are going to construct an MLP (multilayer perceptron) for regression analysis.
2. Regression is a basic type of supervised learning algorithm that tries to find the best possible straight line to describe the main trend in the training data.
3. Regression analysis can help us model the relationship between a dependent variable and one or more independent variables.
4. Regression models are used to predict a continuous value: they tell us how much the dependent variable can be expected to move when the independent variables move. The weights in a regression model define how important each variable is for predicting the dependent variable.
5. We use the mean squared error as the loss function and gradient descent as the optimizer.
1. This is the basic MLP model we use in regression analyses. Here x1, x2, … are the inputs and w1, w2, … are the weights.
2. The output is the summation of all x*w plus a bias, and it passes through an activation function.
3. For regression we use a linear activation function; for classification it would be a sigmoid activation function.
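The weighted-sum picture above, together with the MSE loss and a gradient-descent update, can be sketched in plain NumPy (the numbers here are illustrative, not from the blog):

```python
import numpy as np

# A single neuron: weighted sum of the inputs plus a bias,
# passed through an activation function.
x = np.array([1.0, 2.0])     # inputs x1, x2
w = np.array([0.5, -0.25])   # weights w1, w2
b = 0.1                      # bias

z = np.dot(x, w) + b         # summation of x*w, plus the bias

linear = z                           # linear activation (regression)
sigmoid = 1.0 / (1.0 + np.exp(-z))   # sigmoid activation (classification)

# Mean squared error against a target value y
y = 0.5
mse = (linear - y) ** 2

# One gradient-descent step on the weights (learning rate 0.01):
# d(mse)/dw_i = 2 * (z - y) * x_i
grad_w = 2 * (linear - y) * x
w = w - 0.01 * grad_w
```

With a linear activation the neuron is exactly a linear regression; swapping in the sigmoid turns the same structure into a logistic-regression-style classifier.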

2. Code Example:

Dataset:

1. In this blog, we will use the Boston Housing dataset, which was collected by the U.S. Census Service concerning housing in the area of Boston.
2. This dataset contains 506 samples with fourteen columns each (thirteen features plus the target).

The goal of our regression problem is to use these thirteen features to predict the final column, which is the price of the house.
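For reference, the thirteen predictors are conventionally labelled as follows (Keras does not ship column names with the dataset, so these come from the original StatLib documentation; the target column is MEDV, the median home price in thousands of dollars):

```python
# Conventional column names for the Boston Housing dataset.
features = [
    "CRIM",     # per-capita crime rate by town
    "ZN",       # proportion of residential land zoned for large lots
    "INDUS",    # proportion of non-retail business acres per town
    "CHAS",     # Charles River dummy variable (1 if tract bounds river)
    "NOX",      # nitric oxide concentration
    "RM",       # average number of rooms per dwelling
    "AGE",      # proportion of owner-occupied units built before 1940
    "DIS",      # weighted distances to Boston employment centres
    "RAD",      # index of accessibility to radial highways
    "TAX",      # full-value property-tax rate per $10,000
    "PTRATIO",  # pupil-teacher ratio by town
    "B",        # 1000(Bk - 0.63)^2, where Bk is the proportion of Black residents
    "LSTAT",    # percentage of lower-status population
]
target = "MEDV"  # median value of owner-occupied homes, in $1000s
```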

Code in Jupyter:

1. First, we import the Sequential model API from Keras. We are also going to use Dense and Dropout layers, so we import them from Keras as well.
2. Now we load the Boston Housing dataset from Keras. Its load_data function loads the data and splits it into a training set and a test set.
1. Our training set contains 80 percent of the data, so we have 404 samples of 13 features in the training set and 102 samples in the test set.
2. Then we normalize our training and test data. First, we fit the scaler on the training dataset, which gives a scaler holding the mean and standard deviation of the training data. Then we call transform to scale both the training and test sets.
3. Now we build a Sequential model with Dense and Dropout layers. First, we construct a Dense layer with 64 neurons.
4. As this is the first layer, we have to specify the input dimension. Since the data contains 13 features, the input dimension will be 13. So the first hidden layer has 13 inputs and 64 outputs. We use ReLU as our activation function.
5. The next one is another Dense layer, with 32 neurons and the same activation function.
6. Then a Dropout layer with rate 0.2. Dropout is a technique used to prevent the model from overfitting: it randomly drops 20 percent of the layer's inputs during training.
7. After that, we have another Dense layer with 16 neurons. Finally, we have a Dense output layer with a single neuron, using the default activation function, which is linear.
8. Now we compile our regression model. We use MSE, the mean squared error, as the loss. We use the Adam optimizer, which keeps exponential moving averages of the gradient and the squared gradient; its parameters control the decay rates of these moving averages.
9. We take the mean absolute error as a metric. The metric has nothing to do with model training; it is just a user-friendly value that is easier to interpret.
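Putting the steps above together, here is a minimal sketch of the whole pipeline. To keep it self-contained and runnable offline, it uses random data with the same shapes as the Boston Housing split (404/102 samples, 13 features); in the blog itself, `keras.datasets.boston_housing.load_data()` supplies the real data, as shown in the comment:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout

# Synthetic stand-in with the same shapes as the Boston Housing split.
# In the blog, the real data comes from:
#   (x_train, y_train), (x_test, y_test) = keras.datasets.boston_housing.load_data()
rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(404, 13)), rng.normal(size=(404,))
x_test, y_test = rng.normal(size=(102, 13)), rng.normal(size=(102,))

# Fit the scaler on the training set only, then transform both sets.
scaler = StandardScaler().fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)

# Sequential MLP: 64 -> 32 -> (dropout 0.2) -> 16 -> 1 (linear output).
model = Sequential([
    Input(shape=(13,)),            # 13 input features
    Dense(64, activation="relu"),
    Dense(32, activation="relu"),
    Dropout(0.2),                  # drops 20% of inputs during training
    Dense(16, activation="relu"),
    Dense(1),                      # default linear activation for regression
])

# MSE loss, Adam optimizer, MAE reported as a human-readable metric.
model.compile(loss="mse", optimizer="adam", metrics=["mae"])

model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
predictions = model.predict(x_test, verbose=0)
```

With the real dataset, the only change needed is replacing the random arrays with the `load_data()` call; the scaling, model, and compile steps are exactly the ones walked through above.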