How to implement binary classification using Keras

Table of Contents

  1. MLP for binary classification
  2. Binary classification operation
  3. The code example in Keras
  4. Video Tutorial

1. MLP for binary classification

  1. Today we are going to focus on our first classification algorithm: binary classification with Keras.
  2. Binary classification is one of the most common and frequently tackled problems in machine learning. In its simplest form, we try to classify an entity into one of two possible classes.
  3. The two classes can be arbitrarily assigned a zero or a one for mathematical representation, but in a real scenario they stand for true or false. In a neural network, we usually take multiple independent variables to predict a single dependent variable.

Examples:

  1. After analyzing emails, our model can classify an email as spam or not.
  2. Given a patient's medical history, our model can help us predict which patients are at risk of heart disease.

2. Binary classification operation

Figure-1
  1. Figure 1 shows a multilayer perceptron for classification. x1 and x2 are the inputs, which are basically the independent variables.
  2. The w's are the weights applied to the inputs. They produce a weighted sum of the inputs (for example, x1*w1 + x2*w2), which becomes the input of the first node of the first hidden layer; a bias term is also added to it. All the nodes of the hidden layers follow the same procedure, up to the final output layer.
  3. In the output layer, an activation function decides the final result, whether the task is regression or classification.
  4. Inputs x1 and x2 are the input nodes for the features that represent an example we want our neural network to learn from. w1 and w2 are the weight values associated with x1 and x2 respectively, and they control the influence of each input. The z node computes a linear function of all the inputs coming into it (a small sketch of this computation is shown below).
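As a rough sketch of what a single node computes (the numeric values and variable names below are purely illustrative and not from the tutorial), in plain Python it looks like this:

    import numpy as np

    x = np.array([0.5, 1.2])    # inputs x1, x2 (illustrative values)
    w = np.array([0.8, -0.3])   # weights w1, w2 (illustrative values)
    b = 0.1                     # bias term

    z = np.dot(x, w) + b                  # linear combination: x1*w1 + x2*w2 + b
    hidden = max(0.0, z)                  # ReLU, as used in the hidden layers later
    output = 1.0 / (1.0 + np.exp(-z))     # sigmoid squashes z into (0, 1) for a binary decision
    print(z, hidden, output)

The point is only that each node computes a weighted sum plus a bias, and an activation function then shapes that sum into the node's output.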

About the dataset:

Figure-2
  1. We will use the heart disease dataset. It contains 3,658 samples with a total of 16 variables.
  2. We will predict the ten-year risk outcome, which is the sixteenth variable, using the other 15 variables. The first three variables are demographic, and the smoking-related variables (whether the person smokes and how many cigarettes per day) are behavioural.
  3. The next four variables cover medical history, and the final five variables are related to risk factors. Our target variable contains only zeros and ones, so this is a binary classification problem.

3. The code example in Keras

  1. First, we import the Sequential model API from Keras. Since we are using Dense and Dropout layers, we import them from Keras as well.
  2. To split our dataset, we use the train_test_split function, which is available in sklearn.model_selection.
  3. plot_model will help us display the model architecture, and finally we import loadtxt from NumPy, which will be used to load our dataset.
  4. We are going to work with the heart disease dataset, which is available in our working directory. Let's inspect our dataset: it is a comma-separated file.
  5. It has a total of 3,658 samples with 16 variables. We use the first 15 variables to predict the last one.
  6. First, we split the dataset into inputs and outputs: X contains the 15 input columns (the first to the fifteenth), and y contains the output column (the ten-year risk variable). Next, we split the dataset into a training set and a test set using the train_test_split function with test_size set to 0.3, which means 70 percent of the data goes into the training set and 30 percent into the test set (a code sketch of these steps appears right after this list).
  7. Now we construct a Sequential model with Dense and Dropout layers. First, we add a Dense layer with 62 neurons. As this is the first layer, we have to specify the input dimension; since our data contains 15 features, the input dimension is 15.
  8. We use ReLU as the activation function. The next layer is another Dense layer with 32 neurons, followed by a Dropout layer with a rate of 0.2. Dropout is a technique used to prevent the model from overfitting.
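Below is a minimal sketch of the import, loading, and splitting steps described above. The file name 'framingham.csv' is an assumption (use whatever your heart disease CSV is called), and the sketch assumes the file has one header row and no missing values:

    from numpy import loadtxt
    from sklearn.model_selection import train_test_split
    from keras.models import Sequential
    from keras.layers import Dense, Dropout
    from keras.utils import plot_model

    # load the comma-separated dataset (file name assumed; skip the header row)
    dataset = loadtxt('framingham.csv', delimiter=',', skiprows=1)

    # first 15 columns are inputs, the sixteenth column is the target
    X = dataset[:, 0:15]
    y = dataset[:, 15]

    # 70/30 split between training and test sets
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)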
Figure-3
  1. This Dropout layer will randomly drop 20 percent of its inputs during model training. After that, we have another Dense layer with 16 neurons and the ReLU activation function.
  2. Finally, we have a Dense output layer with the sigmoid activation function; since our target variable contains only zeros and ones, sigmoid is a natural choice.
  3. So we have one input layer, three hidden layers, and one Dense output layer. Now we compile the model. As this is binary classification, we use binary cross-entropy as the loss function and set Adam as the optimizer, which computes an exponential moving average of the gradient and of the squared gradient. We also use accuracy as the metric function. A sketch of the model definition and compilation is shown below.
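A sketch of the model described above, under the assumption that the layer sizes are exactly as listed (62, 32, a 0.2 dropout, 16, and a single sigmoid output):

    model = Sequential()
    model.add(Dense(62, input_dim=15, activation='relu'))    # first hidden layer, 15 input features
    model.add(Dense(32, activation='relu'))                   # second hidden layer
    model.add(Dropout(0.2))                                    # randomly drop 20% of inputs during training
    model.add(Dense(16, activation='relu'))                    # third hidden layer
    model.add(Dense(1, activation='sigmoid'))                  # output layer for 0/1 prediction

    # binary cross-entropy loss, Adam optimizer, accuracy as the metric
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

    # optional: render the architecture to a file (requires pydot and graphviz)
    plot_model(model, to_file='model.png', show_shapes=True)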
Figure-4
  1. It's time to train our model on the training dataset. We set epochs to 100, which means we want to train the model for 100 passes over the training data.
  2. Now we evaluate the model; since our loss function was binary cross-entropy and our metric was accuracy, the evaluation function returns those two values (see the sketch after this list).
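A sketch of the training and evaluation calls, assuming the model and the split data from the earlier sketches:

    # train for 100 epochs on the training set
    model.fit(X_train, y_train, epochs=100)

    # evaluate returns the binary cross-entropy loss and the accuracy
    loss, accuracy = model.evaluate(X_test, y_test)
    print('Test loss: %.3f, test accuracy: %.3f' % (loss, accuracy))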
Figure-5

Finally, we predict outcomes from the model and compare the predicted outcomes with the expected outcomes, displaying only the first 10 results; a sketch of this step follows.
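One way to do this comparison, sketched under the same assumptions as above (model.predict returns probabilities, so we threshold them at 0.5 to get class labels):

    # predict probabilities, threshold at 0.5 to obtain 0/1 labels
    predictions = (model.predict(X_test) > 0.5).astype('int32')

    # compare the first 10 predictions with the expected outcomes
    for i in range(10):
        print('predicted: %d, expected: %d' % (predictions[i][0], y_test[i]))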

Figure-6

So our model predicts outcomes that closely match the expected results.

4. Video Tutorial
