**Table of Contents**

- MLP for binary classification
- Binary classification operation
- The code example in Keras
- Video Tutorial

**1. MLP for binary classification**

- Today we are going to focus on our first classification algorithm: binary classification with Keras.
- Binary classification is one of the most common and frequently tackled problems in machine learning. In its simplest form, the goal is to classify an entity into one of two possible classes.
- The two classes can be arbitrarily encoded as zero and one for mathematical representation, but in a real scenario they are interpreted as true or false. In a neural network, we usually take multiple independent variables to predict a single dependent variable.

**Examples:**

- After analyzing an email, our model can decide whether it is spam or not.
- Given a patient's full medical history, our model can help us predict which patients are at risk of heart failure.

**2. Binary classification operation**

- These are examples of a multilayer perceptron for classification: x1 and x2 are the inputs, which are basically the independent variables.
- The w's are the weights on the inputs, and they produce a weighted sum such as x1·w1 + x2·w2 + x3·w3. That sum, plus a bias term, becomes the input of the first node of the first hidden layer. All the nodes of the hidden layers follow the same procedure, through to the final output layer.
- In the output layer, an activation function decides the solution, depending on whether the task is regression or classification.
- Inputs x1 and x2 are the input nodes for the features that represent an example we want our neural net to learn from. w1 and w2 are the weight values associated with inputs x1 and x2 respectively, and they control the influence of each input. The z node computes a linear function of all the inputs coming into it.
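The weighted-sum-plus-activation step described above can be sketched in a few lines of NumPy. The numbers here are made up purely for illustration:

```python
import numpy as np

# Toy example: one node receiving three inputs (values are arbitrary).
x = np.array([0.5, -1.0, 2.0])   # inputs x1, x2, x3
w = np.array([0.4, 0.3, -0.2])   # weights w1, w2, w3
b = 0.1                          # bias term

# The z node: linear combination x1*w1 + x2*w2 + x3*w3 + b
z = np.dot(x, w) + b

def sigmoid(z):
    """Squash z into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

y_hat = sigmoid(z)   # activation turns the sum into a probability-like value
print(z, y_hat)      # z = -0.4, y_hat ≈ 0.4013
```

In a classification network, the output layer applies exactly this kind of squashing activation so the final value can be read as a class probability.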

**About the dataset:**

- We will use a heart disease dataset; it contains 3,658 samples of 16 variables in total.
- We will predict the ten-year risk of heart disease, which is the sixteenth variable, using the other 15 variables. The first three variables are demographic; the next two (whether the patient currently smokes, and cigarettes smoked per day) are behavioral variables.
- The next variables cover medical history, and the final ones relate to risk factors. Our target variable contains only zeros and ones, so this is a binary classification problem.

**3. The code example in Keras**

- First, we import the Sequential model API from Keras; since we are using Dense and Dropout layers, we have to import those from Keras as well.
- To split our dataset we will use the train_test_split function, which is available in sklearn.model_selection.
- plot_model will help us display the model, and finally we import loadtxt from NumPy, which will be used to load our dataset.
- We are going to work with the heart disease dataset, which is available in our working directory. Let's inspect it: it is a comma-separated dataset.
- It has a total of 3,658 samples of 16 variables; we use the first 15 variables to predict the last one.
- First, we split the dataset into inputs and outputs: X contains the first 15 columns as inputs, and y contains the output, the sixteenth column. Next, we split the whole dataset into a training set and a test set.
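A sketch of the loading and splitting steps is below. The CSV filename is a placeholder (shown commented out), and a random array of the same shape stands in for the real file so the snippet runs on its own:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# The tutorial loads its comma-separated file with numpy's loadtxt; the
# filename here is a placeholder -- substitute the real path:
# data = np.loadtxt('heart_disease.csv', delimiter=',', skiprows=1)

# Stand-in array with the same shape (3,658 samples, 16 columns) so the
# sketch runs without the file.
rng = np.random.default_rng(0)
data = rng.random((3658, 16))

X = data[:, 0:15]   # first 15 columns: input features
y = data[:, 15]     # sixteenth column: the binary target

# test_size=0.3 -> 70% of the rows go to training, 30% to testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

print(X_train.shape, X_test.shape)
```

Fixing random_state makes the split reproducible between runs, which is useful when comparing model configurations.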

- Using the train_test_split function, we set test_size to 0.3, which means 70 percent of the data will be in the training set and 30 percent in the test set.
- Now we construct a Sequential model with Dense and Dropout layers. First, we construct a Dense layer with 62 neurons. As this is the first layer, we have to specify the input dimension; since our data contains 15 features, the input dimension will be 15.
- We are using ReLU as our activation function. The next one is another Dense layer, with 32 neurons, followed by a Dropout layer with a rate of 0.2. Dropout is a technique used to prevent the model from overfitting.

- This Dropout layer will drop 20 percent of its inputs at training time. After that, we have another Dense layer, with 16 neurons and the ReLU activation function.
- Finally, we have a Dense output layer with the sigmoid activation function; as our target variable contains only zeros and ones, sigmoid is the best choice.
- So we have one input layer, three hidden layers, and one dense output layer. Now we compile the model. As this is binary classification, we use binary cross-entropy as the loss function. We set Adam as the optimizer; it calculates an exponential moving average of the gradient and of the squared gradient. We also use accuracy as a metric.
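Putting the layer-by-layer description together, a minimal sketch of the model definition and compilation looks like this (using a keras Input layer to declare the 15 features rather than the input_dim argument, which works across recent Keras versions):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout

model = Sequential([
    Input(shape=(15,)),              # 15 input features
    Dense(62, activation='relu'),    # first hidden layer
    Dense(32, activation='relu'),    # second hidden layer
    Dropout(0.2),                    # randomly drops 20% of units in training
    Dense(16, activation='relu'),    # third hidden layer
    Dense(1, activation='sigmoid'),  # single probability output in (0, 1)
])

# Binary cross-entropy loss, Adam optimizer, accuracy as the metric
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.summary()
```

The sigmoid output pairs naturally with binary cross-entropy: the model emits a probability of class 1, and the loss penalizes it directly on that probability.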

- It's time to train our model on the training dataset. We set epochs to 100, which means we want to train the model for 100 passes over the data.
- Now we evaluate the model. As our loss function was binary cross-entropy and our metric was accuracy, the evaluate function will return those two values.

Finally, we predict outcomes with the model and compare the predicted outcomes with the expected ones. We will display only the first 10 results.
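The training, evaluation, and prediction steps can be sketched end to end. Random stand-in data replaces the real split here (so the reported accuracy is meaningless), and epochs is cut from the tutorial's 100 to keep the run short:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout

# Random stand-in for the real train/test split (only the shapes matter).
rng = np.random.default_rng(0)
X_train = rng.random((256, 15)).astype('float32')
y_train = rng.integers(0, 2, 256).astype('float32')
X_test = rng.random((64, 15)).astype('float32')
y_test = rng.integers(0, 2, 64).astype('float32')

model = Sequential([
    Input(shape=(15,)),
    Dense(62, activation='relu'),
    Dense(32, activation='relu'),
    Dropout(0.2),
    Dense(16, activation='relu'),
    Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])

# The tutorial trains with epochs=100; a few epochs suffice for the sketch.
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)

# evaluate() returns the loss and the accuracy metric, in that order
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print(f"loss={loss:.3f}  accuracy={accuracy:.3f}")

# Predict probabilities, threshold at 0.5, and compare the first 10
# predictions with the expected labels.
predicted = (model.predict(X_test[:10], verbose=0) > 0.5).astype(int).ravel()
expected = y_test[:10].astype(int)
print("predicted:", predicted)
print("expected: ", expected)
```

With the real heart disease split in place of the random arrays, the same three calls (fit, evaluate, predict) reproduce the workflow the text walks through.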

So our model predicts outcomes that closely match the expected results.