How to Compile and Train a Model Using Keras

Table of Contents

  1. Model Compiling
  2. Model Training
  3. Model Evaluation and Prediction
  4. Implementation with Keras
  5. Video Tutorial

1. Model Compiling

  1. Deep learning is not a single job; we have to go through a good number of complex steps.
  2. For data selection, we can choose built-in datasets or datasets from external sources.
  3. Next is data processing, which differs for sequence, text, and image data. After that comes model generation: we can choose the Sequential or Functional API and different layers to build a new model. Then comes model compilation, where we have to select a problem-specific loss function and an optimizer. In model training, we fit our training data on the compiled model.
  4. Model evaluation then generates a prediction for each input and output pair and collects scores, which gives us an idea of how well we have modeled the data.
  5. Finally, we predict outcomes from our model. We have already worked through the first three steps in previous blogs.
Figure-1
  1. Keras models provide the compile method to compile the model. It takes three arguments: here the loss function is set to binary cross-entropy, the optimizer to Adam, and the metrics to accuracy (see the sketch after this list).
  2. We have many other options for loss functions, and we need to select them based on our problem. We use mean squared error or mean absolute error for regression problems; hinge loss or binary cross-entropy is used for classification.
  3. We also have a good number of choices among the optimizers. The first one is SGD, the stochastic gradient descent optimizer, which includes support for momentum, a configurable learning rate, and Nesterov momentum.
  4. RMSprop maintains per-parameter learning rates that are adapted based on an average of recent magnitudes of the gradients for each weight. Adagrad, or adaptive gradient, is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training.
  5. The more updates a parameter receives, the smaller its learning rate becomes. Adadelta is a more robust extension of Adagrad that adapts learning rates based on a moving window of gradient updates.
  6. Adam combines the benefits of both Adagrad and RMSprop. It calculates an exponential moving average of the gradient and of the squared gradient, and the parameters beta_1 and beta_2 control the decay of these moving averages.
  7. Now the metrics: a metric function is similar to a loss function, except that the results from evaluating a metric are not used when training the model. Keras provides several built-in metrics, and we can choose any of them.
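The following is a minimal sketch of this compile step, assuming TensorFlow's bundled Keras; the toy model, learning rate, and beta values are illustrative defaults, not taken from the article:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import Adam

    # A toy binary classifier, just so there is something to compile.
    model = Sequential()
    model.add(Dense(8, input_dim=8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))

    # Loss = binary cross-entropy, optimizer = Adam, metrics = accuracy.
    # beta_1 and beta_2 control the moving averages of the gradient and
    # the squared gradient, as described above.
    model.compile(loss='binary_crossentropy',
                  optimizer=Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999),
                  metrics=['accuracy'])

Any of the other optimizers discussed above ('sgd', 'rmsprop', 'adagrad', 'adadelta') can be passed by name in place of the Adam object.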

2. Model Training

  1. Models are trained using the fit function. We have to specify the input and output data, and the function takes two important arguments, epochs and batch_size (see the sketch below the figure).
  2. The epochs argument specifies the number of complete passes the model makes over the training dataset.
  3. The batch_size is the number of samples processed before the model's weights are updated. Suppose we have a dataset with a hundred samples and we choose a batch size of five.
  4. It means the dataset will be divided into twenty batches, each with five samples.
  5. We set verbose to zero, which means training will run silently.
Figure-2
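A minimal sketch of the fit call, continuing from the compiled model above; the data here is randomly generated just to mirror the hundred-sample example, and all variable names are placeholders:

    import numpy as np

    # Hypothetical data: 100 samples with 8 features and binary labels.
    X = np.random.rand(100, 8)
    y = np.random.randint(0, 2, size=(100,))

    # batch_size=5 splits the 100 samples into 20 batches of 5 each;
    # verbose=0 makes training run silently.
    history = model.fit(X, y, epochs=10, batch_size=5, verbose=0)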

3. Model Evaluation and Prediction

  1. Keras provides the evaluate function, which performs the evaluation of the model. It returns the loss of the model and the accuracy of the model on the dataset.
  2. Finally, model prediction: Keras provides the predict method to obtain predictions from the trained model. It returns the predictions as NumPy arrays. Both are shown in the sketch below.
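Continuing the same sketch, where X and y are the placeholder arrays from above:

    # evaluate returns the loss plus every compiled metric (here, accuracy).
    loss, accuracy = model.evaluate(X, y, verbose=0)
    print('loss: %.3f, accuracy: %.3f' % (loss, accuracy))

    # predict returns a NumPy array with one sigmoid output in [0, 1]
    # per input sample.
    predictions = model.predict(X, verbose=0)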

4. Implementation with Keras

  1. We are going to construct a model, then compile and train it. After that, we evaluate our model and finally predict outcomes using the same model.
  2. First, we import all the required methods, layers, and models from Keras, as we use a Sequential model.
  3. We load our diabetes dataset, which is available in our working directory (a sketch follows the figure).
Figure-3
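A sketch of the import and loading step, assuming the Pima Indians diabetes CSV sits in the working directory; the filename is an assumption, not taken from the article:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    # Load the dataset: 768 rows, 9 comma-separated columns
    # (8 input features plus 1 binary output).
    dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')  # filename assumed

    # Split into inputs (first eight columns) and output (final column).
    X = dataset[:, 0:8]
    y = dataset[:, 8]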
  1. Our dataset contains seven hundred sixty-eight samples with nine columns each. We split it into the inputs, the first eight feature columns, and the final column as the output.
  2. We construct our model using the Sequential API, keeping the number of dense layers small to avoid complexity. In the final layer we use the sigmoid activation function, so the output will be in the range of zero to one.
  3. It has three layers: in the first layer, the input and output dimensions are the same, eight; in the second layer, the input dimension is eight and the output dimension is two.
  4. In the final dense layer, the input dimension is two and the output dimension is one.
  5. Now we compile our model. As this is a classification problem, we use binary cross-entropy as the loss function (see the sketch after the figure).
Figure-4
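A sketch of the model construction and compile step matching the dimensions above; the hidden-layer activations are an assumption, since the article only specifies sigmoid for the final layer:

    # Three dense layers: 8 -> 8, 8 -> 2, 2 -> 1.
    model = Sequential()
    model.add(Dense(8, input_dim=8, activation='relu'))   # activation assumed
    model.add(Dense(2, activation='relu'))                # activation assumed
    model.add(Dense(1, activation='sigmoid'))             # output in [0, 1]

    # Binary classification: binary cross-entropy loss, Adam optimizer,
    # accuracy as the metric.
    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])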
  1. We define Adam as the optimizer and finally choose accuracy as the metrics function.
  2. While training, our dataset will be divided into 77 batches of ten samples each (the final batch holds the remaining eight), and with verbose set to zero it will run silently.
  3. Once model training is done, we evaluate our model using the evaluate function, which takes the input and output data as arguments. This function returns the loss and accuracy of the model; the accuracy of our model is around 65 percent.
  4. If we increase the number of layers in the model, the accuracy may improve. It's time to predict outcomes using the input data.
  5. Now we compare our results with the expected results. We have taken only 10 samples as an example, and the results are shown in the image below (a consolidated sketch follows it).
Figure-5
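Pulling the training, evaluation, and prediction steps together in one sketch; the epoch count is an assumption, since the article only specifies the batch size:

    # 768 samples / batch_size 10 -> 77 weight updates per epoch;
    # verbose=0 keeps training silent.
    model.fit(X, y, epochs=150, batch_size=10, verbose=0)  # epochs assumed

    # evaluate returns the loss and the compiled accuracy metric
    # (around 65 percent in the article's run).
    loss, accuracy = model.evaluate(X, y, verbose=0)
    print('Accuracy: %.2f%%' % (accuracy * 100))

    # Compare the first 10 predictions against the expected outputs.
    predicted = (model.predict(X[:10], verbose=0) > 0.5).astype(int)
    for p, expected in zip(predicted.flatten(), y[:10]):
        print('predicted %d, expected %d' % (p, expected))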

5. Video Tutorial
