Important Parameters of Keras

Table of Contents

  1. Kernel_Initializer
  2. Regularizer
  3. Activations
  4. Constraints
  5. Video Tutorial

1 Kernel_Initializer

For a dense layer in a neural network, we need to initialize our weight matrix and our bias vector.

Figure-1

Keras offers a number of built-in initializers. By default, Keras uses the zeros initializer for the bias and the Glorot uniform initializer for the kernel weight matrix.
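To see those defaults, a minimal sketch (assuming TensorFlow's Keras):

```python
from tensorflow import keras

# A dense layer created with no explicit initializers.
layer = keras.layers.Dense(5)
config = layer.get_config()
print(config["kernel_initializer"])  # GlorotUniform
print(config["bias_initializer"])    # Zeros
```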

Let's run this in a Jupyter notebook.

First, we have to import the required models, layers, and modules. These are the names of the available built-in initializers, which we assign to a new variable, initializers.

Figure-2

Now, using a for loop, we will generate a model for each initializer and inspect its weights.

Figure-3

Our first initializer is zeros. The input dimension is 2 and the output dimension is 5, so Keras generates the weights as a 2×5 matrix; since we did not specify an initializer for the bias, it is always zero. With the ones initializer all the weights are one, and with constant(5) they are all five. For small models we generally use the random normal or truncated normal initializers: random normal draws values from a normal distribution, while truncated normal draws values from a truncated normal distribution, where values more than two standard deviations from the mean are discarded and redrawn.
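A minimal sketch of that loop (assuming TensorFlow's Keras; the 2-input, 5-unit layer matches the example above):

```python
from tensorflow import keras
from tensorflow.keras import layers, initializers

# Built-in initializers to compare: string names plus a configured instance.
inits = ["zeros", "ones", initializers.Constant(5.0),
         "random_normal", "truncated_normal"]

for init in inits:
    # One dense layer, 2 inputs -> 5 units, so the kernel is a 2x5 matrix.
    model = keras.Sequential([
        keras.Input(shape=(2,)),
        layers.Dense(5, kernel_initializer=init),
    ])
    kernel, bias = model.layers[0].get_weights()
    print(init, "\nkernel:\n", kernel, "\nbias:", bias)  # bias defaults to zeros
```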

2 Regularizer

  1. Regularizers allow applying penalties on layer parameters or layer activity during optimization. These penalties are incorporated into the loss function that the model optimizes.
  2. By default, no regularizers are used, but they can be useful in helping with the generalization of the model. We have three alternatives for regularization.
Figure-4

L1 uses the sum of the absolute weights, L2 uses the sum of the squared weights, and L1L2 uses the sum of the absolute and the squared weights.
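As a quick check of those formulas, a minimal sketch (assuming TensorFlow's Keras, where a regularizer instance can be called directly on a weight tensor):

```python
import tensorflow as tf
from tensorflow.keras import regularizers

w = tf.constant([[1.0, -2.0], [3.0, -4.0]])

# L1: factor * sum(|w|) = 0.1 * 10 = 1.0
print(regularizers.l1(0.1)(w).numpy())
# L2: factor * sum(w^2) = 0.1 * 30 = 3.0
print(regularizers.l2(0.1)(w).numpy())
# L1L2: both penalties added together = 1.0 + 3.0 = 4.0
print(regularizers.l1_l2(l1=0.1, l2=0.1)(w).numpy())
```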

Let's run these in a Jupyter notebook.

  1. First, we import the required models, layers, and modules from Keras. In the first example we assign an L1 regularizer with the value 0.1 to the kernel.
  2. We can get the details from the get_config() function, and similarly we can assign L2.
Figure-5

For L1L2 we have to specify both values, as in the sketch below.
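A minimal sketch of those three regularizer examples (assuming TensorFlow's Keras; the 0.1 values match the example above):

```python
from tensorflow.keras import layers, regularizers

# L1 regularizer with factor 0.1 applied to the kernel weights.
dense_l1 = layers.Dense(5, kernel_regularizer=regularizers.l1(0.1))
print(dense_l1.get_config()["kernel_regularizer"])

# L2 works the same way.
dense_l2 = layers.Dense(5, kernel_regularizer=regularizers.l2(0.1))
print(dense_l2.get_config()["kernel_regularizer"])

# L1L2 takes both factors explicitly.
dense_l1_l2 = layers.Dense(5, kernel_regularizer=regularizers.l1_l2(l1=0.1, l2=0.1))
print(dense_l1_l2.get_config()["kernel_regularizer"])
```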

3 Activations

  1. The activation function is a special function used to determine whether a specific neuron is activated or not. Basically, the activation function applies a nonlinear transformation to the input data and enables the neurons to learn.
  2. By default it is set to None, but Keras offers a number of built-in activation functions such as linear, ReLU, sigmoid, softmax, tanh, etc.

Let's run these in a Jupyter notebook.

Figure-6
  1. Using an activation function is similar to the previous example: first we import the models and layers from Keras.
  2. Activations can be used either through an Activation layer or through the activation argument supported by layers.
  3. In the first example we assign a linear function to the dense layer; the get_config() function shows us that our activation function is linear. In the second and third examples we assign the elu and relu functions.
  4. ELU stands for exponential linear unit and ReLU stands for rectified linear unit.
  5. In fact, we have many other options; we have to choose our activation function based on our requirements. A sketch of both usage styles follows this list.
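A minimal sketch of both styles (assuming TensorFlow's Keras):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Style 1: pass the activation as an argument to the layer.
dense_linear = layers.Dense(5, activation="linear")
print(dense_linear.get_config()["activation"])  # 'linear'

dense_elu = layers.Dense(5, activation="elu")    # exponential linear unit
dense_relu = layers.Dense(5, activation="relu")  # rectified linear unit

# Style 2: add the activation as its own layer.
model = keras.Sequential([
    keras.Input(shape=(2,)),
    layers.Dense(5),
    layers.Activation("relu"),
])
```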

4 Constraints

The last important parameter is the constraint. It can constrain the values that our weight matrix or our bias vector can take on; by default, no constraints are applied.

Figure-7

Let's run these in a Jupyter notebook.

Figure-8
  1. First, we import the models, layers, and modules from Keras.
  2. We have four options. The first one is max_norm, which constrains the weights incident to each hidden unit to have a norm less than or equal to the desired value.
  3. max_value contains the maximum norm for the incoming weights, and axis=0 means the constraint applies to each weight vector of length equal to the input dimension.
  4. unit_norm constrains the weights incident to each hidden unit to have unit norm; as in the previous example, axis=0 means it constrains each weight vector of length equal to the input dimension. With min_max_norm we can set both a minimum and a maximum norm for the incoming weights, and rate=1.0 stands for strict enforcement of the constraint.
  5. Finally, non_neg constrains the weights to be non-negative (see the sketch after Figure-9).
Figure-9
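A minimal sketch of those four options (assuming TensorFlow's Keras):

```python
from tensorflow.keras import layers, constraints

# max_norm: incoming weight vectors get a norm <= max_value,
# computed over axis=0 (one vector of length input_dim per unit).
dense_max = layers.Dense(5, kernel_constraint=constraints.max_norm(max_value=2.0, axis=0))

# unit_norm: incoming weight vectors are rescaled to have norm 1.
dense_unit = layers.Dense(5, kernel_constraint=constraints.unit_norm(axis=0))

# min_max_norm: norms are kept between min_value and max_value;
# rate=1.0 enforces the constraint strictly at every update.
dense_minmax = layers.Dense(
    5,
    kernel_constraint=constraints.min_max_norm(
        min_value=0.5, max_value=2.0, rate=1.0, axis=0
    ),
)

# non_neg: weights are clipped to be non-negative.
dense_nonneg = layers.Dense(5, kernel_constraint=constraints.non_neg())

print(dense_max.get_config()["kernel_constraint"])
```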

Video Tutorial
