**Table of Contents**

- Kernel_Initializer
- Regularizer
- Activations
- Constraints
- Video Tutorial

1 **Kernel_Initializer**

For a dense layer in a neural network, we need to initialize our weight matrix and our bias vector.

Keras offers a number of built-in initializers. By default, Keras uses the zeros initializer for the bias and the Glorot uniform initializer for the kernel weight matrix.

**Let's run this in a Jupyter notebook**

First, we have to import the required models, layers, and modules. These are the names of the available built-in initializers; we assign them to a new variable, initializer.

Now, using a for loop, we will generate models with the different initializers and inspect their weights.

Our first initializer is zeros. Our input dimension is 2 and our output dimension is 5, so Keras generates the weights as a 2×5 matrix; since we did not specify an initializer for the bias, it is always zero. In the case of ones and constant(5), all the weights are one and five, respectively. For small models we generally use the random normal or truncated normal initializers: random normal draws values from a normal distribution, while truncated normal draws values from a truncated normal distribution (values far from the mean are discarded and redrawn).
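The loop described above can be sketched as follows, assuming TensorFlow 2.x with its bundled Keras; the initializer names are the standard Keras built-ins:

```python
from tensorflow.keras import layers, initializers

# The built-in initializers we want to compare
inits = {
    "zeros": initializers.Zeros(),
    "ones": initializers.Ones(),
    "constant_5": initializers.Constant(5.0),
    "random_normal": initializers.RandomNormal(),
    "truncated_normal": initializers.TruncatedNormal(),
}

for name, init in inits.items():
    # Dense layer with input dimension 2 and output dimension 5
    layer = layers.Dense(5, kernel_initializer=init)
    layer.build((None, 2))            # kernel becomes a 2x5 matrix
    kernel, bias = layer.get_weights()
    print(name, kernel.shape, bias)   # bias defaults to zeros
```

With zeros, ones, and constant(5), every entry of the 2×5 kernel is 0, 1, and 5 respectively, while the bias vector stays at zero throughout.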

**2 Regularizer**

- Regularizers allow applying penalties on layer parameters or layer activity during optimization; these penalties are incorporated into the loss function that the model optimizes.
- By default, no regularizers are used, but they can be useful in helping with the generalization of the model. We have three alternatives for regularization.

L1 uses the sum of the absolute weights, L2 uses the sum of the squared weights, and L1L2 uses the sum of both the absolute and the squared weights.
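The three penalties can be checked numerically with plain NumPy; the weight values and the 0.01 strength factor here are illustrative choices, not values from the text:

```python
import numpy as np

w = np.array([-2.0, 1.0, 3.0])        # example weight vector
strength = 0.01                        # hypothetical regularization factor

l1 = strength * np.sum(np.abs(w))      # sum of absolute weights: 0.01 * 6
l2 = strength * np.sum(np.square(w))   # sum of squared weights:  0.01 * 14
l1_l2 = l1 + l2                        # combined L1L2 penalty
print(l1, l2, l1_l2)
```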

**Let's run these in a Jupyter notebook**

- First, we are going to import the required models, layers, and modules from Keras. In the first example we assign an L1 regularizer with the value 0.01 to the kernel.
- We can get the details from the get_config() function, and similarly we can assign L2.

For L1L2 we have to specify both values.
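A minimal sketch of the three examples, again assuming TensorFlow 2.x's bundled Keras; the 0.01 strengths are illustrative:

```python
from tensorflow.keras import layers, regularizers

# L1 regularizer applied to the kernel of a dense layer
dense_l1 = layers.Dense(5, kernel_regularizer=regularizers.l1(0.01))
print(dense_l1.kernel_regularizer.get_config())

# L2 works the same way
dense_l2 = layers.Dense(5, kernel_regularizer=regularizers.l2(0.01))

# For l1_l2, both strengths must be specified
dense_both = layers.Dense(5, kernel_regularizer=regularizers.l1_l2(l1=0.01, l2=0.01))
print(dense_both.kernel_regularizer.get_config())
```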

**3 Activation function**

- The activation function is a special function used to decide whether a specific neuron is activated or not. Basically, the activation function performs a nonlinear transformation of the input data and enables the neurons to learn.
- By default it is set to None, but Keras offers us a number of built-in activation functions such as linear, ReLU, sigmoid, softmax, tanh, etc.

**Let's run these in a Jupyter notebook**

- Using activation functions is similar to the previous examples: first we import the models and layers from Keras.
- Activations can be used either through an Activation layer or through the activation argument supported by layers.
- In the first example we assign a linear function to the dense layer; the get_config() function shows us that our activation function is linear. In the second and third examples we assign the elu and relu functions.
- ELU stands for exponential linear unit, and ReLU stands for rectified linear unit.
- In fact, we have many other options; we have to choose our activation function based on our requirements.
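The two ways of using an activation can be sketched like this, assuming TensorFlow 2.x's bundled Keras:

```python
import numpy as np
from tensorflow.keras import layers, activations

# Activation passed through the layer's activation argument
dense = layers.Dense(5, activation="relu")
print(dense.get_config()["activation"])   # the serialized name, 'relu'

# The same functions can also be called directly on values
x = np.array([-1.0, 0.0, 2.0], dtype="float32")
print(activations.relu(x).numpy())        # negatives clipped to zero
print(activations.elu(x).numpy())         # negatives mapped to exp(x) - 1
```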

**4 Constraints**

The last important parameter is the constraint. It can constrain the values that our weight matrix or our bias vector can take on; by default, no constraints are applied.

**Let's run these in a Jupyter notebook**

- First, we import the required models, layers, and modules from Keras.
- We have four options. The first one is max_norm, which constrains the weights incident to each hidden unit to have a norm less than or equal to a desired value.
- max_value holds the maximum norm for the incoming weights, and axis=0 means the constraint is applied to each weight vector of the length of the input dimension.
- unit_norm constrains the weights incident to each hidden unit to have unit norm; as in the previous example, we use axis=0, which means each weight vector of the length of the input dimension is constrained. With min_max_norm we can set both a minimum and a maximum norm for the incoming weights; rate=1.0 stands for strict enforcement of the constraint.
- Finally, non_neg constrains the weights to be non-negative.
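The four constraints can be sketched as follows, assuming TensorFlow 2.x's bundled Keras; the numeric bounds are illustrative choices:

```python
import tensorflow as tf
from tensorflow.keras import layers, constraints

# The four built-in constraints discussed above
max_norm = constraints.MaxNorm(max_value=2.0, axis=0)
unit_norm = constraints.UnitNorm(axis=0)
min_max = constraints.MinMaxNorm(min_value=0.0, max_value=1.0, rate=1.0, axis=0)
non_neg = constraints.NonNeg()

# Attach a constraint to a layer's kernel
dense = layers.Dense(5, kernel_constraint=max_norm)

# Constraints are callables: NonNeg zeroes out negative weights
w = tf.constant([[-1.0, 2.0]])
print(non_neg(w).numpy())
```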