# Intro to Recurrent Neural Networks (RNN)

1. Recurrent Neural Network
2. Video Explanation

## Recurrent Neural Network

As the name suggests, this network is recurrent. We have previously discussed the convolutional neural network, which helps you learn features that are spatial.

1. The recurrent neural network learns features over time instead of over space, which we call temporal feature learning. In other words, if you have data that changes over time, it can be learned by a recurrent neural network.
2. We can also process some images using a recurrent neural network.
3. An RNN does temporal feature learning, while a CNN does spatial feature learning.
4. To implement a recurrent neural network in PyTorch, you need to import torch, torch.nn, torch.nn.functional, and NumPy.
5. We are using NumPy here in case you need to manipulate some numerical data.
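A minimal import block for the steps above might look like this (the alias names `F` and `np` are the usual conventions, not fixed by the text):

```python
import torch                      # core tensor library
import torch.nn as nn             # neural-network building blocks (nn.Module, etc.)
import torch.nn.functional as F   # stateless functions such as activations
import numpy as np                # NumPy, for manipulating numerical data
```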
6. As you can see in figure 1, each box has RNN written inside it, and below each box is a time label such as t-1, t, and t+1.
    1. RNN (t-1) represents the state of the network at time t-1, RNN (t) represents its state at time t, and RNN (t+1) represents its state at time t+1.
    2. The green arrows in the figure are the inputs and the red arrows are the outputs.
    3. The blue arrows show the state of an RNN block being passed along.
    4. As we can see from the figure, each RNN block takes some inputs and produces some outputs.
    5. There is also an output that goes into the next block, which is the next recurrent neural network in time.
6. Now let’s look at this more closely. As you can see in figure 2, there is a folded image and an unfolded image.
    1. There is a hidden state h at time instance zero, which produces some output. As you can see in figure 2, this state is passed from one time instance to the next until it reaches t-1: from t-2, the output goes into the hidden layer at t-1. Of course, each time instance also has its own input layer, shown by a blue circle.
    2. After that, the output goes into the hidden layer of the next time instance, where it is multiplied by the weights of that time instance, and that time instance also has its own input.
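The unrolling described above can be sketched as a simple loop. The names `Wx`, `Wh`, `b`, and the sizes below are illustrative assumptions, not taken from the figures:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_hid = 3, 2, 4                  # time steps, input size, hidden size (assumed)
Wx = rng.standard_normal((n_in, n_hid))   # input-to-hidden weights
Wh = rng.standard_normal((n_hid, n_hid))  # hidden-to-hidden weights
b = np.zeros(n_hid)                       # bias
h = np.zeros(n_hid)                       # hidden state at time instance zero

for t in range(T):
    x_t = rng.standard_normal(n_in)       # the input at this time instance (blue circle)
    # the previous state flows into the current hidden layer, multiplied by Wh
    h = np.tanh(x_t @ Wx + h @ Wh + b)

print(h.shape)  # (4,)
```

At every iteration, the state computed at the previous time instance is mixed with the current input, which is exactly the folded loop in figure 2.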

So let's implement our first basic recurrent neural network.

1. First, we will name the class SingleRNN; don't forget to inherit from nn.Module.
2. The first line instantiates the parent class, just as when creating any other neural network.
3. Then you create a constructor. In figure 3 you can see that we have Wx and Wy, which means we have two weight matrices for the two time instances in this recurrent neural network. We take the input size and the number of neurons in each layer as arguments, which means we have kept them as variables so that we can create a recurrent neural network of any size.
    1. Then, from the input size and the number of neurons, and from the number of neurons alone, we create the weight matrices you can see in figure 3.
    2. As you can see, we are keeping the bias with shape (1, neurons), one for each time instance of the layer.
    3. Now we declare the forward pass. For the first time instance it is basically Wx·x + b, as you can see in figure 3.
    4. For the matrix multiplication, we are using torch.mm.
    5. Then we add the bias, just like in a linear equation, and tanh gives it non-linearity; tanh is a very useful activation function in RNNs.
    6. In a similar way, we declare the forward pass for the next time instance, producing the outputs Y1 and Y0 for the two time instances.
    7. As you can see in figure 2, at every time step the output goes into the next hidden layer in the forward pass.
    8. Next, we multiply the weights of the second time instance with the output of the first time instance, add the contribution of the current input, and then add the bias.
    9. After that we look at the output. Here we keep the input size as 2: I have 4 rows and two columns, where the 4 rows stand for 4 input examples and the two columns for the 2 input features, as you can see in figure 4.
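Putting the steps above together, a sketch of the class might look like this. The exact names (SingleRNN, Wx, Wy, b), the random initialization, and the choice of 5 neurons are assumptions based on the description, not the author's verbatim code:

```python
import torch
import torch.nn as nn

class SingleRNN(nn.Module):
    def __init__(self, n_inputs, n_neurons):
        super().__init__()
        # Wx: input-to-hidden weights, Wy: hidden-to-hidden weights (figure 3)
        self.Wx = torch.randn(n_inputs, n_neurons)
        self.Wy = torch.randn(n_neurons, n_neurons)
        self.b = torch.zeros(1, n_neurons)   # bias of shape (1, neurons)

    def forward(self, X0, X1):
        # first time instance: Y0 = tanh(X0 Wx + b)
        Y0 = torch.tanh(torch.mm(X0, self.Wx) + self.b)
        # second time instance: previous output times Wy, plus current input times Wx
        Y1 = torch.tanh(torch.mm(Y0, self.Wy) + torch.mm(X1, self.Wx) + self.b)
        return Y0, Y1

# 4 rows (a batch of 4 examples) and 2 columns (input size 2), as in figure 4
X0 = torch.randn(4, 2)
X1 = torch.randn(4, 2)
model = SingleRNN(n_inputs=2, n_neurons=5)
Y0, Y1 = model(X0, X1)
print(Y0.shape, Y1.shape)  # torch.Size([4, 5]) torch.Size([4, 5])
```

Each output has one row per input example and one column per neuron, and every value lies in (-1, 1) because of the tanh activation.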