Neural Networks in Machine Learning: Scikit-learn Tutorial

  1. What are Neurons?
  2. Artificial Neural Networks
  3. Terminologies in ANN
  4. Simple ANN Implementation in NumPy
  5. ANN Implementation in sklearn
  6. Heart Disease Prediction with Neural Networks
  7. Video Tutorial (3 videos)

1 What are Neurons?

An artificial neural network takes its name from the biological neural network, which is in turn built from cells called neurons.

Figure-1
  1. This is our brain; it works on electrochemical impulses (ion currents across neuron membranes) that travel between neurons through connections called synapses.
  2. When enough signals arrive at a neuron through its synapses, it fires and passes the signal on, and out of many such firings the brain arrives at a specific decision; that is how neurons work.
  3. If we look at the image, we can see a great many connections: each dot is a neuron, and the synapses linking them form an extremely dense network.
  4. The human brain is often considered the fastest and most powerful processing system known.

2 Artificial Neural Networks

Biological Neuron vs Artificial Neural Networks

  1. A biological neuron consists of the dendrites, the main cell body with its nucleus, and the axon.
  2. Information arrives through the dendrites and reaches the cell body, where it is combined; if it crosses a threshold, the neuron fires, and once that decision is made we take the output from the axon terminals.
  3. Artificial neural networks, which are inspired by the biological neuron, work in a similar way.
Figure-2

Artificial Neural Networks

  1. An artificial neuron has an input unit (which can accept multiple inputs), followed by a processing phase where the decision is made and the trigger that returns the output.
  2. The core ideas behind artificial neural networks were introduced around the 1960s, but they could not be put into practice at the time, because neural networks need two things in large quantities: data to learn from, and computational power to process it.
  3. Today, data is growing at an exponential rate and computing power has caught up, so we are finally able to run these artificial neural networks on our computers.
  4. With libraries like sklearn it is now very easy to implement one; so let's talk about how an artificial neural network (or deep neural network) works.
  5. The input layer takes the input, and then two other major components of an artificial network come into play: the weights and the biases.
Figure-3
  1. When the input reaches the second layer, specific triggers can apply a threshold condition, such as "if the value is greater than one, let it pass; otherwise, block it".
  2. This kind of decision-making is what happens inside a neural network, and it is how the network gets good at understanding the concepts contained in the data we give it.
  3. Once a decision has passed through one layer it moves on to the next, with each layer making its own part of the overall decision; when all the layers have been passed, the ANN finally takes its decision at the output layer.
  4. Because this processing phase is hidden from the user, artificial neural networks are often described as a black box.
  5. Artificial neural networks are so popular nowadays because they can find patterns in a problem that a human would not be able to draw out.

3 Terminologies in ANN

Seeding

  1. Seeding fixes the starting state of NumPy's random number generator (found in the random submodule of the NumPy library). When we then generate a set of numbers, for example to initialize weights for training a neural network, we get the same sequence of numbers on every run.
  2. The major problem if we do not use seeding is that every run starts from a different set of random numbers, so training results are not reproducible and it becomes much harder to compare or debug experiments.
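As a minimal sketch of this, reseeding NumPy's generator restarts the same "random" stream (the seed value 42 here is arbitrary):

```python
import numpy as np

np.random.seed(42)            # fix the generator's starting state
first_run = np.random.random(3)

np.random.seed(42)            # same seed, so the stream restarts
second_run = np.random.random(3)

# both runs produce exactly the same "random" numbers
print(np.array_equal(first_run, second_run))  # True
```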

Error and Gradient Descent

A neural network works by reducing its error to a minimum (loss minimization), and that is done with an optimization technique called gradient descent.

Figure-4
  1. The error we have here is a parabolic function of the weights, and we need to find the optimal weights at which this error function is minimized.
  2. The main job of gradient descent is to optimize the loss function, stepping the weights downhill until it can return the optimal learned weights.
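As an illustrative sketch of those downhill steps (the parabola, starting point, and learning rate below are made-up values, not from the article):

```python
# toy parabolic loss f(w) = (w - 3)**2, whose minimum sits at w = 3
def gradient(w):
    # derivative of the loss with respect to the weight w
    return 2 * (w - 3)

w = 0.0                  # arbitrary starting weight
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * gradient(w)   # step against the gradient

print(round(w, 4))       # converges to 3.0, the minimum of the parabola
```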

Forward Propagation

  1. We discussed in an earlier section how a neuron takes its inputs, applies a threshold, and produces an output.
  2. When the values produced by one layer are passed on as inputs to the next layer, we call that forward propagation: layer by layer, the network builds on the decisions of the previous layers until it reaches an output. These are the main concepts behind implementing a neural network.

Back propagation

  1. This is what made neural networks one of the best models we have in machine learning.
  2. When a neural network takes a decision and it turns out to be wrong, it traces back through the previous layers and tries to understand where and how it went wrong.
  3. The benefit is that the network can identify which weights contributed to the error and adjust them.
  4. Once it adjusts its weights, the same mistake is less likely the next time; this repeated iteration of updating the weights and biases is the core mechanism that leads the network to the minimum error.

4 Simple ANN Implementation in NumPy

  1. Many machine learning libraries are built on top of NumPy, which simply stands for Numerical Python.
  2. We import numpy as np and define the sigmoid function as the activation function; this is where the decision part of the neuron takes place, deciding according to the threshold applied to the values coming from the previous layer.
Figure-5
  1. If we observe the graph of the sigmoid function, its output ranges between a minimum of zero and a maximum of one.
  2. We can then threshold at 0.5: any value even slightly above 0.5 is taken as a decision of 1, and any value below 0.5 is taken as 0.
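A minimal sketch of that squashing behaviour:

```python
import numpy as np

def sigmoid(x):
    # squashes any real input into the open interval (0, 1)
    return 1 / (1 + np.exp(-x))

print(sigmoid(0.0))    # 0.5, exactly on the decision boundary
print(sigmoid(6.0))    # close to 1 -> thresholded to class 1
print(sigmoid(-6.0))   # close to 0 -> thresholded to class 0
```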
Figure-6
  1. That is how the sigmoid function we have defined works. Now, once that is done, we move on to the inputs.
  2. We initialize four x and y values, such that each x value has 3 dimensions (features).
  3. Once our data is in place, we set a seed so that the weights are initialized in the same order every time, and then we create the synapse (the weight matrix).
  4. We then train using the sigmoid function over 10,000 iterations. The count can be raised further, but 10,000 is more than enough for such a small dataset.
  5. The network back-propagates over and over again, so that the computer can try to predict the values, compare them with the expected output, and see how good and efficient it is.
  6. For the first y value, the expected output was 0, and after all the iterations our neural network predicts 0.009, which is almost 0; the second value was also 0 and was predicted even better; the third was expected to be 1 and the model predicted 0.99, followed by 0.992 for the fourth input.
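The steps above can be sketched as follows. This single-layer network is a common minimal example and is consistent with the inputs, seed, iteration count, and outputs described; the exact code behind the figures may differ in detail:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(s):
    # derivative of the sigmoid, expressed in terms of its output s
    return s * (1 - s)

# four training samples, each with 3 input features
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([[0, 0, 1, 1]]).T

np.random.seed(1)                           # seed for reproducible weights
weights = 2 * np.random.random((3, 1)) - 1  # the synapse: weights in [-1, 1)

for _ in range(10000):
    output = sigmoid(X @ weights)           # forward propagation
    error = y - output                      # compare with expected outputs
    # back propagation: nudge each weight in proportion to its share of the error
    weights += X.T @ (error * sigmoid_derivative(output))

print(output.round(3))                      # close to [[0], [0], [1], [1]]
```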

5 ANN Implementation in sklearn

  1. In the previous section, we looked at the concepts behind neural networks, the terminology needed to understand them, and sample code for a very basic neural network.
  2. In this section, let's see how a neural network can be implemented in sklearn.
  3. Scikit-learn provides neural networks without GPU integration, along with elegant documentation on how the different layers work and on the mathematical functions involved, including the loss functions.
  4. Let's look at a very basic implementation: we use the MLP classifier, where MLP stands for multi-layer perceptron.
  5. It lives in the neural network submodule of the scikit-learn library.
  6. We initialize X with two samples: for the input (0, 0) the corresponding y value is 0, and for the input (1, 1) the y value is 1.
Figure-7
  1. Then we create an instance of the MLPClassifier class and give it the necessary inputs.
  2. The two main inputs we need here are the hidden layer sizes, two layers of 5 and 2 neurons, and a random state of 1.
  3. Once we have everything ready, we train our model by fitting it to the data.
  4. Then we predict what the output will be for new values.
  5. We can see that when the input values are greater than zero we get an output of 1, and when they are zero or below we get an output of 0.
Figure-8
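A sketch of the calls described above, following the pattern of scikit-learn's own MLPClassifier example; the lbfgs solver is an assumption on my part, chosen because it suits tiny datasets like this one:

```python
from sklearn.neural_network import MLPClassifier

# two training samples: input (0, 0) -> class 0, input (1, 1) -> class 1
X = [[0., 0.], [1., 1.]]
y = [0, 1]

# two hidden layers with 5 and 2 neurons; random_state=1 seeds the weights
clf = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(X, y)

# predict for new points on either side of the training data
print(clf.predict([[2., 2.], [-1., -2.]]))
```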

In the previous section we ran the training loop 10,000 times ourselves; scikit-learn saves you that work and keeps iterating until it reaches an optimized result (or hits its iteration limit).

6 Heart Disease Prediction with Neural Networks

The data here is a heart disease prediction dataset, in which the task is to predict, for a given instance, whether the person will have a heart attack or not.

Figure-9
  1. We can see 9 input nodes; the number of input nodes depends on how many features you provide to the network.
  2. Each hidden layer contains 14 neurons, followed by the output layer; how many outputs there are is defined in the classifier by what the user needs, and for a binary problem (1 or 0) there are usually two output classes.
Figure-10

We create the neural network in this line with MLPClassifier, passing parameters such as hidden_layer_sizes, a tuple describing the sizes of the hidden layers.

Figure-11
  1. Once we have everything ready, we start training our model using the fit function.
  2. Then we call clf.predict on X_test to obtain the predicted values.
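Since the original dataset is not included here, the sketch below substitutes a synthetic 9-feature binary dataset via make_classification; the hidden_layer_sizes of (14, 14) matches the two 14-neuron hidden layers described, but the data and every other parameter are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score

# synthetic stand-in for the heart disease data: 9 features, binary target
X, y = make_classification(n_samples=300, n_features=9, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# two hidden layers of 14 neurons each, as in the figure
clf = MLPClassifier(hidden_layer_sizes=(14, 14), random_state=1, max_iter=2000)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
```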
Figure-12

We can see that the results are pretty good for this particular dataset: the model obtains an accuracy of 76% and a precision of 73%.

Figure-13

Even a very small neural network can understand data easily and draw out patterns; and as a rule, the more data you feed your network, the more it will be able to understand.

Video Tutorial-1

Video Tutorial-2

Video Tutorial-3
