How to do model evaluation: a scikit-learn tutorial

Table of Contents

  1. Introduction
  2. Regression evaluation
  3. Classification evaluation
  4. Video Tutorial

1 Introduction

Model evaluation is an integral part of the model development process. It helps us find the model that best represents our data and tells us how well the chosen model will work in the future. Evaluating model performance with the same data used for training is not acceptable in data science, because it can easily produce over-optimistic results.

2 Regression evaluation 

Mean squared error (MSE)

  1. Mean squared error (MSE) is a popular metric for measuring the error of a regression model.
  2. We import the mean_squared_error function from sklearn.metrics.
  3. Pass the actual and predicted values to mean_squared_error, and it will calculate the MSE (a code sketch follows Figure-1 below).
  4. Taking the square root of the MSE gives us the root mean squared error (RMSE).
Figure-1
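The exact code from Figure-1 is not reproduced here, so the following is a minimal sketch of the same steps, assuming small hypothetical arrays of actual and predicted values:

    # Minimal sketch: MSE and RMSE with hypothetical regression data
    import numpy as np
    from sklearn.metrics import mean_squared_error

    y_actual = np.array([3.0, -0.5, 2.0, 7.0])       # hypothetical true values
    y_predicted = np.array([2.5, 0.0, 2.0, 8.0])     # hypothetical predictions

    mse = mean_squared_error(y_actual, y_predicted)  # mean squared error
    rmse = np.sqrt(mse)                              # root mean squared error
    print(mse, rmse)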

The lower this error, the better our model.

MSE from cross_val_score

  1. Cross-validation randomly splits the training set into ten distinct subsets (folds).
  2. It trains and evaluates the model ten times, picking a different fold for evaluation each time and training on the other nine folds. The result is an array containing ten evaluation scores (see the sketch below).
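A minimal sketch of cross-validated MSE, assuming a hypothetical linear regression model on toy data (the model, dataset, and parameter values here are illustrative, not taken from the original figures):

    # Minimal sketch: 10-fold cross-validated MSE for a hypothetical regression model
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=100, n_features=5, noise=10, random_state=0)  # toy data
    model = LinearRegression()

    # scikit-learn returns negative MSE (higher is better), so flip the sign
    scores = cross_val_score(model, X, y, scoring="neg_mean_squared_error", cv=10)
    mse_scores = -scores              # array of ten MSE values, one per fold
    rmse_scores = np.sqrt(mse_scores)
    print(rmse_scores.mean(), rmse_scores.std())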

3 Classification evaluation

Consider a binary classification problem with actual values and predicted values as shown in the picture below.

Figure-2
  1. We import confusion_matrix from sklearn.metrics.
  2. We pass the actual and predicted data to compute the confusion matrix; to compute it you first need a set of predictions as well as the actual targets.
Figure-3_1
Figure-3_2
Figure-3_3
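The figures above show the original code and output; the sketch below uses hypothetical 0/1 label arrays chosen to reproduce the counts discussed next (4 true negatives, 2 false positives, 1 false negative, 3 true positives):

    # Minimal sketch: confusion matrix for hypothetical binary labels (0 = no, 1 = yes)
    from sklearn.metrics import confusion_matrix

    y_actual    = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]   # hypothetical true labels
    y_predicted = [0, 0, 0, 0, 1, 1, 0, 1, 1, 1]   # hypothetical predictions

    cm = confusion_matrix(y_actual, y_predicted)
    print(cm)
    # [[4 2]    rows = actual (no, yes), columns = predicted (no, yes)
    #  [1 3]]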
  1. 4 at position (1,1) is the TN (true negative) count, which means the actual value was no and the model also predicted no.
  2. 2 at position (1,2) is the FP (false positive) count, which means the actual value was no but the model predicted yes.
  3. 1 at position (2,1) is the FN (false negative) count: the actual value was yes but the model predicted no.
  4. 3 at position (2,2) is the TP (true positive) count: the actual value was yes and the model correctly predicted yes.

 Accuracy score

  1. Accuracy is the proportion of correct predictions out of the total number of predictions.
  2. First, import accuracy_score from sklearn.metrics and then pass the actual and predicted values to it; the output is the accuracy score. In our case we got 0.7, which means 70% of the predictions were correct (see the sketch below).
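A minimal sketch, reusing the same hypothetical labels as in the confusion matrix example:

    # Minimal sketch: accuracy score with the same hypothetical labels as above
    from sklearn.metrics import accuracy_score

    y_actual    = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
    y_predicted = [0, 0, 0, 0, 1, 1, 0, 1, 1, 1]

    acc = accuracy_score(y_actual, y_predicted)  # (TP + TN) / total = 7 / 10
    print(acc)  # 0.7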

Recall 

  1. Recall (sensitivity) is the proportion of actual positive cases that are correctly identified, while specificity is the proportion of actual negative cases that are correctly identified.
  2. Import recall_score from sklearn.metrics and pass the actual and predicted values to the function, whereas specificity has to be computed manually (see the sketch below).
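A minimal sketch, again reusing the same hypothetical labels; the manual specificity calculation is one common way to do it, derived from the confusion matrix:

    # Minimal sketch: recall and (manual) specificity with the same hypothetical labels
    from sklearn.metrics import recall_score, confusion_matrix

    y_actual    = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
    y_predicted = [0, 0, 0, 0, 1, 1, 0, 1, 1, 1]

    recall = recall_score(y_actual, y_predicted)   # TP / (TP + FN) = 3 / 4 = 0.75

    tn, fp, fn, tp = confusion_matrix(y_actual, y_predicted).ravel()
    specificity = tn / (tn + fp)                   # 4 / 6 ≈ 0.67, computed manually
    print(recall, specificity)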

Precision 

  1. Precision tells us how accurate or precise the model is, i.e. how many of the values predicted as positive are actually positive.
  2. The formula is TP / (TP + FP), that is, true positives divided by the sum of true positives and false positives (see the sketch below).
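A minimal sketch with the same hypothetical labels as before:

    # Minimal sketch: precision with the same hypothetical labels
    from sklearn.metrics import precision_score

    y_actual    = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
    y_predicted = [0, 0, 0, 0, 1, 1, 0, 1, 1, 1]

    precision = precision_score(y_actual, y_predicted)  # TP / (TP + FP) = 3 / 5 = 0.6
    print(precision)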

classification_report

  1. It provides precision, recall, and other related metrics (such as the F1-score and support) in a single report.
  2. You need to pass your actual data and your predicted data to classification_report.
  3. It will give you a report of all these attributes (see the sketch below).
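A minimal sketch, once more with the same hypothetical labels:

    # Minimal sketch: classification report with the same hypothetical labels
    from sklearn.metrics import classification_report

    y_actual    = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
    y_predicted = [0, 0, 0, 0, 1, 1, 0, 1, 1, 1]

    # Prints precision, recall, f1-score, and support for each class
    print(classification_report(y_actual, y_predicted))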

AUROC: area under the receiver operating characteristic curve

The ROC curve plots the true positive rate against the false positive rate. The false positive rate is the ratio of negative instances that are incorrectly classified as positive; it is equal to one minus the true negative rate, which is the ratio of negative instances that are correctly classified as negative.

Figure-4
  1. To plot the ROC curve you first need to compute the TPR and the FPR (true positive rate and false positive rate) for various threshold values using roc_curve. This function is imported from sklearn.metrics, and the actual values and predicted scores are passed as parameters.
  2. The dotted line represents the ROC curve of a purely random classifier; a good classifier stays as far away from that line as possible, that is, towards the top-left corner. One way to compare classifiers is to measure the area under the curve (AUC).
  3. A perfect classifier will have an AUROC equal to 1, whereas a purely random classifier will have an AUROC equal to 0.5. Scikit-learn provides roc_auc_score for this (see the sketch after this list).
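A minimal sketch, assuming hypothetical true labels and hypothetical predicted scores (probabilities), since the original data behind Figure-4 is not shown:

    # Minimal sketch: ROC curve and AUROC with hypothetical labels and scores
    import matplotlib.pyplot as plt
    from sklearn.metrics import roc_curve, roc_auc_score

    y_actual = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]                        # hypothetical true labels
    y_scores = [0.1, 0.2, 0.3, 0.35, 0.6, 0.7, 0.4, 0.8, 0.9, 0.95]  # hypothetical predicted scores

    fpr, tpr, thresholds = roc_curve(y_actual, y_scores)  # FPR and TPR at each threshold
    auroc = roc_auc_score(y_actual, y_scores)

    plt.plot(fpr, tpr, label=f"AUROC = {auroc:.2f}")
    plt.plot([0, 1], [0, 1], "k--", label="random classifier")  # dotted diagonal
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()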

Video Tutorial
