How Does Random Forest Work? – Why Do We Need Random Forest?

How does Random Forest work? How does it overcome overfitting? What is ensemble learning? Why do we need Random Forest? What is Bagging/Bootstrap Aggregation?

In this article, we are going to learn how Random Forest is built from Decision Trees, how Random Forest works, and how ensemble learning helps to overcome overfitting.

Random Forest is an ensemble of Decision Trees.

Ensemble learning is a method of improving the performance of a machine learning model by combining several models to get better predictions than any single model gives on its own.

Random Forest draws bootstrap samples from the data (sampling with replacement) and builds a Decision Tree on each sample. When predicting, it collects answers from all the trees and takes the majority vote as the final prediction.
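As a quick preview, here is a minimal sketch of that idea using scikit-learn's RandomForestClassifier. This assumes scikit-learn is installed; the dataset below is synthetic and only for illustration.

```python
# Minimal sketch: train a Random Forest and predict (synthetic data, illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# n_estimators = number of Decision Trees; each tree is trained on a bootstrap sample.
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)

# Each tree votes; the class with the most votes becomes the prediction.
print(forest.predict(X_test[:5]))
print("Test accuracy:", forest.score(X_test, y_test))
```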

In the rest of this article, we will see how this works in detail with an example.

To learn more about Decision Trees, please check the Decision Tree article.

A solution to overfitting with the help of Bootstrap Aggregating (Bagging):

The main problem with a Decision Tree is that it is prone to overfitting.

Even though we can control it using parameters, it remains a noticeable problem in Decision Trees.

Random Forest gives us a solution to overcome this problem.

Overfitting happens when a model learns the training data too closely, which means it also learns the noise in the training data. When the test data comes into play, the patterns learned from that noise cause the model to make wrong predictions.

However, not all of the data is noise.
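A small illustrative sketch of this problem, assuming scikit-learn is available: the dataset is synthetic, label noise is injected via the flip_y parameter, and the exact scores will vary with the random seed.

```python
# Illustrative sketch: a single deep Decision Tree vs a Random Forest on noisy data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# flip_y adds label noise, which a single unconstrained tree tends to memorise.
X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# The single tree typically scores ~1.0 on training data but noticeably lower on
# test data; the forest usually closes much of that gap.
print("Tree   train/test:", tree.score(X_train, y_train), tree.score(X_test, y_test))
print("Forest train/test:", forest.score(X_train, y_train), forest.score(X_test, y_test))
```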

Bootstrap Aggregation | Bagging
Bootstrapping

In the picture above, you can see that 4 datasets have been drawn from a dataset of size 26.

  1. All datasets are of the same size.
  2. A data point that exists in one dataset may or may not exist in the other datasets. That is why we call this sampling “with replacement”: when building a dataset, a point copied into one sample is not removed from the main dataset. In other words, we randomly copy data points from the main population to create m datasets.
  3. There is no restriction on the number of datasets that can be sampled from the population. However, each dataset is almost always unique, because it usually contains at least one data point that differs from the other datasets.
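Here is a minimal sketch of bootstrapping with NumPy. The sizes mirror the picture above (a population of 26 points and 4 bootstrap samples); both numbers are arbitrary choices for illustration.

```python
# Bootstrapping: draw several same-sized samples from the population, with replacement.
import numpy as np

rng = np.random.default_rng(seed=42)
population = np.arange(26)            # the original dataset of size 26
n_datasets, sample_size = 4, len(population)

for i in range(n_datasets):
    # replace=True means a point can be drawn more than once and is never
    # removed from the population - this is sampling "with replacement".
    sample = rng.choice(population, size=sample_size, replace=True)
    print(f"dataset {i + 1}:", np.sort(sample))
```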

How does Random Forest work?

N weak learners combine to make a strong learner.

After bagging, we have n independent datasets, each with the same number of data points.

Using the above 4 samples, a Decision Tree is built on each sample dataset, so each Decision Tree acts as a separate model.

Random Forest

In the picture above, the 4 Decision Trees each give a result, and the result that wins the majority vote is taken as the final prediction.

As the datasets do not all contain the same data, a noisy data point will not be present in every Decision Tree. A few Decision Trees may contain noisy data, but because of the majority voting, the influence of that noise is largely neglected.
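To make the voting concrete, here is a hand-rolled sketch of the whole procedure (bootstrap sampling, one Decision Tree per sample, majority vote), assuming scikit-learn and NumPy are available. Note that a real Random Forest additionally samples a random subset of features at each split, which this sketch omits.

```python
# Hand-rolled bagging of Decision Trees with majority voting (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, flip_y=0.1, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

rng = np.random.default_rng(seed=1)
n_trees = 25
trees = []
for _ in range(n_trees):
    # Bootstrap sample: draw indices with replacement, same size as the training set.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    trees.append(DecisionTreeClassifier(random_state=0).fit(X_train[idx], y_train[idx]))

# Collect every tree's prediction and take the majority vote per test point.
all_preds = np.array([t.predict(X_test) for t in trees])   # shape (n_trees, n_test)
majority = (all_preds.mean(axis=0) >= 0.5).astype(int)     # binary labels 0/1

print("Ensemble accuracy:", (majority == y_test).mean())
```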

Advantages of Random Forest:

  1. It overcomes the overfitting problem of Decision Trees.
  2. It gives higher accuracy, because we do not rely on the result of a single Decision Tree; we take the best out of all the trees’ results through voting.

Conclusion:

In this article, we have learned that Random Forest makes a strong learner from many weak learners.

Please find the Jupyter notebook for implementing Random Forest here.

Asha Ponraj

Data science and Machine Learning enthusiast | Software Developer | Blog Writer
