The Random Forest algorithm is an ensemble learning method that builds multiple decision trees during training.
It combines the results from each decision tree to make a final prediction by averaging (for regression) or voting (for classification).
This approach helps improve accuracy and reduces the risk of overfitting compared to using a single decision tree.
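As an illustration, here is a minimal sketch of this voting-based ensemble in practice, assuming scikit-learn's RandomForestClassifier is available (the dataset, number of trees, and random seed are chosen only for demonstration):

```python
# Minimal sketch: Random Forest as an ensemble of decision trees (scikit-learn assumed).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# n_estimators sets how many decision trees are trained on bootstrap samples of the data.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# For classification, each tree votes on a class and the forest returns the majority vote.
print(model.predict(X_test[:5]))
print("Accuracy:", model.score(X_test, y_test))
```

For regression, the analogous RandomForestRegressor averages the numeric predictions of the trees instead of taking a majority vote.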
Option (A), Classification algorithm, is a broad category of supervised learning tasks, not a specific method that builds multiple decision trees.
Option (B), K-means clustering, is an unsupervised learning algorithm used for grouping unlabelled data into clusters; it does not build decision trees.
Option (D), K-nearest neighbour algorithm, predicts based on the closest data points and does not use decision trees.
Hence, the Random Forest algorithm is the correct answer.