Cross-validation is a robust statistical method used to assess how well a predictive model will perform on unseen data.
It helps detect overfitting and gives a more honest picture of how well the model generalizes.
The most common type is k-fold cross-validation.
In this procedure, the dataset is divided into k roughly equal parts, called folds.
The model is trained on (k-1) folds and tested on the remaining fold.
This process is repeated k times, with each fold used once as the test set.
The final performance score is the average of the scores from all k rounds.
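The procedure above can be sketched in a few lines of plain Python. This is a minimal illustration, not a production implementation: the helper names (`k_fold_score`, `fit_mean`, `mse`) and the toy mean-predictor model are made up for the example, and real code would typically shuffle the data before splitting.

```python
import statistics

def k_fold_score(data, k, train_fn, score_fn):
    """Estimate model performance with k-fold cross-validation.

    data: list of (x, y) pairs; train_fn builds a predictor from a
    training list; score_fn scores a predictor on a held-out fold.
    """
    # Split the data into k roughly equal folds (no shuffling, for clarity).
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]                      # fold i is the test set this round
        train = [pair for j, fold in enumerate(folds)
                 if j != i for pair in fold]  # the other k-1 folds train the model
        model = train_fn(train)
        scores.append(score_fn(model, test))
    return statistics.mean(scores)           # average over the k rounds

# Toy model: always predict the mean of the training targets.
def fit_mean(train):
    avg = statistics.mean(y for _, y in train)
    return lambda x: avg

def mse(model, test):
    return statistics.mean((model(x) - y) ** 2 for x, y in test)

data = [(x, 2 * x) for x in range(10)]
print(k_fold_score(data, k=5, train_fn=fit_mean, score_fn=mse))  # → 37.5
```

Each of the 5 rounds trains on 8 points and tests on the 2 held-out points, and the reported score is the mean of the 5 per-fold errors.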
Cross-validation provides a more reliable estimate of model performance than a single train-test split, because every observation is used for testing exactly once rather than a single arbitrary subset.
It is widely used for model selection, hyperparameter tuning, and validating predictive accuracy.
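As a sketch of how cross-validation drives hyperparameter tuning, the self-contained example below picks the number of neighbors k for a simple 1-D nearest-neighbor regressor by choosing the value with the lowest cross-validated error. The helper names (`knn_predict`, `cv_mse`) and the candidate range are illustrative assumptions, not a standard API.

```python
import statistics

def knn_predict(train, x, k):
    """Predict y for x as the mean y of the k nearest training points."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return statistics.mean(y for _, y in nearest)

def cv_mse(data, n_folds, k):
    """Mean squared error of k-NN, estimated by n_folds-fold cross-validation."""
    folds = [data[i::n_folds] for i in range(n_folds)]
    errors = []
    for i in range(n_folds):
        test = folds[i]
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        errors.append(statistics.mean((knn_predict(train, x, k) - y) ** 2
                                      for x, y in test))
    return statistics.mean(errors)

# Toy dataset: y = x^2 sampled on a grid.
data = [(x / 10, (x / 10) ** 2) for x in range(30)]

# Model selection: keep the k with the lowest cross-validated error.
best_k = min(range(1, 8), key=lambda k: cv_mse(data, n_folds=5, k=k))
print(best_k)
```

The same pattern generalizes to any hyperparameter: score each candidate setting by cross-validation on the training data, then refit the chosen setting on all of it.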