Bias in selecting test data can seriously affect the results of an AI or machine learning project.
When humans select test data with bias, the resulting test set may not represent the real-world population or problem scenario. As a result, the model can appear to perform well on the biased test data yet fail in real-world applications where the data varies more widely. Bias also distorts the measurement of the model's performance, producing misleading conclusions about its quality.
In contrast, an unbiased, representative test dataset allows the model's generalization and robustness to be evaluated fairly.
Therefore, the statement is True: human biases during test data selection can indeed harm the testing phase.
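For illustration, here is a minimal sketch (assuming scikit-learn and NumPy are available; the synthetic dataset and the "easy examples only" selection rule are hypothetical stand-ins for a human-biased choice of test cases). It shows how evaluating on a biased test subset can report a higher accuracy than evaluating on a representative test set.

```python
# Sketch: biased test-set selection inflates the measured accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary-classification data standing in for a real-world population.
X, y = make_classification(n_samples=5000, n_features=10, n_informative=5,
                           class_sep=0.8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,
                                                    random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Biased selection: keep only the test examples the model is most confident
# about -- a stand-in for humans picking familiar or "easy" cases.
confidence = model.predict_proba(X_test).max(axis=1)
easy = confidence >= np.quantile(confidence, 0.5)  # most confident half
biased_acc = accuracy_score(y_test[easy], model.predict(X_test[easy]))

# Representative evaluation uses the full, unfiltered test split.
true_acc = accuracy_score(y_test, model.predict(X_test))

print(f"Accuracy on biased test subset:      {biased_acc:.3f}")
print(f"Accuracy on representative test set: {true_acc:.3f}")
# The biased subset typically reports a noticeably higher accuracy, i.e. a
# misleading estimate of real-world performance.
```

The gap between the two numbers is the kind of misleading conclusion described above: the biased subset makes the model look better than it would actually perform on the full, more varied population.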