Question:

Which one of the following is true for Ridge Regression (RR)?

Hint:

Remember the key difference between Ridge (L2) and Lasso (L1) regression:
- Ridge (L2): shrinks coefficients towards zero, good for multicollinearity, reduces model complexity.
- Lasso (L1): can shrink coefficients to exactly zero, performing automatic feature selection.
Both are techniques that reduce variance at the cost of a small increase in bias.
Updated On: Feb 23, 2026
  • (A) The regularizer of RR may increase the bias of the model, but it helps in reducing the variance in prediction.
  • (B) The regularizer of RR uses the L1 norm.
  • (C) RR aims to reduce the number of parameters that have negative values.
  • (D) The regularizer in the objective function of RR is used to guard against scenarios where the model works well for the test data but poorly for the training data.
Verified By Collegedunia

The Correct Option is A

Solution and Explanation

Step 1: Understanding the Question:
The question asks to identify the correct statement about Ridge Regression, a type of regularized linear regression.
Step 2: Key Concepts of Ridge Regression:
- Objective Function: Ridge Regression minimizes the sum of squared errors plus a penalty term. The penalty is the squared L2 norm (sum of squared magnitudes) of the coefficients, multiplied by a hyperparameter $\lambda$:
\[ \text{Loss} = \sum_i (y_i - \hat{y}_i)^2 + \lambda \sum_j w_j^2 \]
- Effect: The L2 penalty shrinks the regression coefficients towards zero, but not exactly to zero.
- Bias-Variance Tradeoff: By shrinking the coefficients, Ridge Regression introduces a small amount of bias (the model no longer perfectly minimizes the error on the training data). In exchange, the model becomes less sensitive to the specific training sample, which reduces its variance on new, unseen data and helps prevent overfitting.
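The objective above has a closed-form minimizer, $w = (X^\top X + \lambda I)^{-1} X^\top y$. The following is a minimal NumPy sketch (with made-up data and an illustrative near-collinear feature, not from the question) showing that increasing $\lambda$ shrinks the coefficient vector without zeroing any entry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 samples, 3 features, one near-collinear column
# (illustrative only; multicollinearity is where ridge helps most).
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=50)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=50)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: w = (X^T X + lam*I)^(-1) X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_ols   = ridge_fit(X, y, lam=0.0)    # ordinary least squares (no penalty)
w_ridge = ridge_fit(X, y, lam=10.0)   # penalized, shrunken coefficients

# The L2 penalty shrinks the weight vector towards zero,
# but no coefficient becomes exactly zero.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))
print(np.all(w_ridge != 0))
```

The shrinkage is the mechanism behind the bias-variance tradeoff: the penalized solution fits the training data slightly worse (more bias) but its coefficients vary far less across training samples (less variance).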
Step 3: Detailed Explanation:
Let's evaluate the options based on these concepts:
- (A) The regularizer of RR may increase the bias of the model, but it helps in reducing the variance in prediction. This statement accurately describes the bias-variance tradeoff that is the primary motivation for using Ridge Regression. It trades a small increase in bias for a significant decrease in variance, leading to a better overall model. This is TRUE.
- (B) The regularizer of RR uses the L1 norm. This is FALSE. Ridge Regression uses the L2 norm ($||w||_2^2$). It is Lasso Regression that uses the L1 norm ($||w||_1$).
- (C) RR aims to reduce the number of parameters that have negative values. This is FALSE. The L2 penalty applies to the square of the coefficients, so it treats positive and negative values identically, shrinking both towards zero. It does not target parameters based on their sign.
- (D) The regularizer in the objective function of RR is used to guard against scenarios where the model works well for the test data but poorly for the training data. This is FALSE. The statement has the scenario reversed: regularization is used to combat overfitting, which is when a model works very well on the training data but poorly on the test data.
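The contrast between options (A) and (B) can be made concrete with the one-dimensional shrinkage updates (proximal operators) of the two penalties. This is a small NumPy sketch with illustrative numbers, not part of the question:

```python
import numpy as np

def l2_shrink(w, lam):
    # Ridge: minimize (v - w)^2 / 2 + lam * v^2
    # -> multiplicative shrinkage, never exactly zero for nonzero w.
    return w / (1.0 + 2.0 * lam)

def l1_shrink(w, lam):
    # Lasso: minimize (v - w)^2 / 2 + lam * |v|
    # -> soft-thresholding, snaps small entries to exactly zero.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([3.0, -0.4, 0.1])
print(l2_shrink(w, 0.5))  # every entry shrunk, none exactly zero
print(l1_shrink(w, 0.5))  # entries below the threshold become exactly zero
```

This is why Ridge (L2) only shrinks coefficients while Lasso (L1) can perform feature selection by zeroing them out.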
Step 4: Final Answer:
The only true statement is (A).
Questions Asked in GATE DA exam