Question:

Consider designing a linear binary classifier \( f(x) = \text{sign}(w^T x + b), x \in \mathbb{R}^2 \) on the following training data: 



Class-1: \( \left\{ \left( \begin{array}{c} 2 \\ 0 \end{array} \right), \left( \begin{array}{c} 0 \\ 2 \end{array} \right), \left( \begin{array}{c} 2 \\ 2 \end{array} \right) \right\} \)

Class-2: \( \left\{ \left( \begin{array}{c} 0 \\ 0 \end{array} \right) \right\} \) 

The hard-margin support vector machine (SVM) formulation is solved to obtain \( w \) and \( b \). Which of the following options is/are correct?

Hint: In a hard-margin SVM, the margin is directly related to the norm of the weight vector. Also, the support vectors play a key role in determining the optimal hyperplane.
Updated On: Apr 4, 2025
  • (A) \( w = \left( \begin{array}{c} 4 \\ 4 \end{array} \right) \) and \( b = 1 \)
  • (B) The number of support vectors is 3
  • (C) The margin is \( \sqrt{2} \)
  • (D) Training accuracy is 98\%

The Correct Options are B and C

Solution and Explanation

We are given a set of training points with two classes: 

\[ \text{Class-1: } \left\{ \left( \begin{array}{c} 2 \\ 0 \end{array} \right), \left( \begin{array}{c} 0 \\ 2 \end{array} \right), \left( \begin{array}{c} 2 \\ 2 \end{array} \right) \right\} \quad \text{Class-2: } \left\{ \left( \begin{array}{c} 0 \\ 0 \end{array} \right) \right\} \] 

For the hard-margin SVM, the objective is to find the weight vector \( w \) and bias \( b \) such that the classifier maximizes the margin between the two classes. 

The hard-margin SVM problem can be formulated as: \[ \min_{w,b} \frac{1}{2} \|w\|^2 \quad \text{subject to:} \quad y_i(w^T x_i + b) \geq 1, \, \forall i \] Where:
\( w \) is the weight vector
\( b \) is the bias
\( x_i \) are the training data points
\( y_i \) are the corresponding class labels (1 for Class-1, -1 for Class-2)
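As a sanity check, this optimization problem can be solved numerically. A minimal sketch, assuming scikit-learn is available, using `SVC` with a very large `C` so that the soft-margin solver approximates the hard-margin problem (the library choice is an assumption, not part of the original solution):

```python
import numpy as np
from sklearn.svm import SVC

# Training data from the problem statement
X = np.array([[2, 0], [0, 2], [2, 2],   # Class-1 (label +1)
              [0, 0]])                   # Class-2 (label -1)
y = np.array([1, 1, 1, -1])

# A very large C makes the soft-margin SVM behave like a hard-margin SVM
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]        # optimal weight vector
b = clf.intercept_[0]   # optimal bias
print(w, b)                  # approximately [1. 1.] and -1.0
print(clf.support_vectors_)  # the three support vectors
```

The solver returns \( w \approx (1, 1)^T \), \( b \approx -1 \), and exactly three support vectors, which matches the analysis below.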
Step 1: Determine the Support Vectors
In the SVM model, the support vectors are the data points that lie closest to the decision boundary. These are the points that are essential in defining the hyperplane, and they lie on the margin boundaries. 

In our case, the support vectors are:
\( \left( 2, 0 \right) \) from Class-1
\( \left( 0, 2 \right) \) from Class-1
\( \left( 0, 0 \right) \) from Class-2
The point \( \left( 2, 2 \right) \) lies strictly beyond the margin boundary and is therefore not a support vector. Thus, the total number of support vectors is 3, and option (B) is correct.
Step 2: Find the Weight Vector \( w \)
For the SVM classifier, we compute the weight vector \( w \) and bias \( b \). The hard-margin formulation maximizes the margin while keeping every training point on the correct side of the margin boundaries. 

The support vectors satisfy their margin constraints with equality: \[ 2w_1 + b = 1, \quad 2w_2 + b = 1, \quad -b = 1 \] Solving this system gives \( b = -1 \) and \( w_1 = w_2 = 1 \), so: \[ w = \left( \begin{array}{c} 1 \\ 1 \end{array} \right), \quad b = -1 \] This is not the pair in option (A): with \( w = (4, 4)^T \) and \( b = 1 \), the Class-2 point \( (0, 0)^T \) would be assigned \( \text{sign}(1) = +1 \), i.e., misclassified. Hence option (A) is incorrect.
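The equality conditions at the support vectors form a small linear system in \( (w_1, w_2, b) \); a quick sketch solving it with NumPy:

```python
import numpy as np

# Unknowns: (w1, w2, b). Margin constraints hold with equality
# at the support vectors:
#   2*w1 + b = 1   (point (2,0), label +1)
#   2*w2 + b = 1   (point (0,2), label +1)
#   -b       = 1   (point (0,0), label -1)
A = np.array([[2, 0, 1],
              [0, 2, 1],
              [0, 0, -1]], dtype=float)
rhs = np.array([1, 1, 1], dtype=float)

w1, w2, b = np.linalg.solve(A, rhs)
print(w1, w2, b)  # 1.0 1.0 -1.0
```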
Step 3: Calculate the Margin
The margin in an SVM (the distance between the two margin hyperplanes) is given by the formula: \[ \text{Margin} = \frac{2}{\|w\|} \] Where \( \|w\| \) is the norm of the weight vector. For our case: \[ \|w\| = \sqrt{1^2 + 1^2} = \sqrt{2} \] Therefore, the margin is: \[ \text{Margin} = \frac{2}{\sqrt{2}} = \sqrt{2} \] Hence option (C) is correct.
Step 4: Training Accuracy
A hard-margin SVM has a solution only when the training data are linearly separable, and in that case every training point satisfies \( y_i(w^T x_i + b) \geq 1 \), i.e., every training point is classified correctly. The training accuracy is therefore 100\%, not 98\%, so option (D) is incorrect.
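Both the margin and the training accuracy can be verified directly from \( w = (1, 1)^T \), \( b = -1 \); a small NumPy sketch:

```python
import numpy as np

X = np.array([[2, 0], [0, 2], [2, 2], [0, 0]], dtype=float)
y = np.array([1, 1, 1, -1])

w = np.array([1.0, 1.0])
b = -1.0

# Margin width between the two boundary hyperplanes: 2 / ||w||
margin = 2 / np.linalg.norm(w)
print(margin)  # 1.4142... == sqrt(2)

# Every training point lands on the correct side -> 100% accuracy
preds = np.sign(X @ w + b)
accuracy = np.mean(preds == y)
print(accuracy)  # 1.0
```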


Questions Asked in GATE DA exam
