In an Analog-to-Digital Converter (ADC), the input analog signal is quantized into discrete digital levels. The difference between the actual analog input and the closest digital output level is known as quantization error.
This error decreases as the number of bits (\(n\)) used in the ADC increases; more precisely, it is inversely proportional to the number of quantization levels, \(2^n\). More bits mean:
- A finer resolution (smaller step size),
- More quantization levels (\(2^n\)),
- Less quantization error.
Mathematically, for an input range \(V_{\text{range}}\), the quantization step size is: \[ \Delta = \frac{V_{\text{range}}}{2^n} \] and the quantization error is approximately \(\pm \frac{\Delta}{2}\), which clearly decreases as \(n\) increases.
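To make the relationship concrete, here is a minimal Python sketch (not part of the original answer) that evaluates the step size and worst-case error at a few bit depths; the 5 V full-scale range and the chosen bit depths are illustrative assumptions.

```python
# Minimal sketch: step size and worst-case quantization error versus bit depth.
# The 5 V full-scale range and the bit depths below are assumed for illustration.
V_RANGE = 5.0  # assumed input range in volts

for n in (4, 8, 12, 16):
    delta = V_RANGE / 2**n   # step size: Delta = V_range / 2^n
    max_error = delta / 2    # worst-case error: +/- Delta / 2
    print(f"n = {n:2d} bits -> delta = {delta:.6f} V, max error = +/-{max_error:.6f} V")
```

Running this shows the error bound halving with every added bit, which is exactly the exponential decrease the formula predicts.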
Why the other options are incorrect:
- (A) is incorrect: the error does not increase with more bits.
- (C) is false: the error is closely tied to bit resolution.
- (D) is overly vague: the primary dependency is indeed the number of bits.
Thus, the quantization error decreases as the number of bits increases.
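As a further check, the hedged sketch below quantizes a single assumed analog sample with round-to-nearest and verifies that the resulting error stays within \(\pm \frac{\Delta}{2}\); the input value, range, and bit depth are all illustrative assumptions.

```python
# Hedged sketch: quantize one assumed sample and confirm the error bound.
V_RANGE, n = 5.0, 8              # assumed full-scale range and bit depth
delta = V_RANGE / 2**n           # step size

v_in = 1.2345                    # assumed analog input in volts
code = round(v_in / delta)       # nearest digital level (round-to-nearest)
v_out = code * delta             # value the ADC output represents
error = v_in - v_out

print(f"code = {code}, reconstructed = {v_out:.6f} V, error = {error:+.6f} V")
assert abs(error) <= delta / 2   # error never exceeds half a step
```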