In an Analog-to-Digital Converter (ADC), the input analog signal is quantized into discrete digital levels. The difference between the actual analog input and the closest digital output level is known as quantization error.
This error is not inversely proportional to \(n\) itself; it decreases exponentially, halving with each additional bit, since it is proportional to \(1/2^n\). More bits mean:
- more discrete output levels (\(2^n\)) spanning the same input range,
- a smaller step size between adjacent levels, and
- a smaller maximum quantization error.
Mathematically, for an input range \(V_{\text{range}}\), the quantization step size is: \[ \Delta = \frac{V_{\text{range}}}{2^n} \] and the quantization error is approximately \(\pm \frac{\Delta}{2}\), which clearly decreases as \(n\) increases.
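To make this concrete, here is a minimal Python sketch (not part of the original answer) of a uniform quantizer over an assumed 0–10 V input range, applied to a hypothetical 3.7 V input at several bit depths; the function name `quantize` and all chosen values are illustrative assumptions.

```python
def quantize(v_in, v_min, v_max, n_bits):
    """Uniformly quantize v_in into one of 2**n_bits levels over [v_min, v_max]."""
    step = (v_max - v_min) / (2 ** n_bits)        # Delta = V_range / 2^n
    # Map the input to a level index, clamping to the highest valid code.
    code = min(int((v_in - v_min) / step), 2 ** n_bits - 1)
    # Reconstruct at the midpoint of the step, so |error| <= Delta / 2.
    v_out = v_min + (code + 0.5) * step
    return v_out, step

v_in = 3.7  # hypothetical analog input, in volts
for n in (4, 8, 12):
    v_out, step = quantize(v_in, 0.0, 10.0, n)
    print(f"n={n:2d}  step={step:.6f} V  error={abs(v_in - v_out):.6f} V")
```

Running this shows the step size halving with each added bit, and the observed error always staying within the \(\pm \frac{\Delta}{2}\) bound derived above.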
Thus, the quantization error decreases as the number of bits increases.