The Chi-square (\(\chi^2\)) test is a non-parametric (distribution-free) test, meaning it does not rely on assumptions about the underlying distribution of the data. It is primarily used with categorical data to assess whether the observed frequencies differ significantly from the expected frequencies. The test statistic is \[ \chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}, \] where \(O_i\) is the observed frequency in category \(i\), \(E_i\) is the expected frequency in category \(i\), and the summation runs over all categories.
If the calculated value of \(\chi^2\) exceeds the critical value of the Chi-square distribution (with the appropriate degrees of freedom) at the chosen significance level, we reject the null hypothesis.
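As a quick worked illustration (the counts here are hypothetical and not part of the original text), suppose a die is rolled 60 times and the six faces are observed \(5, 8, 9, 8, 10, 20\) times; under the null hypothesis of a fair die, each expected frequency is \(10\). Then \[ \chi^2 = \frac{(5-10)^2}{10} + \frac{(8-10)^2}{10} + \frac{(9-10)^2}{10} + \frac{(8-10)^2}{10} + \frac{(10-10)^2}{10} + \frac{(20-10)^2}{10} = \frac{134}{10} = 13.4. \] With \(6 - 1 = 5\) degrees of freedom, the critical value at the \(5\%\) significance level is approximately \(11.07\); since \(13.4 > 11.07\), the hypothesis of a fair die would be rejected in this example.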
Let \( X_1, X_2 \) be a random sample from a population having probability density function
\[ f_{\theta}(x) = \begin{cases} e^{(x-\theta)} & \text{if } -\infty < x \leq \theta, \\ 0 & \text{otherwise}, \end{cases} \] where \( \theta \in \mathbb{R} \) is an unknown parameter. Consider testing \( H_0: \theta \geq 0 \) against \( H_1: \theta < 0 \) at level \( \alpha = 0.09 \). Let \( \beta(\theta) \) denote the power function of a uniformly most powerful test. Then \( \beta(\log_e 0.36) \) equals ________ (rounded off to two decimal places).
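One way this can be evaluated is sketched below; the steps are an outline of a standard argument and are not part of the original problem statement. Since the density has support \((-\infty, \theta]\), the joint density \( e^{x_1 + x_2 - 2\theta}\,\mathbf{1}\{\max(x_1,x_2) \le \theta\} \) has monotone likelihood ratio in \( T = \max(X_1, X_2) \), so a UMP level-\(0.09\) test of \( H_0: \theta \ge 0 \) against \( H_1: \theta < 0 \) rejects for small values of \(T\), say \( T \le c \). For \( \theta \ge c \), \[ P_{\theta}(T \le c) = \left[ P_{\theta}(X_1 \le c) \right]^2 = \left( e^{c - \theta} \right)^2 = e^{2(c - \theta)}, \] which is decreasing in \(\theta\), so the size condition \( \sup_{\theta \ge 0} P_{\theta}(T \le c) = e^{2c} = 0.09 \) gives \( c = \log_e 0.3 \). Hence, for \( \theta \ge c \), the power function is \( \beta(\theta) = e^{2(c - \theta)} = 0.09\, e^{-2\theta} \), and since \( \log_e 0.36 > \log_e 0.3 = c \), \[ \beta(\log_e 0.36) = \left( \frac{0.3}{0.36} \right)^2 = \left( \frac{5}{6} \right)^2 = \frac{25}{36} \approx 0.69. \]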
Let \( X_1, X_2 \) be a random sample from a distribution having probability density function