We are given a random sample \( X_1, X_2, \dots, X_n \) from a normal distribution \( N(-\theta, \theta) \), where \( \theta > 0 \) is the unknown parameter. We need to determine which of the given statistics is minimal sufficient for \( \theta \).
Step 1: Understanding the Distribution
The random variables \( X_1, X_2, \dots, X_n \) are from the normal distribution \( N(-\theta, \theta) \), which means:
The mean of the distribution is \( -\theta \),
The variance of the distribution is \( \theta \).
Step 2: Factorization Theorem
To determine a sufficient statistic, we apply the Factorization Theorem, which states that a statistic \( T(X) \) is sufficient for \( \theta \) if the likelihood can be written as \( L(\theta) = g(T(x); \theta)\, h(x) \), where \( h \) does not depend on \( \theta \). The likelihood function here is \[ L(\theta) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi \theta}} \exp \left( -\frac{(X_i + \theta)^2}{2\theta} \right) = \frac{1}{(2\pi \theta)^{n/2}} \exp \left( -\sum_{i=1}^{n} \frac{(X_i + \theta)^2}{2\theta} \right). \] Expanding \( (X_i + \theta)^2 = X_i^2 + 2\theta X_i + \theta^2 \) gives \[ L(\theta) = \underbrace{\frac{1}{(2\pi \theta)^{n/2}} \exp \left( -\frac{\sum_{i=1}^{n} X_i^2}{2\theta} - \frac{n\theta}{2} \right)}_{g\left(\sum_{i=1}^{n} X_i^2;\, \theta\right)} \cdot \underbrace{\exp \left( -\sum_{i=1}^{n} X_i \right)}_{h(x)}, \] so every \( \theta \)-dependent factor involves the data only through \( \sum_{i=1}^{n} X_i^2 \).
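As a sanity check on this factorization, the following Python sketch (the sample values and the grid of \( \theta \) values are arbitrary choices, not part of the problem) compares the product-form likelihood with the factored form \( g\left(\sum_i x_i^2; \theta\right) h(x) \).

```python
import numpy as np

def likelihood(theta, x):
    """Product-form likelihood of N(-theta, theta) evaluated at the sample x."""
    return np.prod(np.exp(-(x + theta) ** 2 / (2 * theta)) / np.sqrt(2 * np.pi * theta))

def factored(theta, x):
    """Factored form g(sum x_i^2; theta) * h(x) from the expansion above."""
    n = len(x)
    g = (2 * np.pi * theta) ** (-n / 2) * np.exp(-np.sum(x ** 2) / (2 * theta) - n * theta / 2)
    h = np.exp(-np.sum(x))
    return g * h

x = np.array([0.3, -1.2, 0.8, -0.5])      # arbitrary sample, for illustration only
for theta in (0.5, 1.0, 2.0):             # arbitrary parameter values
    assert np.isclose(likelihood(theta, x), factored(theta, x))
print("factorization verified on the test grid")
```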
Step 3: Identifying the Minimal Sufficient Statistic
By the factorization above, \( \sum_{i=1}^{n} X_i^2 \) is sufficient for \( \theta \). It is also minimal: for two samples \( x \) and \( y \), the ratio \[ \frac{L_x(\theta)}{L_y(\theta)} = \exp \left( -\frac{\sum_{i=1}^{n} x_i^2 - \sum_{i=1}^{n} y_i^2}{2\theta} \right) \exp \left( -\sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i \right) \] is free of \( \theta \) if and only if \( \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} y_i^2 \). Equivalently, the model is a one-parameter exponential family with natural statistic \( \sum_{i=1}^{n} X_i^2 \). Thus, \( \sum_{i=1}^{n} X_i^2 \) is the minimal sufficient statistic.
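The same criterion can be illustrated numerically. In the hedged Python sketch below (sample values chosen only for illustration), two samples with equal \( \sum_i x_i^2 \) give a likelihood ratio that is constant in \( \theta \), while two samples that agree only in \( \sum_i x_i \) do not.

```python
import numpy as np

def log_lik(theta, x):
    """Log-likelihood of a sample x from N(-theta, theta)."""
    return np.sum(-0.5 * np.log(2 * np.pi * theta) - (x + theta) ** 2 / (2 * theta))

thetas = np.linspace(0.2, 3.0, 50)

# Two samples with the same sum of squares (both equal 5) but different sums:
x = np.array([1.0, 2.0])
y = np.array([-1.0, 2.0])
ratio = [log_lik(t, x) - log_lik(t, y) for t in thetas]
print(np.ptp(ratio))      # ~0: the log-likelihood ratio is constant in theta

# Two samples with the same sum (both equal 3) but different sums of squares:
u = np.array([1.0, 2.0])  # sum of squares 5
v = np.array([0.0, 3.0])  # sum of squares 9
ratio2 = [log_lik(t, u) - log_lik(t, v) for t in thetas]
print(np.ptp(ratio2))     # clearly nonzero: sum(X_i) alone is not sufficient
```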
Step 4: Re-evaluating the Other Options
(A) \( \sum_{i=1}^{n} X_i \): This is not a sufficient statistic for \( \theta \): the \( \theta \)-dependent part of the likelihood involves the data only through \( \sum_{i=1}^{n} X_i^2 \), which cannot be recovered from \( \sum_{i=1}^{n} X_i \) alone (the numerical sketch above illustrates this).
(C) \( \left( \frac{1}{n} \sum_{i=1}^{n} X_i, \frac{1}{n-1} \sum_{j=1}^{n} (X_j - \frac{1}{n} \sum_{i=1}^{n} X_i)^2 \right) \): This pair \( (\bar{X}, S^2) \) is sufficient, since \( \sum_{i=1}^{n} X_i^2 = (n-1)S^2 + n\bar{X}^2 \) can be recovered from it, but it is not minimal: it is not a function of the one-dimensional statistic \( \sum_{i=1}^{n} X_i^2 \), which is already sufficient. It is also not complete, since \( E(\bar{X} + S^2) = -\theta + \theta = 0 \) for every \( \theta \) while \( \bar{X} + S^2 \) is not identically zero.
(D) \( -\frac{1}{n} \sum_{i=1}^{n} X_i \): Since \( E(-\bar{X}) = \theta \), this is an unbiased estimator of \( \theta \) (a quick simulation check appears below), but it is not sufficient, for the same reason as in (A), and hence not minimal sufficient. Thus, the correct answer is (B) \( \sum_{i=1}^{n} X_i^2 \).
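The following Monte Carlo sketch checks the unbiasedness claim in (D); the true value \( \theta = 2 \) and the sample size \( n = 5 \) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000
samples = rng.normal(-theta, np.sqrt(theta), size=(reps, n))  # N(-theta, theta) draws
est = -samples.mean(axis=1)                                   # the estimator in option (D)
print(est.mean())                                             # close to theta = 2.0
```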
Let \( (X, Y)^T \) follow a bivariate normal distribution with \[ E(X) = 2, \, E(Y) = 3, \, \mathrm{Var}(X) = 16, \, \mathrm{Var}(Y) = 25, \, \mathrm{Cov}(X, Y) = 14. \] Then \[ 2\pi \left( \Pr(X>2, Y>3) - \frac{1}{4} \right) \] equals _________ (rounded off to two decimal places).
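Here the correlation is \( \rho = 14/(\sqrt{16}\sqrt{25}) = 0.7 \), and the standard orthant-probability identity \( \Pr(X > \mu_X, Y > \mu_Y) = \frac{1}{4} + \frac{\arcsin \rho}{2\pi} \) suggests the requested expression reduces to \( \arcsin 0.7 \). The Python sketch below is only a hedged Monte Carlo check of that reduction; the sample size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)
mean = np.array([2.0, 3.0])
cov = np.array([[16.0, 14.0], [14.0, 25.0]])
xy = rng.multivariate_normal(mean, cov, size=2_000_000)
p = np.mean((xy[:, 0] > 2) & (xy[:, 1] > 3))   # estimate of Pr(X > 2, Y > 3)
print(2 * np.pi * (p - 0.25))                  # Monte Carlo estimate of the requested quantity
print(np.arcsin(0.7))                          # closed form arcsin(rho) ≈ 0.775
```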
Let \( X_1, X_2 \) be a random sample from a population having probability density function
\[ f_{\theta}(x) = \begin{cases} e^{(x-\theta)} & \text{if } -\infty < x \leq \theta, \\ 0 & \text{otherwise}, \end{cases} \] where \( \theta \in \mathbb{R} \) is an unknown parameter. Consider testing \( H_0: \theta \geq 0 \) against \( H_1: \theta < 0 \) at level \( \alpha = 0.09 \). Let \( \beta(\theta) \) denote the power function of a uniformly most powerful test. Then \( \beta(\log_e 0.36) \) equals ________ (rounded off to two decimal places).
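For this family the density is supported on \( (-\infty, \theta] \), and a standard monotone-likelihood-ratio argument gives a UMP test that rejects \( H_0 \) for small values of \( \max(X_1, X_2) \); assuming that form, the size-0.09 cutoff at \( \theta = 0 \) is \( c = \frac{1}{2}\log_e 0.09 = \log_e 0.3 \). The Python sketch below takes this rejection rule as an assumption and estimates the power at \( \theta = \log_e 0.36 \) by simulation.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = np.log(0.36)                 # point at which the power is evaluated
c = 0.5 * np.log(0.09)               # assumed size-0.09 cutoff for max(X1, X2) at theta = 0
reps = 1_000_000
# X_i = theta - E_i with E_i ~ Exp(1) has density e^{x - theta} on (-inf, theta]
x = theta - rng.exponential(size=(reps, 2))
power = np.mean(x.max(axis=1) <= c)  # Monte Carlo estimate of beta(log 0.36)
print(power)                         # should be close to (0.30 / 0.36) ** 2
```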
Let \( X_1, X_2, \dots, X_7 \) be a random sample from a population having the probability density function \[ f(x) = \frac{1}{2} \lambda^3 x^2 e^{-\lambda x}, \quad x>0, \] where \( \lambda>0 \) is an unknown parameter. Let \( \hat{\lambda} \) be the maximum likelihood estimator of \( \lambda \), and \( E(\hat{\lambda} - \lambda) = \alpha \lambda \) be the corresponding bias, where \( \alpha \) is a real constant. Then the value of \( \frac{1}{\alpha} \) equals __________ (answer in integer).
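The density here is a Gamma density with shape 3 and rate \( \lambda \), so the log-likelihood for \( n = 7 \) observations is maximized at \( \hat{\lambda} = 3n / \sum_i X_i = 21 / \sum_i X_i \); since \( \sum_i X_i \sim \mathrm{Gamma}(21, \lambda) \) and \( E(1/\sum_i X_i) = \lambda/20 \), the bias works out to \( \lambda/20 \). The Python sketch below is a hedged Monte Carlo check of this; the true \( \lambda \) used is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n, reps = 1.5, 7, 500_000
x = rng.gamma(shape=3.0, scale=1.0 / lam, size=(reps, n))  # Gamma(3, rate=lam) sample
lam_hat = 3 * n / x.sum(axis=1)                            # MLE: 21 / sum(X_i)
alpha_hat = (lam_hat.mean() - lam) / lam                   # empirical relative bias
print(1 / alpha_hat)                                       # should be close to 20
```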