Step 1: The given random variable \( X_n \) follows the standard Cauchy distribution, which has the probability density function \[ f(x) = \frac{1}{\pi(1 + x^2)}. \] The expected value of \( X_n \) does not exist, since \( \int_{-\infty}^{\infty} |x| f(x)\, dx \) diverges, so the SLLN cannot be applied to the \( X_n \) directly.
Step 2: Consider the transformation \[ Y_n = \frac{1}{2} + \frac{1}{\pi} \tan^{-1}(X_n). \] Since \( X_n \) has the Cauchy distribution, \( \tan^{-1}(X_n) \) is uniformly distributed on \( \left( -\frac{\pi}{2}, \frac{\pi}{2} \right) \); equivalently, \( Y_n = F(X_n) \), where \( F \) is the Cauchy distribution function, so \( Y_n \) is uniformly distributed on \( (0, 1) \). The expectation of \( Y_n \) is \[ \mathbb{E}[Y_n] = \frac{1}{2} + \frac{1}{\pi} \mathbb{E}[\tan^{-1}(X_n)]. \] Since the distribution of \( \tan^{-1}(X_n) \) is symmetric about \( 0 \), \[ \mathbb{E}[\tan^{-1}(X_n)] = 0. \] Thus, \[ \mathbb{E}[Y_n] = \frac{1}{2} + \frac{1}{\pi} \cdot 0 = \frac{1}{2}. \]

Step 3: The \( Y_i \) are i.i.d. with finite mean, so by the Strong Law of Large Numbers (SLLN) their sample average converges to the expected value almost surely. Therefore, \[ \frac{1}{n} \sum_{i=1}^{n} Y_i \xrightarrow{\text{a.s.}} \mathbb{E}[Y_n] = \frac{1}{2} \quad \text{as } n \to \infty. \]
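The conclusion is easy to check by simulation. The following sketch (not part of the original argument; the sample size and seed are arbitrary choices) draws i.i.d. standard Cauchy variates, applies the transformation, and verifies that \( Y_n \) behaves like a Uniform\((0,1)\) variable and that the running average settles near \( 1/2 \).

```python
# Simulation sketch: Y_n = 1/2 + arctan(X_n)/pi should be Uniform(0, 1),
# and the sample mean of the Y_i should settle near 1/2 as n grows.
import numpy as np

rng = np.random.default_rng(0)

n = 1_000_000
x = rng.standard_cauchy(n)          # i.i.d. standard Cauchy draws X_1, ..., X_n
y = 0.5 + np.arctan(x) / np.pi      # Y_i = F(X_i), the probability integral transform

# Empirical check that Y looks Uniform(0, 1): mean ~ 1/2, variance ~ 1/12
print("mean of Y:", y.mean())       # expected to be close to 0.5
print("var of Y:", y.var())         # expected to be close to 1/12 ~ 0.0833

# Running averages illustrate the SLLN: (1/n) * sum Y_i -> 1/2 almost surely
running_mean = np.cumsum(y) / np.arange(1, n + 1)
print("running mean at n = 10^3, 10^4, 10^6:",
      running_mean[999], running_mean[9999], running_mean[-1])
```

Because each \( Y_i \) is bounded, its mean and variance are finite, so the convergence of the running average is visible already for moderate \( n \), even though the underlying \( X_i \) have no mean.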
Let \( (X, Y)^T \) follow a bivariate normal distribution with \[ E(X) = 2, \, E(Y) = 3, \, \mathrm{Var}(X) = 16, \, \mathrm{Var}(Y) = 25, \, \mathrm{Cov}(X, Y) = 14. \] Then \[ 2\pi \left( \Pr(X>2, Y>3) - \frac{1}{4} \right) \] equals _________ (rounded off to two decimal places).
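A quantity of this form can be cross-checked numerically. The sketch below (a Monte Carlo check under the stated parameters, not the intended analytical derivation; the sample size and seed are arbitrary) estimates \( \Pr(X>2, Y>3) \) by simulation and plugs it into the expression.

```python
# Monte Carlo cross-check for 2*pi*(Pr(X > 2, Y > 3) - 1/4) under the
# stated bivariate normal parameters.
import numpy as np

rng = np.random.default_rng(1)

mean = np.array([2.0, 3.0])
cov = np.array([[16.0, 14.0],
                [14.0, 25.0]])      # Var(X) = 16, Var(Y) = 25, Cov(X, Y) = 14

n = 2_000_000
samples = rng.multivariate_normal(mean, cov, size=n)
p_hat = np.mean((samples[:, 0] > 2.0) & (samples[:, 1] > 3.0))

print("estimated Pr(X > 2, Y > 3):", p_hat)
print("estimated 2*pi*(Pr - 1/4):", 2 * np.pi * (p_hat - 0.25))
```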
Let \( X_1, X_2 \) be a random sample from a population having probability density function
\[ f_{\theta}(x) = \begin{cases} e^{(x-\theta)} & \text{if } -\infty < x \leq \theta, \\ 0 & \text{otherwise}, \end{cases} \] where \( \theta \in \mathbb{R} \) is an unknown parameter. Consider testing \( H_0: \theta \geq 0 \) against \( H_1: \theta < 0 \) at level \( \alpha = 0.09 \). Let \( \beta(\theta) \) denote the power function of a uniformly most powerful test. Then \( \beta(\log_e 0.36) \) equals ________ (rounded off to two decimal places).
Let \( X_1, X_2, \dots, X_7 \) be a random sample from a population having the probability density function \[ f(x) = \frac{1}{2} \lambda^3 x^2 e^{-\lambda x}, \quad x>0, \] where \( \lambda>0 \) is an unknown parameter. Let \( \hat{\lambda} \) be the maximum likelihood estimator of \( \lambda \), and \( E(\hat{\lambda} - \lambda) = \alpha \lambda \) be the corresponding bias, where \( \alpha \) is a real constant. Then the value of \( \frac{1}{\alpha} \) equals __________ (answer in integer).
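The bias in a problem of this kind can also be estimated empirically. The sketch below is illustrative only: the true \( \lambda \), the seed, and the number of replications are arbitrary choices, and a numerical maximiser stands in for the analytical MLE. It repeatedly draws samples of size 7 from the given density, maximises the log-likelihood numerically, and reports the empirical value of \( E(\hat{\lambda} - \lambda)/\lambda \).

```python
# Empirical bias check for the MLE of lambda in f(x) = (1/2) * lam^3 * x^2 * exp(-lam*x).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

true_lam = 2.0                      # illustrative choice of the true parameter
n, reps = 7, 20_000

def neg_log_lik(lam, x):
    # Negative log-likelihood of f (additive constants in lam dropped)
    return -(3 * len(x) * np.log(lam) - lam * np.sum(x))

estimates = np.empty(reps)
for r in range(reps):
    # The density is a Gamma density with shape 3 and rate lambda (scale 1/lambda)
    x = rng.gamma(shape=3.0, scale=1.0 / true_lam, size=n)
    res = minimize_scalar(neg_log_lik, args=(x,), bounds=(1e-6, 100.0), method="bounded")
    estimates[r] = res.x

bias = estimates.mean() - true_lam
print("empirical bias E(lambda_hat - lambda):", bias)
print("empirical bias / lambda:", bias / true_lam)
```

Repeating the run with a different choice of \( \lambda \) should leave the ratio \( \text{bias}/\lambda \) roughly unchanged, consistent with the stated form \( E(\hat{\lambda} - \lambda) = \alpha\lambda \).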