1. Suppose the data is generated by the model yi = βxi + εi. Suppose further that E(...
3. Consider the linear model Yi = α + βxi + εi for i = 1, ..., n, where E(εi) = 0. Further assume that Σ xi = 0 and Σ xi² = n.
(a) Show that the least squares estimates (LSEs) of α and β are given by α̂ = Ȳ and β̂ = (1/n) Σ xiYi.
(b) Show that the LSEs in (a) are unbiased.
(c) Assume that E(εi²) = σ² and E(εiεj) = 0 for all i ≠ j, where σ² > 0. Show that V(α̂) = σ²/n and V(β̂) = σ²/n.
(d) Use (b) and (c) above to show that the LSEs are consistent...
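A quick Monte Carlo sketch of parts (a)–(c), under the reconstructed formulas α̂ = Ȳ and β̂ = (1/n) Σ xiYi. The design, true parameters, and seed below are illustrative choices, not part of the problem; the design is picked so that Σ xi = 0 and Σ xi² = n hold exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([-1.0, -1.0, 1.0, 1.0])   # n = 4: sum(x) = 0, sum(x^2) = 4 = n
n = x.size
alpha, beta, sigma = 2.0, 3.0, 1.0     # illustrative true values

# simulate many datasets at once: each row is one sample of size n
y = alpha + beta * x + sigma * rng.standard_normal((100_000, n))
alpha_hats = y.mean(axis=1)            # alpha_hat = Ybar
beta_hats = (y * x).sum(axis=1) / n    # beta_hat = (1/n) * sum(x_i * Y_i)

# means approach (alpha, beta); var(beta_hat) approaches sigma^2 / n = 0.25
print(alpha_hats.mean(), beta_hats.mean(), beta_hats.var())
```

The simulated averages sit on the true (α, β) and the simulated variance of β̂ matches σ²/n, which is what (b) and (c) ask you to prove algebraically.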
Suppose we have a regression model Yi = bXi + εi where Ȳ = X̄ = 0 and there is no intercept in the model. Consider the slope estimator b̂ = Σ Xi²Yi / Σ Xi². Show whether this will yield an unbiased estimate of b or not.
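A simulation sketch, assuming one plausible reading of the garbled estimator, b̂ = Σ Xi²Yi / Σ Xi² (the design points and seed are illustrative). Under Yi = bXi + εi with E(εi) = 0, this estimator has E[b̂] = b · ΣXi³ / ΣXi², so it is biased for a general fixed design:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([1.0, 2.0, 3.0])   # illustrative fixed design
b = 2.0                          # illustrative true slope

y = b * x + rng.standard_normal((200_000, x.size))
b_hats = (x**2 * y).sum(axis=1) / (x**2).sum()

expected = b * (x**3).sum() / (x**2).sum()   # analytic E[b_hat] = 2 * 36/14
print(b_hats.mean(), expected)               # both near 36/7, far from b = 2
```

The simulated mean agrees with the analytic expectation and differs from the true slope, illustrating the bias the question asks about (under this reading of the formula).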
4) Consider n data points with 2 covariates and observations {xi,1, xi,2, yi}, i = 1, ..., n, where the yi's are indicator variables for the experiment, that is, whether a particular medicine is effective on some individual. Here, xi,1 and xi,2 are the age and blood pressure of the i-th individual, respectively. Our assumption is that the log odds follow a linear model; that is, with pi = P(yi = 1), log(pi / (1 − pi)) = β0 + β1xi,1 + β2xi,2. ... b) What should be a good estimator for ...? e) Suppose ...
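A minimal sketch of the log-odds model stated in the question. The coefficient values and the individual's age and blood pressure below are hypothetical (the problem leaves them unspecified); the point is only the mechanics of mapping a linear predictor to pi = P(yi = 1):

```python
import numpy as np

# hypothetical coefficients and covariates, for illustration only
b0, b1, b2 = -4.0, 0.03, 0.02
age, bp = 50.0, 120.0           # x_{i,1} = age, x_{i,2} = blood pressure

log_odds = b0 + b1 * age + b2 * bp      # linear model on the log-odds scale
p = 1.0 / (1.0 + np.exp(-log_odds))     # invert to recover p_i = P(y_i = 1)
print(log_odds, p)
```

Inverting the log-odds always lands p strictly inside (0, 1), which is why the linear model is placed on the log-odds scale rather than on pi directly.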
Consider the following slope estimator: b = Σi=1n Yi / Σi=1n Xi. Suppose the true model is Yi = β0 + β1xi + εi and the model satisfies the Gauss–Markov conditions. Answer the following questions:
(a) What assumption, in addition to the Gauss–Markov assumptions, is required to estimate the model?
(b) Show that, in general, b is a biased estimator of β1.
(c) Outline the special condition(s) under which b is an unbiased estimator of β1.
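A simulation sketch of (b) and (c), assuming the estimator reads b = ΣYi / ΣXi (one reading of the garbled formula; design and parameter values are illustrative). Analytically E[b] = β1 + nβ0/ΣXi, so b is biased whenever β0 ≠ 0 and unbiased when β0 = 0 (with ΣXi ≠ 0 needed for b to exist at all, which is the extra assumption (a) points at):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.array([1.0, 2.0, 3.0])   # illustrative fixed design, sum(x) = 6

def mean_b(beta0, beta1, n_sims=200_000):
    # simulate many datasets and average b = sum(Y) / sum(X)
    y = beta0 + beta1 * x + rng.standard_normal((n_sims, x.size))
    return (y.sum(axis=1) / x.sum()).mean()

with_intercept = mean_b(1.0, 2.0)   # analytic E[b] = 2 + 3*1/6 = 2.5 (biased)
no_intercept = mean_b(0.0, 2.0)     # analytic E[b] = 2.0 (unbiased)
print(with_intercept, no_intercept)
```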
Suppose X1, ..., XM are independent, identically distributed random variables with mean a and variance b². Let aM ≡ (1/M) Σi=1M Xi and bM² ≡ (1/(M−1)) Σi=1M (Xi − aM)².
a) Show that aM is an unbiased estimator of E[X]; that is, E[aM] = a.
b) Assume that the identity E[Σi=1M (Xi − aM)²] = (M−1)b² is correct. Show that bM² is an unbiased estimator of var(X); that is, E[bM²] = b².
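A quick numeric sanity check of both claims (the values of a, b², M, and the seed are illustrative). Note the M−1 divisor corresponds to `ddof=1` in numpy:

```python
import numpy as np

rng = np.random.default_rng(3)
a, b2, M = 1.5, 4.0, 10   # illustrative mean, variance, and sample size
x = a + np.sqrt(b2) * rng.standard_normal((100_000, M))

a_M = x.mean(axis=1)            # sample mean per dataset
b2_M = x.var(axis=1, ddof=1)    # ddof=1 gives the 1/(M-1) divisor
print(a_M.mean(), b2_M.mean())  # near a = 1.5 and b^2 = 4.0
```

Averaged over many samples, both estimators land on their targets, matching the unbiasedness results (a) and (b) ask you to derive.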
Consider the simple linear regression model yi = β0 + β1xi + εi, where the errors ε1, ..., εn are i.i.d. random variables with E(εi) = 0, var(εi) = σ², i = 1, ..., n. Solve either one of the questions below.
1. Let β̂1 be the least squares estimator for β1. Show that β̂1 is the best linear unbiased estimator for β1. (Note: you can read the proof on Wikipedia, but you cannot use matrix notation in this proof.)
2. Consider a new loss function Lλ(...) ... where...
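A simulation sketch of what "best linear unbiased" means in question 1 (design, parameters, and seed are illustrative). The least squares slope is compared with another linear unbiased estimator, the slope through the two endpoint observations; both are centered on β1, but least squares has the smaller variance:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 10)   # illustrative fixed design
b0, b1 = 1.0, 2.0
y = b0 + b1 * x + rng.standard_normal((100_000, x.size))

xc = x - x.mean()
ols = (y * xc).sum(axis=1) / (xc**2).sum()          # least squares slope
endpoint = (y[:, -1] - y[:, 0]) / (x[-1] - x[0])    # rival linear unbiased estimator

# both means near b1 = 2; ols.var() is clearly smaller than endpoint.var()
print(ols.mean(), endpoint.mean(), ols.var(), endpoint.var())
```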
Consider the multiple regression model y = Xβ + ε, with E(ε) = 0 and var(ε) = σ²I.
Problem 1: Gauss–Markov theorem (revisited). We already know that E(β̂) = β and var(β̂) = σ²(X′X)⁻¹. Consider now another unbiased estimator of β, say b = AY. Since we are assuming that b is unbiased, we reach the conclusion that AX = I (why?). The Gauss–Markov theorem claims that var(b) − var(β̂) is positive semi-definite, which asks that we investigate q′(var(b) − var(β̂))q. Show...
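A numeric sketch of the claim (random X and seed are illustrative). Writing A = (X′X)⁻¹X′ + C with CX = 0 guarantees AX = I, and then var(b) − var(β̂) = σ²CC′, which is positive semi-definite; the code builds such an A and checks the eigenvalues of the difference:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 20, 3
X = rng.standard_normal((n, p))           # illustrative full-rank design
XtX_inv = np.linalg.inv(X.T @ X)

# any C with C X = 0: project an arbitrary M onto the left null space of X
M = rng.standard_normal((p, n))
C = M @ (np.eye(n) - X @ XtX_inv @ X.T)
A = XtX_inv @ X.T + C                     # satisfies A X = I, so b = AY is unbiased

diff = A @ A.T - XtX_inv                  # (var(b) - var(beta_hat)) / sigma^2
eigs = np.linalg.eigvalsh(diff)
print(eigs)                               # all >= 0 up to round-off: PSD
```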
B2. (a) Suppose θ is an unknown parameter which is to be estimated from a single measurement X, distributed according to some probability density function f(x; θ). The Fisher information I(θ) is defined by I(θ) = E[(∂/∂θ log f(X; θ))²]. Show that, under some suitable regularity conditions, the variance of any unbiased estimator θ̂ of θ is then bounded by the reciprocal of the Fisher information: Var(θ̂) ≥ 1/I(θ). Note that the suitable regularity conditions, which are not specified here, allow the interchange of...
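A numeric sketch of the bound for the illustrative case X ~ N(θ, σ²) (θ, σ², and the seed are chosen for demonstration). Here the score is (x − θ)/σ², so I(θ) = Var(score) = 1/σ², and the unbiased estimator θ̂ = X attains Var(θ̂) = σ² = 1/I(θ) exactly:

```python
import numpy as np

rng = np.random.default_rng(6)
theta, sigma2 = 0.7, 2.0               # illustrative parameter values
x = theta + np.sqrt(sigma2) * rng.standard_normal(500_000)

score = (x - theta) / sigma2           # d/dtheta of log f(x; theta) for the normal
fisher = score.var()                   # estimates I(theta) = 1/sigma2 = 0.5
bound = 1.0 / fisher                   # Cramer-Rao lower bound, ~ 2.0
print(fisher, bound, x.var())          # var(theta_hat = X) matches the bound
```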
For observations {Yi, Xi}i=1n, recall that for the model Yi = α0 + β0Xi + ei (1), the OLS estimator for {α0, β0}, the minimizer of Σi (Yi − a − bXi)², is β̂ = Σ(Xi − X̄)(Yi − Ȳ) / Σ(Xi − X̄)² and α̂ = Ȳ − β̂X̄. When equation (1) is the true data generating process, {Xi} are non-stochastic, and {ei} are random variables with E(ei) = 0, E(ei²) = σ², and E(eiej) = 0 for any i, j = 1, 2, ..., n with i ≠ j, we...
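The closed-form OLS expressions above can be sketched and cross-checked against `np.polyfit` on illustrative data (the x values, true coefficients, and seed are arbitrary choices for the demo):

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])     # illustrative non-stochastic design
y = 3.0 + 0.5 * x + rng.standard_normal(x.size)

# the formulas from the question
beta_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean())**2).sum()
alpha_hat = y.mean() - beta_hat * x.mean()

slope_ref, intercept_ref = np.polyfit(x, y, 1)   # independent least squares fit
print(alpha_hat, beta_hat)                       # agree with polyfit's output
```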
Linear stat modeling & regression, please. I need the solution for Q3, but I copy Q2 here because you need info from Q2 in order to answer Q3.
2) Suppose you have a multiple regression setup Y = Xβ + ε. The ridge regression estimator is given by β̂ridge = (X′X + λI)⁻¹X′Y. Here, ‖v‖² = Σ vi², where v is a vector with entries vi.
a) Find the expectation and variance-covariance matrix of β̂ridge when X′X is a diagonal matrix with each diagonal entry equal to ... Compare these variances with the...
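For part (a), the diagonal case is easy to probe numerically. Assuming X′X = cI for some constant c (the question's diagonal entry is garbled, so c, λ, β, and the seed below are illustrative), the ridge estimator is a pure shrinkage of OLS: β̂ridge = c/(c + λ) · β̂OLS, so its expectation is shrunk toward zero and each variance is scaled by (c/(c + λ))²:

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, c, lam = 12, 3, 4.0, 2.0            # illustrative sizes and constants
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
X = np.sqrt(c) * Q                        # orthonormal columns scaled: X'X = c*I
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.standard_normal(n)

ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
ols = np.linalg.solve(X.T @ X, X.T @ y)
print(ridge, (c / (c + lam)) * ols)       # identical vectors: pure shrinkage
```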