Question

5. Let us assume that X is non-random. Consider Y = Xβ + U with E(U) = 0 and E(UU′) = σ²Ω. Assume that Ω is known. Let β̂_OLS ...
Answer #1

Given the model Y = Xβ + u with E(u) = 0 and E(uu′) = σ²Ω. Pre-multiplying the model by X′:

X′Y = X′Xβ + X′u

β̂_OLS = (X′X)⁻¹X′Y = β + (X′X)⁻¹X′u

Taking expectations, and using that X is non-random and E(u) = 0:

E[β̂_OLS] = β + (X′X)⁻¹X′E[u] = β

So the OLS estimator is unbiased even though E(uu′) = σ²Ω is not scalar.
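The unbiasedness argument can be checked numerically. The sketch below uses made-up numbers (β, σ², and a diagonal Ω chosen purely for illustration): it fixes a non-random design matrix X, draws the errors u many times with covariance σ²Ω, and averages β̂_OLS = (X′X)⁻¹X′Y over the draws, which should land close to the true β.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 200, 2
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])  # fixed (non-random) design
beta = np.array([1.0, 2.0])                                  # illustrative true coefficients

# Illustrative E(uu') = sigma^2 * Omega with Omega diagonal (heteroskedastic errors)
sigma2 = 0.5
omega_diag = np.linspace(0.5, 1.5, n)

def ols(X, Y):
    # beta_hat = (X'X)^{-1} X'Y, computed via a linear solve
    return np.linalg.solve(X.T @ X, X.T @ Y)

# Average beta_hat over many independent error draws; the mean should be
# close to beta, illustrating that OLS stays unbiased even when Omega != I.
reps = 2000
estimates = np.empty((reps, k))
for r in range(reps):
    u = rng.normal(0.0, np.sqrt(sigma2 * omega_diag))
    estimates[r] = ols(X, X @ beta + u)

print(estimates.mean(axis=0))  # should be close to [1.0, 2.0]
```

Note that unbiasedness does not depend on Ω at all; Ω only matters for the variance of β̂_OLS and for efficiency comparisons with GLS.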

Similar Homework Help Questions
  • For observations {Y_i, X_i}_{i=1}^n, recall that for the model Y_i = α_0 + β_0 x_i + e_i the...

    For observations {Y_i, X_i}_{i=1}^n, recall that for the model Y_i = α_0 + β_0 x_i + e_i the OLS estimator for {α_0, β_0}, the minimizer of Σ_i (Y_i − a − b x_i)², is β̂ = Σ_i (X_i − X̄)(Y_i − Ȳ) / Σ_i (X_i − X̄)² and α̂ = Ȳ − β̂ X̄. When equation (1) is the true data generating process, {X_i} are non-stochastic, and {e_i} are random variables with E(e_i) = 0, E(e_i²) = σ², and E(e_i e_j) = 0 for any i, j = 1, 2, ..., n and i ≠ j, we...

  • Consider the following linear regression model. 1. For any X = x, let Y = xβ + U,...

    Consider the following linear regression model. 1. For any X = x, let Y = xβ + U, where β ∈ R^k. 2. X is exogenous. 3. The probability model is {f(u; θ): f is a distribution on R, E_f[U] = 0, Var_f[U] = σ², σ > 0}. 4. Sampling model: {Y_i}_{i=1}^n is an independent sample, sequentially generated using Y_i = x_i β + U_i, where the U_i are IID(0, σ²). (i) Let K > 0 be a given number. We wish to estimate β using least-squares...

  • 2. In the regression model Y = Xβ + ε, X is a fixed n × k matrix of rank...

    2. In the regression model Y = Xβ + ε, X is a fixed n × k matrix of rank k ≤ n, E(ε) = 0 and E(εε′) = σ²Ω where Ω is a known non-singular matrix. The GLS estimator of β is given by the formula β̂_GLS = (X′Ω⁻¹X)⁻¹X′Ω⁻¹Y. Consider the following data: 16 31 2 3 51 4 10. Assuming that ... a) Calculate the GLS estimate of β in the model Y = Xβ + ε. b) Calculate the OLS estimate. c) Compare the two estimates and comment on efficiency.
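The OLS/GLS comparison asked for here can be sketched numerically. The data below are invented for illustration (not the question's own numbers, which are garbled in the transcription), and Ω is assumed diagonal for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 50
X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, n)])
Omega = np.diag(np.linspace(1.0, 4.0, n))   # known, non-singular (illustrative)
beta = np.array([2.0, -1.0])                # illustrative true coefficients
u = rng.multivariate_normal(np.zeros(n), Omega)
Y = X @ beta + u

Omega_inv = np.linalg.inv(Omega)

# OLS: beta_hat = (X'X)^{-1} X'Y
beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)
# GLS: beta_hat = (X' Omega^{-1} X)^{-1} X' Omega^{-1} Y
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ Y)

print(beta_ols, beta_gls)
```

Both estimators are unbiased here; GLS is the more efficient one because it reweights observations by the inverse of their error variances (when Ω = I the two formulas coincide exactly).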

  • 2. Assume the structural equation is ... where E[u_i | X_i] = 0. It was discovered that we...

    2. Assume the structural equation is ... where E[u_i | X_i] = 0. It was discovered that we observe x_i with a measurement error w_i instead of the real value X_i: x_i = X_i + w_i. It is known that E[w_i] = 0, V(w_i) = σ_w², and cov(X_i, w_i) = cov(u_i, w_i) = 0. The OLS estimator is based on regressing Y_i on a constant and x_i. (i) Find the value to which the OLS estimator of β_1 is consistent. (ii) Is the value equal to the true...

  • (a) Let X and Y be independent random variables both with the same mean μ ≠ 0....

    (a) Let X and Y be independent random variables both with the same mean μ ≠ 0. Define a new random variable W = aX + bY, where a and b are constants. (i) Obtain an expression for E(W). (ii) What constraint is there on the values of a and b so that W is an unbiased estimator of μ? Hence write all unbiased versions of W as a formula involving a, X and Y only (and not b). [2]

  • 3. Let the random variables X and Y have the joint probability density function, for 0 ≤ y...

    3. Let the random variables X and Y have the joint probability density function f_XY(x, y) = ... for 0 ≤ y ≤ 1, 0 ≤ x < y, and 0 otherwise. (a) Compute the joint expectation E(XY). (b) Compute the marginal expectations E(X) and E(Y). (c) Compute the covariance Cov(X, Y).

  • 2. The linear regression model in matrix format is Y = Xβ + e, with the usual definitions. Let E(e|X...

    2. The linear regression model in matrix format is Y = Xβ + e, with the usual definitions. Let E(e|X) = 0 and Σ = diag(γ_1, γ_2, ..., γ_N). Notice that, as a covariance matrix, Σ is symmetric and nonnegative definite. (i) Derive Var(β̂_OLS | X). (ii) Let β̃ = CY be any other linear unbiased estimator, where C is an N × K function of X. Prove Var(β̃ | X) ≥ ... 3. An oracle...

  • True or False: Given the necessary assumption E(u|X) = 0, β̂ is a random variable...

    True or False: Given the necessary assumption E(u|X) = 0, β̂ is a random variable with a distribution centered at 0. Given the necessary assumption E(u|X) = 0, β̂ is a random variable with a distribution centered at β. Adding more independent variables to a model will only increase R² if they provide meaningful variation. Adjusted R² measures the proportion of the variation in the dependent variable that has been explained by the variation in the independent variables. If...

  • 5. (a) (6 marks) Let X be a random variable following N(2, 4). Let Y be a...

    5. (a) (6 marks) Let X be a random variable following N(2, 4). Let Y be a random variable following N(1, 8). Assume X and Y are independent. Let W = min(X, Y). Find P(W ≤ 3). (b) (8 marks) The continuous random variables X and Y have the following joint probability density function: f(x, y) = 4x ..., 0 otherwise. Find the joint probability density function of U and V, where U = X + Y and V = ... Also draw the support of the joint probability density function of U and V. ...

  • 1. Let φ(x) = 2x² and let Y = φ(X). (a) Consider the case X ~ U(−1, 1)....

    1. Let φ(x) = 2x² and let Y = φ(X). (a) Consider the case X ~ U(−1, 1). Obtain f_Y and compute E[Y]. (b) Now instead assume that Y ~ U(0, 1/2) and that X is a continuous random variable. Explain carefully why it is possible to choose f_X such that f_X(x) = 0 whenever |x| > 1. Obtain an expression linking f_X(x) to f_X(−x) for x ∈ (−1, 1). Show that E[X] = −2/3 + 2 ∫ x f_X(x) dx. Using your expression...
