The standard linear regression model is y = Xw + e, where X is an n × d matrix of...
Consider a linear regression model with k predictor variables X1, …, Xk and a target variable y: y = β0 + β1X1 + … + βkXk + ε. We take n measurements of the predictor and target variables to obtain the matrix equation y = Xβ + ε, where y is n × 1 and X is n × (k+1). With SSE = ε̂ᵀε̂, where ε̂ = y − Xβ̂, calculate the number of degrees of freedom of SSE.
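A small numerical sketch of the setup above (data, seed, and dimensions are invented for illustration): with an intercept plus k predictors, the fitted residuals lie in the orthogonal complement of the (k+1)-dimensional column space of X, which is what drives the degrees-of-freedom count n − (k+1).

```python
import numpy as np

# Illustrative sketch: degrees of freedom of SSE in a model with an
# intercept and k predictors, i.e. a design matrix with k+1 columns.
rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # n x (k+1)
beta = np.arange(k + 1, dtype=float)
y = X @ beta + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat          # residuals are orthogonal to col(X)
sse = resid @ resid

df_sse = n - (k + 1)              # degrees of freedom of SSE
s2 = sse / df_sse                 # unbiased estimator of sigma^2
print(df_sse)                     # -> 46
```

The orthogonality check `X.T @ resid ≈ 0` is exactly the normal equations; the k+1 constraints they impose on the n residuals are what SSE "loses" relative to n.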
The linear regression model in matrix format is Y = Xβ + e, with the usual definitions. Let E(e|X) = 0 and Var(e|X) = Σ = diag(γ1, γ2, …, γN), a diagonal matrix with entries γ1, …, γN. Notice that as a covariance matrix, Σ is symmetric and nonnegative definite. (i) Derive Var(β̂_OLS | X). (ii) Let β̃ = C′Y be any other linear unbiased estimator, where C′ is an N × K function of X. Prove Var(β̃|X) ≥ (X′Σ⁻¹X)⁻¹. The...
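A hedged numerical illustration of the bound in (ii) (dimensions and the γ values are my own invention): under heteroskedasticity, OLS has the sandwich variance (X′X)⁻¹X′ΣX(X′X)⁻¹, while the GLS estimator attains the lower bound (X′Σ⁻¹X)⁻¹, so their difference should be nonnegative definite.

```python
import numpy as np

# Sketch: compare the OLS conditional variance with the GLS bound
# (X' Sigma^{-1} X)^{-1} when Var(e|X) = Sigma = diag(g1, ..., gN).
rng = np.random.default_rng(1)
N, K = 40, 3
X = rng.normal(size=(N, K))
gamma = rng.uniform(0.5, 4.0, size=N)      # heteroskedastic variances
Sigma = np.diag(gamma)

XtX_inv = np.linalg.inv(X.T @ X)
var_ols = XtX_inv @ X.T @ Sigma @ X @ XtX_inv   # sandwich form
var_gls = np.linalg.inv(X.T @ np.diag(1.0 / gamma) @ X)

diff = var_ols - var_gls
print(np.linalg.eigvalsh(diff).min() >= -1e-8)  # nonnegative definite
```

The proof of (ii) is the Gauss–Markov argument applied after whitening by Σ^{-1/2}; the check here only verifies the conclusion numerically on one draw.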
3. An oracle...
Decide (with short explanations) whether the following statements are true or false. e) In a simple linear regression model with explanatory variable x and outcome variable y, we have the summary statistics x̄ = 10, s_x = 3, s_y = 5, and ȳ = 20. For a new data point with x = 13, it is possible that the predicted value is ŷ = 26. f) A standard multiple regression model with continuous predictors x1 and x2, a categorical predictor T with four values, an interaction between x1 and...
The random vector Y = (Y1, …, Yn)ᵀ is such that Y = Xβ + ε, where X is an n × p full-rank matrix of known constants, β is a p-length vector of unknown parameters, and ε is an n-length vector of random variables. A multiple linear regression model is fitted to the data. (a) Write down the multiple linear regression model assumptions in matrix format. (b) Derive the least squares estimator β̂ of β. (c) Using the data:...
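Part (b) has the standard closed form β̂ = (XᵀX)⁻¹Xᵀy when X has full column rank. A minimal sketch (data and dimensions invented) checking it against a library solver:

```python
import numpy as np

# Sketch of part (b): the least squares estimator solves the normal
# equations X'X b = X'y, giving b = (X'X)^{-1} X'y for full-rank X.
rng = np.random.default_rng(2)
n, p = 30, 4
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta + rng.normal(scale=0.1, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)     # closed-form LSE
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))
```

`np.linalg.solve` on the normal equations is shown here for transparency; `lstsq` (QR/SVD-based) is the numerically preferred route in practice.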
Q. 1 Consider the multiple linear regression model Y = Xβ + ε, where ε ~ MVN(0, σ²V) and V ≠ I_n is a diagonal matrix. a) Derive the weighted least squares estimator for β, i.e., β̂_WLS. b) Show β̂_WLS is an unbiased estimator for β. c) Derive the variances of β̂_WLS and the OLS estimator of β. Is the OLS estimator of β still the BLUE? In one sentence, explain why or why not.
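For part (a), the WLS estimator with known diagonal V is β̂_WLS = (XᵀV⁻¹X)⁻¹XᵀV⁻¹y, equivalently OLS after rescaling each row by 1/√v_i. A sketch with invented data:

```python
import numpy as np

# Sketch of part (a): weighted least squares with Var(eps) = sigma^2 V,
# V = diag(v1, ..., vn) known.
rng = np.random.default_rng(3)
n, p = 25, 3
X = rng.normal(size=(n, p))
v = rng.uniform(0.5, 2.0, size=n)            # diagonal of V
beta = np.array([2.0, 0.0, -1.0])
y = X @ beta + rng.normal(size=n) * np.sqrt(v)

Vinv = np.diag(1.0 / v)
b_wls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# Equivalent "whitened" regression: divide each row by sqrt(v_i),
# then run ordinary least squares on the transformed data.
w = 1.0 / np.sqrt(v)
b_white, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(np.allclose(b_wls, b_white))
```

The whitening view also answers (c) conceptually: after the transform the Gauss–Markov conditions hold, so WLS (not OLS) is BLUE when V ≠ I_n.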
Consider the simple linear regression model y_i = β0 + β1 x_i + ε_i, where the errors ε1, …, εn are i.i.d. random variables with E(ε_i) = 0, Var(ε_i) = σ², i = 1, …, n. Solve either one of the questions below. 1. Let β̂1 be the least squares estimator for β1. Show that β̂1 is the best linear unbiased estimator for β1. (Note: you can read the proof on Wikipedia, but you cannot use the matrix notation in this proof.) 2. Consider a new loss function L_λ(·, ·) … where...
4. Consider the simple linear regression model: y_i = β0 + β1 x_i + ε_i, for i = 1, …, n. Write out the expressions for y, β, ε, and X such that the model can be written in matrix form.
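The requested stacking can be sketched directly (the x values here are invented): the n scalar equations become y = Xβ + ε with X = [1 x_i] row by row, β = (β0, β1)ᵀ, and y, ε the stacked n-vectors.

```python
import numpy as np

# Sketch of the matrix form of y_i = beta_0 + beta_1 x_i + eps_i:
# X has a column of ones (intercept) and a column of x values.
x = np.array([1.0, 2.0, 3.0, 4.0])     # illustrative x_i values
n = len(x)
X = np.column_stack([np.ones(n), x])    # n x 2 design matrix
beta = np.array([0.5, 2.0])             # (beta_0, beta_1)
eps = np.zeros(n)                       # errors set to 0 for the check
y = X @ beta + eps
print(y)                                # -> [2.5 4.5 6.5 8.5]
```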
Suppose we have the full-rank linear model y = Xβ + ε with n × p design matrix X and normal errors ε ~ N(0, σ²I_{n×n}). Let b be the least squares estimator of β. (c) Prove that (b − β)ᵀXᵀX(b − β)/σ² follows the χ²_p distribution. Hint: write Xb in terms of X, β, and ε. (d) Hence derive the 100(1 − α)% joint confidence region of β given in notes, (b − β)ᵀXᵀX(b − β)/(p σ̂²) ≤ F_{α; p, n−p}, where F_{α; p, n−p} denotes the upper αth quantile of the F_{p, n−p} distribution.
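The hint in (c) can be checked numerically (data invented): since b − β = (XᵀX)⁻¹Xᵀε, we get X(b − β) = Hε with H = X(XᵀX)⁻¹Xᵀ, so the quadratic form equals εᵀHε, a rank-p projection of normal errors, which is where the χ²_p comes from.

```python
import numpy as np

# Sketch of the algebra behind (c): verify on one draw that
# (b - beta)' X'X (b - beta) = eps' H eps with H the hat matrix.
rng = np.random.default_rng(4)
n, p = 20, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, 2.0, 3.0])
eps = rng.normal(size=n)
y = X @ beta + eps

b = np.linalg.solve(X.T @ X, X.T @ y)
H = X @ np.linalg.solve(X.T @ X, X.T)    # hat (projection) matrix

quad = (b - beta) @ X.T @ X @ (b - beta)
print(np.isclose(quad, eps @ H @ eps))   # identity from the hint
print(np.isclose(np.trace(H), p))        # rank of the projection is p
```

This only verifies the deterministic identity; the distributional claim then follows because H is a symmetric idempotent matrix of rank p.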
In the context of multiple regression, define the n × n matrix M = I_n − X(X′X)⁻¹X′. (i) Show that M is symmetric and idempotent. (ii) Prove that m_t, the diagonal elements of the matrix M, satisfy 0 ≤ m_t ≤ 1 for t = 1, 2, …, n. (iii) Consider the linear model y = Xβ + u satisfying the Gauss–Markov assumptions. Let û be the vector of OLS residuals. Show that E(ûû′ | X) = σ²M. (iv) Conclude that while the errors {u_t...
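Parts (i)–(ii) are easy to sanity-check numerically on invented data: the residual-maker M = I_n − X(X′X)⁻¹X′ should be symmetric and idempotent, with diagonal entries in [0, 1].

```python
import numpy as np

# Numerical check of (i)-(ii): M = I - X(X'X)^{-1}X' is the
# projection onto the orthogonal complement of col(X).
rng = np.random.default_rng(5)
n, k = 15, 3
X = rng.normal(size=(n, k))
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)

print(np.allclose(M, M.T))       # (i) symmetric
print(np.allclose(M @ M, M))     # (i) idempotent
m = np.diag(M)
print((m >= -1e-12).all() and (m <= 1 + 1e-12).all())  # (ii)
```

The bound in (ii) follows from (i): m_t = (M²)_tt = Σ_s M_ts² ≥ m_t², which forces 0 ≤ m_t ≤ 1; the code only spot-checks the conclusion.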