4. Suppose we run a regression model Y = β0 + β1X + U when the true model is Y = α0 + α1X² + V. Assume th...
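A quick, purely illustrative simulation of the situation the question describes (the design X ~ Uniform(0, 1) and the coefficient values are assumptions, not part of the problem): we generate data from the quadratic truth and fit the misspecified linear model, so the linear slope converges to Cov(X, X²)/Var(X) times α1 rather than to any "true" linear slope.

```python
import numpy as np

# Hypothetical simulation: fit the misspecified linear model Y = b0 + b1*X
# when the truth is Y = a0 + a1*X^2 + V. Design and coefficients are assumed.
rng = np.random.default_rng(0)
n = 100_000
a0, a1 = 1.0, 2.0
x = rng.uniform(0, 1, n)                    # assumed design: X ~ U(0, 1)
y = a0 + a1 * x**2 + rng.normal(size=n)

# OLS slope and intercept of the misspecified linear fit
b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
b0 = y.mean() - b1 * x.mean()

# For X ~ U(0,1): Cov(X, X^2) = 1/4 - 1/6 = 1/12 and Var(X) = 1/12,
# so the population linear slope is a1 * (1/12)/(1/12) = a1 = 2.0,
# and the intercept is a0 + a1*E[X^2] - 2*E[X] = 1 + 2/3 - 1 = 2/3.
print(b0, b1)   # the slope matches a1 here only because of this design
```

The point is that the probability limit of the linear slope depends entirely on the design distribution of X, not on a "true" linear coefficient.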
Consider the regression model y = β0 + β1x1 + β2x2 + u. Suppose this is estimated by Feasible Weighted Least Squares (FWLS) assuming a conditional variance function Var(u|x) = σ²h(x). Which of the following statements is correct?
A) The function h(x) does not need to be estimated as part of the procedure.
B) If the assumption about the conditional variance of the error term is incorrect, then FWLS is still consistent.
C) FWLS is the best linear unbiased estimator when there is heteroscedasticity.
D) None of the above answers...
Consider the following simple regression model: y = β0 + β1x + u.
a. Suppose that OLS assumptions 1 to 4 hold. We know that the homoskedasticity assumption is stated as: Var(uᵢ|xᵢ) = σ² for all i. Now, suppose that homoskedasticity does not hold. Mathematically, this is expressed as Var(uᵢ|xᵢ) = σᵢ². In other words, the subscript i in σᵢ² means that the conditional variance of the errors is different for each individual i. Under heteroskedasticity, we can derive the expression for the variance of β̂₁ as Var(β̂₁) = Σᵢ(xᵢ − x̄)²σᵢ² / SSTₓ², where SSTₓ is the...
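The heteroskedastic variance formula Var(β̂₁) = Σᵢ(xᵢ − x̄)²σᵢ² / SSTₓ² can be checked numerically. The sketch below (the design and the variance function σᵢ² = 0.5 + xᵢ are assumptions for illustration) simulates many samples with a fixed design and compares the Monte Carlo variance of β̂₁ with the formula.

```python
import numpy as np

# Monte Carlo check of Var(b1) = sum((x_i - xbar)^2 * sigma_i^2) / SSTx^2.
rng = np.random.default_rng(2)
n, reps = 50, 20_000
x = rng.uniform(0, 1, n)            # fixed design across replications
sigma2 = 0.5 + x                    # assumed variance function sigma_i^2
xd = x - x.mean()
SSTx = np.sum(xd**2)

# Simulate all replications at once; b1 = sum(xd_i * y_i) / SSTx
u = rng.normal(scale=np.sqrt(sigma2), size=(reps, n))
y = 1.0 + 2.0 * x + u
b1 = (y @ xd) / SSTx

formula = np.sum(xd**2 * sigma2) / SSTx**2
print(b1.var(), formula)            # the two should agree closely
```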
1. If a true model of simple linear regression reads: yᵢ − ȳ = β0 + β1(xᵢ − x̄) + εᵢ for i = 1, 2, ..., n, show β0 = 0 and β̂0 = 0. (1pt) (Hint: use the formula of the estimator β̂0 = ȳ − β̂1x̄.)
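One way to see both claims, applying the hint's formula to the centered variables y* = y − ȳ and x* = x − x̄ (a sketch, not the only valid write-up):

```latex
% Population intercept: averaging the model over i gives
%   0 = \beta_0 + \beta_1 \cdot 0 + \bar\varepsilon,
% and taking expectations with E[\varepsilon_i] = 0 yields \beta_0 = 0.
% Estimated intercept, via \hat\beta_0 = \bar y^{*} - \hat\beta_1 \bar x^{*}:
\begin{align*}
\overline{y^{*}} &= \tfrac{1}{n}\textstyle\sum_i (y_i - \bar y) = 0,
\qquad
\overline{x^{*}} = \tfrac{1}{n}\textstyle\sum_i (x_i - \bar x) = 0,\\
\hat\beta_0 &= \overline{y^{*}} - \hat\beta_1\,\overline{x^{*}}
            = 0 - \hat\beta_1 \cdot 0 = 0.
\end{align*}
```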
Suppose we fit the simple linear regression model (with the usual assumptions) Y = β0 + β1X + ε and get the estimated regression model Ŷ = b0 + b1X. What aspect or characteristic of the distribution of Y does b0 estimate?
- the value of Y for a given value of X
- the total variability in Y that is explained by X
- the population mean of Y when X = 0
- the increase in the mean of Y...
2. Suppose we are given data on n observations (xᵢ, Yᵢ), i = 1, ..., n, and we have a linear model, so that E(Yᵢ) = β0 + β1xᵢ. Let β̂1 = S_XY/S_XX and β̂0 = Ȳ − β̂1x̄ be the least-squares estimates given in lecture. (a) Show that E(S_XY) = β1·S_XX and E(Ȳ) = β0 + β1x̄. (b) Use (a) to show that E(β̂1) = β1 and E(β̂0) = β0. In other words, these are unbiased estimators. (c) The fitted values Ŷᵢ = β̂0 + β̂1xᵢ are used as estimates of E(Yᵢ),...
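A sketch of the part (a) computation, treating the xᵢ as fixed (the standard setup for these lecture estimators):

```latex
% Since \sum_i (x_i - \bar x) = 0, the constant \bar Y drops out of S_{XY}:
\begin{align*}
S_{XY} &= \sum_i (x_i - \bar x)(Y_i - \bar Y) = \sum_i (x_i - \bar x) Y_i,\\
E(S_{XY}) &= \sum_i (x_i - \bar x)(\beta_0 + \beta_1 x_i)
           = \beta_1 \sum_i (x_i - \bar x) x_i = \beta_1 S_{XX},\\
E(\bar Y) &= \tfrac{1}{n}\sum_i (\beta_0 + \beta_1 x_i) = \beta_0 + \beta_1 \bar x.
\end{align*}
% Part (b) then follows:
% E(\hat\beta_1) = E(S_{XY})/S_{XX} = \beta_1, and
% E(\hat\beta_0) = E(\bar Y) - \bar x\, E(\hat\beta_1) = \beta_0.
```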
(Do this problem without using R) Consider the simple linear regression model y =β0 + β1x + ε, where the errors are independent and normally distributed, with mean zero and constant variance σ2. Suppose we observe 4 observations x = (1, 1, −1, −1) and y = (5, 3, 4, 0). (a) Fit the simple linear regression model to this data and report the fitted regression line. (b) Carry out a test of hypotheses using α = 0.05 to determine...
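The problem asks for a hand computation, but the arithmetic for part (a) can be verified numerically (this only checks the hand result; it does not replace it). With x̄ = 0 and ȳ = 3, the slope is Sₓᵧ/Sₓₓ = 4/4 = 1 and the intercept is ȳ = 3, so the fitted line is ŷ = 3 + x.

```python
import numpy as np

# Numerical check of the hand computation in part (a).
x = np.array([1.0, 1.0, -1.0, -1.0])
y = np.array([5.0, 3.0, 4.0, 0.0])

Sxx = np.sum((x - x.mean())**2)                 # 4
Sxy = np.sum((x - x.mean()) * (y - y.mean()))   # 4
b1 = Sxy / Sxx                                  # 1.0
b0 = y.mean() - b1 * x.mean()                   # 3.0
print(b0, b1)                                   # fitted line: yhat = 3 + x
```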
1. Functional form misspecification and RESET. Consider the following model that satisfies assumption MLR.4: y = β0 + β1x1 + ... + βkxk + u. Which of the following describes the regression specification error test (RESET)? Pick all that apply.
- RESET picks up all kinds of neglected nonlinearities when more quadratic terms are added to the original model.
- RESET works better when there are many explanatory variables in the original model, as it increases its degrees of freedom.
- To implement RESET, the researcher must add at least seven...
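For concreteness, a minimal sketch of the standard RESET procedure on simulated data (the data-generating process and names are assumptions for illustration): fit the original model by OLS, add ŷ² and ŷ³ as extra regressors, and F-test the two added terms. Note that only two polynomial-in-ŷ terms are added, not "at least seven" extra regressors.

```python
import numpy as np

# Manual RESET: restricted model vs. model augmented with yhat^2, yhat^3.
rng = np.random.default_rng(3)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
# Assumed DGP with a neglected nonlinearity (the x1^2 term):
y = 1.0 + 2.0 * x1 - x2 + 0.8 * x1**2 + rng.normal(size=n)

def ssr(X, y):
    """Sum of squared residuals from an OLS fit of y on X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    return e @ e

X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ b

Xr = np.column_stack([X, yhat**2, yhat**3])     # augmented model
ssr_r, ssr_u = ssr(X, y), ssr(Xr, y)
q, df = 2, n - Xr.shape[1]
F = ((ssr_r - ssr_u) / q) / (ssr_u / df)
print(F)   # large F => evidence of functional form misspecification
```

With the quadratic term deliberately omitted from the fitted model, the F statistic is large and RESET correctly flags the misspecification.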
6. Consider the following regression model without an intercept: Yᵢ = β1Xᵢ + Uᵢ. One possible estimator for this model is given by: β̂1 = (Σᵢ XᵢYᵢ) / (Σᵢ Xᵢ²). Assume that you can make all of the usual ordinary least squares assumptions about the model, including the assumption that the true model does not include an intercept. Is β̂1 an unbiased estimator? Please prove your conclusion, being sure to state the assumptions you use. [5 points]
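A sketch of the usual argument, assuming the (partly garbled) estimator is the standard no-intercept least-squares estimator β̂1 = ΣXᵢYᵢ / ΣXᵢ², and using zero conditional mean of the errors plus ΣXᵢ² > 0:

```latex
\begin{align*}
\hat\beta_1
  &= \frac{\sum_i X_i Y_i}{\sum_i X_i^2}
   = \frac{\sum_i X_i(\beta_1 X_i + U_i)}{\sum_i X_i^2}
   = \beta_1 + \frac{\sum_i X_i U_i}{\sum_i X_i^2},\\
E[\hat\beta_1 \mid X]
  &= \beta_1 + \frac{\sum_i X_i\, E[U_i \mid X]}{\sum_i X_i^2}
   = \beta_1
  \quad\text{since } E[U_i \mid X] = 0,
\end{align*}
% so by the law of iterated expectations E[\hat\beta_1] = \beta_1:
% the estimator is unbiased.
```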