A simple linear regression model is given as follows: Yi = β0 + β1 Xi + εi, for ...
1. Consider the simple linear regression model: Yi = β0 + β1 xi + εi, where ε1, ..., εn are i.i.d. N(0, σ²), for i = 1, 2, ..., n. Let b1 = Sxy/Sxx and b0 = Ȳ − b1 x̄ be the least squares estimators of β1 and β0, respectively. We showed in class that Ȳ ~ N(β0 + β1 x̄, σ²/n) and b1 ~ N(β1, σ²/Sxx). We also showed in class that b1 and Ȳ are uncorrelated, i.e. σ{Ȳ, b1} = 0. (a) Show that b0 is ...
7.22. In the regression model Yi = β0 + β1 Xi + β2(3Xi² − 2) + εi, i = 1, 2, 3, with X1 = −1, X2 = 0, and X3 = 1, what happens to the least squares estimates of β0 and β1 when β2 = 0? Why?
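A quick numerical sketch of the idea behind 7.22 (toy response values assumed, not from the problem): at these design points the column 3X² − 2 = (1, −2, 1) is orthogonal to both the intercept column and X, so including or dropping the β2 term leaves the least squares estimates of β0 and β1 unchanged.

```python
import numpy as np

# Design points from the problem: X1 = -1, X2 = 0, X3 = 1
X = np.array([-1.0, 0.0, 1.0])
z = 3 * X**2 - 2               # the quadratic term: (1, -2, 1)

# Orthogonality of z to the intercept column and to X
print(z @ np.ones(3))          # 0.0
print(z @ X)                   # 0.0

# Arbitrary illustrative responses (an assumption, not given in the problem)
Y = np.array([2.0, 1.0, 4.0])

# Fit with and without the beta2 term
A_full = np.column_stack([np.ones(3), X, z])
A_red = np.column_stack([np.ones(3), X])
b_full, *_ = np.linalg.lstsq(A_full, Y, rcond=None)
b_red, *_ = np.linalg.lstsq(A_red, Y, rcond=None)

print(b_full[:2])              # estimates of beta0, beta1 in the full model
print(b_red)                   # identical in the reduced model, by orthogonality
```

Because the columns are mutually orthogonal, each coefficient estimate depends only on its own column, which is the point the exercise is driving at.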
2. Consider the simple linear regression model: Yi = β0 + β1 xi + εi, where ε1, ..., εn are i.i.d. N(0, σ²), for i = 1, 2, ..., n. Suppose that we would like to estimate the mean response at x = x*, that is, we want to estimate μ_{Y|x*} = β0 + β1 x*. The least squares estimator for μ_{Y|x*} is μ̂ = b0 + b1 x*, where b0, b1 are the least squares estimators for β0, β1. (a) Show that the least squares estimator for ...
1. Suppose that Yi = β0 + β1 Xi + εi, where εi is N(0, 0.6), β0 = 2, and β1 = 1. (a) What are the conditional mean and standard deviation of Yi given that Xi = 1? What is P(Yi < 3 | Xi = 1)? (b) A regression model is a model for the conditional distribution of Yi given Xi. However, if we also have a model for the marginal distribution of Xi, then we can find the marginal distribution of ...
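A sanity check on part (a) (a sketch; here 0.6 is read as the variance of εi, which is an assumption since the notation N(0, 0.6) is ambiguous): given Xi = 1, the conditional mean is β0 + β1·1 = 3, so P(Yi < 3 | Xi = 1) = 0.5 regardless of the error variance, because the threshold equals the conditional mean.

```python
from statistics import NormalDist

beta0, beta1 = 2.0, 1.0
var_eps = 0.6                        # assumption: 0.6 is Var(eps)
x = 1.0

cond_mean = beta0 + beta1 * x        # conditional mean = 3.0
cond_sd = var_eps ** 0.5             # conditional standard deviation

# P(Y < 3 | X = 1): a normal evaluated at its own mean gives 0.5
p = NormalDist(mu=cond_mean, sigma=cond_sd).cdf(3.0)
print(cond_mean, cond_sd, p)         # 3.0, ~0.7746, 0.5
```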
Exercise 5. Consider a linear model with n = 2m in which Yi = β0 + β1 Xi + εi, i = 1, ..., m, and Yi = β0 + β2 Xi + εi, i = m + 1, ..., n. Here ε1, ..., εn are i.i.d. from N(0, σ²); β = (β0, β1, β2)′ and σ² are unknown parameters; X1, ..., Xn are known constants with X1 + ... + Xm = Xm+1 + ... + Xn = 0. 1. Write the model in vector form ...
Consider the regression equation Yi = β0 + β1 Xi + ui, where E[ui | Xi] = 0 for all i = 1, ..., n. Let β̂1 be the OLS estimator for β1. Which statement is the most irrelevant to the consistency of β̂1? Hint: see Lecture Note 2 (pp. 25-28). a. When n is large, the estimator β̂1 is near the population parameter β1. b. Consistency of β̂1 is mathematically written as β̂1 → β1 in probability as n → ∞. c. V(β̂1) is inversely proportional to the sample size n. d. RMSE is close ...
Consider the simple linear regression model: Yi = β0 + β1 xi + εi, i = 1, ..., n, with the least squares estimates β̂ = (β̂0, β̂1)′. We observe a new value of the predictor: x0′ = (1, x0). Show that the expression for the 100(1 − α)% prediction interval reduces to the following: β̂0 + β̂1 x0 ± t(α/2; n − 2) · s · √(1 + 1/n + (x0 − x̄)² / Σ(xi − x̄)²)
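The prediction interval formula above can be computed directly on toy data (a sketch; the data, the new point x0 = 6.5, and the table value t_{0.025,10} = 2.228 are all illustrative assumptions, not from the problem):

```python
import math

# Toy data, n = 12 (illustrative values only)
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1, 18.0, 20.2, 21.8, 24.1]
n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n

# Least squares estimates b1 = Sxy/Sxx, b0 = ybar - b1*xbar
Sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / Sxx
b0 = ybar - b1 * xbar

# Residual standard error s on n - 2 degrees of freedom
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
s = math.sqrt(sse / (n - 2))

x0 = 6.5                  # new predictor value (assumed for illustration)
t_crit = 2.228            # t_{0.025, 10} from standard tables
half = t_crit * s * math.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / Sxx)
yhat0 = b0 + b1 * x0
print(yhat0 - half, yhat0 + half)   # the 95% prediction interval for Y at x0
```

Note the 1 inside the square root: it is the extra variance of the new observation itself, which is what distinguishes a prediction interval from a confidence interval for the mean response.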
Consider the regression model: yi = β0 + β1 Xi + εi for i = 1, ..., n, where Xi is a dummy variable (0 = failure and 1 = success). Suppose that the data set contains n1 failures and n2 successes (and that n1 + n2 = n). (a) Obtain the XᵀX matrix. (b) Obtain the XᵀY matrix. (c) Obtain the least squares estimate b.
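For the dummy-variable model above, XᵀX = [[n, n2], [n2, n2]], XᵀY stacks the overall sum of yi over the sum of yi among successes, and solving the normal equations gives b0 = ȳ_failure and b1 = ȳ_success − ȳ_failure. A small numerical sketch (toy data assumed, not from the problem):

```python
import numpy as np

# Toy data: n1 = 3 failures (X = 0), n2 = 2 successes (X = 1)
y = np.array([1.0, 2.0, 3.0, 10.0, 12.0])
X = np.array([0, 0, 0, 1, 1], dtype=float)
n = len(y)

A = np.column_stack([np.ones(n), X])   # design matrix with intercept column
XtX = A.T @ A
XtY = A.T @ y
print(XtX)   # [[n, n2], [n2, n2]] = [[5, 2], [2, 2]]
print(XtY)   # [sum of all y, sum of y over successes] = [28, 22]

b = np.linalg.solve(XtX, XtY)
print(b)     # b0 = mean of failures = 2.0, b1 = 11.0 - 2.0 = 9.0
```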
3. Consider the simple linear regression model yi = β0 + β1 xi + εi, and let β̂1 be the parameter estimate of the slope coefficient β1. Find the expectation and variance of β̂1. Is the parameter estimate β̂1 (a) unbiased? (b) linear in y? (c) efficient (optimal in terms of variance)? What will your answers be if you know that there is no intercept coefficient in your model?
Simulation: Assume the simple linear regression model Yi = β0 + β1 xi + εi, where εi ~ N(0, σ²), for i = 1, ..., n. Let's set β0 = 10, β1 = −2.5, and n = 30. (a) Set σ² = 100, and xi = i for i = 1, ..., n. (b) Your simulation will have 10,000 iterations. Before you start your iterations, set a random seed using your birthday date (MMDD) and report the ...
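Parts (a) and (b) might be set up as follows (a sketch under the stated parameters, reading σ² = 100 so σ = 10; the seed 101 is only a placeholder for your own MMDD):

```python
import numpy as np

beta0, beta1, n = 10.0, -2.5, 30
sigma = 10.0                       # sigma^2 = 100
x = np.arange(1, n + 1).astype(float)   # x_i = i

rng = np.random.default_rng(101)   # placeholder seed; replace with your MMDD
n_iter = 10_000
b1_draws = np.empty(n_iter)
Sxx = np.sum((x - x.mean()) ** 2)

for k in range(n_iter):
    eps = rng.normal(0.0, sigma, size=n)
    y = beta0 + beta1 * x + eps
    # least squares slope: Sxy / Sxx
    b1_draws[k] = np.sum((x - x.mean()) * (y - y.mean())) / Sxx

print(b1_draws.mean())             # close to beta1 = -2.5 (unbiasedness)
print(b1_draws.std(ddof=1))        # close to sigma / sqrt(Sxx)
```

Comparing the empirical mean and standard deviation of the 10,000 slope estimates against β1 and σ/√Sxx is the usual point of this kind of exercise.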