Consider the simple linear regression model, where σ² is known. Assume the x's are fixed and known, an...
2. Consider a simple linear regression model for a response variable Y_i, a single predictor variable x_i, i = 1, ..., n, having Gaussian (i.e. normally distributed) errors: Y_i = βx_i + ε_i, with ε_i i.i.d. N(0, σ²). This model is often called "regression through the origin" since E(Y_i) = 0 if x_i = 0. (a) Write down the likelihood function for the parameters β and σ². (b) Find the MLEs for β and σ², explicitly showing that they are unique maximizers of the likelihood function. Hint: The function g(x) = log(x) + 1 − x...
3. Consider the linear model Y_i = α + βx_i + ε_i for i = 1, ..., n, where E(ε_i) = 0. Further assume that Σ_i x_i = 0 and Σ_i x_i² = n. (a) Show that the least squares estimates (LSEs) of α and β are given by α̂ = Ȳ and β̂ = (1/n) Σ_i x_iY_i. (b) Show that the LSEs in (a) are unbiased. (c) Assume that E(ε_i²) = σ² and E(ε_iε_j) = 0 for all i ≠ j, where σ² > 0. Show that V(β̂) = σ²/n, and (d) use (b) and (c) above to show that the LSEs are consistent...
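As a quick numerical sanity check of part (a) (a sketch on synthetic data; the sample size, seed, and coefficients below are my own choices, not part of the problem): with Σx_i = 0 and Σx_i² = n, the OLS fit should reduce to α̂ = Ȳ and β̂ = (1/n) Σ x_iY_i.

```python
import numpy as np

# Synthetic data satisfying the problem's side conditions
rng = np.random.default_rng(0)
n = 8
x = rng.normal(size=n)
x = x - x.mean()                    # enforce sum x_i = 0
x = x * np.sqrt(n / np.sum(x**2))   # enforce sum x_i^2 = n
Y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=n)

# OLS via the design matrix [1, x]
X = np.column_stack([np.ones(n), x])
alpha_hat, beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]

# Closed forms claimed in (a)
assert np.isclose(alpha_hat, Y.mean())
assert np.isclose(beta_hat, np.sum(x * Y) / n)
```

The centering Σx_i = 0 is what makes the two normal equations decouple, so α̂ no longer involves β̂.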
A simple linear regression model is given as follows: Y_i = β_0 + β_1 x_i + ε_i, for i = 1, ..., n, where the ε_i are i.i.d. following a N(0, σ²) distribution. It is known that x_i = 1 for i = 1, ..., n_1 (n_1 < n), and x_i = 0 otherwise. Denote n_2 = n − n_1, ȳ_1 = (1/n_1) Σ_{i=1}^{n_1} y_i, and ȳ_2 = (1/n_2) Σ_{i=n_1+1}^{n} y_i. 1. Find the least squares estimators of β_0 and β_1, in terms of...
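A numerical check of part 1 (the closed form asserted below is my own algebra, not quoted from the source): with a 0/1 covariate, OLS reduces to group means, β̂_1 = ȳ_1 − ȳ_2 and β̂_0 = ȳ_2.

```python
import numpy as np

# x_i = 1 for the first n1 observations, 0 for the remaining n2
rng = np.random.default_rng(1)
n1, n2 = 4, 6
n = n1 + n2
x = np.concatenate([np.ones(n1), np.zeros(n2)])
y = rng.normal(size=n)

# OLS fit with intercept
X = np.column_stack([np.ones(n), x])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]

ybar1 = y[:n1].mean()   # mean over i = 1, ..., n1
ybar2 = y[n1:].mean()   # mean over i = n1 + 1, ..., n
assert np.isclose(b0, ybar2)
assert np.isclose(b1, ybar1 - ybar2)
```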
In the simple linear regression with zero constant term for (x_i, y_i), where i = 1, 2, ..., n: Y_i = βx_i + ε_i, where {ε_i}_{i=1}^n are i.i.d. N(0, σ²). (a) Derive the normal equation that the LS estimator β̂ satisfies. (b) Show that the LS estimator of β is given by β̂ = Σ_{i=1}^n x_iY_i / Σ_{i=1}^n x_i². (c) Show that E(β̂) = β, Var(β̂) = σ²/Σ_{i=1}^n x_i²...
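A sketch checking (a)–(b) numerically (the data and seed are my own): the no-intercept normal equation is Σ_i x_i(Y_i − β̂x_i) = 0, whose unique solution is β̂ = Σx_iY_i / Σx_i².

```python
import numpy as np

# Synthetic data from a through-the-origin model
rng = np.random.default_rng(2)
n = 10
x = rng.normal(size=n)
Y = 1.7 * x + rng.normal(scale=0.3, size=n)

# Closed-form LS estimator from (b)
beta_hat = np.sum(x * Y) / np.sum(x**2)

# Residuals are orthogonal to x, i.e. the normal equation holds
assert np.isclose(np.sum(x * (Y - beta_hat * x)), 0.0)
# Agrees with a generic no-intercept least-squares solve
assert np.isclose(beta_hat, np.linalg.lstsq(x.reshape(-1, 1), Y, rcond=None)[0][0])
```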
(Do this problem without using R) Consider the simple linear regression model y = β0 + β1x + ε, where the errors are independent and normally distributed, with mean zero and constant variance σ². Suppose we have 4 observations x = (1, 1, −1, −1) and y = (5, 3, 4, 0). (a) Fit the simple linear regression model to these data and report the fitted regression line. (b) Carry out a test of hypotheses using α = 0.05 to determine...
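The problem asks for a hand computation (no R); the following is only an arithmetic check of part (a) using the standard OLS formulas.

```python
import numpy as np

# The four observations from the problem
x = np.array([1.0, 1.0, -1.0, -1.0])
y = np.array([5.0, 3.0, 4.0, 0.0])

# Slope: Sxy / Sxx; intercept: ybar - b1 * xbar
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

# Fitted regression line: y_hat = 3 + 1 * x
assert np.isclose(b0, 3.0)
assert np.isclose(b1, 1.0)
```

Since x̄ = 0 here, the slope simplifies to Σx_iy_i / Σx_i² = 4/4 = 1 and the intercept to ȳ = 3.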
Linear statistical modeling & regression
1) Consider n data points with 3 covariates and observations {(x_{i,1}, x_{i,2}, x_{i,3}, y_i); i = 1, ..., n}, and you fit the following model: y_i = β_0 + β_1 x_{i,1} + β_2 x_{i,2} + β_3 x_{i,3} + ε_i, where the ε_i's are independent normal with mean zero and variance σ². Here, n = 50. Let Y be the vector (Y_1, ..., Y_n). Assume the covariates are centered: Σ_i x_{i,k} = 0, k = 1, 2, 3. Assume X'X is a diagonal matrix...
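When X'X is diagonal, the OLS coefficients decouple: β̂_k = Σ_i x_{i,k} y_i / Σ_i x_{i,k}². A small sketch illustrating this (the ±1 design, sample size, and seed below are my own construction, not from the problem):

```python
import numpy as np

# Centered, mutually orthogonal covariate columns (so X'X is diagonal)
n = 8
x1 = np.tile([1.0, -1.0], 4)       # centered +/-1 pattern
x2 = np.repeat([1.0, -1.0], 4)     # centered, orthogonal to x1
x3 = x1 * x2                       # centered, orthogonal to x1 and x2
X = np.column_stack([np.ones(n), x1, x2, x3])
assert np.allclose(X.T @ X, np.diag(np.diag(X.T @ X)))  # X'X is diagonal

rng = np.random.default_rng(3)
y = 1.0 + 0.5 * x1 - 2.0 * x2 + 0.3 * x3 + rng.normal(size=n)

# Full OLS solve vs. the coordinate-wise formula beta_k = X_k.y / X_k.X_k
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
beta_coord = np.array([X[:, k] @ y / (X[:, k] @ X[:, k]) for k in range(4)])
assert np.allclose(beta_full, beta_coord)
```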
Consider a Gaussian linear model Y = aX + ε in a Bayesian view. Consider the prior π(a) = 1 for all a ∈ ℝ. Determine whether each of the following statements is true or false. (1) π(a) is a uniform prior. (a) True (b) False (2) π(a) is a Jeffreys prior when we consider the likelihood L(Y = y | A = a, X = x) (where we assume x is known). (a) True (b) False. Consider a linear regression model Y = Xβ + σε, where ε ∈ ℝⁿ is a random vector with E[ε] = 0, E[εεᵀ] = I....
Question 2: Hypothesis testing (30 pts)

Consider the following simple linear regression model, where ε_1, ε_2, ..., ε_n are i.i.d. errors with E[ε_i] = 0 and var(ε_i) = σ². The output of linear regression from R takes the form

Call:
lm(formula = y ~ x + 1)

Residuals:
    Min      1Q  Median      3Q     Max
-2.0606 -0.3287 -0.1148  0.5902  1.2809

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.507932   0.340896    1.49    0.147
x           0.049656   0.003455   14.37 1.89e-14 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.7911 on 28 degrees...
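As a quick arithmetic check (sketch in Python): R's "t value" column is just Estimate divided by Std. Error, using the numbers printed in the output above.

```python
# Coefficient estimates and standard errors from the R output
est_int, se_int = 0.507932, 0.340896
est_x, se_x = 0.049656, 0.003455

# t value = Estimate / Std. Error, rounded as in the summary table
assert round(est_int / se_int, 2) == 1.49
assert round(est_x / se_x, 2) == 14.37
```

The reported p-values then come from comparing these t values to a t distribution with the residual degrees of freedom (28 here).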