Do I get the right answers? If not, can someone please explain?
(a) 2 points possible (graded, results hidden). Consider, in a Bayesian view, the prior π(a) = 1 for all a ∈ ℝ, and a Gaussian linear model Y = aX + ε. Determine whether each of the following statements is true or false.
(1) π(a) is a uniform prior. (a) True (b) False
(2) π(a) is a Jeffreys prior when we consider the likelihood L(Y = y | A = a, X = x) (where we assume X is known). (a) True (b) False
Consider a linear regression model Y = Xβ + σε, where ε ∈ ℝⁿ is a random vector with E[ε] = 0, E[εεᵀ] = Iₙ. ...
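A sketch of the calculation behind statement (2), under the assumption (not stated explicitly in the post) that ε ~ N(0, σ²) with σ² known:

```latex
\ell(a) = \log L(Y = y \mid A = a, X = x)
        = -\frac{(y - a x)^2}{2\sigma^2} + \text{const},
\qquad
I(a) = -\mathbb{E}\!\left[\frac{\partial^2 \ell}{\partial a^2}\right]
     = \frac{x^2}{\sigma^2}.
```

Since I(a) does not depend on a, the Jeffreys prior π_J(a) ∝ √I(a) is constant in a, i.e. proportional to the flat (improper) prior π(a) = 1.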
Problem 4: True or False
Instructions: Be very careful with the multiple-choice questions below. Some are "choose all that apply," and many test your knowledge of when particular statements apply. As in the rest of this exam, only your last submission will count.
1 point possible (graded, results hidden). The likelihood ratio test is used to obtain a test with non-asymptotic level α. (a) True (b) False
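For context on why the question stresses "non-asymptotic": the classical guarantee for the likelihood ratio test comes from Wilks' theorem, which is an asymptotic statement (the symbols below follow the usual textbook formulation, not anything in the post):

```latex
\Lambda_n = 2\big(\ell_n(\hat\theta_n) - \ell_n(\theta_0)\big)
\;\xrightarrow[n \to \infty]{(d)}\; \chi^2_d \quad \text{under } H_0,
```

where d is the number of constraints imposed by H₀. Rejecting when Λ_n exceeds the χ²_d quantile q_{1−α} therefore controls the level only asymptotically; for finite n the exact level is generally unknown outside special cases.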
For z ∈ ℝ and θ ∈ (0, 1), define f_θ(z) = … (and 0 otherwise). Let X1, ..., Xn be i.i.d. random variables with density f_θ for some unknown θ ∈ (0, 1).
1 point possible (graded, results hidden). To prepare, sketch the pdf f_θ(z) for different values of θ ∈ (0, 1). Which of the following properties of f_θ(z) guarantee that it is a probability density? (Check all that apply.) Note (added May 3): Note that you are not...
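Since the formula for f_θ is truncated in the post, here is a numeric sanity check of the two defining properties of a pdf (nonnegativity everywhere, and total mass 1) using a hypothetical stand-in density f_θ(z) = θ/z² for z ≥ θ, and 0 otherwise:

```python
# Check the two properties that make f a probability density:
# (i) f_theta(z) >= 0 for all z, (ii) it integrates to 1.
# The density below is a HYPOTHETICAL stand-in (the pdf in the
# post is truncated): f_theta(z) = theta / z**2 for z >= theta.

def f(z, theta):
    return theta / z**2 if z >= theta else 0.0

def integral(theta, upper=1000.0, n=200_000):
    # midpoint rule on [theta, upper]; the tail beyond `upper`
    # contributes exactly theta/upper, which we add analytically
    h = (upper - theta) / n
    body = sum(f(theta + (k + 0.5) * h, theta) for k in range(n)) * h
    return body + theta / upper

for theta in (0.2, 0.5, 0.9):
    assert all(f(z, theta) >= 0.0 for z in (-1.0, 0.1, theta, 2.0, 50.0))
    assert abs(integral(theta) - 1.0) < 1e-3
print("nonnegative and integrates to 1 (up to numerical error)")
```

Both checks pass for any θ in (0, 1) under this stand-in; the same two properties are what the question is asking you to identify in general.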
As on the previous page, let X1, ..., Xn be i.i.d. with pdf f_θ, where θ > 0. 2 points possible (graded, results hidden). Assume we do not actually get to observe X1, ..., Xn. Instead, let Y1, ..., Yn be our observations, where Yi = 1(Xi ≤ 0.5). Our goal is to estimate θ based on this new data.
What distribution does Yi follow? First, choose the type of the distribution: Bernoulli / Poisson / Normal / Exponential. Second, enter the parameter of...
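An indicator of an event is always Bernoulli with parameter P(event). As a sketch, take the same hypothetical stand-in density as above, f_θ(x) = θ/x² for x ≥ θ (NOT necessarily the problem's pdf, which is truncated); then for θ < 0.5, P(Xi ≤ 0.5) = ∫_θ^{0.5} θ/x² dx = 1 − 2θ, and a quick simulation confirms it:

```python
import random

# Sketch with an ASSUMED density f_theta(x) = theta / x**2, x >= theta.
# Its CDF is F(x) = 1 - theta/x, so inverse-CDF sampling gives
# X = theta / (1 - U) with U ~ Uniform(0, 1). Then
# Y = 1(X <= 0.5) ~ Bernoulli(1 - 2*theta) when theta < 0.5.

def sample_x(theta, rng):
    return theta / (1.0 - rng.random())   # inverse-CDF draw

rng = random.Random(0)
theta = 0.2
n = 100_000
y = [1 if sample_x(theta, rng) <= 0.5 else 0 for _ in range(n)]
p_hat = sum(y) / n
print(p_hat)                                  # empirical Bernoulli parameter
assert abs(p_hat - (1 - 2 * theta)) < 0.01    # should be close to 0.6
```

The type of distribution (Bernoulli) holds regardless of which pdf the problem actually uses; only the parameter P(Xi ≤ 0.5) depends on f_θ.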
Please help. Question 2. (2.5 points.) You are considering the model Y = X₁β₁ + X₂β₂ + ε, (*), where E(ε) = 0 and E(εεᵀ) = σ²Iₙ. Here, X₁ is n × p and X₂ is n × q, where p ≥ 1 and q ≥ 1. Suppose that, in fact, unknown to you, β₂ = 0. In other words, (*) is an over-parameterized model. Let e be the vector of residuals corresponding to the fitted version of (*) based on the least squares method. Does...
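One fact that is useful for this question (a sketch, assuming X = [X₁ X₂] has full column rank, which the truncated post does not state): the least squares residuals are the projection of Y onto the orthogonal complement of col(X),

```latex
e = (I_n - P)\,Y, \qquad P = X (X^\top X)^{-1} X^\top, \qquad X = [\,X_1 \ X_2\,].
```

Since β₂ = 0 gives E(Y) = X₁β₁ ∈ col(X), it follows that E(e) = (Iₙ − P)X₁β₁ = 0 and Var(e) = σ²(Iₙ − P), regardless of the over-parameterization.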
4) Consider n data points with 2 covariates and observations {(x_{i,1}, x_{i,2}, y_i); i = 1, ..., n}, where the y_i's are indicator variables for the experiment, that is, whether a particular medicine is effective on some individual. Here, x_{i,1} and x_{i,2} are the age and blood pressure of the i-th individual, respectively. Our assumption is that the log-odds ratio follows a linear model; that is, with p_i = P(y_i = 1), log(p_i / (1 − p_i)) = β₀ + β₁ x_{i,1} + β₂ x_{i,2}.
b) What should be a good estimator for β₀, β₁, β₂? c) Suppose ...
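For part (b), the natural estimator under this logistic (log-odds) model is the maximum-likelihood estimator, i.e. the maximizer of the Bernoulli log-likelihood below. A minimal sketch, with made-up data and coefficient values purely for illustration:

```python
import math

# Log-odds model from the question: with p_i = P(y_i = 1),
#   log(p_i / (1 - p_i)) = b0 + b1 * x_i1 + b2 * x_i2,
# equivalently p_i = sigmoid(b0 + b1*x_i1 + b2*x_i2).
# The MLE (b0, b1, b2) maximizes the log-likelihood below.

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def log_likelihood(b, data):
    # data: list of (x_i1, x_i2, y_i) with y_i in {0, 1}
    ll = 0.0
    for x1, x2, y in data:
        p = sigmoid(b[0] + b[1] * x1 + b[2] * x2)
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

# hypothetical (age, blood pressure, effective?) records
data = [(45, 120, 1), (60, 150, 0), (30, 110, 1), (70, 160, 0)]
b_guess = (-0.5, 0.02, 0.0)        # made-up coefficients
print(log_likelihood(b_guess, data))   # larger is better; the MLE maximizes this
```

In practice the maximization has no closed form and is done numerically (e.g. by Newton's method / iteratively reweighted least squares).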
Degrees of Freedom of a Known Test
2 points possible (graded). Let us consider a statistical model with parameter θ ∈ ℝᵈ. Let θ* be the parameter that generates the n i.i.d. samples X1, ..., Xn. Let I(θ) be the Fisher information, and assume that the MLE is asymptotically normal. Assume that I(θ*) is a diagonal matrix with positive entries 1/τ₁, ..., 1/τ_d. We wish to perform a test of the hypotheses H₀: θ = θ₀ and H₁: θ ≠ θ₀. Let the test statistic...
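If the statistic being referred to is Wald's test statistic (an assumption, since the post is cut off), the diagonal form of I(θ*) makes the degrees-of-freedom count explicit. Writing θ₀ for the null value:

```latex
T_n = n\,(\hat\theta_n - \theta_0)^\top I(\theta^*)\,(\hat\theta_n - \theta_0)
    = n \sum_{j=1}^{d} \frac{(\hat\theta_{n,j} - \theta_{0,j})^2}{\tau_j}
\;\xrightarrow[n \to \infty]{(d)}\; \chi^2_d \quad \text{under } H_0,
```

so the limiting distribution has d degrees of freedom, one per coordinate of the (asymptotically independent) rescaled MLE.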
2. Consider a simple linear regression model for a response variable Y_i and a single predictor variable x_i, i = 1, ..., n, having Gaussian (i.e. normally distributed) errors: Y_i = β x_i + ε_i. This model is often called "regression through the origin," since E(Y_i) = 0 if x_i = 0.
(a) Write down the likelihood function for the parameters β and σ².
(b) Find the MLEs for β and σ², explicitly showing that they are unique maximizers of the likelihood function. Hint: The function g(x) = log(x) + 1 − x ...
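For reference, here is the standard shape of the answer under the model Y_i = βx_i + ε_i with ε_i i.i.d. N(0, σ²) (the likelihood in the post is cut off):

```latex
L(\beta, \sigma^2) = (2\pi\sigma^2)^{-n/2}
\exp\!\Big(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \beta x_i)^2\Big),
\qquad
\hat\beta = \frac{\sum_i x_i y_i}{\sum_i x_i^2},
\qquad
\hat\sigma^2 = \frac{1}{n}\sum_i (y_i - \hat\beta x_i)^2.
```

Uniqueness of σ̂² is where the hint comes in: g(x) = log(x) + 1 − x satisfies g(x) ≤ 0 with equality only at x = 1, which pins down the unique maximizer of the likelihood profiled in σ².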
Linear stat modeling & regression, please. I need the solution for Q3, but I copy Q2 because you need info from Q2 in order to answer Q3.
2) Suppose you have a multiple regression setup Y = Xβ + ε. The ridge regression estimator β̂_ridge is the minimizer of ‖Y − Xβ‖² + λ‖β‖². Here, ‖v‖² = Σ_k v_k², where v is a vector with entries v_k.
a) Find the expectation and variance-covariance matrix of β̂_ridge when XᵀX is a diagonal matrix with each diagonal entry equal to ... Compare these variances with the...
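For part (a), the closed form of the ridge estimator (standard, though the formula in the post did not survive the copy) gives the moments directly; assume E(ε) = 0 and Var(ε) = σ²Iₙ:

```latex
\hat\beta_{\text{ridge}} = (X^\top X + \lambda I)^{-1} X^\top Y,
\qquad
\mathbb{E}[\hat\beta_{\text{ridge}}] = (X^\top X + \lambda I)^{-1} X^\top X \,\beta,
\qquad
\operatorname{Var}(\hat\beta_{\text{ridge}})
 = \sigma^2 (X^\top X + \lambda I)^{-1} X^\top X \,(X^\top X + \lambda I)^{-1}.
```

When XᵀX = cI (each diagonal entry equal to some c > 0), each coordinate of β̂_ridge has variance σ²c/(c + λ)², which is strictly smaller than the OLS variance σ²/c for any λ > 0; this is the comparison the question asks for, bought at the price of the shrinkage bias visible in the expectation.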