Question

Derive the OLS estimator \(\widehat{\beta}_0\) in the regression model \(y_i = \beta_0 + u_i\). Show all of the steps in your derivation.


Answer

To derive this estimator, we will use the method of moments, as it does not use a shred of calculus. In this method, we can identify one parameter for every constraint (moment condition) that we impose on the data.

First we state an assumption about our model, and then we use that assumption as a rule. Rather than simply taking the assumption as true, we force it to hold exactly in the data that we have. Each rule we impose lets us pin down what one parameter must be in order for that rule to hold.

Univariate regression model, or a model with no \(x\)-variable: a model where \(y\) equals a constant plus an error term,

$$ y_i = \beta_0 + u_i $$

\(\beta_0\) is the unknown parameter in this model. Therefore we need one assumption, one rule, to identify \(\beta_0\). We will impose this rule on the data, and this will give us the formula for our best estimate of \(\beta_0\), which we will denote \(\widehat{\beta}_0\).

The standard assumption we will use is \(E(u)=0\): the assumption that the model is correct on average.

This method works by imposing this assumption on the data. So we start with

$$ E(u) = 0 \qquad (1) $$

and we make this true in the sample of data that we have.
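Concretely, "making it true in the sample" means choosing our estimate of \(\beta_0\) so that the residuals average exactly zero. As a preview, here is a minimal numeric sketch (the data are made up purely for illustration, and the closed form is derived below) that imposes the sample moment condition directly with a root-finder:

```python
# A minimal sketch of imposing E(u) = 0 in the sample: choose beta0 so
# that the average residual (1/N) * sum(y_i - beta0) is exactly zero.
# The data here are made up purely for illustration.
import numpy as np
from scipy.optimize import brentq

y = np.array([2.0, 4.0, 9.0])

def avg_residual(beta0):
    # sample analog of E(u): the mean of y_i - beta0
    return np.mean(y - beta0)

# avg_residual changes sign between min(y) and max(y), so a root exists there
beta0_hat = brentq(avg_residual, y.min(), y.max())
print(beta0_hat)  # 5.0
```

The root it finds is exactly the sample mean of \(y\), which is the closed form we now derive.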

From the model,

$$ u_i = y_i - \beta_0 $$

So that

$$ E(u) = E(y - \beta_0) $$

Making this true in our sample means taking the concept of "expectation," which is a statement about the whole universe of data, and applying the analogous concept to our sample of data. The mean, or average, is the sample analog of the expectation: the sample analog of the expected value of something is the mean of that same thing.

So if the theoretical expected value of a variable \(x\) is \(\mu_x\), then the sample analog of \(\mu_x\) is \(\bar{x}\). Making \(E(u)=0\) true in our data translates to forcing the average of \(u\) in our data to equal zero.
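As a quick sanity check of the sample-analog idea (a made-up simulation, not part of the original answer), draw a large sample with a known population mean and compare the sample mean to it:

```python
# Sanity check of the sample-analog idea: the sample mean x-bar
# approximates the population expectation mu_x. Simulated data only.
import numpy as np

rng = np.random.default_rng(0)                    # arbitrary seed
x = rng.normal(loc=3.0, scale=1.0, size=100_000)  # population mean mu_x = 3

print(x.mean())  # roughly 3.0: the sample analog of E(x)
```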

Putting this notion in equation (1), we get

$$ \begin{aligned} E(u) = 0 &= E(y - \beta_0) \\ &= E(y) - E(\beta_0) \\ &= \frac{1}{N}\sum_{i=1}^{N} y_i - \frac{1}{N}\sum_{i=1}^{N} \beta_0 \qquad (2) \\ &= \bar{y} - \frac{1}{N}\, N \beta_0 \\ &= \bar{y} - \beta_0 \qquad (3) \end{aligned} $$

We now have a single, simple equation with a single unknown. We choose our estimate of \(\beta_0\), which we call \(\widehat{\beta}_0\), to be the value that solves equation (3).

$$ \begin{aligned} 0 &= \bar{y} - \widehat{\beta}_0 \qquad (4) \\ \widehat{\beta}_0 &= \bar{y} \qquad (5) \end{aligned} $$

The best estimate of a variable \(y\) that we can manage when we model \(y\) as a constant is \(\widehat{\beta}_0=\bar{y}\), the mean of \(y\).
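As a minimal check (again with made-up numbers), we can confirm that \(\widehat{\beta}_0=\bar{y}\) both satisfies the sample moment condition and matches a least-squares fit on an intercept-only design:

```python
# Check that beta0_hat = y-bar (1) makes the residuals average to zero
# and (2) agrees with least squares on a column of ones. Made-up data.
import numpy as np

y = np.array([2.0, 4.0, 9.0])
beta0_hat = y.mean()                                  # y-bar = 5.0

print(np.isclose(np.mean(y - beta0_hat), 0.0))        # True: moment condition holds

X = np.ones((len(y), 1))                              # intercept-only design matrix
ols_coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.isclose(ols_coef[0], beta0_hat))             # True: OLS gives the same answer
```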
