(a) A binomial experiment satisfies the following four conditions:
1. The experiment consists of a sequence of n smaller experiments called trials, where n is fixed in advance of the experiment.
2. Each trial can result in one of the same two possible outcomes, which we denote by success (S) or failure (F).
3. The trials are independent, so that the outcome on any
particular trial does not influence the outcome on any other
trial.
4. The probability of success is constant from trial to trial (homogeneous trials); we denote this probability by p.
(b) For X ~ Bin(n, p), E[X] = np and V[X] = np(1 - p) = npq, where q = 1 - p.
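The formulas in (b) can be checked directly from the binomial pmf, without any library support. This is a minimal sketch; the values n = 10 and p = 0.3 are illustrative choices, not part of the original question.

```python
from math import comb

def binom_pmf(k, n, p):
    # P(X = k) for X ~ Bin(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3  # illustrative values
mean = sum(k * binom_pmf(k, n, p) for k in range(n + 1))
var = sum((k - mean) ** 2 * binom_pmf(k, n, p) for k in range(n + 1))

print(mean)  # matches np = 3.0
print(var)   # matches np(1 - p) = 2.1
```

Summing k·P(X = k) over the whole support reproduces np, and the central second moment reproduces np(1 - p), confirming both formulas for this (n, p).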
(c) Yes, p̂ = X/n is an unbiased estimator of the population proportion p, since the mean of the sampling distribution of p̂ equals p for every value of p: E[p̂] = E[X]/n = np/n = p.
Refer to the attached image.
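The unbiasedness claim in (c) can also be seen empirically: averaging p̂ over many repetitions of the experiment lands close to the true p. This is a simulation sketch; n = 50, p = 0.3, and the number of repetitions are arbitrary illustrative values.

```python
import random

random.seed(0)
n, p, reps = 50, 0.3, 20000  # illustrative values

# Each replication: run n Bernoulli(p) trials, record p-hat = X/n
p_hats = []
for _ in range(reps):
    x = sum(1 for _ in range(n) if random.random() < p)
    p_hats.append(x / n)

print(sum(p_hats) / reps)  # close to p = 0.3
```

The Monte Carlo average of p̂ sits within simulation noise of p, consistent with E[p̂] = p.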
3. Let X~ Bin(n,p) with n known (a) State the parameter space for the mode b)...
1. Let X ~ b(n, θ). Find the maximum likelihood estimate of the parameter θ of the corresponding binomial distribution, and prove that the sample proportion is an unbiased estimator of θ.
2. If x1, ..., xn are the values of a random sample from an exponential population, find the maximum likelihood estimator of its parameter θ.
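Both maximum likelihood problems above can be sketched numerically by maximizing the log-likelihood over a grid. Assumptions in this sketch: x = 7 successes in n = 20 trials for the binomial part, the exponential is parameterized by its rate (so the MLE is 1/x̄), and the five data values are made up for illustration.

```python
import math

# Binomial: observe x successes in n trials.
# Log-likelihood l(t) = x log t + (n - x) log(1 - t) (constant term dropped);
# the maximizer is the sample proportion x/n.
def binom_loglik(t, x, n):
    return x * math.log(t) + (n - x) * math.log(1 - t)

x, n = 7, 20  # illustrative data
grid = [i / 1000 for i in range(1, 1000)]
mle = max(grid, key=lambda t: binom_loglik(t, x, n))
print(mle)  # x/n = 0.35

# Exponential with rate t: l(t) = n log t - t * sum(x_i); maximizer is 1/x-bar.
data = [0.8, 1.3, 2.1, 0.4, 1.9]  # illustrative sample
rate_grid = [i / 1000 for i in range(1, 5000)]
rate_mle = max(rate_grid, key=lambda t: len(data) * math.log(t) - t * sum(data))
print(rate_mle)  # close to 1/mean(data)
```

The grid maximizer agrees with the closed-form answers x/n and 1/x̄ up to the grid spacing, which is what the calculus derivation predicts.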
2. Let X ~ Bin(n, p) with n known. State whether the following expressions are statistics or not. If an expression is not a statistic, explain why. (a) The number of successes X observed in n trials. (b) The sample proportion of successes p̂. (c) Z = (X - 5)/2, where X ~ N(5, 4). (d) (p̂ - p)/(1 - p̂).
3. Suppose you have X ~ Binom(n, p), where n is known and p is unknown. Typically, people use p̂ = X/n to estimate p, where X = Σ Xi is simply a sample of size 1. This might represent simultaneously flipping n coins (just once!) and counting the number of heads you see, where each coin has P(heads) = p. Now, if both n and p are known, we know the variance V of X is just np(1 - p). If p is...
Question 3: A random variable X has a Bernoulli distribution with parameter θ ∈ (0, 1) if X ∈ {0, 1} and P(X = 1) = θ. Suppose that we have n i.i.d. random variables Y1, ..., Yn following a Bernoulli(θ) distribution, with observed values y1, ..., yn. (a) Show that E[X] = θ and Var[X] = θ(1 - θ). (b) Let θ̂ = ȳ = (y1 + ... + yn)/n. Show that θ̂ is unbiased for θ and compute its variance. (c) Let θ̃ = (y1 + ... + yn + 1)/(n + 2) (this...
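Parts (b) and (c) of the Bernoulli question can be checked exactly by enumerating the binomial distribution of Σ yi. A sketch, with n = 10 and θ = 0.3 as illustrative values: the sample mean is unbiased with variance θ(1 - θ)/n, while the (n + 2)-denominator estimator is shrunk toward 1/2 and hence biased.

```python
from math import comb

def pmf(k, n, t):
    # P(sum of n Bernoulli(t) = k), i.e. Bin(n, t)
    return comb(n, k) * t**k * (1 - t)**(n - k)

n, theta = 10, 0.3  # illustrative values
# theta_hat = Y/n (sample mean); theta_tilde = (Y + 1)/(n + 2) (shrinkage estimator)
e_hat = sum((k / n) * pmf(k, n, theta) for k in range(n + 1))
e_tilde = sum(((k + 1) / (n + 2)) * pmf(k, n, theta) for k in range(n + 1))
var_hat = sum((k / n - theta) ** 2 * pmf(k, n, theta) for k in range(n + 1))

print(e_hat)    # equals theta: the sample mean is unbiased
print(var_hat)  # equals theta*(1 - theta)/n
print(e_tilde)  # equals (n*theta + 1)/(n + 2): pulled toward 1/2
```

The exact expectations confirm E[θ̂] = θ and E[θ̃] = (nθ + 1)/(n + 2), so θ̃ trades a little bias for smaller variance.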
Ex (5): Let X = (X1, X2, ..., Xn) be a random sample of size n taken from a population with pdf fX(x; θ) = (1/θ)e^(-(x-β)/θ), β < x < ∞. Show that: (a) 2X̄ is an unbiased estimator of τ(θ) = 2(θ + β); (b) T = X̄ is a consistent estimator of τ(θ) = θ + β.
Let X1, ..., Xn be i.i.d. random variables N(μ, σ²), where μ is a known parameter and σ² is the unknown parameter. Let γ(σ²) = σ². (i) Find the CRLB for γ(σ²). (ii) Recall that S² is an unbiased estimator for σ². Compare Var(S²) to the CRLB for γ(σ²).
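For this normal-with-known-mean setup, the standard results are CRLB = 2σ⁴/n, Var((1/n)Σ(Xi - μ)²) = 2σ⁴/n (the bound is attained when μ is used), and Var(S²) = 2σ⁴/(n - 1), which sits above the bound. A simulation sketch, with μ = 0, σ² = 4, n = 10 chosen for illustration:

```python
import random
import statistics

random.seed(1)
mu, sigma2, n, reps = 0.0, 4.0, 10, 40000  # illustrative values
crlb = 2 * sigma2 ** 2 / n  # = 3.2

mle_vals, s2_vals = [], []
for _ in range(reps):
    xs = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    mle_vals.append(sum((x - mu) ** 2 for x in xs) / n)  # uses the known mu
    s2_vals.append(statistics.variance(xs))              # S^2, divisor n - 1

print(statistics.variance(mle_vals))  # near 2*sigma2^2/n = 3.2 (attains the CRLB)
print(statistics.variance(s2_vals))   # near 2*sigma2^2/(n - 1), above the CRLB
```

The estimator that exploits the known μ achieves the CRLB exactly, while S², which also estimates the mean, pays a variance penalty of factor n/(n - 1).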
2. Suppose that we observe the continuous random variable X = (X1, ..., Xn) with state space S, whose distribution we do not know, but we assume that its p.d.f. belongs to a known family of distributions {fθ; θ ∈ Θ}. We construct an estimator θ̂(X) for the unknown parameter θ. (a) Explain why it is wrong to write E(θ̂(X)) and correct it. [2 marks] (b) Explain the difference between pdf and likelihood function. [1 mark] (c) Explain the difference between estimate and estimator. ...
(2) Let Y be a binomial random variable with parameters n and p. Remember that E(Y) = np and V(Y) = np(1 - p). We know that Y/n is an unbiased estimator of p. Now we want to estimate the variance of Y with n(Y/n)(1 - Y/n). (a) Find the expected value of this estimator. (b) Find an unbiased estimator that is a simple modification of the proposed estimator.
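The plug-in variance estimator above has expectation (n - 1)p(1 - p), so it is biased low; rescaling by n/(n - 1) removes the bias. This can be verified exactly by summing over the binomial pmf; n = 10 and p = 0.3 are illustrative values not given in the question.

```python
from math import comb

def pmf(k, n, p):
    # P(Y = k) for Y ~ Bin(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3  # illustrative values
target = n * p * (1 - p)  # V(Y) = np(1 - p) = 2.1

# Proposed estimator: n * (Y/n) * (1 - Y/n)
e_prop = sum(n * (k / n) * (1 - k / n) * pmf(k, n, p) for k in range(n + 1))
print(e_prop)  # (n - 1)p(1 - p) = 1.89: biased low

# Rescaling by n/(n - 1) fixes the bias:
e_fixed = sum((n / (n - 1)) * n * (k / n) * (1 - k / n) * pmf(k, n, p)
              for k in range(n + 1))
print(e_fixed)  # np(1 - p) = 2.1
```

The exact sums match E[Y(1 - Y/n)] = E[Y] - E[Y²]/n = (n - 1)p(1 - p), so the simple modification in (b) is the n/(n - 1) correction.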
This is a challenging question. Let X ~ POI(μ), and let θ = P(X = 0) = e^(-μ). (a) Is θ̃ = e^(-X) an unbiased estimator of θ? (b) Show that θ̂ = u(X) is an unbiased estimator of θ, where u(0) = 1 and u(x) = 0 if x > 0. (c) Compare the MSEs of θ̃ and θ̂ for estimating θ = e^(-μ) when μ = 1 and μ = 2.
B2. (a) Suppose θ is an unknown parameter which is to be estimated from a single measurement X, distributed according to some probability density function f(x; θ). The Fisher information I(θ) is defined by I(θ) = E[(∂/∂θ log f(X; θ))²]. Show that, under some suitable regularity conditions, the variance of any unbiased estimator θ̂ of θ is bounded by the reciprocal of the Fisher information: Var(θ̂) ≥ 1/I(θ). Note that the suitable regularity conditions, which are not specified here, allow the interchange of...
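A concrete single-measurement case where the bound is attained is X ~ Bernoulli(θ): the Fisher information works out to 1/(θ(1 - θ)), and X itself is unbiased for θ with exactly that reciprocal variance. A sketch with the illustrative value θ = 0.3 (this example is not part of question B2):

```python
# Single Bernoulli(theta) measurement X: f(x; theta) = theta^x (1 - theta)^(1 - x).
# Score: d/dtheta log f(x; theta) = x/theta - (1 - x)/(1 - theta).
theta = 0.3  # illustrative value
score = lambda x: x / theta - (1 - x) / (1 - theta)

# Fisher information I(theta) = E[score(X)^2], taken over x in {0, 1}
info = theta * score(1) ** 2 + (1 - theta) * score(0) ** 2
print(info)  # equals 1/(theta*(1 - theta))

# X is unbiased for theta; Var(X) = theta(1 - theta) = 1/I(theta): bound attained
var_x = theta * (1 - theta)
print(var_x, 1 / info)
```

Because Var(X) coincides with 1/I(θ), the Cramér–Rao bound is tight here, which is the phenomenon the regularity-conditions argument in B2(a) establishes in general.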