In information theory, Shannon entropy is defined by $H(X) = -\sum_x P(x)\log_2 P(x)$, where P is the probability mass function of the random variable X and the logarithm is to base 2. Given a loaded die with P(6) = 0.5 and P(1) = P(2) = P(3) = P(4) = P(5), compute the entropy of the observed rolls 654266. Note: to answer the question you do not need to know any more information about Shannon entropy.
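A quick numeric check (a minimal Python sketch; since the five remaining faces must share the other 0.5 of probability, each gets 0.1). Whether "entropy of the observed rolls" means the per-symbol entropy H(X) or the total self-information of the sequence depends on the intended reading, so the sketch prints both:

```python
import math

# Loaded-die pmf: P(6) = 0.5 and P(1) = ... = P(5) = (1 - 0.5) / 5 = 0.1
p = {1: 0.1, 2: 0.1, 3: 0.1, 4: 0.1, 5: 0.1, 6: 0.5}

# Shannon entropy of the die, in bits
H = -sum(px * math.log2(px) for px in p.values())
print(f"H(X) = {H:.4f} bits")          # ~2.1610

# Total self-information of the observed sequence 654266
rolls = [6, 5, 4, 2, 6, 6]
info = sum(-math.log2(p[r]) for r in rolls)
print(f"I(654266) = {info:.4f} bits")  # ~12.9658
```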
Problem 2 - Information Theory. Consider the following channel [channel diagram not reproduced].
a) Calculate the entropy of X, i.e., H(X), when $(p(x_1), p(x_2)) = (0.6, 0.4)$. Give the $p(x_1)$ and $p(x_2)$ that maximize H(X) and the corresponding maximum value of H(X).
b) Give the matrix of transition probabilities $p(Y|X)$.
c) Calculate the output probabilities $p(Y)$.
d) Give the output probabilities for which H(Y) is maximum. For this case, give the equations with which you can calculate $p(x_1)$ and $p(x_2)$. The SNR is 20...
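For part (a), a minimal sketch of the binary entropy computation (the function name H2 is my own; parts (b)-(d) depend on the lost channel diagram):

```python
import math

def H2(p1: float) -> float:
    """Binary entropy -p1*log2(p1) - p2*log2(p2), in bits."""
    p2 = 1.0 - p1
    return -sum(p * math.log2(p) for p in (p1, p2) if p > 0)

print(H2(0.6))  # ~0.9710 bits for (p(x1), p(x2)) = (0.6, 0.4)
print(H2(0.5))  # 1.0 bit: the maximum, attained at the uniform input
```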
3. Let X be a random variable characterized by a probability distribution $p = (p_1, \ldots, p_n)$: it can assume one of the values $x_1, \ldots, x_n$ with probabilities $p_1, \ldots, p_n$, where $0 \le p_i \le 1$ and $\sum_i p_i = 1$. The Shannon entropy of the random variable X is defined as $H(X) = -\sum_i p_i \log_2 p_i$. Show that $0 \le H(X) \le \log_2 n$, so that the maximal value in this inequality is saturated by the uniform probability distribution. What is the level of information for this state of the...
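A numeric illustration of the bound (a sketch only, not the requested proof; the standard proof uses Jensen's or Gibbs' inequality):

```python
import math, random

def H(p):
    """Shannon entropy of a pmf, in bits; 0*log(0) is treated as 0."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

n = 8
uniform = [1 / n] * n
print(H(uniform), math.log2(n))  # both 3.0: uniform saturates H(X) <= log2(n)

# Random pmfs stay within the bound
for _ in range(5):
    w = [random.random() for _ in range(n)]
    total = sum(w)
    p = [wi / total for wi in w]
    assert 0 <= H(p) <= math.log2(n) + 1e-12
    print(f"{H(p):.4f} <= {math.log2(n):.4f}")
```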
This is for an Information Theory class. H(X) is the entropy rate. Problem 8: Suppose that X is a random variable with probability distribution given by $P_X(k) = \mathrm{Prob}(X = k) = (1 - \lambda)\lambda^k$ for $k \ge 0$, where $0 < \lambda < 1$ and k is a non-negative integer (and hence X can take any non-negative integer value). To answer this question, note that the AEP theorem we proved for a finite-alphabet random variable...
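Assuming the garbled pmf is the geometric form $P_X(k) = (1 - \lambda)\lambda^k$ reconstructed above, its entropy has the closed form $H(X) = H_b(\lambda)/(1 - \lambda)$, with $H_b$ the binary entropy; a sketch checking this numerically (the parameter value 0.3 is arbitrary):

```python
import math

lam = 0.3  # assumed parameter, 0 < lam < 1, in P(X = k) = (1 - lam) * lam**k

# Entropy by direct summation, stopping once terms underflow to zero
H_num = 0.0
for k in range(1000):
    pk = (1 - lam) * lam**k
    if pk == 0.0:
        break
    H_num -= pk * math.log2(pk)

# Closed form: binary entropy of lam divided by (1 - lam)
Hb = -lam * math.log2(lam) - (1 - lam) * math.log2(1 - lam)
print(H_num, Hb / (1 - lam))  # agree to floating-point precision
```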
1. The random variables $X_i$, where $P(\text{success}) = P(X_i = 1) = p = 1 - P(X_i = 0)$ for $i = 1, 2, \ldots$, represent a series of independent Bernoulli trials. Let the random variable Y be the trial number on which the first success is achieved. (a) Explain why the probability mass function of Y is $f(y) = p q^{y-1}$, $y = 1, 2, \ldots$, where $q = 1 - p$. State the distribution of Y. As part of your answer you should verify this is a maximum likelihood estima...
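For intuition on the maximum-likelihood fragment at the end, a simulation sketch (p_true and the sample size are arbitrary; the geometric MLE $\hat{p} = 1/\bar{y}$ is standard):

```python
import random

p_true = 0.25
n = 100_000

def first_success_trial(p: float) -> int:
    """Run Bernoulli(p) trials; return the trial number of the first success."""
    y = 1
    while random.random() >= p:
        y += 1
    return y

samples = [first_success_trial(p_true) for _ in range(n)]
p_hat = n / sum(samples)  # MLE: reciprocal of the sample mean of Y
print(p_hat)              # close to p_true = 0.25
```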
2. Let X be a Bernoulli random variable with the probability of X = 1 being a. (a) Write down the probability mass function p(X) of X in terms of a. Mark the range of a. (b) Find the mean value $m_X(a) = E[X]$ of X, as a function of a. (c) Find the variance $\sigma_X^2(a) = E[(X - m_X)^2]$ of X, as a function of a. (d) Consider another random variable Y as a function of X: $Y = g(X) = -\log p(X)$, where the binary...
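A sketch of the expected answers for parts (b)-(d), checked at an arbitrary a (the closed forms $E[X] = a$, $\sigma_X^2 = a(1-a)$, and $E[Y] = -a\log_2 a - (1-a)\log_2(1-a)$ are standard):

```python
import math

a = 0.3  # assumed value of P(X = 1), with 0 < a < 1

mean = a           # (b) E[X] = 0*(1 - a) + 1*a
var = a * (1 - a)  # (c) E[(X - a)^2]

# (d) Y = -log2 p(X) equals -log2(a) w.p. a and -log2(1 - a) w.p. 1 - a,
# so E[Y] is the binary entropy of a
EY = -a * math.log2(a) - (1 - a) * math.log2(1 - a)
print(mean, var, EY)
```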
Consider a random variable X with the following probability mass function P(X=0)=0.25, P(X=5)=0.5, P(X=12)=0.25. What is the expected value (or mean) of X?
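For reference, the expectation follows directly from the definition: $E[X] = 0(0.25) + 5(0.5) + 12(0.25) = 2.5 + 3 = 5.5$.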
The moment generating function $\phi(t)$ of a random variable X is defined for all values of t by $\phi(t) = \sum_x e^{tx} p(x)$ if X is discrete, and $\phi(t) = \int e^{tx} f(x)\,dx$ if X is continuous. (a) Find the moment generating function of a Binomial random variable X with parameters n (the total number of trials) and p (the probability of success). (b) If X and Y are independent Binomial random variables with parameters $(n_1, p)$ and $(n_2, p)$, respectively, then what is the distribution of X...
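A numeric check of the standard answers (the MGF $(pe^t + 1 - p)^n$ follows from the binomial theorem; the function names are my own):

```python
import math

def mgf_numeric(n: int, p: float, t: float) -> float:
    """E[e^{tX}] computed directly from the Binomial(n, p) pmf."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) * math.exp(t * k)
               for k in range(n + 1))

def mgf_closed(n: int, p: float, t: float) -> float:
    """Closed form (p*e^t + 1 - p)^n."""
    return (p * math.exp(t) + 1 - p) ** n

n, p, t = 10, 0.3, 0.7
print(mgf_numeric(n, p, t), mgf_closed(n, p, t))  # agree

# For part (b): independence makes MGFs multiply, and
# (p*e^t + q)^n1 * (p*e^t + q)^n2 = (p*e^t + q)^(n1 + n2),
# so the sum is Binomial(n1 + n2, p).
```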
The probability mass function for the discrete random variable X is p(X=0)=0.13; p(X=1)=0.31; p(X=2)=m. What is the expected value of X? Hint: First compute m, then find the expectation of X. Round your answer to the nearest hundredth.
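For reference: the pmf must sum to 1, so $m = 1 - 0.13 - 0.31 = 0.56$, and $E[X] = 0(0.13) + 1(0.31) + 2(0.56) = 1.43$.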
2. Let X be a Bernoulli random variable. The probability mass function is $f(x; p) = p^x (1 - p)^{1-x}$ for $x = 0$ or $x = 1$, where p is the parameter to be estimated. Please state the MLE, and work out the steps to solve it.
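A standard derivation sketch for a sample $x_1, \ldots, x_n$ (the single-observation case is $n = 1$):

```latex
% Likelihood for i.i.d. Bernoulli(p) observations x_1, ..., x_n
\[ L(p) = \prod_{i=1}^{n} p^{x_i}(1-p)^{1-x_i} = p^{\sum_i x_i}(1-p)^{n-\sum_i x_i} \]
% Log-likelihood
\[ \ell(p) = \Big(\sum_i x_i\Big)\log p + \Big(n-\sum_i x_i\Big)\log(1-p) \]
% Set the derivative to zero and solve
\[ \frac{d\ell}{dp} = \frac{\sum_i x_i}{p} - \frac{n-\sum_i x_i}{1-p} = 0
   \;\Longrightarrow\; \hat{p}_{\mathrm{MLE}} = \frac{1}{n}\sum_{i=1}^{n} x_i \]
```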
6. (Entropy) The Bernoulli random variable X takes on the values 0, 1 with equal probability, i.e., $P(X = 0) = P(X = 1) = 1/2$. Compute $E[l(X)]$ if $l(x) = \log \frac{1}{p(x)}$, where logs are to base 2.
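For reference, under the reconstructed definition $l(x) = \log_2 \frac{1}{p(x)}$: both outcomes have $p(x) = 1/2$, so $l(0) = l(1) = \log_2 2 = 1$ bit, and hence $E[l(X)] = 1$ bit.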