Entropy
Given a random variable \(X\) with distribution \(p\), the entropy \(H(X)\) is a measure of the uncertainty in \(X\):

$$ H(X) = -\sum_i p(x_i) \log_2 p(x_i) $$

It measures the information content of an event in bits (if we used base \(e\) for the logarithm instead, entropy would be measured in nats), and it can be viewed as the average information content of an outcome.
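As a quick check of the bits-vs-nats point, here is a minimal NumPy sketch; the helper `entropy` and the example distributions are illustrative, not from the text:

```python
import numpy as np

def entropy(p, base=2):
    """Entropy of a discrete distribution p (probabilities summing to 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # treat 0 * log(0) as 0
    return -np.sum(p * np.log(p)) / np.log(base)

print(entropy([0.5, 0.5]))             # 1.0 bit: a fair coin
print(entropy([0.5, 0.5], base=np.e))  # ~0.693 nats: same uncertainty in base e
print(entropy([0.9, 0.1]))             # ~0.469 bits: a biased coin is less uncertain
```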
Properties
- \(H(X) = 0 \iff p(x_i) = 1\) for exactly one \(i\) (only one outcome ever occurs, so observing it gives no information).
- \(H(X) \geq 0\): entropy is never negative.
- \(H(X)\) is maximised when \(p\) is uniform, in which case \(H(X) = \log_2 n\) for \(n\) equally likely outcomes (see the sketch below).
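A small sketch illustrating the last two properties on a made-up four-outcome distribution (the `entropy` helper is the same illustrative one as above):

```python
import numpy as np

def entropy(p, base=2):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(base)

# Uniform over 4 outcomes: maximal entropy log2(4) = 2 bits
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
# Any skew lowers the entropy, but it never drops below 0
print(entropy([0.7, 0.1, 0.1, 0.1]))      # ~1.357
print(entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0 (one certain outcome)
```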
Joint entropy
If \(X\) and \(Y\) are independent, i.e. \(p(x,y) = p(x)p(y)\), then

$$ H(X,Y) = H(X) + H(Y) $$

In general, the chain rule gives

$$ H(X,Y) = H(X) + H(Y|X) $$
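A minimal numerical check of the chain rule, assuming a small made-up joint table `p_xy` (not from the text):

```python
import numpy as np

# A made-up 2x2 joint distribution p(x, y); rows index x, columns index y
p_xy = np.array([[0.3, 0.2],
                 [0.1, 0.4]])

def H(p):
    """Entropy in bits of any array of probabilities."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_x = p_xy.sum(axis=1)  # marginal p(x)
# H(Y|X) = sum_x p(x) * H(Y | X = x)
H_y_given_x = sum(p_x[i] * H(p_xy[i] / p_x[i]) for i in range(len(p_x)))

print(H(p_xy))               # ~1.846: the joint entropy H(X,Y)
print(H(p_x) + H_y_given_x)  # same value: H(X) + H(Y|X) = H(X,Y)
```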
Binary entropy function
If \(X\) is a Bernoulli random variable with states \(X \in \{0, 1\}\), \(P(X=1) = \theta\) and \(P(X=0) = 1 - \theta\), the entropy takes a special form, the binary entropy function:

$$ H(\theta) = -\theta \log_2 \theta - (1 - \theta) \log_2(1 - \theta) $$

The maximum of 1 bit occurs when the distribution is uniform, i.e. \(\theta = 0.5\).
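A short sketch of the binary entropy function (the helper name `binary_entropy` is illustrative), evaluated at a few values of \(\theta\) to show the peak at \(\theta = 0.5\):

```python
import numpy as np

def binary_entropy(theta):
    """H(theta) in bits for a Bernoulli(theta) variable; H(0) = H(1) = 0."""
    if theta in (0.0, 1.0):
        return 0.0
    return -theta * np.log2(theta) - (1 - theta) * np.log2(1 - theta)

for theta in [0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]:
    print(f"theta={theta:.1f}  H={binary_entropy(theta):.3f} bits")
# The values rise from 0, peak at 1 bit for theta = 0.5, then fall back to 0.
```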