Computing the marginal likelihood (evidence)¶
In general we are not interested in the evidence itself, and computing it can be really hard. But if we use conjugate priors it is easy to compute:
Let \(p(\theta) = q(\theta)/Z_0\) be our prior, where \(q(\theta)\) is an unnormalized distribution and \(Z_0\) is the normalization constant of the prior.
Let \(p(D|\theta) = q(D|\theta)/Z_l\) be the likelihood, where \(Z_l\) contains any constant factors in the likelihood.
Let \(p(\theta|D) = q(\theta|D)/Z_N\) be our posterior, where \(q(\theta|D) = q(D|\theta)q(\theta)\) is the unnormalized posterior.
Then, writing Bayes' rule in terms of these quantities,

$$p(\theta|D) = \frac{p(D|\theta)\, p(\theta)}{p(D)} = \frac{q(D|\theta)\, q(\theta)}{Z_l Z_0 \, p(D)} = \frac{q(\theta|D)}{Z_N}$$

we get

$$p(D) = \frac{Z_N}{Z_l Z_0}$$

so the evidence is simply a ratio of normalization constants.
Beta binomial model¶
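For the Beta-Bernoulli case the recipe above gives \(p(D) = B(a + N_1, b + N_0)/B(a, b)\), where \(B\) is the beta function, \(a, b\) are the prior hyperparameters, and \(N_1, N_0\) are the counts of heads and tails (treating the data as a fixed sequence of trials, so \(Z_l = 1\)). Below is a minimal Python sketch, not from the source and with made-up counts and hyperparameters, that computes this closed form and checks it against brute-force numerical integration of the likelihood over the prior:

```python
# A minimal sketch (illustrative only): Beta-Bernoulli evidence as a ratio of
# normalization constants, p(D) = Z_N / (Z_0 * Z_l) with Z_l = 1 for a fixed
# sequence of Bernoulli trials. Counts and hyperparameters are made up.
import numpy as np
from scipy.integrate import quad
from scipy.special import betaln
from scipy.stats import beta as beta_dist

def log_evidence_beta_bernoulli(n1, n0, a=1.0, b=1.0):
    """log p(D) for n1 heads and n0 tails under a Beta(a, b) prior."""
    # Z_0 = B(a, b) and Z_N = B(a + n1, b + n0)
    return betaln(a + n1, b + n0) - betaln(a, b)

# Sanity check: compare the closed form against numerical integration of
# the likelihood over the prior.
n1, n0, a, b = 7, 3, 2.0, 2.0
closed_form = np.exp(log_evidence_beta_bernoulli(n1, n0, a, b))
numerical, _ = quad(lambda t: t**n1 * (1 - t)**n0 * beta_dist.pdf(t, a, b), 0, 1)
print(closed_form, numerical)   # the two values should agree closely
```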
Dirichlet multinomial¶
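Similarly, for a categorical (multinomial) likelihood with a Dirichlet prior, the evidence is \(p(D) = B(\alpha + N)/B(\alpha)\), where \(B(\cdot)\) is the multivariate beta function, \(\alpha\) the prior pseudo-counts, and \(N\) the observed category counts. A minimal sketch, again with made-up numbers:

```python
# A minimal sketch (illustrative only): Dirichlet-categorical evidence, again a
# ratio of normalization constants Z_N / Z_0. Counts and hyperparameters are made up.
import numpy as np
from scipy.special import gammaln

def log_dirichlet_norm(alpha):
    """log of the Dirichlet normalization constant B(alpha)."""
    alpha = np.asarray(alpha, dtype=float)
    return np.sum(gammaln(alpha)) - gammaln(np.sum(alpha))

def log_evidence_dirichlet_categorical(counts, alpha):
    """log p(D) for category counts under a Dirichlet(alpha) prior."""
    counts = np.asarray(counts, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    return log_dirichlet_norm(alpha + counts) - log_dirichlet_norm(alpha)

# Example: a 3-sided die rolled 10 times with a uniform Dirichlet(1, 1, 1) prior
print(np.exp(log_evidence_dirichlet_categorical([5, 3, 2], [1.0, 1.0, 1.0])))
```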
Gauss-Wishart¶
BIC approximation of the marginal likelihood¶
Computing the marginal likelihood can be difficult, but we can approximate it using the Bayesian information criterion (BIC):

$$BIC \triangleq dof(\hat{\theta}) \log N - 2 \log p(D|\hat{\theta}) \approx -2 \log p(D)$$
Where:
\(dof(\hat{\theta})\) is the number of degrees of freedom of the model,
\(\hat{\theta}\) is the MLE of the model, and
\(N\) is the number of data cases.
We can see that this has the form of a penalized negative log likelihood, where the penalty depends on the model complexity.
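As a rough illustration, not from the source and using synthetic data, the BIC above can be computed for a univariate Gaussian fitted by maximum likelihood:

```python
# A minimal sketch (illustrative only): BIC for a univariate Gaussian fitted
# by maximum likelihood on synthetic data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200)

# MLE of a Gaussian: sample mean and (biased) sample standard deviation
mu_hat, sigma_hat = x.mean(), x.std()
log_lik = norm.logpdf(x, loc=mu_hat, scale=sigma_hat).sum()

dof = 2                               # two free parameters: mu and sigma
N = len(x)
bic = dof * np.log(N) - 2 * log_lik   # penalized negative log likelihood
print(bic)                            # log p(D) is roughly -bic / 2
```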
Effect of the prior¶
In general the prior has little influence, especially if we have a lot of data. Unfortunately this is not true for the marginal likelihood, since there we have to average the likelihood over all possible parameter settings, weighted by the prior.
We can mitigate this problem by defining hyper-priors (priors on our priors). This models our uncertainty about the prior, and it is the basis of hierarchical Bayesian modelling. The influence of the hyper-priors is relatively low, so it is common to use uninformative priors at that level.
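The sketch below, with made-up priors and counts, illustrates this sensitivity for the Beta-Bernoulli model: across several priors the posterior mean moves only modestly, while the log evidence can shift by several nats once the prior conflicts with the data:

```python
# A minimal sketch (illustrative only): the posterior mean is fairly robust to
# the choice of Beta prior, but the marginal likelihood p(D) is not.
import numpy as np
from scipy.special import betaln

n1, n0 = 70, 30                                        # 70 heads, 30 tails
for a, b in [(1, 1), (5, 5), (1, 10)]:
    post_mean = (a + n1) / (a + b + n1 + n0)           # posterior mean of theta
    log_evid = betaln(a + n1, b + n0) - betaln(a, b)   # log p(D) = log(Z_N / Z_0)
    print(f"Beta({a},{b}): posterior mean = {post_mean:.3f}, log p(D) = {log_evid:.1f}")
```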