Posterior Probability & the Posterior Distribution

What is Posterior Probability?

[Image: Posterior probabilities are used in Bayesian hypothesis testing. Image: Los Alamos National Lab.]

Posterior probability is the probability an event will happen after all evidence or background information has been taken into account. It is closely related to prior probability, which is the probability an event will happen before you take any new evidence into account. You can think of posterior probability as an adjustment on prior probability:

Posterior probability = prior probability + new evidence (called the likelihood).

(The “+” here is shorthand for combining the two; formally, the posterior is proportional to the prior multiplied by the likelihood.)

For example, historical data suggests that around 60% of students who start college will graduate within 6 years. This is the prior probability. However, you think that figure is actually much lower, so you set out to collect new data. The evidence you collect suggests that the true figure is actually closer to 50%. This is the posterior probability.
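The graduation example can be sketched as a Bayesian update using a Beta-Binomial model. All of the numbers below (the prior counts and the new survey of 100 students) are hypothetical, chosen so that the prior mean matches the historical 60% figure:

```python
# Hypothetical Beta(12, 8) prior: its mean is 12 / (12 + 8) = 0.6,
# matching the historical 60% graduation rate.
prior_grads, prior_nongrads = 12, 8

# Hypothetical new evidence: 100 students tracked, 48 graduated.
n_students, n_grads = 100, 48

# A Beta prior with a Binomial likelihood gives a Beta posterior
# (conjugacy): just add the observed counts to the prior counts.
post_a = prior_grads + n_grads
post_b = prior_nongrads + (n_students - n_grads)

posterior_mean = post_a / (post_a + post_b)  # (12 + 48) / (12 + 8 + 100)
print(round(posterior_mean, 3))              # -> 0.5
```

Notice how the posterior mean (0.5) sits between the prior belief (0.6) and what the new data alone suggest (0.48), weighted by how much data there is.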

Origin of the Terms

The words posterior and prior come from the Latin terms a priori and a posteriori. The definition of “a priori” is:

“…relating to what can be known through an understanding of how certain things work [i.e. a hypothesis] rather than by observation” ~ Merriam-Webster.

The opposite of “a priori” is a posteriori, which is defined as:

“…relating to what can be known by observation rather than through an understanding of how certain things work” ~ Merriam-Webster.

What is a Posterior Distribution?

The posterior distribution is a way to summarize what we know about uncertain quantities in a Bayesian analysis. It combines the prior distribution with the likelihood function, which summarizes the information contained in your observed data (the “new evidence”). In other words, the posterior distribution describes what you know after the data has been observed.

Posterior Distribution ∝ Prior Distribution × Likelihood Function (“new evidence”)
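One simple way to see how a prior distribution and a likelihood combine into a posterior is a grid approximation: evaluate the prior times the likelihood at many candidate values, then normalize. The sketch below continues the graduation example with hypothetical numbers (a prior peaked near 0.6, and 48 graduates out of 100 observed):

```python
# Candidate graduation rates from 0.01 to 0.99.
candidates = [i / 100 for i in range(1, 100)]

def prior(p):
    # A simple, unnormalized prior peaked at the historical rate 0.6
    # (triangular shape; hypothetical choice for illustration).
    return 1 - abs(p - 0.6)

def likelihood(p, graduated=48, total=100):
    # Binomial likelihood of the hypothetical data, dropping the
    # constant binomial coefficient (it cancels when we normalize).
    return p ** graduated * (1 - p) ** (total - graduated)

# Posterior ∝ prior × likelihood; divide by the total so it sums to 1.
unnormalized = [prior(p) * likelihood(p) for p in candidates]
total_mass = sum(unnormalized)
posterior = [u / total_mass for u in unnormalized]

# The posterior mode lands near the data's 48%, nudged toward the prior.
best = candidates[posterior.index(max(posterior))]
print(best)  # -> 0.48
```

With only 99 grid points this is crude, but it shows the mechanics: the likelihood reweights the prior, and normalizing turns the product back into a proper distribution.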

Posterior distributions are vitally important in Bayesian analysis. They are in many ways the goal of the analysis, because they contain everything you know about the uncertain quantity after seeing the data: point estimates (such as the posterior mean or mode), interval estimates (credible intervals), and predictions for future observations.

Posterior Probability & the Posterior Distribution was last modified: October 12th, 2017 by Stephanie Glen