Conservative in Statistics


What does “Conservative” mean in Statistics?

Conservative in statistics has the same general meaning as in other areas: avoiding excess by erring on the side of caution. In statistics, “conservative” specifically refers to being cautious when it comes to hypothesis tests, test results, or confidence intervals. Reporting conservatively means you’re less likely to be giving out wrong information.

Conservative Tests and Confidence Intervals

A conservative test always keeps the probability of rejecting the null hypothesis well below the significance level. Let’s say you’re running a hypothesis test where you set the alpha level at 5%. That means the test will (falsely) give you a significant result 1 out of 20 times. This is called the Type I error rate. A conservative test would always control the Type I error rate at a level much smaller than 5%, which means your chance of getting it wrong will be well below 5% (perhaps 2%).*
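One well-known conservative procedure is the Bonferroni correction for multiple tests: testing each of m hypotheses at level alpha/m keeps the chance of *any* false rejection below alpha. A minimal sketch (the numbers here are illustrative, not from the text above):

```python
m = 5          # number of independent tests (assumed for illustration)
alpha = 0.05   # nominal significance level

# Bonferroni: run each individual test at alpha / m.
per_test_level = alpha / m

# If all m nulls are true and the tests are independent, the chance of
# at least one false rejection (the familywise Type I error rate) is:
fwer = 1 - (1 - per_test_level) ** m

print(f"per-test level: {per_test_level:.3f}")
print(f"familywise Type I error: {fwer:.4f} (below the nominal {alpha})")
```

Note how the actual error rate (about 4.9% here) lands slightly *under* the stated 5%: that gap is exactly what makes the procedure conservative.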

On the other hand, a liberal test is more likely to find a statistically significant result. In other words, it has more power. But “power” isn’t necessarily a good thing. In practice, liberal tests are rarely acceptable, because a high chance that the significant result you’re reporting is wrong can be a very expensive (and, in the case of drug testing, possibly dangerous) mistake. For example, suppose you test drug A to see if it cures cancer and report that it does at a 5% alpha level. However, you’re running a liberal test, so that 5% might actually be a 25% chance that the result is wrong. This leads to more (expensive) tests to try to duplicate your results, or, perhaps more scarily, clinical trials that actually test the drug on people. On the other hand, if your chances of being wrong are conservatively reported at 5% but the odds are actually much lower (say, 3%), then all of those expensive and potentially dangerous re-tests will be worthwhile.

When it comes to results, a conservative confidence interval has a higher probability of containing the true parameter value than its stated level suggests. For example, a stated 95% confidence interval might actually be a 97% or 98% confidence interval, but you’re being “conservative” in stating 95%.
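To see how that works numerically, here is a sketch using Python’s standard library: if you build an interval with a wider critical value (the 98% cutoff) but still label it “95%”, its actual coverage is 98%. The specific cutoffs are standard normal values, not figures from the text:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal

z_stated = nd.inv_cdf(0.975)  # ~1.96, the usual 95% critical value
z_used = nd.inv_cdf(0.99)     # ~2.326, a wider, conservative cutoff

# Coverage of a two-sided interval built with the wider cutoff:
actual_coverage = 2 * nd.cdf(z_used) - 1

print(f"stated: 95%, actual: {actual_coverage:.1%}")
```

The interval is wider than it needs to be for 95% coverage, so the stated level understates (conservatively) how often it captures the true value.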

*Note: this raises the question — why not keep the error rate as small as possible anyway? Well, there are always trade-offs in statistics. Generally, if you keep the Type I error rate very small, then you’re going to increase a different type of error rate (called a Type II error).
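The trade-off in the note above can be made concrete with a one-sided z-test: shrinking alpha raises the rejection cutoff, which lowers power (power is 1 minus the Type II error rate). The effect size below is an arbitrary illustration, not a value from the article:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal

def power(alpha, effect):
    """Power of a one-sided z-test at a given standardized effect size
    (effect = true mean shift * sqrt(n) / sigma)."""
    cutoff = nd.inv_cdf(1 - alpha)       # rejection threshold
    return 1 - nd.cdf(cutoff - effect)   # P(test statistic exceeds cutoff)

effect = 2.0                 # assumed standardized effect, for illustration
p05 = power(0.05, effect)    # power at the usual 5% level
p01 = power(0.01, effect)    # power at a more conservative 1% level

print(f"alpha=0.05: power ~ {p05:.2f}")
print(f"alpha=0.01: power ~ {p01:.2f}")
```

Moving from a 5% to a 1% alpha level drops the power noticeably (here from roughly 0.64 to roughly 0.37), so the Type II error rate climbs — the trade-off the note describes.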
