False Positive and False Negative: Definition and Examples

What is a False Positive?

A false positive is where you receive a positive result for a test when you should have received a negative result. It’s sometimes called a “false alarm” or “false positive error.” The term is most often used in the medical field, but it can also apply to other arenas (like software testing). Some examples of false positives:

  • A pregnancy test is positive, when in fact you aren’t pregnant.
  • A cancer screening test comes back positive, but you don’t have the disease.
  • A prenatal test comes back positive for Down’s Syndrome, when your fetus does not have the disorder (1).
  • Virus software on your computer incorrectly identifies a harmless program as a malicious one.

False positives can be worrisome, especially when it comes to medical tests. Researchers are constantly trying to identify the causes of false positives in order to make tests more specific, i.e. less likely to flag people who don’t actually have the condition.

A related concept is a false negative, where you receive a negative result when you should have received a positive one. For example, a pregnancy test may come back negative even though you are in fact pregnant.

The False Positive Paradox

If a test for a disease is 99% accurate and you receive a positive result, what are the odds that you actually have the disease?

If you said 99%, you might be surprised to learn you’re wrong. If the disease is very common, your odds might indeed approach 99%. But the rarer the disease, the more likely a positive result is to be a false alarm, and the lower the odds that you actually have the disease. The difference can be quite dramatic. For example, if you test positive for a rare disease (one that affects, say, 1 in 1,000 people), your odds of actually having the disease might be less than 10 percent! The reason involves conditional probability.
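
To make this concrete, here is a minimal Python sketch of the conditional-probability calculation. The numbers are hypothetical: a disease that affects 1 in 1,000 people and a test that is 99% accurate in both directions (99% sensitivity and 99% specificity).

    # Bayes' theorem: P(disease | positive test) for a rare disease.
    # Hypothetical numbers: prevalence 1 in 1,000; sensitivity = specificity = 0.99.
    prevalence = 1 / 1000                   # P(disease)
    sensitivity = 0.99                      # P(positive | disease)
    specificity = 0.99                      # P(negative | no disease)
    false_positive_rate = 1 - specificity   # P(positive | no disease)

    # Total probability of a positive result (true positives + false positives)
    p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

    # Probability you actually have the disease, given the positive result
    p_disease_given_positive = sensitivity * prevalence / p_positive

    print(f"P(disease | positive) = {p_disease_given_positive:.1%}")  # about 9%

Even with a 99% accurate test, a positive result for this rare disease means only about a 9% chance of actually having it, because the false positives from the large healthy group outnumber the true positives.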

False Positives and Type I errors

In statistics, a false positive is usually called a Type I error. A Type I error is when you incorrectly reject the null hypothesis, i.e. reject it even though it is actually true. This creates a “false positive” for your research, leading you to believe that your hypothesis (i.e. the alternate hypothesis) is true when in fact it isn’t.
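
As a rough illustration (a simulation sketch, not taken from any particular study), the snippet below runs many two-sample t-tests on data where the null hypothesis is actually true. At a significance level of 0.05, roughly 5% of the tests reject the null anyway, and each of those rejections is a Type I error (a false positive).

    # Simulate Type I errors: run many t-tests where the null hypothesis is TRUE
    # (both samples come from the same distribution). With alpha = 0.05 we expect
    # about 5% of the tests to reject the null anyway.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    alpha = 0.05
    n_tests = 10_000

    false_positives = 0
    for _ in range(n_tests):
        a = rng.normal(loc=0, scale=1, size=30)
        b = rng.normal(loc=0, scale=1, size=30)   # same distribution: the null is true
        _, p_value = stats.ttest_ind(a, b)
        if p_value < alpha:
            false_positives += 1                   # rejected a true null = Type I error

    print(f"Observed Type I error rate: {false_positives / n_tests:.3f}")  # close to 0.05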

The Drug Test Paradox and HIV Tests

Just LOOKING at a picture like this makes me feel nervous! Image: drug test.

You take an HIV test that is 99% accurate and the test is positive. What is the probability that you are HIV positive?

  1. Pretty high: 99%. I’m freaking out.
  2. Pretty low. Probably about 1 in 1,000. I’ll sleep on it and then take the test again.


If you answered 1 (99%), you’re wrong, but don’t worry: you aren’t alone. Most people answer the same way. The fact is, though (assuming you are in a low-risk group), you have only a very slim chance of actually having the virus, even if you test positive on the HIV test. That’s what’s called the drug test paradox.

How?

An HIV test (or any other disease test, for that matter) isn’t 99% accurate for you, it’s 99% accurate for a population.* Let’s say there are 100,000 people in a population and one person has HIV. That one person will probably test positive for the virus (given the test’s 99% accuracy). But what about the other 99,999? The test gets it wrong 1% of the time, meaning that out of the 99,999 people who do not have HIV, about 1,000 will test positive.

In other words, if all 100,000 people take the test, about 1,001 will test positive but only one will actually have the virus, so a positive result corresponds to roughly a 1 in 1,000 chance of actually being infected.
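
Here is the same head count as a short Python sketch, using the simplified numbers from this example (100,000 people, exactly one of whom has HIV, and a test that is right 99% of the time).

    # Head count for the HIV example: 100,000 people, 1 actually has the virus,
    # and the test is right 99% of the time (so it is wrong 1% of the time).
    population = 100_000
    infected = 1
    healthy = population - infected
    accuracy = 0.99

    true_positives = infected * accuracy          # ~1: the infected person tests positive
    false_positives = healthy * (1 - accuracy)    # ~1,000 healthy people test positive

    total_positives = true_positives + false_positives
    chance_actually_infected = true_positives / total_positives

    print(f"Positive tests: about {total_positives:,.0f}")
    print(f"Chance a positive result means infection: {chance_actually_infected:.2%}")  # about 0.1%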

Don’t worry if this paradox is a little mind-bending. Even physicians get it wrong. There have been several studies that show physicians often alarm patients by informing them they have a much higher risk of a certain disease than is actually indicated by the statistics (see this article in U.S. News).

Peter Donnelly is a statistician at the University of Oxford who included the above information in a really fascinating TED Talk about how people are fooled by statistics. If you haven’t seen it, it’s worth a look, especially as he highlights the problem of juries misunderstanding statistics:

Peter Donnelly: How stats fool juries

*These figures aren’t exact: the actual prevalence of HIV varies by population, and your personal risk depends on lifestyle and other risk factors. At the end of 2008, there were about 1.2 million people with HIV in the U.S. out of a total population of 304,059,724. Additionally, most HIV tests are now 99.9% accurate.

What is a False Negative?

Just because a test says it’s negative doesn’t mean it’s 100% accurate. Image: University of Iowa

A false negative is where a negative test result is wrong. In other words, you get a negative test result when you should have gotten a positive one. For example, you might take a pregnancy test and it comes back negative (not pregnant) when you are, in fact, pregnant. A false negative on a pregnancy test could be due to taking the test too early, using diluted urine, or checking the results too soon. Just about every medical test carries some risk of a false negative. For example, a test for cancer might come back negative when in reality you have the disease. False negatives can also happen in other areas, like:

  • Quality control in manufacturing; a false negative here means that a defective item slips through the cracks.
  • In software testing, a false negative means that a test designed to catch something (e.g. a virus) has failed to do so.
  • In the justice system, a false negative occurs when a guilty suspect is found “Not Guilty” and allowed to walk free.

False negatives create two problems. The first is a false sense of security. For example, if your manufacturing line doesn’t catch your defective items, you may think the process is running more effectively than it actually is. The second, and potentially more serious, issue is that dangerous situations may be missed. For example, a crippling computer virus can wreak havoc if not detected, or an individual with cancer may not receive timely treatment.

False Negatives in Hypothesis Testing

False negatives can occur when running a hypothesis test. If the test fails to reject the null hypothesis when it should have (i.e. the null hypothesis is actually false), that false negative is known as a Type II error.
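
As a companion to the Type I sketch earlier, this hypothetical simulation shows Type II errors: the null hypothesis is actually false (the two groups really do differ), but with small samples the t-test frequently fails to reject it, and each of those failures is a false negative.

    # Simulate Type II errors: the null hypothesis is FALSE (the group means really
    # differ by 0.5), but with only 20 observations per group the test often misses it.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    alpha = 0.05
    n_tests = 10_000

    false_negatives = 0
    for _ in range(n_tests):
        a = rng.normal(loc=0.0, scale=1, size=20)
        b = rng.normal(loc=0.5, scale=1, size=20)   # there IS a real difference
        _, p_value = stats.ttest_ind(a, b)
        if p_value >= alpha:
            false_negatives += 1                     # failed to reject a false null = Type II error

    print(f"Observed Type II error rate: {false_negatives / n_tests:.2f}")

Collecting larger samples drives this Type II error rate down; the complement of this rate is what statisticians call the power of the test.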

References

Agresti, A. (1990). Categorical Data Analysis. New York: John Wiley and Sons.
Beyer, W. H. (2002). CRC Standard Mathematical Tables, 31st ed. Boca Raton, FL: CRC Press, pp. 536 and 571.
Vogt, W. P. (2005). Dictionary of Statistics & Methodology: A Nontechnical Guide for the Social Sciences. SAGE.
Wheelan, C. (2014). Naked Statistics. W. W. Norton & Company.

