In many cases, a power analysis calls for a sample size that's larger than you can manage: you may not have the resources to conduct a large study, or ethical concerns may prevent testing on a large scale. Reducing the sample size usually involves some compromise, like accepting a small loss in power or modifying your test design.

## Ways to Significantly Reduce Sample Size

Of the many ways to reduce sample size, only a few are likely to result in a significant reduction (by 25% or more).

- Reduce Alpha Level to 10%
- Reduce Statistical Power to 70%
- Add an extra arm (to a crossover study)
- Use paired tests instead of independent tests

## 1. Reduce the Alpha Level to 10%

The alpha level is the chance that you’ll find a significant result when one doesn’t exist (called a Type I error). It is usually set by the researcher, and you’ll ideally want to make it as small as possible. However, smaller alpha levels result in larger sample sizes. The reverse is also true: **larger alpha levels lead to smaller sample sizes**. For example, a test with α = 10% will need a much smaller sample than a test with α = 1%.

**Note**: In many cases (e.g. if you’re publishing in a well-known journal or are looking for FDA approval), you may be required to set the alpha level at 5% or lower.
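To make the trade-off concrete, here’s a minimal sketch using the standard normal-approximation formula for a two-sided, two-sample comparison of means, n ≈ 2((z₁₋α∕₂ + z₁₋β)∕d)² per group. The effect size d = 0.5 is an arbitrary example, and exact t-based calculations give slightly larger numbers.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha, power):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means (d = standardized effect size)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Medium effect (d = 0.5) at 80% power:
print(n_per_group(0.5, alpha=0.01, power=0.8))  # 94 per group
print(n_per_group(0.5, alpha=0.10, power=0.8))  # 50 per group
```

Here, moving from α = 1% to α = 10% roughly halves the required sample per group.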

## 2. Reduce Statistical Power to .7 (70%)

One of the biggest issues with reducing sample size is the loss of statistical power. You may want to reduce your sample size due to budget constraints, ethical issues or other reasons. If you decide that a smaller sample is “good enough”, you run the risk that you will fail to find a significant result, even if one does exist. In other words, working with an appropriate sample size increases the likelihood that your experiment will produce meaningful results.

Statistical power is the probability of rejecting the null hypothesis when it *should* be rejected. Let’s say that you are testing whether two drugs, A and B, can treat the common cold. You find that drug A is better than drug B. The null hypothesis that both drugs are the same is correctly rejected. If your test has 100,000 people, you can likely be very confident that drug A is indeed the better drug. But if your sample size is just two patients, then — even though you correctly rejected the null hypothesis — the statistical power will be so low that you won’t be able to have confidence in those results.

In order to reduce sample size, the obvious solution would be to **decrease the statistical power of your test**. This is the same as **increasing the beta level** (because the power of a test is 1 – β). How low can you go without running the risk of meaningless results? According to Mann et al. (1991), a “reasonable” power level is 0.7 to 0.9. Therefore, **for minimum reasonable sample sizes, aim for a power closer to 0.7.**
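The same normal-approximation formula shows what dropping power from 0.9 to 0.7 buys you; again, d = 0.5 and α = 0.05 are arbitrary illustrative values.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha, power):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means (d = standardized effect size)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5, alpha=0.05, power=0.9))  # 85 per group
print(n_per_group(0.5, alpha=0.05, power=0.7))  # 50 per group
```

In this example, accepting 70% power instead of 90% cuts the required sample per group by roughly 40%.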

## 3. Add an extra arm to your Crossover Study

A crossover design (a type of repeated measures design) is one where each patient receives all treatments, and the results are measured over time. The standard AB/BA design usually requires a large sample size. **Adding extra arms can reduce sample size by up to 50% (Julious, 2009; Liu, 1995):**

- ABB/BAA: up to a 25% reduction in sample size.
- ABBA/BAAB: up to a 50% reduction in sample size.
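The quoted reductions translate into simple arithmetic. This sketch just applies the maximum reduction factors above to a hypothetical AB/BA sample size of 40; the factors are upper bounds from the cited sources, not exact design formulas.

```python
from math import ceil

# Maximum reductions quoted above (Julious, 2009; Liu, 1995);
# actual savings depend on the specific design and variances.
REDUCTION_FACTOR = {"AB/BA": 1.00, "ABB/BAA": 0.75, "ABBA/BAAB": 0.50}

def reduced_n(n_standard, design):
    """Best-case sample size after switching to the given design."""
    return ceil(n_standard * REDUCTION_FACTOR[design])

for design in REDUCTION_FACTOR:
    print(design, reduced_n(40, design))
```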

## 4. Use paired tests instead of independent samples tests

If you use a paired test, you basically test the same group twice, which effectively **cuts your sample size in half**. The paired samples test does have some limitations. In particular, carryover effects and practice effects may become an issue. Degrees of freedom are also significantly lower, which results in a higher critical value for the test; in other words, you may not find an effect when one actually exists.
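A rough normal-approximation sketch of the saving: for a paired design, the variance of the within-subject differences shrinks by a factor of 2(1 − ρ), where ρ is the correlation between the two measurements. The value ρ = 0.5 below is an assumption for illustration; higher correlations save even more.

```python
from math import ceil
from statistics import NormalDist

def total_n_independent(d, alpha=0.05, power=0.8):
    """Total subjects for a two-sided, two-sample comparison of means."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return 2 * ceil(2 * (z / d) ** 2)          # two separate groups

def total_n_paired(d, rho, alpha=0.05, power=0.8):
    """Total subjects for a paired design; rho = within-pair correlation."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * (1 - rho) * (z / d) ** 2)  # one group, measured twice

print(total_n_independent(0.5))      # 126 subjects
print(total_n_paired(0.5, rho=0.5))  # 32 subjects
```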

## Other ways to potentially reduce sample size

Some of the following methods may significantly reduce your sample size, but a lot depends on how they are implemented.

- Reduce the nonresponse rate
- Use Prior Studies
- Stratify the Population

## 5. Reduce the Nonresponse rate

A study that has a nonresponse rate of 50% will need a huge sample size in comparison to one with a nonresponse rate of 1%. **Putting resources into follow-up can reduce the nonresponse rate and, in turn, reduce your sample size.** Henry (1990) suggests allocating funds earmarked for data collection into intensive follow-up instead. Exactly how you go about this is likely to be specific to your area of study, and you might want to look at published studies specific to your field. For example, one article in the Journal of Extension reported reducing the sample size from 552 to 174, in part by using mail and telephone follow-ups for non-respondents.
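A quick sketch of the arithmetic: the number of people you must contact is the number of completed responses you need, inflated by the response rate. The 174 figure is borrowed from the Journal of Extension example above; the response rates are illustrative.

```python
from math import ceil

def invitations_needed(n_complete, response_rate):
    """How many people to contact to end up with n_complete responses."""
    return ceil(n_complete / response_rate)

print(invitations_needed(174, 0.50))  # 348 contacts at 50% response
print(invitations_needed(174, 0.99))  # 176 contacts at 99% response
```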

## 6. Use Prior Studies

In many fields, it’s highly likely that someone, somewhere has performed a similar study. If so, you can use prior mean and variance estimates to reduce sample sizes.
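For example, a prior estimate of the standard deviation lets you size a study for estimating a mean to a given margin of error. The sketch below uses the textbook formula n = (zσ∕E)²; the values σ = 15 and E = 2 are hypothetical, standing in for numbers you would pull from a prior study.

```python
from math import ceil
from statistics import NormalDist

def n_for_mean(sigma, margin, alpha=0.05):
    """Sample size to estimate a mean to within +/- margin,
    using a prior estimate of the standard deviation sigma."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return ceil((z * sigma / margin) ** 2)

# Hypothetical prior study reported sigma = 15; target margin of error = 2:
print(n_for_mean(sigma=15, margin=2))  # 217
```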

## 7. Stratify the Population

Stratifying the population reduces variation within groups, which in turn reduces the sample size needed for a given precision. Stratified random sampling is otherwise similar to simple random sampling, but stratified samples are more difficult to create because you need detailed information about which categories your population falls into.
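A toy numerical sketch of why this works, with made-up strata and ignoring finite-population corrections: under proportional stratified sampling, the between-strata spread drops out of the variance of the sample mean, so the same precision needs far fewer observations.

```python
from statistics import variance

# Hypothetical strata: values are similar inside each stratum
# but very different between strata.
stratum_a = [10, 11, 12, 13]
stratum_b = [50, 51, 52, 53]
population = stratum_a + stratum_b
n = 4  # planned sample size

# Variance of the sample mean under simple random sampling:
srs_var = variance(population) / n

# Under proportional stratified sampling (equal-size strata here),
# only the within-stratum variances contribute:
w = [0.5, 0.5]  # stratum weights
strat_var = (w[0] * variance(stratum_a) + w[1] * variance(stratum_b)) / n

print(srs_var, strat_var)
```

In this deliberately extreme example the stratified estimator’s variance is a tiny fraction of the simple-random-sampling variance, so a much smaller sample achieves the same precision.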

**References:**

Henry, G. (1990). Practical Sampling. SAGE.

Verma, S. et al. (1996). Cutting Evaluation Costs by Reducing Sample Size. Journal of Extension, 34(1). Retrieved August 8, 2019 from: https://www.joe.org/joe/1996february/a2.php

Julious, S. (2009). Sample Sizes for Clinical Trials. CRC Press.

Liu, J.P. (1995). Use of the repeated cross-over designs in assessing bioequivalence. Statistics in Medicine, 14, 1067-1078.

Mann, M.D. et al. (1991). Appropriate animal numbers in biomedical research in light of animal welfare considerations. Laboratory Animal Science, 41, 6-14.
