The terms “fixed effects” and “random effects” mean something slightly different depending on who is using the terms and what field you’re working in. For example:

- In ANOVA and regression analysis, it may refer to how particular variables behave: they are either fixed (like skin color) or random (like the weather on a particular day). This definition is the main focus of this article.
- Alternatively, it can mean the process of “fixing” random variables (as in “fixed effects regression”).
- In hierarchical (multilevel) modeling and econometrics, the terms are defined quite differently: fixed effects are estimated using least squares (or maximum likelihood) and random effects are estimated with shrinkage.

There are other less common definitions (which you can find below).

## Fixed Effects

**Fixed effects** are variables that are **constant across individuals**; these variables, like age, sex, or ethnicity, don’t change (or change at a constant rate) over time. They have fixed effects; in other words, any change they cause to an individual is the same. For example, any effects from being a woman, a person of color, or a 17-year-old will not change over time.

It could be argued that these variables *could* change over time. For example, take women in the workplace: Forbes reports that the glass ceiling is cracking. However, the wheels of change are extremely slow (there was a 26-year gap between Britain’s first woman prime minister, Margaret Thatcher, and the country’s second, Theresa May). Therefore, for purposes of research and experimental design, these variables are treated as constants.

## Random Effects

The opposite of fixed effects is **random effects**: variables that are, as the name suggests, random and unpredictable. Examples:

- The price of a three-course dinner varies wildly depending on location (e.g. Yulee, Florida will be a lot cheaper than New York City).
- The cost of a new car varies depending on what year it was purchased (e.g. 1941 vs. 2018).

## Limitations of Fixed Effects Models

In a fixed effects model, random variables are treated as though they were nonrandom, or fixed. For example, in regression analysis, “fixed effects” regression fixes (holds constant) average effects for whatever variable you think might affect the outcome of your analysis.

Fixed effects models do have some **limitations**. For example, they can’t control for variables that vary over time (like income level or employment status). However, these variables can be included in the model by including dummy variables for time or space units. This may seem like a good idea, but the more dummy variables you introduce, the more of the model’s “noise” you control for; this can over-dampen the model, removing useful as well as useless information.
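As a minimal sketch of what “including dummy variables for time units” means, the snippet below builds one indicator column per year from a small made-up panel (the entities, years, and values are hypothetical, not from this article), dropping the first year to avoid perfect collinearity with the intercept:

```python
# A tiny hypothetical panel: (entity, year, x, y) observations.
panel = [
    ("A", 2016, 1.0, 2.1), ("A", 2017, 1.5, 2.9), ("A", 2018, 2.0, 3.8),
    ("B", 2016, 0.5, 1.2), ("B", 2017, 1.0, 2.0), ("B", 2018, 1.5, 2.9),
]

years = sorted({row[1] for row in panel})

# One 0/1 indicator column per year; the first year is dropped so the
# dummies aren't perfectly collinear with the regression intercept.
year_dummies = {
    yr: [1 if row[1] == yr else 0 for row in panel]
    for yr in years[1:]
}
```

Each dummy column would then enter the regression alongside the other predictors; as noted above, every extra column soaks up more variation, useful or not.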

## Mixed Model

A model that mixes fixed effects and random effects is called a **mixed effects model** (or simply a mixed model).
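In symbols, the simplest mixed effects model adds a random intercept for each group to an otherwise fixed-effects regression (this is a standard textbook form, not taken from this article):

```latex
y_{ij} = \beta_0 + \beta_1 x_{ij} + u_j + \varepsilon_{ij},
\qquad u_j \sim N(0, \tau^2), \quad \varepsilon_{ij} \sim N(0, \sigma^2)
```

Here β0 and β1 are the fixed effects (estimated directly from the data), while the group intercepts u_j are random effects (assumed to be drawn from a distribution).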

## Omitted Variable Bias

In research, one way to control for differences between subjects (i.e. to “fix” the effects) is to randomly assign the participants to treatment groups and control groups. For example, one difference could be age, but by randomly assigning participants you control for age across groups. In real life, it’s difficult or impossible to randomly assign participants (or treatments), so these variables (like age) must be measured instead. Ultimately, it’s not possible to control for *all* possible variables and research results can be contaminated with these hidden variables. This contamination of results is called **omitted variable bias**.

Fixed effects models remove **omitted variable bias** from characteristics that don’t change over time by measuring changes within groups across time, usually by including dummy variables for the missing or unknown characteristics.

## Alternate Definitions

Several alternate definitions exist for “fixed effects” and “random effects”. As Andrew Gelman & Jennifer Hill (2007, p. 245) point out, other definitions include:

- Searle, Casella, and McCulloch define fixed variables as those “interesting in themselves” and random variables as those of “interest in the underlying population.”
- Green and Tukey’s 1960 definition of a fixed variable is one that “exhausts the population” while a random one arises from a sample representing only a small part of the population. This is very similar to the idea of a population parameter (which is essentially fixed) and a sample statistic (which can vary wildly depending on the sample size).
- LaMotte’s definition is “If an effect is assumed to be a realized value of a random variable, it is called a random effect.” On the surface, this makes sense: random data can only produce random results. However, this particular definition is very different from the others.

Don’t be surprised by all of the different definitions; **it’s common in statistics for one person to say one thing and another to mean something different.** If you’re reading a text about fixed or random effects and are confused about which definition the author is using, I have a few suggestions:

- Consider what field you are in, and what the most commonplace definition is within your field. Try a more specific search, like “Fixed Effects Econometrics”.
- If you can, try to understand the mathematics behind what the author is trying to accomplish. A lot of the time, what counts is the mathematical formula or procedure, not what the author is calling it.
- If you find an effect that’s interesting (in the general sense of the word), don’t assume that you have to perform multilevel modeling on it. It may be enough (depending on what field you’re in) to just mark it as “interesting.”

## References

Forbes. (2016) Female Leadership; the glass ceiling is cracked, not broken. Retrieved May 23, 2018 from: https://www.forbes.com/sites/amyjadesimi/2016/08/08/female-leadership-the-glass-ceiling-is-cracked-not-broken/#482dfa82698b

Gelman, A. & Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press.

Green, B. F., and Tukey, J. W. (1960). Complex analyses of variance: general problems. *Psychometrika* 25, 127–152.

Kreft, I., and De Leeuw, J. (1998). Introducing Multilevel Modeling. London: Sage. Retrieved November 17, 2017 from: http://gifi.stat.ucla.edu/janspubs/1998/books/kreft_deleeuw_B_98.pdf

LaMotte, L. R. (1983). Fixed-, random-, and mixed-effects models. In Encyclopedia of Statistical Sciences, ed. S. Kotz, N. L. Johnson, and C. B. Read, 137–141.

Searle, S. R., Casella, G., and McCulloch, C. E. (1992). *Variance Components*. New York: Wiley.
