What is Serial Correlation / Autocorrelation?
Autocorrelation, also known as serial correlation, measures the correlation of a variable with itself at different points in time. It is often used to analyze time series data, and it can help determine whether the data are random. Autocorrelation occurs when error terms in a time series carry over from one period to another: the error for one time period is correlated with the error for a subsequent time period.
As a relatively simple example, suppose there is a time series of daily temperatures. The temperature on a specific day would likely correlate with the temperatures on prior days. Plotting a correlogram (ACF) for a series of daily temperatures would likely reveal non-zero values at short lags, as temperatures on consecutive days are often correlated. For example, if today is hot, it is more probable that tomorrow will be hot as well, compared to when today is cold.
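As a concrete, fully simulated illustration, the sketch below builds a synthetic daily-temperature series with day-to-day persistence and plots its correlogram. Every parameter here is an illustrative assumption, not something from a real weather dataset:

```python
# Minimal sketch: correlogram (ACF) of a simulated daily-temperature series.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

rng = np.random.default_rng(seed=42)

# A year of daily temperatures: a seasonal cycle plus AR(1)-style
# persistence ("if today is hot, tomorrow tends to be hot too").
n_days = 365
seasonal = 15 + 10 * np.sin(2 * np.pi * np.arange(n_days) / 365)
noise = np.zeros(n_days)
for t in range(1, n_days):
    noise[t] = 0.7 * noise[t - 1] + rng.normal(scale=2.0)
temps = seasonal + noise

# Consecutive days are correlated, so the ACF shows large positive
# values at short lags.
plot_acf(temps, lags=30)
plt.show()
```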
Autocorrelation can result in a myriad of problems, including:
- Inefficient ordinary least squares (OLS) estimates and any forecast based on those estimates. An efficient estimator gives you the most information about a sample; inefficient estimators can still perform well, but require much larger sample sizes to do so. When error terms are serially correlated, the Gauss–Markov assumptions are violated, so OLS is no longer the Best Linear Unbiased Estimator (BLUE). The parameter estimates can still be unbiased, but the standard errors become unreliable, which leads to incorrect inferences (t-statistics, confidence intervals, etc.); a simulation after this list illustrates the effect.
- Exaggerated goodness of fit (for a time series with positive serial correlation and an independent variable that grows over time).
- Standard errors that are too small (for a time series with positive serial correlation and an independent variable that grows over time).
- T-statistics that are too large.
- False positives for significant regression coefficients. In other words, a regression coefficient appears to be statistically significant when it is not.
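As a hedged illustration of the standard-error problem (a simulation under assumed parameters, not a canonical result), the sketch below fits OLS to data with a trending regressor and AR(1) errors, then compares the naive standard error with an autocorrelation-robust (Newey–West / HAC) one:

```python
# Sketch: naive OLS standard errors vs. HAC standard errors when the
# errors follow an AR(1) process. All parameters are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=0)
n = 200

# Trending independent variable and positively autocorrelated errors.
x = np.arange(n, dtype=float)
errors = np.zeros(n)
for t in range(1, n):
    errors[t] = 0.8 * errors[t - 1] + rng.normal()
y = 1.0 + 0.05 * x + errors

X = sm.add_constant(x)
naive = sm.OLS(y, X).fit()
robust = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})

# The HAC standard error is typically noticeably larger, so the naive
# t-statistic is too large and can produce false positives.
print("naive OLS SE for slope:", naive.bse[1])
print("HAC-robust SE for slope:", robust.bse[1])
```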
Types of Autocorrelation
The most common form of autocorrelation is first-order serial correlation, which can either be positive or negative.
- Positive serial correlation is where a positive error in one period carries over into a positive error for the following period.
- Negative serial correlation is where a positive error in one period carries over into a negative error in the following period, so the errors tend to alternate in sign.
Second-order serial correlation is where an error affects data two time periods later. This can happen when your data has seasonality. Orders higher than second do occur, and how common they are depends on the nature of the data; in seasonal data (e.g., monthly tourism data), higher-order effects are quite common.
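To make the distinction concrete, here is a minimal simulation (all parameters assumed for illustration) of first-order serial correlation in both directions, using AR(1) errors e_t = rho * e_{t-1} + noise:

```python
# Sketch: lag-1 autocorrelation of simulated AR(1) errors with
# positive (rho = +0.7) and negative (rho = -0.7) serial correlation.
import numpy as np

def ar1_errors(rho, n=2000, seed=1):
    """Simulate AR(1) errors e_t = rho * e_{t-1} + white noise."""
    rng = np.random.default_rng(seed)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal()
    return e

for rho in (+0.7, -0.7):
    e = ar1_errors(rho)
    lag1 = np.corrcoef(e[:-1], e[1:])[0, 1]  # sample lag-1 correlation
    print(f"rho = {rho:+.1f} -> lag-1 autocorrelation = {lag1:+.2f}")
```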
Tests for autocorrelation
Various methods can be used to test for autocorrelation, including:
- Residual plot: Create a plot of residuals against time (t) and look for clusters of consecutive residuals on one side of the zero line. You can also try adding a Lowess (smoothing) line to make such runs easier to see.
- Durbin-Watson test. Available in a wide variety of software packages and simple to use, though some consider it outdated. It has real limitations, especially when there is a lagged dependent variable or higher-order serial correlation, and many practitioners prefer more flexible or comprehensive tests (such as the Ljung–Box test or the Breusch–Godfrey Lagrange Multiplier test) for modern time series work; see the code sketch after this list.
- Lagrange Multiplier Test. A statistical hypothesis test used to determine whether a feature such as autocorrelation or heteroskedasticity is present in a regression model.
- Ljung-Box Test. Detects autocorrelation in the residuals of a time series model by examining whether there is significant evidence of non-randomness in lagged correlations. Note that a “test for randomness” can be broader than just checking the autocorrelation structure. In practice, you often look at the entire autocorrelation function (ACF), partial autocorrelation function (PACF), and other diagnostics.
- Correlogram: A pattern in the results may indicate autocorrelation. Values, positive or negative, that fall outside the plot's confidence bands point to statistically significant autocorrelation at those lags.
- Moran’s I statistic, which is similar to a correlation coefficient. Moran’s I is mostly used for spatial autocorrelation (it measures how similar or dissimilar a variable is across areas/locations). While, in principle, a variant can be adapted for time series, it is far more common to see Moran’s I in spatial or spatiotemporal contexts. For pure time-series data, standard methods such as ACF/PACF plots, Durbin–Watson, Ljung–Box, etc., are more common.
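Several of these tests are available in statsmodels. The sketch below runs the Durbin-Watson, Ljung-Box, and Breusch-Godfrey tests on the residuals of an OLS fit; the data are simulated and every parameter is an assumption, though the function names are real statsmodels APIs:

```python
# Sketch: three autocorrelation tests on simulated OLS residuals.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, acorr_ljungbox

rng = np.random.default_rng(seed=0)
n = 200
x = rng.normal(size=n)
errors = np.zeros(n)
for t in range(1, n):
    errors[t] = 0.6 * errors[t - 1] + rng.normal()
y = 2.0 + 1.5 * x + errors

model = sm.OLS(y, sm.add_constant(x)).fit()

# Durbin-Watson: values well below 2 suggest positive serial correlation.
print("Durbin-Watson:", durbin_watson(model.resid))

# Ljung-Box: small p-values reject the null of no autocorrelation
# up to the chosen number of lags.
print(acorr_ljungbox(model.resid, lags=[5], return_df=True))

# Breusch-Godfrey LM test: returns (LM stat, LM p-value, F stat, F p-value).
print("Breusch-Godfrey:", acorr_breusch_godfrey(model, nlags=2))
```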

What the results mean
Autocorrelation values can range from -1 to +1:
- A +1 autocorrelation indicates a perfect positive correlation: an increase in the series corresponds to a proportionate increase in the lagged copy of the series.
- Conversely, a -1 autocorrelation signifies a perfect negative correlation: an increase in the series corresponds to a proportionate decrease in the lagged copy of the series.
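To make the range concrete, here is a small sketch (an assumed helper, not from the article) that computes the lag-k sample autocorrelation, which by construction falls between -1 and +1:

```python
# Sketch: lag-k sample autocorrelation,
#   r_k = sum_t (x_t - xbar)(x_{t+k} - xbar) / sum_t (x_t - xbar)^2
import numpy as np

def autocorr(x, k):
    """Sample autocorrelation of series x at lag k."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return (x[:-k] * x[k:]).sum() / (x * x).sum()

series = np.sin(np.linspace(0, 8 * np.pi, 200))  # 4 cycles, ~50 points each
print(autocorr(series, 1))   # close to +1: neighboring points move together
print(autocorr(series, 25))  # strongly negative: half a cycle out of phase
```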
Autocorrelation examples in real life
Autocorrelation is a valuable tool for understanding trends and patterns in time series data. Here are three real-life examples of how autocorrelation is used:
- Weather Forecasting: Autocorrelation plays a crucial role in weather forecasting, as weather variables such as temperature, precipitation, and wind speed often exhibit patterns over time. By analyzing these patterns, meteorologists can better predict future weather conditions and improve the accuracy of their forecasts.
- Sales and Demand Forecasting: In business, autocorrelation is used to analyze sales data and customer demand. By examining the correlation between past and present sales figures, companies can identify seasonal trends, detect potential issues in inventory management, and optimize their marketing strategies to maximize revenue.
- Finance and Stock Market Analysis: In finance, autocorrelation is used to identify trends in stock prices or market indices. By analyzing the correlation between past and present values, traders and investors can make more informed decisions about whether to buy or sell stocks, as well as identify potential market anomalies.
Here’s a specific example: A stock market analyst may use autocorrelation to detect patterns in stock prices. For instance, if a stock’s ACF peaks at lag 1, it implies that today’s stock price is correlated with yesterday’s price. This could suggest a trend, prompting the analyst to consider buying or selling the stock accordingly.
Another application of autocorrelation for stock market analysts is identifying trading opportunities. If a stock's ACF shows a significant positive value at lag 2, it means that today's stock price correlates with the price from two days ago, possibly indicating a momentum pattern. A significant negative value would instead suggest mean reversion, in which case the analyst might consider buying the stock after a decline and selling it after an increase.
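In practice, analysts usually examine returns rather than raw price levels, since price levels are almost always strongly autocorrelated. Below is a hedged sketch, on simulated data (all values are assumptions), of checking lag-1 and lag-2 autocorrelation in daily returns with pandas:

```python
# Sketch: lag-1 and lag-2 autocorrelation of simulated daily returns.
# Real analysis would substitute an actual return series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=7)
noise = pd.Series(rng.normal(scale=0.01, size=500))
# Add an MA(1)-style carry-over so the lag-1 autocorrelation is ~0.2.
returns = noise + 0.2 * noise.shift(1).fillna(0.0)

print("lag-1 autocorrelation:", returns.autocorr(lag=1))
print("lag-2 autocorrelation:", returns.autocorr(lag=2))
```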
While autocorrelation isn’t a foolproof method for predicting stock prices, it can be helpful in recognizing patterns and informing trading decisions.
Other ways autocorrelation can be applied to stock market analysis include:
- Identifying overbought and oversold conditions: High ACF values may indicate overbought market conditions and potential corrections, while low ACF values may suggest oversold conditions and possible rallies.
- Detecting trading ranges: Sustained high ACF values can signal that the market is within a trading range, which can help traders avoid making trades outside of this range.
- Recognizing trend reversals: A sudden change in ACF direction could imply an impending trend reversal, which can be beneficial for traders looking to capitalize on early trend reversals.
That said, one should be cautious in concluding real “trading opportunities” from simple autocorrelation alone. Markets are often considered efficient (at least in the semi-strong form), so predictable patterns can be arbitraged away quickly. Nonetheless, autocorrelation analysis can provide insights, especially over short horizons or in less-efficient markets.