- Introduction to Regression Analysis
- Multiple Regression Analysis
- Overfitting and how to avoid it
In statistics, it’s hard to stare at a set of random numbers in a table and make any sense of them. For example, global warming may be reducing average snowfall in your town, and you are asked to predict how much snow will fall this year. Looking at the following table, you might guess somewhere around 10-20 inches. That’s a good guess, but you could make a better one by using regression.
Essentially, regression is the “best guess” at using a set of data to make some kind of prediction. It’s fitting a set of points to a graph. There’s a whole host of tools that can run regression for you, including Excel, which I used here to help make sense of that snowfall data:
Just by looking at the regression line running down through the data, you can fine-tune your best guess a bit. You can see that the original guess (20 inches or so) was way off. For 2015, it looks like the line will be somewhere between 5 and 10 inches! That might be “good enough”, but regression also gives you a useful equation, which for this chart is:
y = -2.2923x + 4624.4.
What that means is you can plug in an x value (the year) and get a pretty good estimate of snowfall for any year. For example, 2005:
y = -2.2923(2005) + 4624.4 = 28.3385 inches, which is pretty close to the actual figure of 30 inches for that year.
Best of all, you can use the equation to make predictions. For example, how much snow will fall in 2017?
y = -2.2923(2017) + 4624.4 = 0.8 inches.
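Plugging years into the fitted line is a one-liner in any language. A minimal Python sketch using the slope and intercept from the chart above (the function name is mine):

```python
def predict_snowfall(year):
    """Predict snowfall in inches from the fitted line y = -2.2923x + 4624.4."""
    return -2.2923 * year + 4624.4

print(predict_snowfall(2005))  # about 28.3 inches, close to the actual 30
print(predict_snowfall(2017))  # about 0.8 inches
```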
Regression also gives you an R squared value, which for this graph is 0.702. This number tells you how good your model is. The values range from 0 to 1, with 0 being a terrible model and 1 being a perfect model. As you can probably see, 0.7 is a fairly decent model so you can be fairly confident in your weather prediction!
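For readers curious how the line and the R squared value are actually computed, here is a from-scratch sketch. The snowfall figures below are hypothetical stand-ins, since the article’s table isn’t reproduced here:

```python
# Simple linear regression and R-squared from scratch.
# The snowfall figures are hypothetical, not the article's actual data.
years = [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
snow = [30, 28, 25, 27, 22, 20, 18, 19, 15, 12]

n = len(years)
mx = sum(years) / n
my = sum(snow) / n

# Least-squares slope and intercept for y = intercept + slope * x.
slope = (sum((x - mx) * (y - my) for x, y in zip(years, snow))
         / sum((x - mx) ** 2 for x in years))
intercept = my - slope * mx

# R-squared: 1 - (residual sum of squares / total sum of squares).
ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(years, snow))
ss_tot = sum((y - my) ** 2 for y in snow)
r_squared = 1 - ss_res / ss_tot
```

On this made-up data the slope comes out negative (snowfall declining over time) with an R-squared close to 0.95.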
Multiple regression analysis is almost the same as simple linear regression. The only difference between simple linear regression and multiple regression is in the number of predictors (“x” variables) used in the regression.
- Simple regression analysis uses a single x variable for each dependent “y” variable. For example: (x1, Y1).
- Multiple regression uses multiple “x” variables for each dependent variable: ((x1)1, (x2)1, (x3)1, Y1).
In one-variable linear regression, you would input one dependent variable (e.g. “profit”) against one independent variable (e.g. “sales”). But you might be interested in how different types of sales affect the regression. You could set your X1 as one type of sales, your X2 as another type of sales, and so on.
When to Use Multiple Regression Analysis
Ordinary linear regression usually isn’t enough to take into account all of the real-life factors that have an effect on an outcome. For example, the following graph plots a single variable (number of doctors) against another variable (life-expectancy of women).
From this graph it might appear there is a relationship between life-expectancy of women and the number of doctors in the population. In fact, that’s probably true and you could say it’s a simple fix: put more doctors into the population to increase life expectancy. But the reality is you would have to look at other factors like the possibility that doctors in rural areas might have less education or experience. Or perhaps they have a lack of access to medical facilities like trauma centers.
The addition of those extra factors would cause you to add more independent variables to your regression analysis and create a multiple regression analysis model.
Multiple Regression Analysis Output
Regression analysis is almost always performed in software, like Excel or SPSS. The output differs according to how many variables you have, but it’s essentially the same type of output you would find in a simple linear regression. There’s just more of it:
- Simple regression: Y = b0 + b1 x.
- Multiple regression: Y = b0 + b1 x1 + b2 x2 + … + bn xn.
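The coefficients b0 through bn are estimated by least squares, just as in the simple case. Here’s a sketch using NumPy’s least-squares solver on hypothetical, noise-free data (so the fitted coefficients match the true ones exactly):

```python
import numpy as np

# Hypothetical data generated from y = 2 + 3*x1 + 0.5*x2 (no noise,
# so the estimates should recover the true coefficients).
x1 = np.array([1, 2, 3, 4, 5], dtype=float)
x2 = np.array([2, 1, 4, 3, 6], dtype=float)
y = 2 + 3 * x1 + 0.5 * x2

# Design matrix: a leading column of ones estimates the intercept b0.
X = np.column_stack([np.ones_like(x1), x1, x2])

# Least-squares estimates of b0, b1, b2.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2 = coeffs
```

With real data the recovered coefficients won’t match any “true” values exactly; this noise-free setup just makes the mechanics easy to check.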
The output would include a summary, similar to a summary for simple linear regression, that includes:
- R (the multiple correlation coefficient),
- R squared (the coefficient of determination),
- adjusted R-squared,
- The standard error of the estimate.
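Adjusted R-squared is worth a quick illustration, since it is the one summary statistic that matters more in multiple regression: it penalizes R-squared for each extra predictor. A minimal sketch of the standard formula:

```python
def adjusted_r_squared(r2, n, p):
    """Adjusted R-squared for n observations and p predictors.

    Uses the usual formula: 1 - (1 - R^2) * (n - 1) / (n - p - 1).
    """
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)
```

For example, an R-squared of 0.8 with 30 observations and 3 predictors adjusts down to about 0.777.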
Minimum Sample Size
“The answer to the sample size question appears to depend in part on the objectives of the researcher, the research questions that are being addressed, and the type of model being utilized. Although there are several research articles and textbooks giving recommendations for minimum sample sizes for multiple regression, few agree on how large is large enough and not many address the prediction side of MLR.” ~ Gregory T. Knofczynski
If you’re concerned with finding accurate values for the squared multiple correlation coefficient, minimizing the shrinkage of the squared multiple correlation coefficient, or have another specific goal, Gregory Knofczynski’s paper is a worthwhile read and comes with many references for further study. That said, many people just want to run MLR to get a general idea of trends, and they don’t need very specific estimates. If that’s the case, you can use a rule of thumb. It’s widely stated in the literature that you should have more than 100 observations in your sample. While this is sometimes adequate, you’ll be on the safer side with at least 200 observations, or better yet more than 400.
An overfitted model is one that describes the random noise in your particular sample rather than the underlying relationship. While an overfitted model may fit the idiosyncrasies of your data extremely well, it won’t fit additional test samples or the overall population. The model’s p-values, R-squared and regression coefficients can all be misleading. Basically, you’re asking too much from a small set of data.
How to Avoid Overfitting
In linear modeling (including multiple regression), you should have at least 10-15 observations for each term you are trying to estimate. Any fewer, and you run the risk of overfitting your model. Terms include:
- Interaction Effects,
- Polynomial expressions (for modeling curved lines),
- Predictor variables.
While this rule of thumb is generally accepted, Green (1991) takes this a step further and suggests that the minimum sample size for any regression should be 50, with an additional 8 observations per term. For example, if you have one interaction term and three predictor variables (four terms in all), you’ll need around 40-60 items in your sample by the 10-15 rule, or 50 + 8(4) = 82 items according to Green.
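Both rules of thumb are simple enough to encode. A quick sketch (the function names are mine, not from the literature):

```python
def rule_of_thumb_range(num_terms):
    """Minimum sample size under the '10-15 observations per term' rule.

    Returns a (low, high) range.
    """
    return 10 * num_terms, 15 * num_terms

def green_minimum(num_terms):
    """Green's (1991) suggestion: 50 observations plus 8 per term."""
    return 50 + 8 * num_terms
```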
There are exceptions to the “10-15” rule of thumb. They include:
- When there is multicollinearity in your data, or if the effect size is small. If that’s the case, you’ll need more observations (although there is, unfortunately, no rule of thumb for how many to add!).
- You may be able to get away with as few as 10 observations per predictor if you are using logistic regression or survival models, as long as you don’t have extreme event probabilities, small effect sizes, or predictor variables with truncated ranges. (Peduzzi et al.)
How to Detect and Avoid Overfitting
The easiest way to avoid overfitting is to increase your sample size by collecting more data. If you can’t do that, the second option is to reduce the number of predictors in your model — either by combining or eliminating them. Factor Analysis is one method you can use to identify related predictors that might be candidates for combining.
1. Cross Validation

Use cross validation to detect overfitting: this partitions your data, generalizes your model, and chooses the model which works best. One form of cross-validation is predicted R-squared. Most good statistical software will include this statistic, which is calculated by:
- Removing one observation at a time from your data,
- Estimating the regression equation for each iteration,
- Using the regression equation to predict the removed observation.
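The three steps above amount to leave-one-out cross-validation. Here is a rough sketch for simple linear regression on hypothetical data (statistical packages use a leverage-based shortcut rather than literally refitting, but for ordinary least squares the result is the same):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = b0 + b1*x; returns (b0, b1)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

def predicted_r_squared(xs, ys):
    """Predicted R-squared via leave-one-out refitting (the PRESS statistic)."""
    press = 0.0
    for i in range(len(xs)):
        # Remove one observation, refit, then predict the removed point.
        b0, b1 = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        press += (ys[i] - (b0 + b1 * xs[i])) ** 2
    my = sum(ys) / len(ys)
    return 1 - press / sum((y - my) ** 2 for y in ys)

# Hypothetical, roughly linear data.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9, 14.2, 15.8]
pr2 = predicted_r_squared(xs, ys)
```

Predicted R-squared is always at or below the ordinary R-squared; a large gap between the two is a warning sign of overfitting.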
Cross validation isn’t a magic cure for small data sets though, and sometimes a clear model isn’t identified even with an adequate sample size.
2. Shrinkage & Resampling
3. Automated Methods
Automated stepwise regression shouldn’t be used as an overfitting solution for small data sets. According to Babyak (2004),
“The problems with automated selection conducted in this very typical manner are so numerous that it would be hard to catalogue all of them [in a journal article].”
Babyak also recommends avoiding univariate pretesting or screening (a “variation of automated selection in disguise”), dichotomizing continuous variables (which can dramatically increase Type I errors), and multiple testing of confounding variables (although the latter may be OK if used judiciously).
References

- Babyak, M.A. (2004). “What you see may not be what you get: a brief, nontechnical introduction to overfitting in regression-type models.” Psychosomatic Medicine 66(3):411–21.
- Green, S.B. (1991). “How many subjects does it take to do a regression analysis?” Multivariate Behavioral Research 26:499–510.
- Peduzzi, P.N., et al. (1995). “The importance of events per independent variable in multivariable analysis, II: accuracy and precision of regression estimates.” Journal of Clinical Epidemiology 48:1503–10.
- Peduzzi, P.N., et al. (1996). “A simulation study of the number of events per variable in logistic regression analysis.” Journal of Clinical Epidemiology 49:1373–9.