What is the Jarque-Bera Test?
The Jarque-Bera test, a type of Lagrange multiplier test, is a test for normality. Normality is one of the assumptions of many statistical procedures, like the t test or F test, so the Jarque-Bera test is usually run before one of these tests to confirm normality. It is typically used for large data sets, because other normality tests become unreliable when n is large (for example, Shapiro-Wilk isn't reliable with n greater than about 2,000).
Specifically, the test compares the skewness and kurtosis of the data to those of a normal distribution. The data could take many forms, including:
- Time Series Data.
- Errors in a regression model.
- Data in a Vector.
A normal distribution has a skew of zero (i.e. it’s perfectly symmetrical around the mean) and a kurtosis of three; kurtosis tells you how much data is in the tails and gives you an idea about how “peaked” the distribution is. It’s not necessary to know the mean or the standard deviation for the data in order to run the test.
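You can check these two moments directly before running the full test. Below is a minimal sketch using NumPy and SciPy (packages assumed here; the article itself doesn't prescribe any software) showing that a large normal sample has skewness near zero and kurtosis near three:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(42)                  # arbitrary seed, for reproducibility
x = rng.normal(loc=0.0, scale=1.0, size=5000)    # large sample drawn from a normal distribution

print("skewness:", skew(x))                      # should be close to 0
print("kurtosis:", kurtosis(x, fisher=False))    # Pearson kurtosis, should be close to 3
```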
Running the Test
The formula for the Jarque-Bera test statistic (usually shortened to just JB test statistic) is:
JB = n[(√b1)² / 6 + (b2 − 3)² / 24]
Where:
n is the sample size,
√b1 is the sample skewness coefficient,
b2 is the sample kurtosis coefficient.
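As a sketch of how the formula works in practice, the statistic can be computed directly from a sample. The helper name below (jb_statistic) is just illustrative, and SciPy's skew and kurtosis functions stand in for the skewness and kurtosis coefficients in the formula:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def jb_statistic(x):
    """Jarque-Bera statistic computed directly from the formula above."""
    n = len(x)
    s = skew(x)                      # sample skewness coefficient (√b1)
    k = kurtosis(x, fisher=False)    # sample kurtosis coefficient (b2, Pearson definition)
    return n * (s**2 / 6 + (k - 3)**2 / 24)

rng = np.random.default_rng(0)
sample = rng.normal(size=2000)
print(jb_statistic(sample))          # should be small for normally distributed data
```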
The null hypothesis for the test is that the data is normally distributed; the alternate hypothesis is that the data does not come from a normal distribution.
What the Results Mean
In general, a large JB value indicates that the data (or the regression errors) are not normally distributed.
For example, in MATLAB, a result of 1 means that the null hypothesis has been rejected at the 5% significance level; in other words, the data does not come from a normal distribution. A result of 0 means the null hypothesis could not be rejected, i.e. the data is consistent with a normal distribution.
Unfortunately, most statistical software does not support this test directly. In order to interpret results, you may need to do a little comparison yourself (and so you should be familiar with hypothesis testing): under the null hypothesis, the JB statistic asymptotically follows a chi-square distribution with two degrees of freedom, so you can compare your statistic against that distribution. Checking p-values is always a good idea. For example, a tiny p-value and a large chi-square value from this test mean that you can reject the null hypothesis that the data is normally distributed.
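If you do have access to SciPy, its built-in jarque_bera function reports both the statistic and the p-value, and the chi-square comparison described above can be done explicitly. The sketch below uses deliberately non-normal (exponential) data, so the test should reject:

```python
import numpy as np
from scipy.stats import jarque_bera, chi2

rng = np.random.default_rng(1)
data = rng.exponential(size=2000)    # deliberately non-normal data

result = jarque_bera(data)
print("JB statistic:", result.statistic)
print("p-value:", result.pvalue)

# Under the null hypothesis, JB is asymptotically chi-square with 2 degrees of freedom,
# so the p-value can also be read off that distribution directly.
print("p-value from chi-square(2):", chi2.sf(result.statistic, df=2))
print("reject normality at the 5% level?", result.pvalue < 0.05)
```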