# Statistical regularity, conditions


## What is statistical regularity?

Statistical regularity is the idea that when a random event is repeated a large number of times, its outcomes display stable long-run patterns: relative frequencies and averages settle toward fixed values. It is an umbrella term that includes the law of large numbers, central limit theorems, and ergodic theorems (a branch of mathematics that studies statistical properties of deterministic dynamical systems). The observation of this phenomenon initially motivated the development of what is now called frequency probability.

This phenomenon should not be confused with the gambler's fallacy. The gambler's fallacy is a cognitive bias that leads people to believe that the outcome of a random event is influenced by previous outcomes; statistical regularity, by contrast, describes the overall behavior over the long run rather than making claims about individual cases.
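The distinction can be made concrete with a short simulation. The sketch below (illustrative only, using a fixed seed) flips a simulated fair coin many times: the overall frequency of heads stabilizes near 0.5 (statistical regularity), while the frequency of heads immediately after a streak of three heads is also about 0.5, showing that previous outcomes have no influence on the next flip:

```python
import random

random.seed(42)

flips = [random.randint(0, 1) for _ in range(100_000)]  # 1 = heads

# Long-run behavior: the overall frequency of heads stabilizes near 0.5.
overall = sum(flips) / len(flips)

# Individual cases: after a streak of three heads, the next flip is
# still heads about half the time -- previous outcomes have no influence.
after_streak = [flips[i + 3] for i in range(len(flips) - 3)
                if flips[i] == flips[i + 1] == flips[i + 2] == 1]
conditional = sum(after_streak) / len(after_streak)

print(f"overall frequency of heads:       {overall:.3f}")
print(f"frequency of heads after 3 heads: {conditional:.3f}")
```

Both printed frequencies come out close to 0.5: the long run is predictable even though each individual flip is not.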

## Examples of statistical regularity

Statistical regularity has applications in games of chance, demographic statistics, fraud prevention, quality control in manufacturing processes, weather forecasting and stock market predictions. This is because there is a huge amount of structural regularity in natural processes, and such regularities give us an opportunity to form compressed, efficient representations for these phenomena [1]. For example, natural images have a great deal of regularity in contrast and intensity distributions [2, 3], chromatic structure [4–7], reflectance spectra [8, 9], and spatial structure [10–14].

A good way to experience statistical regularity is to repeatedly play a game of chance [15], such as rolling a die. Consider what happens when a die is rolled once: it is difficult to predict the outcome. However, when the experiment is repeated numerous times, the relative frequency of each result (its count divided by the number of throws) eventually stabilizes around a specific value. Repeating a series of trials will yield similar results for each series, with the mean, standard deviation, and other distributional characteristics tending to be consistent across the trials.
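The die-rolling experiment above can be sketched in a few lines. In this illustrative simulation (seed and sample sizes are arbitrary choices), the relative frequency of each face drifts toward 1/6 ≈ 0.167 as the number of rolls grows, which shows up as a shrinking spread between the most and least frequent faces:

```python
import random
from collections import Counter

random.seed(1)

def face_frequencies(n_rolls):
    """Roll a fair die n_rolls times and return the relative frequency of each face."""
    counts = Counter(random.randint(1, 6) for _ in range(n_rolls))
    return {face: counts[face] / n_rolls for face in range(1, 7)}

# A single roll is unpredictable, but the relative frequencies stabilize
# around 1/6 as the number of rolls grows.
for n in (60, 6_000, 600_000):
    freqs = face_frequencies(n)
    spread = max(freqs.values()) - min(freqs.values())
    print(f"n={n:>7}: spread between most and least frequent face = {spread:.4f}")
```

With 60 rolls the frequencies are still noisy; with 600,000 rolls every face sits within a fraction of a percent of 1/6.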

Weather forecasting relies on statistical regularity to predict the weather. For instance, when a weather forecaster states a 60% chance of rain tomorrow, this reflects historical data: under similar conditions, it has rained about 60% of the time. Similarly, stock market predictions use statistical regularity to forecast market trends. For example, if a stock analyst predicts a 70% chance of a specific stock increasing in value next week, this is based on historical data in which similar patterns were followed by a rise about 70% of the time. In addition, statistical regularity plays a vital role in fraud detection; for instance, banks flag transactions that deviate from a customer's regular patterns as potentially fraudulent.

## Regularity conditions

Regularity conditions are the assumptions needed to ensure the validity of a statistical test. These conditions guarantee that a certain test statistic (for example, the chi-square statistic) approximately follows a known distribution, enabling the use of that distribution to calculate the p-value.

Regularity conditions are grounded in statistical regularity: checking them verifies that the sample data exhibit enough regularity for the test to be valid. When regularity conditions are not satisfied, the test statistic may deviate from the assumed distribution, leading to incorrect conclusions regarding the hypothesis being tested.
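As a minimal sketch, the chi-square goodness-of-fit test carries a widely used regularity condition: every expected cell count should be large enough (a common rule of thumb is at least 5) for the chi-square approximation to be trustworthy. The observed counts below are made-up illustrative data for 120 rolls of a die, and 11.070 is the standard critical value for 5 degrees of freedom at the 0.05 level:

```python
# Illustrative data: counts of faces 1..6 observed in 120 die rolls.
observed = [18, 22, 16, 25, 19, 20]
n = sum(observed)
expected = [n / 6] * 6  # fair-die hypothesis: 20 rolls per face

# Regularity condition: the chi-square approximation is only reliable
# when every expected count is large enough (rule of thumb: >= 5).
assert all(e >= 5 for e in expected), "regularity condition violated"

# Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E over cells.
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 11.070 is the chi-square critical value for df = 5 at the 0.05 level.
print(f"chi-square statistic: {chi_sq:.3f}")
print("reject fair-die hypothesis" if chi_sq > 11.070
      else "no evidence against fairness")
```

Here the expected counts (20 per face) comfortably satisfy the condition, so comparing the statistic to the chi-square distribution is justified; with very small expected counts the same comparison could give a misleading p-value.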

## References

1. Alvarez G, Oliva A. Spatial ensemble statistics are efficient codes that can be represented with reduced attention.
2. Brady N, Field DJ (2000) Local contrast in natural images: Normalisation and coding efficiency. Perception 29:1041–1055.
3. Frazor RA, Geisler WS (2006) Local luminance and contrast in natural images. Vision Res 46:1585–1598.
4. Webster MA, Mollon JD (1997) Adaptation and the color statistics of natural images. Vision Res 37:3283–3298.
5. Hyvarinen A, Hoyer PO (2000) Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Comput 12:1705–1720.
6. Judd DB, MacAdam DL, Wyszecki GW (1964) Spectral distribution of typical daylight as a function of correlated color temperature. J Opt Soc Am A 54:1031–1040.
7. Long F, Yang Z, Purves D (2006) Spectral statistics in natural scenes predict hue, saturation, and brightness. Proc Natl Acad Sci USA 103:6013–6018.
8. Maloney LT (1986) Evaluation of linear models of surface spectral reflectance with small numbers of parameters. J Opt Soc Am A 3:1673–1683.
9. Maloney LT, Wandell BA (1986) Color constancy: A method for recovering surface spectral reflectance. J Opt Soc Am A 3:29–33.
10. Field DJ (1987) Relations between the statistics of natural images and the response properties of cortical cells. J Opt Soc Am A 4:2379–2394.
11. Field DJ (1989) What the statistics of natural images tell us about visual coding. SPIE: Human Vision, Visual Processing, Digital Display 1077:269–276.
12. Burton GJ, Moorehead IR (1987) Color and spatial structure in natural scenes. Appl Opt 26:157–170.
13. Geisler WS, Perry JS, Super BJ, Gallogly DP (2001) Edge co-occurrence in natural images predicts contour grouping performance. Vision Res 41:711–724.
14. Torralba A, Oliva A (2003) Statistics of natural image categories. Network 14:391–412.
15. Whitt W (2002) Stochastic-Process Limits. Springer-Verlag.