In statistics, a result is called statistically significant if the probability of obtaining it by chance alone, assuming the null hypothesis is true, is less than a chosen threshold (the significance level). Statistical hypothesis tests are used to assess significance.
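As an illustration of the idea, here is a minimal sketch in Python of a one-sided binomial test: it computes the probability of an outcome at least as extreme as the one observed under the null hypothesis of a fair coin, then compares that p-value with Fisher's conventional 0.05 cutoff. The coin-flip scenario and the numbers are purely illustrative assumptions, not from the text.

```python
from math import comb

def binomial_p_value(heads: int, flips: int, p: float = 0.5) -> float:
    """One-sided p-value: the probability of seeing `heads` or more
    successes in `flips` trials if the true success rate is `p`."""
    return sum(comb(flips, k) * p**k * (1 - p) ** (flips - k)
               for k in range(heads, flips + 1))

alpha = 0.05                         # the conventional cutoff
p_value = binomial_p_value(16, 20)   # e.g. 16 heads in 20 coin flips
significant = p_value < alpha        # reject the null hypothesis?
```

Here the p-value (about 0.006) falls below 0.05, so under this convention the result would be reported as statistically significant; with a stricter level, as Fisher later advocated choosing per circumstance, the same data might not be.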
The concept of statistical significance originated with Ronald Fisher when he developed statistical hypothesis testing, which he described as "tests of significance", in his 1925 publication Statistical Methods for Research Workers. Fisher suggested a probability of one in twenty (0.05) as a convenient cutoff level for rejecting the null hypothesis. In their 1933 paper, Jerzy Neyman and Egon Pearson recommended that the significance level (e.g. 0.05), which they called α, be set in advance, before any data are collected.
Despite his initial suggestion of 0.05 as a significance level, Fisher did not intend this cutoff value to be fixed, and in his 1956 publication Statistical Methods and Scientific Inference he recommended that significance levels be set according to specific circumstances.
- Cumming, Geoff (2012). Understanding the new statistics: effect sizes, confidence intervals, and meta-analysis. New York, USA: Routledge. pp. 27–28.
- Poletiek, Fenna H. (2001). "Formal theories of testing". Hypothesis-testing behaviour. Essays in Cognitive Psychology. East Sussex, United Kingdom: Psychology Press. pp. 29–48. ISBN 1-84169-159-3.
- Fisher, Ronald A. (1925). Statistical methods for research workers. Edinburgh, UK: Oliver and Boyd. p. 43. ISBN 0-05-002170-2.
- Quinn, Geoffrey R.; Keough, Michael J. (2002). Experimental design and data analysis for biologists (1st ed.). Cambridge, UK: Cambridge University Press. pp. 46–69. ISBN 0-521-00976-6.
- Neyman, J.; Pearson, E.S. (1933). "The testing of statistical hypotheses in relation to probabilities a priori". Mathematical Proceedings of the Cambridge Philosophical Society. 29: 492–510. doi:10.1017/S030500410001152X.