List of readability tests
This is a list of formulas which predict textual difficulty.
Overview
These are ways of predicting how hard a piece of writing will be to understand (its textual difficulty). Research has shown that two main factors affect the ease with which texts are read.[1]
- How difficult the words are: this is lexical difficulty. Rare words are less well known than common words. Rare, difficult words are often longer than common, easy words.
- How difficult the sentences are: this is syntactical difficulty. Long, complicated sentences cause more difficulty than short, simple sentences.
Formulae for predicting how difficult a sample of prose will be for readers are called "readability formulae". Some measure only the difficulty of the vocabulary: they are one-variable measures. Others include a measure of syntax such as sentence length.
Validity of the formulae
Validity of formulae can be judged by comparing them to each other, which is a kind of consistency check. More important is a check for how well they predict an independent ("outside") criterion of readability.
One way to do this is to use a set of graded test passages, where the correlation coefficient of the better formulae "hovers around 70%".[1]p113 There have been many dozens of experimental tests, summarised by Klare.[1]p121–156 Correlations between readability measures and comprehension scores on passages are usual, as are correlations between readability scores and the grade levels chosen by experienced teachers. An important result was obtained by Murphy, who increased the readability of a farm journal and found that its readership rose substantially.[2][3]
One-variable formulae
SMOG
The SMOG formula uses one variable to predict the difficulty of a passage of prose. It was developed by G. Harry McLaughlin in 1969 to make calculations as simple as possible. Like the Gunning fog index, the formula uses words with three or more syllables as an indicator of difficulty; these words are called polysyllabic.
The original formula was given for samples of 30 sentences. It is:
SMOG grade = 3 + √(number of polysyllabic words in the 30 sentences)
This can be adjusted to work with any number of sentences:
SMOG grade = 1.0430 × √(polysyllabic words × (30 ÷ sentences)) + 3.1291
McLaughlin also gave directions for an approximate version which can be worked out with mental calculation alone:
- Count the number of words with 3 or more syllables, excluding names, in a set of 30 sentences
- Take the square root of the perfect square nearest to that count
- Add 3 to get the estimated SMOG grade
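As a rough illustration, the generalised formula can be computed as in the Python sketch below. The regex tokenisation and the vowel-group syllable counter are simplifying assumptions (they are not part of McLaughlin's published procedure), so the result is only approximate.

```python
import re

def smog_grade(text: str) -> float:
    """Generalised SMOG grade:
    1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Crude heuristic: each run of vowels counts as one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    polysyllables = sum(1 for w in words if syllables(w) >= 3)
    return 1.0430 * (polysyllables * 30 / len(sentences)) ** 0.5 + 3.1291
```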
Two-variable formulae
The Dale–Chall formula
Edgar Dale, a professor of education at Ohio State University, was one of the first critics of Thorndike's vocabulary-frequency lists. He claimed that they did not distinguish between the different meanings that many words have. He created two new lists of his own. One, his "short list" of 769 easy words, was used by Irving Lorge in his formula. The other was his "long list" of 3,000 easy words, which were understood by 80 percent of fourth-grade students. In 1948, he incorporated this list in a formula which he developed with Jeanne S. Chall, who was to become the founder of the Harvard Reading Laboratory.
To apply the formula:
- Select several 100-word samples throughout the text.
- Compute the average sentence length in words (divide the number of words by the number of sentences).
- Compute the percentage of words NOT on the Dale–Chall word list of 3,000 easy words.
- Compute this equation:
Raw Score = 0.1579 × PDW + 0.0496 × ASL + 3.6365
Where:
- Raw Score = uncorrected reading grade of a student who can answer one-half of the test questions on a passage.
- PDW = Percentage of Difficult Words not on the Dale–Chall word list.
- ASL = Average Sentence Length
Finally, to compensate for the "grade-equivalent curve," apply the following chart for the Final Score:
Raw Score | Final Score |
---|---|
4.9 and below | Grade 4 and below |
5.0 to 5.9 | Grades 5–6 |
6.0 to 6.9 | Grades 7–8 |
7.0 to 7.9 | Grades 9–10 |
8.0 to 8.9 | Grades 11–12 |
9.0 to 9.9 | Grades 13–15 (college) |
10 and above | Grades 16 and above[4] |
In 1995, Dale and Chall published a new version of their formula with an upgraded word list.[5]
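The raw-score step of the 1948 formula can be sketched in Python as below. The regex tokenisation is an assumption, and the tiny word set shown is only a placeholder; a real calculation needs the full Dale–Chall list of 3,000 easy words.

```python
import re

def dale_chall_raw_score(text: str, easy_words: set) -> float:
    """1948 Dale-Chall raw score: 0.1579*PDW + 0.0496*ASL + 3.6365."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    asl = len(words) / len(sentences)            # average sentence length
    difficult = [w for w in words if w.lower() not in easy_words]
    pdw = 100 * len(difficult) / len(words)      # percentage of difficult words
    return 0.1579 * pdw + 0.0496 * asl + 3.6365

# Placeholder list for illustration only, not the real Dale-Chall list.
placeholder_list = {"the", "dog", "ran", "to", "a", "house"}
print(round(dale_chall_raw_score("The dog ran to a house.", placeholder_list), 2))
```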
Flesch reading ease score
The formula for the Flesch reading-ease score is:[6]
Reading ease = 206.835 − 1.015 × (total words ÷ total sentences) − 84.6 × (total syllables ÷ total words)
Scores can be interpreted as shown in the table below.
Score | Notes |
---|---|
90.0–100.0 | easily understood by an average 11-year-old student |
60.0–70.0 | easily understood by 13- to 15-year-old students |
0.0–30.0 | best understood by university graduates |
The US Department of Defense uses the reading ease test as the standard test of readability for its documents and forms.[7] Florida requires that life insurance policies have a Flesch reading ease score of 45 or greater.[8]
Use of this scale is so ubiquitous that it is bundled with popular word processing programs and services such as KWord, IBM Lotus Symphony, Microsoft Office Word, WordPerfect, and WordPro.
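A minimal Python sketch of the score is given below, assuming a crude vowel-group syllable heuristic; the word processors listed above use more careful syllable counting, so their scores will differ slightly.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: 206.835 - 1.015*ASL - 84.6*ASW,
    where ASL = words per sentence and ASW = syllables per word."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # Rough syllable count: one per run of vowels in each word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    asl = len(words) / len(sentences)
    asw = syllables / len(words)
    return 206.835 - 1.015 * asl - 84.6 * asw
```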
Gunning Fog
The Gunning fog index, sometimes just called the Fog index, is a formula developed by Robert Gunning. It was first published in his book The Technique of Clear Writing in 1952. It became popular because the score is easy to calculate.
The formula has been criticized because it relies mainly on sentence length. Critics argue that texts written to satisfy the formula will simply use shorter sentences without using simpler words. However, this criticism confuses prediction of difficulty with production of prose (writing). The role of readability tests is to predict difficulty; writing better prose is quite another matter. As discussed in prose difficulty, sentence length is an index of syntactical difficulty.[1]
The formula is:
Gunning fog index = 0.4 × [(words ÷ sentences) + 100 × (hard words ÷ words)]
Where:
- words is the number of words
- sentences is the number of sentences
- hard words is the number of words with 3 or more syllables (not counting common endings) which are not proper nouns or compound words
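A minimal Python sketch of the index follows. It approximates "hard words" with a vowel-group syllable heuristic and does not apply the exclusions listed above (endings, proper nouns, compound words), so it overcounts slightly.

```python
import re

def gunning_fog(text: str) -> float:
    """Gunning fog index: 0.4 * (words/sentences + 100 * hard_words/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # "Hard" words approximated as those with three or more vowel groups.
    hard = [w for w in words if len(re.findall(r"[aeiouy]+", w.lower())) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100 * len(hard) / len(words))
```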
Spache
The Spache method compares the words in a text to a list of words which are familiar in everyday writing. Words that are not on the list are called unfamiliar. The number of words per sentence is counted. This number and the percentage of unfamiliar words are put into a formula. The result is a reading age: someone of this age should be able to read the text. It is designed for texts for children in primary education, roughly 1st to 7th grade.
In 1974 Spache revised his Formula to:
Coleman-Liau Index
The calculations are performed in two steps. The first step finds the Estimated Cloze Percentage. The second step calculates the actual grade.
A simpler version also exists, though it is not as accurate:
CLI = 0.0588 × L − 0.296 × S − 15.8
where L is the average number of letters per 100 words and S is the average number of sentences per 100 words.
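The simpler version translates directly into a short Python sketch; the regex tokenisation is an assumption, and letters are counted as alphabetic characters only.

```python
import re

def coleman_liau_index(text: str) -> float:
    """Simplified Coleman-Liau index: 0.0588*L - 0.296*S - 15.8,
    where L = letters per 100 words and S = sentences per 100 words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    letters = sum(len(re.findall(r"[A-Za-z]", w)) for w in words)
    L = 100 * letters / len(words)          # letters per 100 words
    S = 100 * len(sentences) / len(words)   # sentences per 100 words
    return 0.0588 * L - 0.296 * S - 15.8
```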
Automated Readability Index
The Automated Readability Index was designed for real-time computation of readability on the electric typewriter.[9]
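Because it uses character counts rather than syllables, the index is easy to compute mechanically. The sketch below uses the usual published form of the formula, 4.71 × (characters ÷ words) + 0.5 × (words ÷ sentences) − 21.43, and assumes characters are letters and digits.

```python
import re

def automated_readability_index(text: str) -> float:
    """ARI: 4.71*(characters/words) + 0.5*(words/sentences) - 21.43."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z0-9']+", text)
    chars = sum(len(re.findall(r"[A-Za-z0-9]", w)) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / len(sentences) - 21.43
```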
References
- ↑ 1.0 1.1 1.2 1.3 Klare G. 1963. The measurement of readability. Ames, Iowa: Iowa State University Press.
- ↑ Murphy D.R. 1947. Tests prove short words and sentences get best readership. Printer's Ink 218: 61–64.
- ↑ Murphy D.R. 1947. How plain talk increases readership 45 to 66 per cent. Printer's Ink 220: 35–37.
- ↑ Dale E. & Chall J.S. 1948. "A formula for predicting readability". Educational Research Bulletin 27: 1–20 (Jan 21), 37–54 (Feb 17).
- ↑ Chall J.S. & E. Dale. 1995. Readability revisited: The new Dale–Chall readability formula. Cambridge, MA: Brookline Books.
- ↑ Flesch [1] Archived 2016-07-12 at the Wayback Machine
- ↑ Luo Si; et al. (2001). A statistical model for scientific readability. Atlanta, GA, USA: CIKM '01.
- ↑ ""Readable Language in Insurance Policies"". Archived from the original on 2010-10-03. Retrieved 2015-09-12.
- ↑ Smith, E. A.; Senter, R. J. (1967). "Automated Readability Index". AMRL-TR. Aerospace Medical Research Laboratories (U.S.), Wright-Patterson Air Force Base: 1–14. PMID 5302480. AMRL-TR-6620. Archived from the original on 2013-07-09. Retrieved 2012-03-18.
Further reading
- Dubay W.H. (2004). "The principles of readability" (PDF). Costa Mesa, CA: Impact Information. Archived from the original (PDF) on 2007-11-27. Retrieved 2008-01-10.
- Spache G. (1953). "A new readability formula for primary-grade reading materials". The Elementary School Journal. 53 (7): 410–413. doi:10.1086/458513. JSTOR 998915. S2CID 145135468.
- Coleman, Meri; Liau, T. L. (1975). "A computer readability formula designed for machine scoring". Journal of Applied Psychology. 60 (2): 283–284. doi:10.1037/h0076540.