Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects
Quantitative Measures of Text Complexity
A number of quantitative tools exist to help educators assess aspects of text complexity that are better measured
by algorithm than by a human reader. The discussion is not exhaustive, nor is it intended as an endorsement of one
method or program over another. Indeed, because of the limits of each of the tools, new or improved ones are needed
quickly if text complexity is to be used effectively in the classroom and curriculum.
Numerous formulas exist for measuring the readability of various types of texts. Such formulas, including the widely
used Flesch-Kincaid Grade Level test, typically use word length and sentence length as proxies for semantic and
syntactic complexity, respectively (roughly, the complexity of the meaning and sentence structure). The assumption behind these formulas is that longer words and longer sentences are more difficult to read than shorter ones; a
text with many long words and/or sentences is thus rated by these formulas as harder to read than a text with many
short words and/or sentences would be. Some formulas, such as the Dale-Chall Readability Formula, substitute word
frequency for word length as a factor, the assumption here being that less familiar words are harder to comprehend
than familiar words. The higher the proportion of less familiar words in a text, the theory goes, the harder that text is
to read. While these readability formulas are easy to use and readily available—some are even built into various word
processing applications—their chief weakness is that longer words, less familiar words, and longer sentences are not
inherently hard to read. In fact, a series of short, choppy sentences can pose problems for readers precisely because
these sentences lack the cohesive devices, such as transition words and phrases, that help establish logical links
among ideas and thereby reduce the inference load on readers.
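As a concrete illustration of how such formulas work, the sketch below computes the Flesch-Kincaid Grade Level, which combines average sentence length with average syllables per word. The vowel-group syllable counter is a deliberately crude stand-in (a real tool would use a pronunciation dictionary), and the tokenization rules are invented for the example:

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count groups of consecutive vowels.
    A production implementation would consult a pronunciation dictionary."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59.
    Sentence length and word length serve as proxies for syntactic
    and semantic complexity, as described above."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

On this measure, a short sentence of one-syllable words scores far below a clause-heavy sentence of long words, which is exactly the behavior, and the weakness, described above: the formula cannot distinguish simple wording that carries hard ideas from hard wording.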
Like Dale-Chall, the Lexile Framework for Reading, developed by MetaMetrics, Inc., uses word frequency and sentence
length to produce a single measure, called a Lexile, of a text’s complexity. The most important difference between the
Lexile system and traditional readability formulas is that traditional formulas only assign a score to texts, whereas the
Lexile Framework can place both readers and texts on the same scale. Certain reading assessments yield Lexile scores
based on student performance on the instrument; some reading programs then use these scores to assign texts to
students. Because it too relies on word familiarity and sentence length as proxies for semantic and syntactic complexity, the Lexile Framework, like traditional formulas, may underestimate the difficulty of texts that use simple, familiar
language to convey sophisticated ideas, as is true of much high-quality fiction written for adults and appropriate for
older students. For this reason and others, it is possible that factors other than word familiarity and sentence length
contribute to text difficulty. In response to such concerns, MetaMetrics has indicated that it will release the qualita-
tive ratings it assigns to some of the texts it rates and will actively seek to determine whether one or more additional
the Lexile Framework, ATOS puts students and texts on the same scale.
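The practical value of putting readers and texts on one scale is that matching becomes a simple range query. The sketch below is a hypothetical illustration of that idea, not the proprietary Lexile or ATOS computation; the band of 100 points below to 50 points above the reader's measure follows a commonly cited Lexile convention, and the titles and scores are invented:

```python
def texts_in_range(reader_measure, texts, below=100, above=50):
    """Return titles whose measure falls within a band around the
    reader's measure (band width and scores are illustrative)."""
    lo, hi = reader_measure - below, reader_measure + above
    return [title for title, measure in texts if lo <= measure <= hi]

# Invented example library: (title, scale measure)
library = [
    ("Picture Book A", 400),
    ("Chapter Book B", 700),
    ("Novel C", 820),
    ("Treatise D", 1200),
]
```

For a reader measured at 780, `texts_in_range(780, library)` selects "Chapter Book B" and "Novel C", the texts near enough to that measure to be assigned under such a scheme.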
A nonprofit service operated at the University of Memphis, Coh-Metrix attempts to account for factors in addition to
those measured by readability formulas. The Coh-Metrix system focuses on the cohesiveness of a text—basically, how
tightly the text holds together. A high-cohesion text does a good deal of the work for the reader by signaling relationships among words, sentences, and ideas using repetition, concrete language, and the like; a low-cohesion text, by contrast, requires readers themselves to make many of the connections needed to comprehend the text. High-cohesion texts are not necessarily "better" than low-cohesion texts, but they are easier to read.
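Referential cohesion of this kind can be illustrated with a toy index: the share of adjacent sentence pairs that repeat a content word. This is a deliberately simplified stand-in for the actual Coh-Metrix indices; the stopword list and tokenization are invented for the example:

```python
import re

STOPWORDS = frozenset({"the", "a", "an", "and", "or", "of",
                       "to", "in", "is", "are"})  # tiny illustrative list

def content_words(sentence):
    """Lowercased words minus the stopword list above."""
    return {w for w in re.findall(r"[a-z']+", sentence.lower())
            if w not in STOPWORDS}

def adjacent_overlap(text):
    """Fraction of adjacent sentence pairs sharing at least one
    content word -- a toy proxy for referential cohesion."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    pairs = list(zip(sentences, sentences[1:]))
    if not pairs:
        return 0.0
    shared = sum(1 for a, b in pairs
                 if content_words(a) & content_words(b))
    return shared / len(pairs)
```

A passage that keeps repeating its topic word scores near 1.0; a passage whose sentences share no vocabulary scores 0.0 and leaves the reader to infer the links, exactly the low-cohesion burden described above.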
The standard Coh-Metrix report includes information on more than sixty indices related to text cohesion, so it can be
daunting to the layperson or even to a professional educator unfamiliar with the indices. Coh-Metrix staff have worked
to isolate the most revealing, informative factors from among the many they consider, but these “key factors” are not
yet widely available to the public, nor have the results they yield been calibrated to the Standards’ text complexity
grade bands. The greatest value of these factors may well be the promise they offer of more advanced and usable
tools yet to come.
Reader and Task Considerations
The use of qualitative and quantitative measures to assess text complexity is balanced in the Standards’ model by the
expectation that educators will employ professional judgment to match texts to particular students and tasks. Numer-
ous considerations go into such matching. For example, harder texts may be appropriate for highly knowledgeable or
skilled readers, and easier texts may be suitable as an expedient for building struggling readers’ knowledge or reading
skill up to the level required by the Standards. Highly motivated readers are often willing to put in the extra effort re-
quired to read harder texts that tell a story or contain information in which they are deeply interested. Complex tasks
may require the kind of information contained only in similarly complex texts.
Numerous factors associated with the individual reader are relevant when determining whether a given text is appropriate for him or her. The RAND Reading Study Group identified many such factors in the 2002 report Reading for
Understanding:
The reader brings to the act of reading his or her cognitive capabilities (attention, memory, critical analytic
ability, inferencing, visualization); motivation (a purpose for reading, interest in the content, self-efficacy as
a reader); knowledge (vocabulary and topic knowledge, linguistic and discourse knowledge, knowledge of
