Bodong Chen

Crisscross Landscapes

Notes: Olinghouse2013-yd: The relationship between vocabulary and writing quality in three genres

2017-09-20


References

Citekey: @Olinghouse2013-yd

Olinghouse, N. G., & Wilson, J. (2013). The relationship between vocabulary and writing quality in three genres. Reading and Writing, 26(1), 45–65. https://doi.org/10.1007/s11145-012-9392-5

Notes

Summarize:

Assess:

Reflect:

Highlights

Abstract: The purpose of this study was to examine the role of vocabulary in writing across three genres. Fifth graders (N = 105) wrote three compositions: story, persuasive, and informative. Each composition revolved around the topic of outer space to control for background knowledge. Written compositions were scored for holistic writing quality and several different vocabulary constructs: diversity, maturity, elaboration, academic words, content words, and register. The results indicated that students vary their vocabulary usage by genre. Story text had higher diversity than informative text as well as higher maturity as compared to persuasive text. Persuasive text contained higher diversity than informative text, and higher register than both of the other genres. Informative text included more content words and elaboration than the other text types as well as more maturity than persuasive text. Additionally, multiple regression and commonality analysis indicated that the vocabulary constructs related to writing quality differed by genre. For story text, vocabulary diversity was a unique predictor, while for persuasive text, content words and register were unique predictors. Finally, for informative text, content words was the strongest unique predictor, explaining almost all of the total variance in the five-factor model, although maturity was also a unique predictor. (p. 1)

Writing is a complex process, involving the coordination of many high-level cognitive and meta-cognitive skills. Seminal models of the writing process (e.g., Flower & Hayes, 1981; Hayes, 1996; Scardamalia & Bereiter, 1987) suggest that producing a quality written text requires generating and organizing ideas, goal-setting, planning, drafting, revising, and continuously self-monitoring performance. (pp. 1–2)

However, one area that has received little attention is the relationship between vocabulary and quality within written compositions. (p. 2)

One resource, long-term memory, helps explain how vocabulary may be used in the writing process. Flower and Hayes (1980) discuss long-term memory within their explanation of the translating process (p. 2)

Scardamalia and Bereiter’s (1987) knowledge-telling model—a model of developing writing—expands on this link between vocabulary and long-term memory. (p. 2)

Content knowledge includes specific knowledge related to the topic of the written text. Discourse knowledge includes procedural knowledge (how to compose quality text), and genre knowledge (the important features of different genres). (p. 2)

Students who possess a higher degree of content and discourse knowledge are predicted to compose texts of higher quality than those who lack such knowledge (e.g., Benton, Corkill, Sharp, Downey, & Khramtsova, 1995; DeGroff, 1987; Langer, 1984; McCutchen, 1986; Olinghouse & Graham, 2009). (p. 2)

Vocabulary is implicated in each of these knowledge-spaces. Vocabulary conveys content knowledge because many topics have specialized vocabulary (Harmon, Hedrick, & Wood, 2005). (p. 2)

they express their knowledge in a way which is valued and accepted by the discourse community for that field (e.g., Swales, 1988). (p. 2)

Vocabulary also is implicated in discourse knowledge because it is hypothesized to be a distinguishing feature of different genres of text (Biber, 1988; Halliday & Hasan, 1976). (p. 2)

Vocabulary constructs and measures (p. 3)

Frequently measured vocabulary constructs in writing include: diversity, maturity, content vocabulary, academic vocabulary, and register. (p. 3)

Vocabulary diversity, likely the most common vocabulary construct included in past writing research, refers to the breadth of words used in a text (p. 3)

Vocabulary maturity is a construct describing the sophistication of the vocabulary used in a text. (p. 3)

Content vocabulary, sometimes referred to as "domain-specific vocabulary" (Marzano & Pickering, 2005) or "technical terms" (Harmon, Wood, & Medina, 2009; Rehder et al., 1998), refers to words that are unique to different topics (e.g., weather, Civil War, outer space). (p. 3)

academic vocabulary is comprised of words used in a variety of academic contexts (e.g., synthesize, confirm, deduce) rather than words associated with specific disciplines or topics. (p. 3)

register is a construct which describes the unique features of language which vary according to different situations, purposes, and forms of language use (Biber & Vasquez, 2008; Halliday & Hasan, 1976). Register has been shown to vary according to linguistic mode (oral vs. written; Biber, 2009), and according to text type (Biber & Kurjian, 2007). (p. 3)

Earlier measures simply included the number of different or unique words (Barenbaum, Newcomer, & Nodine, 1987; Grobe, 1981) or a ratio of types to tokens (i.e., number of words written; Johnson, 1944). (p. 4)

corrected type-token ratio (CTTR; Carroll, 1964) attempted to remove the effect of text length (p. 4)

More recent developments in measuring lexical diversity have attempted to overcome this barrier by using complex computer algorithms. For example, the Measure of Textual Lexical Diversity (MTLD; McCarthy & Jarvis, 2010) virtually eliminates text length as a confounding issue. (p. 4)
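To make the contrast among these diversity indices concrete, here is a minimal Python sketch of a plain type-token ratio, Carroll's corrected TTR (types divided by the square root of twice the token count), and a simplified, one-directional approximation of MTLD (the mean length of sequential word runs that keep a running TTR above a threshold, conventionally .72). The published MTLD procedure averages forward and backward passes, so this is only an illustration of the idea, not the authors' implementation; the example sentence is invented.

```python
import math

def ttr(tokens):
    """Simple type-token ratio: unique words / total words."""
    return len(set(tokens)) / len(tokens)

def cttr(tokens):
    """Corrected TTR (Carroll, 1964): types / sqrt(2 * tokens)."""
    return len(set(tokens)) / math.sqrt(2 * len(tokens))

def mtld_one_pass(tokens, threshold=0.72):
    """Simplified one-directional MTLD (after McCarthy & Jarvis, 2010):
    mean length of word runs whose running TTR stays above the threshold."""
    factors = 0.0
    types, count = set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count < threshold:
            factors += 1              # a full factor is complete; start a new run
            types, count = set(), 0
    if count > 0:                     # credit the partial run at the end of the text
        current_ttr = len(types) / count
        factors += (1 - current_ttr) / (1 - threshold)
    return len(tokens) / factors if factors > 0 else float(len(tokens))

words = "the astronaut saw the bright quasar near the black hole".split()
print(ttr(words), cttr(words), mtld_one_pass(words))
```

Because the factor count grows with the text rather than with a fixed denominator, MTLD stays comparable across compositions of very different lengths, which is the property the authors emphasize over raw or corrected type-token ratios.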

Vocabulary, genre, and quality (p. 4)

Three commonly assessed genres in writing are narrative, persuasive, and informative text. (p. 4)

In a cross-sectional study of fifth-, eighth-, and eleventh-graders, Grobe (1981) found vocabulary diversity to be a significant predictor of narrative writing quality. (p. 4)

Three measures of vocabulary maturity predicted writing quality in the fifth-grade sample: rank, vocabulary repeat rate, and average word size. (p. 4)

Olinghouse and Leaird (2009) also found that measures of vocabulary diversity were significant predictors of narrative writing quality for second- and fourth- graders. (p. 5)

Purpose of the present study (p. 5)

This is the first study of its kind to explore such relationships in all three of the primary text-types, and to investigate relationships between vocabulary and writing quality in persuasive text. (p. 5)

We selected vocabulary constructs and measures that had the potential to be predictive of writing quality in different genres (p. 5)

Instead of hypotheses, the following research questions (RQ) guided the present study: RQ1: For fifth-grade writers, are measures of vocabulary stable across three written genres? RQ2: For fifth-grade writers, which vocabulary measures predict writing quality? Do the vocabulary predictors of writing quality vary across genres? (p. 5)

Methodology (p. 5)

Participants (p. 6)

Final participants included 107 students from the six classrooms described above. (p. 6)

To compare students to a nationally-normed sample of fifth-grade students, the Spontaneous Writing test of the Test of Written Language-3 (TOWL-3; Hammill & Larsen, 1996; all reliabilities range from .83 to .92 for fifth-grade students) was administered (p. 6)

Procedures (p. 7)

Measures (p. 8)

Overall writing quality (p. 8)

Each rubric evaluated four aspects of writing: (1) Development of Ideas (Elaboration), (2) Organization, (3) Sentences/Word Choice/Voice, and (4) Genre Elements. (p. 8)

Vocabulary (p. 9)

Diversity Vocabulary diversity was calculated using the Measure of Textual Lexical Diversity (MTLD; McCarthy & Jarvis, 2010), included as part of the Gramulator software program (McCarthy, Watanabi, & Lamkin, in press). (p. 9)

Maturity Mature vocabulary was assessed by comparing the vocabulary used in students’ text to the General Service List (GSL; West, 1953). (p. 9)

Content vocabulary In this study, content vocabulary was assessed using a researcher-developed list of vocabulary words highly associated with the topic "outer space". (p. 9)

(a) solar system (e.g., planets, sun, earth), (b) space exploration and travel (e.g., astronaut, space ship, shuttle), (c) galaxies/universes (e.g., black hole, quasar, white dwarf), and (d) astronomy (e.g., telescope, Hubble, astronomer). (p. 10)

Text was collected from 160 websites whose Flesch–Kincaid reading levels fell between Grades 2–9 (average reading level = 5.8) and from 3 comprehensive books on outer space targeted toward upper elementary students. The text was then analyzed using the Concordance software program (Watt, 2000) which identified 119,261 words, of which 8,333 were unique. A word was only included in the content vocabulary list if it had a minimum of 10 occurrences in the corpus (reducing the list to 1,218), and was deemed by three RAs to be essential to outer space content and unlikely to be found in other unrelated content areas. This produced a list of 159 words. (p. 10)

Finally, the content vocabulary list was compared to a general corpus to ensure that the words on the list occurred significantly more often in the outer space corpus than in a corpus of general text (Educator’s Word Frequency Guide; Zeno, Ivens, Millard, & Duvvuri, 1995). (p. 10)
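The frequency screen in this procedure is straightforward to reproduce in outline. The sketch below counts running words across a topical corpus and keeps candidates with at least 10 occurrences; the expert-judgment step and the comparison against a general-frequency corpus are not automated here, and the function and variable names are mine, not the authors'.

```python
from collections import Counter
import re

def candidate_content_words(documents, min_count=10):
    """Count running words across a topical corpus and keep those that
    appear at least `min_count` times (the frequency screen only; the
    rater-judgment and general-corpus checks described above are manual)."""
    counts = Counter()
    for doc in documents:
        counts.update(re.findall(r"[a-z]+", doc.lower()))
    return {word for word, c in counts.items() if c >= min_count}
```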

Elaboration Elaboration was assessed by determining the average number of modifiers per noun phrase in a text using Coh-Metrix (McNamara, Louwerse, Cai, & Graesser, 2005). (p. 10)

Register This study used the proportion of words in a text of Latinate origin as compared to Germanic origin (Bar-Ilan & Berman, 2007) as a measure of register. (p. 10)

Academic words The Academic Word List (Coxhead, 2000) includes 570 word families and approximately 3,000 total words that frequently occur in many types of academic text. (p. 10)
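Maturity, content vocabulary, and academic words are all list-based proportions, so one helper can sketch all three. The snippet below assumes maturity is operationalized as the share of running words not found on a high-frequency list such as the GSL, which is one plausible reading of the description above rather than the authors' exact scoring procedure; the miniature word lists are purely illustrative stand-ins for the GSL, the outer-space list, and the Academic Word List.

```python
def proportion_on_list(tokens, word_list):
    """Share of running words that appear on a given word list
    (used here for content vocabulary and academic words)."""
    word_list = {w.lower() for w in word_list}
    return sum(t.lower() in word_list for t in tokens) / len(tokens)

def proportion_off_list(tokens, word_list):
    """Share of running words NOT on a high-frequency list such as the GSL;
    a rough proxy for vocabulary maturity."""
    return 1 - proportion_on_list(tokens, word_list)

# Hypothetical miniature lists, for illustration only.
GSL_SAMPLE = {"the", "saw", "near", "black", "bright"}
SPACE_LIST = {"astronaut", "quasar", "telescope", "orbit"}

words = "the astronaut saw the bright quasar near the black hole".split()
print(proportion_on_list(words, SPACE_LIST))   # content vocabulary share
print(proportion_off_list(words, GSL_SAMPLE))  # maturity proxy
```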

Results (p. 11)

Though academic vocabulary was measured in the study, means (.01) revealed that the fifth-graders in this sample included a very small percentage of such vocabulary in their compositions. (p. 11)

Narrative text had higher diversity than informative text as well as higher maturity as compared to persuasive text. Persuasive text contained higher diversity than informative text, and higher register than both of the other genres. Informative text included more content words and elaboration than the other text types as well as more maturity than persuasive text. (p. 12)

Research question 2: Which vocabulary measures predict writing quality? (p. 12)

Bivariate correlations among the vocabulary measures and writing quality for each genre are presented in Tables 3 and 4. (p. 12)

The regression model predicting story writing quality was significant. Vocabulary diversity and maturity explained approximately 9 % of the variance in narrative quality, although only vocabulary diversity was a significant unique predictor. (p. 12)

The model predicting persuasive writing quality also was significant. Vocabulary diversity, the inclusion of content words, and the use of a higher linguistic register explained approximately 14 % of the variance in persuasive writing quality. (p. 13)

The model predicting informative writing quality was significant. The overall model explained almost half of the variance (46 %) in informative writing quality. After controlling for the other predictors in the model, maturity and content words were significant unique predictors. Content words was the strongest unique predictor, explaining 31 % of the total variance. Maturity was the only other variable that contributed unique prediction, 3.4 % (p. 14)
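To make the "unique predictor" language concrete, the sketch below fits a five-predictor regression on simulated stand-in data and reports each measure's unique contribution as the drop in R² when that measure is removed, which is the unique component reported in a commonality analysis (the shared components are not decomposed here). Column names, sample size, and coefficients are invented for illustration, not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated stand-in data: five vocabulary measures per composition
# and a holistic quality score (all names are hypothetical).
n = 105
df = pd.DataFrame(
    rng.normal(size=(n, 5)),
    columns=["diversity", "maturity", "content", "elaboration", "register"],
)
df["quality"] = 0.3 * df["diversity"] + 0.5 * df["content"] + rng.normal(size=n)

predictors = ["diversity", "maturity", "content", "elaboration", "register"]
full = sm.OLS(df["quality"], sm.add_constant(df[predictors])).fit()
print(f"Full model R^2 = {full.rsquared:.3f}")

# Unique contribution of each predictor = loss in R^2 when it is dropped.
for p in predictors:
    others = [q for q in predictors if q != p]
    reduced = sm.OLS(df["quality"], sm.add_constant(df[others])).fit()
    print(f"{p:12s} unique R^2 = {full.rsquared - reduced.rsquared:.3f}")
```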

Discussion (p. 15)

While not directly testing causal mechanisms within the Knowledge Telling Model (Scardamalia & Bereiter, 1987), the results support the premise that genre knowledge and topic knowledge are knowledge bases students access to select appropriate words while composing written text. (p. 15)

Despite an emphasis on academic vocabulary in late elementary and secondary settings (e.g., Lawrence, White, & Snow, 2010) and the inclusion of academic vocabulary in CCSS Language standards (Common Core State Standards Initiative, 2010): "Acquire and use accurately grade-appropriate general academic and domain-specific words and phrases …" (p. 29), fifth-grade students in this study included very few academic words in their written compositions, regardless of the genre. On average, approximately 1 % of words were academic words, translating to around 1.5 academic words in each written composition. In comparison, published text for fifth-grade students included 0.8 % academic words in narrative text, and 2.7 % academic words in expository text (Gardner, 2004). (p. 16)

Nagy and Scott (2000) and Scott, Lubliner, and Hiebert (2006) provide an overview of the common vocabulary constructs underlying word selection and assessment tasks typically found in word learning instruction. Nagy and Scott identified five constructs: incrementality, multidimensionality, polysemy, interrelatedness, and heterogeneity. Scott et al. added additional constructs such as breadth, accuracy, automaticity, and semantic decision making. (p. 18)

A considerable lack of overlap occurs between word learning research and writing research. An argument can be made that breadth (Scott et al., 2006; general vocabulary knowledge), and diversity (index of number of unique words in writing) share some commonalities; however, other constructs between the two fields have few similarities. (p. 18)