Are reading and writing built on the same skills?

Introduction

Language use is often categorized into one of four modalities: speaking, listening, reading, or writing, and we tend to think and talk about these four modalities in terms of four different abilities. It is commonly acknowledged that these four abilities are interrelated, and a large number of studies have investigated these relationships. The relationship between the two literacy skills, reading and writing, in particular has been studied extensively (Berninger, Abbott, Abbott, Graham, & Richards, 2002; Fitzgerald & Shanahan, 2000; Shanahan, 2006). In these studies, the core issue has been whether reading or reading development influences writing or writing development, or vice versa, or whether the influences are bidirectional (Abbott, Berninger, & Fayol, 2010; Ahmed, Wagner, & Lopez, 2014; Shanahan & Lomax, 1988). This issue is particularly relevant for the design of literacy education. From a more theoretical perspective, however, it is most likely that reading and writing draw in part on the same cognitive, linguistic and discourse resources that a language user has at his or her disposal. In models of reading and writing, we can therefore expect the same building blocks or constituent components to play a role in both cognitive processes. If individual differences in reading and writing are caused by individual differences in these constituent processes (see Borsboom, Mellenbergh, & Van Heerden, 2004; Schoonen, 2011), then the correlation between reading and writing may, at least in part, be caused by individual differences in these shared resources.

In this article, we investigate the relationship between reading and writing ability from the perspective of underlying cognitive processes and language subskills. Which of the cognitive processes or language subskills are dominant in reading and writing may differ depending on the status of the language, that is, whether it is the language user's first language (L1) or a foreign language (FL). Furthermore, we expect that the build-up of reading and writing, in terms of these components, changes as the language user becomes more proficient in reading and writing.

To summarize, we will investigate the extent to which the relationship between reading and writing ability can be explained by the language resources they both appeal to. We will do so for reading and writing in the L1 and in English as a foreign language (EFL), and at three stages of development.

Reading and writing resources

In search of potentially overlapping building blocks of reading and writing, we can compare cognitive models of reading to similar models of writing. However, these models come from different research traditions and are seldom formulated in terms of required subskills. Still, we could try to derive relevant subskills from these processing models (Schoonen et al., 2003; Van Gelderen et al., 2004).

A model of reading

Of the reading models presented in the literature (Joshi & Aaron, 2000; Perfetti, Landi, & Oakhill, 2005; Vellutino, Tunmer, Jaccard, & Chen, 2007), Perfetti's model is one of the more comprehensive ones (Perfetti et al., 2005). This model describes the cognitive steps that need to be taken to achieve reading comprehension, starting from analyzing the visual input of the text to, ultimately, making inferences and updating the situation model (Perfetti et al., 2005; for an assessment perspective on reading competency, see O'Reilly & Sheehan, 2009). From the visual input, words need to be identified by means of orthographic and phonological processing; at subsequent levels, meanings and forms need to be selected, and the string of words needs to be parsed to establish a text representation and, finally, a situation model. Apart from the cognitive processes concerned, the model also includes the language user's knowledge resources that are involved. Major knowledge sources are orthographic knowledge, the mental lexicon, the 'linguistic system' including morpho-syntactic knowledge, and general encyclopedic knowledge. The model does not include discourse knowledge as an explicit, separate knowledge resource, but we can consider it part of the 'general' knowledge that is presumed. Or, as the authors state, "[t]hese representations are not the result of exclusively linguistic processes, but are critically enhanced by other knowledge resources" (Perfetti et al., 2005, p. 229). These 'other' knowledge resources could be discourse or genre knowledge, but also more strategic knowledge about how to achieve certain reading goals or how to overcome reading problems (O'Reilly & Sheehan, 2009). We will refer to this type of knowledge as metacognitive knowledge of text characteristics and reading strategies. We can expect that language users with good orthographic, vocabulary, morpho-syntactic and metacognitive knowledge will be good readers, whereas a lack of this kind of language knowledge may hinder language users in performing a reading task successfully. The language user not only needs to have the aforementioned linguistic and non-linguistic knowledge available; this knowledge also has to be easily accessible. Especially so-called lower-order processing, such as decoding, word recognition and syntactic parsing, needs to be fluent, if not cognitively automatized. Fluent word recognizers tend to be good readers, particularly at the early stages of reading development (Lai, Benjamin, Schwanenflugel, & Kuhn, 2014; Perfetti, 1985; Stanovich, 1991), and fast processing at the sentence level is also related to good reading comprehension (Klauda & Guthrie, 2008; Van Gelderen et al., 2004). However, the relationship between knowledge resources and processing fluency on the one hand and reading ability on the other is not as straightforward and linear as this may suggest. The cognitive processes are interdependent: faster word recognition may no longer affect overall reading performance if other processes cannot keep up, and very advanced metacognitive knowledge may no longer pay off when reading simple texts.

To summarize, the aforementioned knowledge resources and cognitive processes may affect one's reading performance. However, the extent to which this happens is still a subject of investigation (Joshi & Aaron, 2000; Shiotsu & Weir, 2007; Van Gelderen et al., 2004). The components and features of the reading comprehension process are all potential sources of overlap between reading and writing proficiency, provided that they also play a role in the writing process. Therefore, these building blocks need to be matched against similar components and features involved in the writing process.

A model of writing

Since Emig (1971), writing processes have been investigated in many studies (Breetvelt, Van den Bergh, & Rijlaarsdam, 1994; Flower & Hayes, 1981; Zamel, 1983). Nevertheless, comprehensive models of the writing process seem to be less available (for an overview, see Deane et al., 2008; MacArthur & Graham, 2015). The classic Flower and Hayes (1981) model is not very explicit about the linguistic and non-linguistic resources required for successful writing. As Tillema (2012) states, their models "describe the constituent parts of writing, but make no claims about, for example, which knowledge from long-term memory is, or should be, used during the writing process, or how writing processes should be organized" (pp. 3–4). The model includes a long-term memory component, but it is limited to knowledge of topic, audience and writing plans. Bereiter and Scardamalia (1987) describe two modes of writing (i.e., knowledge telling and knowledge transforming) that writers may apply depending on their writing experience and the difficulty of the task at hand, but these descriptions do not provide us with an inventory of writing building blocks or subskills that could be useful for our purposes.

When we picture the writing process, loosely following the Flower-Hayes model and its updates (Hayes, 1996, 2012), it is obvious that, in the context of writing and writing assignments, writers have to read as well. They often have to write in response to written materials, and they have to create a mental model of the writing task (Nicolás-Conesa, 2012), which in itself may require careful reading of instructions or source materials. The process of revision also plays a prominent role in writing models (Flower & Hayes, 1981; Hayes, 1996). Revision implies, among other things, reading the text written so far (Hayes, Flower, Schriver, Stratman, & Carey, 1987). Reading for evaluation appeals to the linguistic and non-linguistic resources mentioned in the previous section, which is also confirmed by the reading-for-evaluation model described in Hayes et al. (1987, p. 205), ranging from word decoding and sentence interpretation to applying semantic and schema knowledge. Therefore, reading per se can be considered part of the overall writing process. However, the productive processes in writing also appeal to linguistic and non-linguistic resources, although the directionality of the information flow is different: from meaning to language, instead of from language to meaning as in reading.

The seeming lack of attention to the linguistic resources required for writing might be due to the assumption that these resources are available to most native speakers and are not writers' biggest problem in performing a writing task. However, Grabe and Kaplan (1996), working from an applied-linguistic and foreign language (FL) teaching perspective, pay more explicit attention to this aspect of writing. They consider language ability an important aspect of (FL) writing ability, and provide an extensive taxonomy of (academic) writing subskills, knowledge bases, processes and writing contexts. Linguistic, sociolinguistic and discourse knowledge are recognized components, as are metacognitive knowledge components such as knowledge about audience and writing strategies (Grabe & Kaplan, 1996, p. 217). Linguistic knowledge includes orthographic, punctuation and formatting knowledge, vocabulary knowledge, and morpho-syntactic knowledge. Kellogg (1994) takes a somewhat broader perspective and considers three types of knowledge crucial to writing: sociocultural knowledge, conceptual knowledge (including knowledge of the world and knowledge of the language) and metacognitive knowledge (about the self, tasks, strategies, plans and goals). Knowledge of the language comprises discourse, lexical, syntactic and text-structural knowledge (Kellogg, 1994, pp. 71–79). More recently, Deane et al. (2008) developed a writing competency model for assessment purposes. Based on an extensive literature review, they distinguish three so-called strands: one about literacy and language skills, one about strategic writing process management, and a third about critical thinking and reasoning. The authors stress that in the second and third strands, reading and rereading play a prominent role. The first strand is concerned with the use of vocabulary and written style, control of sentence structure, mechanics, and spelling (Deane et al., 2008, p. 69).

As in reading, fluent access to knowledge is important in writing. Several researchers have argued and shown that a certain level of fluency is beneficial to the writing process and the written output (McCutchen, 2000). One of the underlying assumptions is that fluency in writing reduces the burden on writers' working memory and thus frees up cognitive capacity for higher-order writing processes. Fluency can be achieved through extensive writing-relevant knowledge about topic and genre, and through fluent language generation processes. The latter processes are associated with oral language processes, such as content generation, lexical retrieval and syntactic processing. In order to put content and language into text, the writer has to transcribe the message, which requires (fluent) spelling and handwriting or typing processes (McCutchen, 2000, p. 15). This analysis suggests a 'simple view of writing', in which writing consists of spelling and ideation (Juel, Griffith, & Gough, 1986), analogous to the simple view of reading, which conceives of reading as consisting of decoding and (oral) language comprehension (Hoover & Gough, 1990).
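Rendered as formulas, these two 'simple views' are commonly written as products of their two components; the reading version is Hoover and Gough's (1990) proposal, and the writing version is our analogous rendering of the account in Juel et al. (1986):

```latex
% Simple view of reading (Hoover & Gough, 1990):
\text{Reading} \;=\; \text{Decoding} \times \text{Language comprehension}
% Simple view of writing, by analogy (cf. Juel, Griffith, & Gough, 1986):
\text{Writing} \;=\; \text{Spelling} \times \text{Ideation}
```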

To summarize, both reading and writing models recognize the role of topical knowledge in language processing, in addition to that of linguistic knowledge. The linguistic knowledge includes lexical-semantic knowledge at the word level, morpho-syntactic knowledge at the sentence level, and pragmatic-discourse knowledge above the sentence level. This linguistic knowledge can be expanded with orthographic knowledge, needed to decode script into language or to encode language into script. Furthermore, the language user must know how to approach the task and how to act strategically in performing a language task. This kind of knowledge is partly related to discourse-level knowledge (knowledge of text characteristics) and can be viewed as metacognitive knowledge as well, especially its strategic part. These knowledge sources can be conceived of as declarative knowledge. However, reading and writing both require the language user to have some fluency in accessing these knowledge resources, especially the lower-order knowledge at the orthographic, lexical and sentence level.

All these components, which are or may be common to reading and writing performance, are thus potential sources of the correlation between the two abilities.

Relationship between reading and writing

A vast number of studies into the relationship between reading and writing have been conducted, from different perspectives. Most of these studies originate from an educational context and are concerned with the efficiency and authenticity of teaching the two abilities, for reasons of curriculum design (Shanahan, 2009). Information about the relationship between reading and writing is also relevant in the context of integrated assessment of literacy (Chapelle, Enright, & Jamieson, 2011). One of the guiding questions in these educationally oriented studies is whether reading ability facilitates writing development, which would be in line with the idea that reading development precedes writing development, or, conversely, whether writing ability facilitates reading development, following the idea that writing development is sufficient and includes reading development. A third and most plausible possibility is, of course, that reading and writing interact in their development. Shanahan and Lomax (1988) developed three models corresponding to these three views: reading-to-write, writing-to-read, and a combined interaction of the two abilities. The models included not only the two abilities but relevant subskills as well, constituting chains of more or less autoregressive relations: word analysis > vocabulary > comprehension, and spelling > vocabulary diversity > syntax > story structure. These models were fitted to the performance data of two selected groups, beginning (N = 69) and advanced (N = 137) readers, respectively, who were selected from a pool of second and fifth graders. It turned out that the differences in fit between the models were relatively small, especially for the smaller sample of beginners. The interactive model described the relations best for the advanced group; the writing-to-read model described the data of the beginning writers slightly better than the other two models, but the differences were not significant. Remarkably, writing was operationalized very specifically in the study, for example as syntax (T-unit length) and story structure (various story grammar units), which might have affected the outcomes. The cross-skill paths in the models (i.e., regressions) are rather weak, and the only substantial and significant paths occur at the level of word analysis, spelling and vocabulary.

Abbott, Berninger and colleagues also conducted a number of studies into the relationships between reading and writing (sub)skills (Abbott et al., 2010; Berninger, Cartwright, Yates, Swanson, & Abbott, 1994; Berninger et al., 2002; Berninger & Abbott, 2010). Abbott et al. (2010) studied the reading and writing development of two cohorts of primary school children, covering grades 1–7. Their findings corroborate the earlier findings of Shanahan and Lomax (1988): they report relatively strong autoregressive relations, that is, reading and writing performances showed strong regressions on performances of the same skill a year earlier, but only weak cross-skill regressions, from reading on writing and from writing on reading. At the word level, they found some cross-skill regressions between spelling and word reading. These studies take a developmental perspective in the sense that they investigate whether, for example, reading ability explains a student's progress in writing ability, and they have as their target population relatively young children in the early stages of their literacy development. In an earlier study, Berninger et al. (1994) explored to what extent reading and writing draw on common and/or unique cognitive systems, such as the motor system (for handwriting), orthographic, phonological and working memory systems, as well as verbal intelligence. Their conclusion, based on a series of multiple regression analyses, was that reading and writing use common systems as well as unique parts of these systems. However, differences in psychometric reliability were not taken into account and may have caused differences in correlations and thus in regressions. Furthermore, equal (low) regression coefficients of a predictor variable (subskill) do not necessarily imply that this predictor explains the correlation between reading and writing.

To summarize, reading and writing proficiency are found to be correlated at different stages of their development. However, it is still unclear to what extent constituent components, i.e., language subskills, can be considered the source of these correlations. Potential sources are linguistic and non-linguistic declarative knowledge, but correlations between reading and writing can also originate from fluent access to these knowledge resources when processing language. Fluency in reading and writing processes may facilitate reading and writing performance, respectively, and thus create a correlation between the two skills. At the same time, we must bear in mind that fluency of processing might be more skill-specific, because of the aforementioned directionality of the processes. The potential sources of correlation (declarative knowledge and fluent processing) may be more important in foreign language reading and writing than in the L1, given that the execution of FL reading and writing processes is even more dependent on linguistic knowledge and fluency in foreign language use.

The present study

In this study, we investigate to what extent component skills can explain the correlation between reading and writing ability. To this end, we will compute partial correlations between reading and writing, controlling for constituent subskills, and compare them with the bivariate correlation between reading and writing. For example, if reading and writing correlate .50 and the partial correlation, controlling for vocabulary knowledge, is only .10, then the correlation between reading and writing may largely be the result of the vocabulary knowledge that both reading and writing appeal to. We will estimate partial correlations and the reduction in correlation, or more specifically in common variance, at three grade levels (grades 8, 9 and 10), as the relationship between reading and writing proficiency may differ across successive stages of literacy development. We will conduct separate analyses for the students' native (or dominant) language and for English as a foreign language (EFL), as the sources of the reading-writing correlation may differ between the L1 and an FL.
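As a minimal sketch of this logic, the first-order partial correlation can be computed directly from the three bivariate correlations. The values below are made up for illustration and do not come from the study's tables; the actual analyses estimate these quantities on latent variables within SEM (see the Analyses section):

```python
import numpy as np

def partial_corr(r_rw: float, r_rc: float, r_wc: float) -> float:
    """First-order partial correlation between reading (R) and writing (W),
    controlling for a component subskill (C)."""
    return (r_rw - r_rc * r_wc) / np.sqrt((1 - r_rc**2) * (1 - r_wc**2))

# Illustrative values, not taken from the study:
r_rw, r_rc, r_wc = 0.50, 0.60, 0.55   # R-W, R-C and W-C correlations

r_rw_c = partial_corr(r_rw, r_rc, r_wc)
print(f"partial r = {r_rw_c:.2f}; residual common variance = {r_rw_c**2:.2f}")
# -> partial r = 0.25; residual common variance = 0.06
```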

We will address four research questions: (1) To what extent can a single subskill of reading and writing explain the correlation between reading and writing; in other words, what is the residual common variance of reading and writing after controlling for the subskill concerned? (2) To what extent can the set of declarative knowledge subskills or the set of processing subskills explain the correlation between reading and writing; in other words, what is the residual common variance of reading and writing after controlling for the set of declarative knowledge or the set of processing subskills? (3) How do the estimated partial correlations and the corresponding residual common variance estimates compare across the L1 and EFL? And (4) how do the estimated partial correlations and the corresponding residual common variance estimates compare across the three grade levels (8–10)?

We will first investigate the explanatory power of single subskills, and in a second stage explore the extent to which a combination of just declarative knowledge or just processing (fluency) subskills can explain all of the common variance between reading and writing.

Methods

Participants

Data were collected within the context of a study into reading and writing ability in secondary education in the Netherlands. Eight schools were recruited in the western part of the Netherlands; their students followed different tracks of secondary education, ranging from vocational to pre-university. In total, 389 students participated in at least some of the test administrations at time 1 (grade 8; 13–14 years of age). About 71% of the students indicated that they had a native Dutch background; the other students reported that they spoke another language at home (often in addition to Dutch). However, all students had received their education in the Netherlands and had become literate in Dutch. The other languages students used at home or had acquired first most often concerned Sranan Tongo, Turkish, Moroccan Arabic, or a Berber language. Although Dutch L2 learners, on average, lag behind their native Dutch peers in Dutch and English reading and writing, it has also been shown that the correlational structure of the subskills in reading and writing can be considered similar for native and nonnative students (Schoonen et al., 2002; Van Gelderen et al., 2003). Furthermore, the nonnative students had attended the same schools and had received the same (reading and writing) instruction as their Dutch peers. Therefore, we will treat the native and nonnative students as one sample in the remainder of this article. When we refer to Dutch as the L1, in contrast to English as the FL, we acknowledge that for 29% of the participants Dutch was not their L1, but their dominant language and the language in which they had become literate.

In grade 8, the students were in their second year of secondary education and had been taught English for about 1.5 years, for 2–4 h a week. Before that, they had already been familiarized with oral English communication in the final two years of primary education.

Instruments

Students performed several reading and writing tasks in Dutch and English in three successive years. They also performed tests to measure subskills in Dutch and English, both tests of linguistic knowledge and linguistic processing fluency, and they responded to a questionnaire tapping into their metacognitive knowledge of reading and writing strategies and text characteristics. Below we will provide brief descriptions of the assignments and tests used. Extensive descriptions can be found in Van Gelderen, Schoonen, Stoel, De Glopper, and Hulstijn (2007) and Schoonen, Van Gelderen, Stoel, Hulstijn, and De Glopper (2011).

Writing ability was measured by means of three writing assignments per language per measurement wave. Assignments were designed to be similar, but not identical, across the two languages. Across time, new assignments were administered and one assignment was repeated to maintain comparability over time. Of each set of three assignments, two required handwritten texts and one a computer-written text. Panels of two trained raters rated all texts. Each rater gave a holistic rating of what could be called a 'primary trait', that is, whether the text fulfilled its primary discourse goal. This rating was supported by a scale consisting of five anchors, i.e., example texts representing an average text and texts one or two standard deviations below and above average, respectively. The instructions avoided reference to specific features of the texts, such as vocabulary or sentence structure, since that would cause circularity with the subskills measured. Interrater reliability ranged from .81 to .90; generalizability coefficients ranged from .55 to .81 (Schoonen et al., 2011; Schoonen, 2012).

For the measurement of reading ability, students had to read several short texts and answer multiple-choice questions. The questions tapped into understanding at the paragraph or discourse level. The texts were derived from previous research and were age-appropriate, which means that over the years some easier texts were replaced by slightly more difficult ones. However, enough texts and corresponding items could be used in all three grades; therefore, test scores are based on texts and items that were used in all three measurement waves. Reliability (i.e., internal consistency) ranged from .78 to .82 for Dutch and from .81 to .87 for English (Van Gelderen et al., 2007).
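The internal-consistency figures reported here and for the following tests can be pictured with a standard Cronbach's alpha computation; a minimal sketch with simulated dichotomous item responses (the sample size, item count and data are invented for illustration, and the exact coefficient used in the study is not specified here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an items matrix (rows = students, cols = items)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Simulated 0/1 responses driven by a common 'ability' factor:
rng = np.random.default_rng(1)
ability = rng.normal(size=(200, 1))
scores = ((ability + rng.normal(size=(200, 40))) > 0).astype(float)
print(round(cronbach_alpha(scores), 2))  # high alpha, by design
```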

Vocabulary knowledge was measured with a multiple-choice vocabulary test. Target words were presented in carrier sentences to prevent ambiguity, but these sentences did not provide clues to the meaning of the words. In the Dutch test, students had to choose from four alternative descriptions or synonyms of the meaning; in the EFL test, they chose between four Dutch translations of the target word. The selected words came from different frequency bands of word-frequency lists of the language and were not specifically related to the topics of the writing assignments or the texts of the reading tests. For Dutch, 59 items remained the same over the years, with reliabilities in the range of .89–.92; for English, 35 items were used at all measurement waves, with reliabilities of .89–.90.

Grammatical knowledge was measured with a fill-in-the-blanks test. The test focused on morpho-syntactic phenomena in the languages, such as tense and aspect, agreement, and the use of auxiliaries. Items for the English tests were derived from textbook analyses of EFL teaching. The students were cued by the uninflected forms of the target words. The number of items that remained identical across the measurement waves was not very large (19 for Dutch and 25 for English), but another 25 and 35 items, respectively, that had undergone some formal adaptations were included as well. The reliabilities were acceptable: .67–.76 and .89–.91, respectively. Differences in reliability will be accounted for in the analyses (see below).

Orthographic (or spelling) knowledge was measured in a similar way as grammatical knowledge. Students had to choose between several spelling options for a word in a carrier sentence. In this case, the sentence cued the target word to avoid ambiguities. In the EFL test, the Dutch translation of the target word was provided. The items focused on well-known (Dutch) spelling problems for Dutch students and, in the English test, on cases where the grapheme-phoneme correspondence is not transparent, using Castley (1998) as a source. The number of invariant items across measurement waves was again limited (27 and 12 items, respectively); however, another 50 and 35 items, respectively, that were only reformatted were retained as well. Reliabilities were reasonably good: .72–.77 and .74–.76, respectively.

Students' metacognitive knowledge was assessed by means of a questionnaire. This implies that we measured what students know about texts, reading and writing, but not necessarily what they actually do when reading or writing. The questionnaire was not language-specific: it addressed issues in reading and writing in general, and a number of items asked about reading and writing in a foreign language. The format was that of statements about reading and writing strategies and text features with which students could agree or disagree, for example, "The order in which you present the information in your text is usually not relevant." From this questionnaire, 54 items remained the same across measurement waves. These items showed an internal consistency in the range of .80–.88.

Besides the tests of declarative linguistic and metacognitive knowledge, the students also took four processing or fluency tests per language per measurement wave: two receptive and two productive measures, two at the lexical level and two at the sentence level. In all cases, the measures used in the analyses are average reaction times (RTs) for the tests involved. Students performed these tests on laptops in a classroom setting.

Students' word recognition speed was measured by means of a lexical decision task. Students had to decide whether a letter string (3–8 characters) formed a word in Dutch or English, respectively. The non-words were orthographically possible pseudowords. RT and accuracy were registered, and only RTs for correct, positive decisions (hits) were included in the analyses. For Dutch, 58 items were available; for English, 44. Incidental incorrect responses were treated as missing values (Schoonen et al., 2011; Van Gelderen et al., 2007). Reliabilities ranged from .95 to .96 for Dutch and were .94 for English at all three occasions.

As a productive counterpart at the lexical level, students did a timed lexical retrieval task. Students had to 'name' pictures of objects or persons as quickly as possible by pressing the first letter of the corresponding word. Students' response times were corrected for their keyboard fluency, which was measured separately. The depicted words were meant to be easy in order to avoid a confound with vocabulary knowledge. In the English test, despite piloting, some students had problems providing the right answer; only items with high percentages correct were used for the measurement of lexical retrieval speed. For Dutch, 37 items were used and for English 18. The reliability of the (average) RT across these items ranged from .91 to .93 for Dutch and from .85 to .87 for English.

A sentence verification task measured the speed with which students were able to decide whether a sentence made sense or not. The nonsensical sentences were clear violations of common general knowledge, for example, "Most bicycles have seven wheels". The test aims at what Carver (1990) calls 'rauding', that is, fluent reading with direct understanding. RTs were averaged over sentences that made sense and were recognized as such by the students. For Dutch, 31 items were involved and for English 20. Reliabilities ranged from .96 to .97 for Dutch and from .93 to .95 for English.

Its productive counterpart is a sentence construction task. Students read the beginning of a sentence and then had to decide as quickly as possible which of two continuations fitted it best. The sentences were relatively simple to increase the likelihood of hits, and only one continuation was grammatically correct. For Dutch, 29 items were identical across the three measurement waves; for English, 21 items remained the same. These items yielded reliabilities in the range of .94–.96 for Dutch and .92–.94 for English.

Procedures

Tests were administered by trained test assistants in classroom settings. The order of the test administrations was quasi-random, in the sense that we also had to take the schedules of the schools and teachers into account. The computer tests were administered on identical laptop computers that the test assistants brought into the class; the other tests were paper-and-pencil tests. Furthermore, the various tests were scheduled such that no potentially interfering tests were administered in the same session (for example, a Dutch and an English writing assignment). The test administration at a school was spread out over a few weeks, again depending on the school's schedule. The three measurement waves took place in spring and were approximately a year apart.

Analyses

Given the large number of measures collected, missing data were unavoidable: missing data at the level of items and tests, as well as drop-out of students. For the paper-and-pencil tests, incidentally skipped items were scored as incorrect; if a participant missed more than half of a test, the test was scored as a missing value. For the RT measures, only the response latencies of correct, positive responses were included. Misses and outlying responses were converted to missing values: very fast responses were considered invalid, as were very slow responses. Based on a pilot, the cut-off for too-fast responses was 550 ms for the two lexical tests and 650 ms for the two sentence-level tests; the cut-off for too-slow responses was 3 standard deviations above the item's mean RT. If a student had valid responses on more than half of the items, missing RTs at the item level were replaced by estimated scores using an expectation-maximization procedure (Acock, 1997; Hox, 1999); if not, the test score was considered missing. This treatment of missing and false responses, combined with sample attrition due to absence, change of school or dropout, led to an average sample attrition across variables of 9.9% at grade 8, 22.0% at grade 9 and 38.9% at grade 10. The missing data are not missing completely at random (MCAR), but are likely to be missing at random (MAR), implying that the missing scores are predictable on the basis of available scores.
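A minimal sketch of this RT-cleaning procedure, assuming the latencies arrive as a students-by-items table with misses and errors already set to NaN (the 550 ms floor applies to the lexical tests; pass 650 for the sentence-level tests). Plain per-item mean imputation stands in here for the expectation-maximization estimation used in the study:

```python
import pandas as pd

def rt_test_score(rt: pd.DataFrame, floor_ms: float = 550.0) -> pd.Series:
    """Mean RT per student after the trimming rules described above.

    rt: latencies (ms) of correct, positive responses; rows = students,
    columns = items; NaN marks misses and incorrect responses.
    """
    rt = rt.where(rt >= floor_ms)             # too-fast responses -> missing
    ceiling = rt.mean() + 3 * rt.std()        # per-item mean + 3 SD cut-off
    rt = rt.where(rt.le(ceiling))             # too-slow responses -> missing
    valid = rt.notna().sum(axis=1) > rt.shape[1] / 2
    filled = rt.fillna(rt.mean())             # stand-in for the EM procedure
    return filled.mean(axis=1).where(valid)   # score missing if <=50% valid
```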

The research questions will be addressed by estimating partial correlations in structural equation models (SEM), using LISREL 8.80. The SEM analyses are based on full information maximum likelihood (FIML) estimation (Muthén, Kaplan, & Hollis, 1987), which makes use of all available data and assumes missing data to be missing at random (MAR). FIML estimation prevents listwise deletion of participants with missing test scores (Jöreskog & Sörbom, 1996). An important reason to use SEM is that latent variables can be used, implying that measurement error is partialled out and that comparisons between correlations and regression coefficients are thus not confounded by differences in reliability. Estimating a latent variable requires multiple manifest variables. To achieve this, we used parcels of items as indicators of the latent variables, with the parcels based on a random split of the test items into two parallel parcels, except for writing ability, where we used the three assignment scores as the manifest variables. Parceling reduces the number of observed variables, which would otherwise become too large for our sample size, and at the same time allows us to estimate a latent variable that can be considered error-free. The basic descriptives for the observed variables can be found in Appendix 1.
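The parceling step can be sketched as follows: each test's items are randomly split into two halves whose sum scores then serve as the two manifest indicators of a latent variable. Function and variable names are ours, and the commented model syntax is lavaan-style shorthand for where the parcels end up, not the LISREL code actually used:

```python
import numpy as np
import pandas as pd

def two_parcels(items: pd.DataFrame, prefix: str, seed: int = 0) -> pd.DataFrame:
    """Randomly split a test's items (rows = students, cols = items)
    into two parallel parcels and return the two parcel sum scores."""
    rng = np.random.default_rng(seed)
    cols = rng.permutation(items.columns.to_numpy())
    half = len(cols) // 2
    return pd.DataFrame({
        f"{prefix}_p1": items[cols[:half]].sum(axis=1),
        f"{prefix}_p2": items[cols[half:]].sum(axis=1),
    })

# Each pair of parcels then indicates one latent variable, e.g. (lavaan-style):
#   reading =~ read_p1 + read_p2
#   writing =~ task1 + task2 + task3   # the three assignment scores
#   reading ~~ writing                 # the target (partial) correlation
```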

We will thus estimate and compare partial correlations, that is, the residual correlations between reading and writing once a (set of) subskill variable(s) has been partialled out. First, we will investigate the explanatory power of single subskills in explaining the correlation between reading and writing. We have chosen to perform separate analyses per subskill per language per grade, because we want to prevent the analysis of one subskill from being confounded by the effect of another subskill. Van Gelderen et al. (2007) and Schoonen et al. (2011) attempted to model reading and writing, respectively, in terms of subskill variables. More practically, the number of variables involved in more complex modeling of both reading and writing would easily exceed the number that can be analyzed given our sample size. Our approach implies that the SEM analyses are descriptive rather than hypothesis-testing. This also applies to the second set of analyses, in which we estimate the residual correlation after controlling for the set of declarative knowledge tests and the set of processing fluency measures, respectively.

These two sets of analyses provide insight into why reading and writing correlate, and into the possibly different roles of knowledge and fluency variables in this correlation. The outcomes will be compared across the two languages (L1 and EFL) and across grades, from grade 8 to grade 10. For the partial correlations, we will report the correlation between the residuals of reading and writing, with reading and writing predicted by the component variable(s) concerned (see Fig. 1; Preacher, 2006). Both the explained variance in reading and writing (\( r_{R*C}^{2} \) and \( r_{W*C}^{2} \), respectively) and the residual common variance (the squared partial correlation, \( r_{R*W.C}^{2} \)) will be included in the results. The point of reference for the partial correlation is the correlation between reading and writing without any partialling out. The latent reading and writing variables are standardized to a variance of 1.
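In formula form, for a single component variable C (for a set of components, the same logic applies with the residuals from a multiple regression on the set):

```latex
r_{R*W.C} \;=\;
\frac{r_{R*W} \;-\; r_{R*C}\, r_{W*C}}
     {\sqrt{\left(1 - r_{R*C}^{2}\right)\left(1 - r_{W*C}^{2}\right)}},
\qquad
\text{residual common variance} \;=\; r_{R*W.C}^{2}.
```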

Fig. 1 Schematic structural models for the comparison of the correlation between the latent variables reading and writing (\( r_{R*W} \), left) and the partial correlation after partialling out a component subskill (\( r_{R*W.C} \), right). Observed variables are not depicted. See Preacher (2006) for a full graphical model representation

Results

First, we report the correlations between reading and writing as found for the two languages and the three measurement waves (see Table 1; full correlation matrices of the latent variables can be found in Appendix 2). Second, we report the results of the analyses of both Dutch and English per grade, prioritizing the comparison between the languages.

Table 1 Correlations (\( r_{R*W} \)) between reading and writing (as latent variables) at different grade levels, both for Dutch as the dominant language and for English as a foreign language (EFL)

It is remarkable that in the early years of secondary education the correlation between reading and writing in EFL is higher than in Dutch. As the students grow older and become more proficient in English, the EFL correlation drops into the same range as that for Dutch. This seems to indicate that in the early years linguistic knowledge causes more shared individual differences in English literacy skills than it does later on, when other factors, such as encyclopedic knowledge, may become more distinctive. The six correlations in Table 1 are the target correlations that we want to explain in terms of the common subskills or component variables. We start with the .67 for Dutch (\( r_{R*W}^{2} = .45 \)) and the .81 for English (\( r_{R*W}^{2} = .66 \)) in grade 8.

Grade 8

Table 2 summarizes the results of the analyses for Dutch reading and writing (columns 2–4) and English reading and writing (columns 5–7) at grade 8. It shows that the common variance between reading and writing (i.e., the squared target correlation, \( r_{R*W}^{2} \)) reduces substantially if we control for the linguistic knowledge components or metacognitive knowledge, but less so if we control for the processing measures of lexical or syntactic fluency (RTs).

Table 2 Grade 8: common variance of component variables and writing (\( r_{W*C}^{2} \)) and reading (\( r_{R*C}^{2} \)), respectively, and common variance of reading and writing, after controlling for component variable(s) (\( r_{R*W.C}^{2} \))

For Dutch, the table shows that the language knowledge tests have more in common with reading than with writing. The fluency tests have little common variance with either reading or writing, and therefore there is only a small reduction of the common variance between reading and writing when we control for the fluency measures. The two variables at the sentence level do slightly better than those at the lexical level.

The four fluency tests as a set reduce the common variance from .45 to .34; the four knowledge tests achieve a reduction to .10. Grammatical knowledge and metacognitive knowledge can each reduce the common variance between reading and writing from .45 to .16, which is equivalent to a partial correlation of .40.
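As a quick check on how these figures relate:

```latex
r_{R*W.C} = .40 \;\Rightarrow\; r_{R*W.C}^{2} = .16,
\qquad
\text{down from } r_{R*W}^{2} = .67^{2} \approx .45 .
```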

The general pattern is similar for EFL, but reading and writing are more strongly correlated, as we saw in Table 1, and the subskills seem to be more prominent in this correlation, especially the language knowledge components. Surprisingly, EFL vocabulary knowledge does not explain that much of English reading or writing, and thus of the common variance between the literacy abilities. The lexical fluency measures do slightly better than they did for Dutch and explain a small amount of the common variance in Reading and Writing. In general, EFL writing is more strongly related to the English subskills than writing in Dutch is to the Dutch subskills. The four English fluency tests together reduce the common variance in Reading and Writing from .66 to .53; the four knowledge tests achieve a reduction to .15.

Grade 9

Table 3 shows the results of the same analyses for the data of the same students one year later, in grade 9. The correlation between reading and writing seems to be fairly stable at .65 for the Dutch data (42% common variance), but has dropped to .73 for EFL (53% common variance). The question is again whether this common variance can be attributed to common components in language knowledge or processing fluency.

Table 3 Grade 9: common variance of component variables and writing (\( r_{W*C}^{2} \)) and reading (\( r_{R*C}^{2} \)), respectively, and common variance of reading and writing, after controlling for component variable(s) (\( r_{R*W.C}^{2} \))

The role of language knowledge in the dominant language, Dutch, is largely the same as in the previous year. The knowledge variables (as a set) in grade 9 are slightly more strongly correlated with Writing than they were the year before (common variance .64 vs. .48), but overall the remaining (squared) partial correlation for the four knowledge variables is quite comparable to that in grade 8 (.09 vs. .10). The same kind of stability shows for the Dutch fluency measures. The common variance in Reading and Writing after accounting for the set of fluency measures has dropped to .28, from .34 at grade 8. The measures at the level of sentence processing again perform slightly better than those at the lexical level.

Even though the common variance between English reading and writing is substantially lower than in grade 8, the subskills still explain a similar amount of this communality, which implies that the drop in the reading-writing correlation seems unrelated to the role of the subskills. Grammatical and orthographic knowledge each partial out a substantial part of the common variance, reducing it from .53 to .11 and .19, respectively. The four declarative knowledge measures together can almost fully account for the communality between EFL reading and writing at grade 9, leaving just 1% of the common variance unexplained. The fluency measures are less successful in explaining the correlation between EFL reading and writing: the four measures together make the squared correlation drop from .53 to .37. The measures at the sentence level, sentence verification and sentence construction, account for most of this drop.

Grade 10

Another year later, in grade 10, the correlations between Reading and Writing in Dutch and EFL appear to be in the same range. The squared correlations between the subskills and Reading and Writing, and the squared partial correlations between Reading and Writing at grade 10, are reported in Table 4.

Table 4 Grade 10: common variance of component variables and writing (\( r_{W*C}^{2} \)) and reading (\( r_{R*C}^{2} \)), respectively, and common variance of reading and writing, after controlling for component variable(s) (\( r_{R*W.C}^{2} \))

It shows that the knowledge variables in Dutch do reasonably well, except for orthographic knowledge. Orthographic knowledge is only weakly correlated with reading and is thus not in a good position to explain any overlap between reading and writing. Still, the declarative knowledge variables together can explain most of the common variance between reading and writing (from .46 to .03). The fluency measures can explain only a small part of this communality (from .46 to .35). Fluency at the sentence level is only moderately correlated with Reading (\( r_{R*C}^{2} = .17 \) and .15 for sentence verification and sentence construction, respectively) and more strongly with Writing (\( r_{W*C}^{2} = .36 \) and .38, respectively). However, this does not allow these fluency measures to account for a large portion of the communality between Reading and Writing.

The difference between Dutch and EFL seems to have disappeared by grade 10. The major differences are that, among the knowledge variables, both grammatical and orthographic knowledge correlate strongly with writing, whereas vocabulary knowledge correlates relatively weakly with reading. However, the four declarative knowledge variables can completely account for the correlation between Reading and Writing (as was almost the case for Dutch at grade 10 and for English at grade 9). The results of the EFL fluency measures are quite similar to those of the Dutch fluency measures. The EFL fluency variables at grade 10 reduce the common variance between Reading and Writing from .40 to .28, a modest amount. The fluency measures show a lower (squared) multiple correlation with reading than in previous years (.16 vs. .27 and .28), which makes it comparable to the relationship between Dutch fluency and Reading, and may have resulted in a somewhat smaller reduction of the correlation between Reading and Writing. However, we have to bear in mind that the communality between EFL Reading and Writing is substantially smaller than in previous years (see Table 1).

Discussion and conclusion

The results presented in Tables 2, 3 and 4 allow us to address the research questions. In all cases, the linguistic knowledge, metacognitive knowledge and fluency variables are correlated with reading and writing ability. Partialling out these component variables leads to a reduction of the reading-writing correlation, to a smaller or larger extent (Research Question 1). However, there are large differences between the variables in their power to explain the correlation between reading and writing. A general observation is that the knowledge measures are in a far better position to explain the correlation than the fluency measures. We would like to argue that having the appropriate linguistic and metacognitive knowledge provides a rich resource on which both reading and writing processes draw, and which thus causes a correlation between reading and writing proficiency. Both models of reading (Perfetti et al., 2005) and models of writing (Hayes, 1996) include linguistic and metacognitive knowledge as resources. The fluency measures are not in a similar position; in general, they explain a notably smaller part of the common variance between reading and writing. If we relate this to the aforementioned processing models, we can picture that fluency in using linguistic resources may be confined to a certain direction of the information flow. For example, processing visual input (text) into semantic meaning, as in reading, may differ from translating meaning (a concept or predication) into visible output (text), as in writing; at the least, each direction may have its own pace, i.e., being fluent in one direction does not necessarily imply fluency in the other. If fluency is more modality- or skill-specific in this way, this may explain the poorer performance of the fluency measures in our analyses of the common variance of reading and writing. Within the set of fluency measures, those at the sentence level were more successful in explaining common variance than those at the lexical level. Lexical fluency might not be much of an issue in our participants' reading and writing, especially when we realize that in writing they can avoid words that are less accessible. Integrating the information of a full sentence, or constructing a sentence, may be more critical to both reading and writing for these secondary school students.

In reading and writing, we use all the linguistic resources available to us, more or less at the same time. Therefore, we explored whether we could explain the common variance between reading and writing with sets of enabling skills (Research Question 2). With the four measures of declarative knowledge, we were able to explain a large amount of the common variance between reading and writing, and in some cases the 'explanation' was (close to) complete. In the cases where the knowledge subskills did not provide a full explanation, the fluency measures could not improve the explanation noticeably (explorations not reported here).

Comparing the Dutch and EFL results (Research Question 3) shows that in the early years of secondary education reading and writing in EFL are more strongly correlated than they are in Dutch, and that the EFL correlation gradually drops to the level of the L1 correlations. Linguistic knowledge and processing fluency constitute a substantial part of the EFL correlation; that is, the residual correlation is (substantially) lower than the uncorrected correlation. The set of declarative knowledge measures shows high correlations with reading, and even higher ones with writing, and can account for most of the common variance between reading and writing. The fluency measures correlate more strongly with the two literacy abilities in EFL than in Dutch, but the results are still modest. Fluency seems more strongly related to writing in EFL than in Dutch.

Comparing the results for the three grades (Research Question 4), we see that, in general terms, the results are quite stable for Dutch. For English, the correlation between Reading and Writing is much stronger in the early years than later on. This higher correlation can largely be explained by the linguistic knowledge resources. In grade 8, there is still some residual covariance between reading and writing, but this is hardly the case at grades 9 and 10. The pattern for English thus appears to develop in the direction of the Dutch pattern. (Appendix 3 provides a graphical representation of the reduction in common variance per single subskill variable for the three grades.) The decline of the correlation between FL reading and writing corroborates the findings of Csapó and Nikolov (2009), who found, in large samples of Hungarian students, that the correlations for EFL and German as a foreign language (GFL) dropped in the course of grade 6 to grade 12 from .715 to .455 in EFL and from .635 to .566 in GFL. The correlations in the Hungarian study are generally somewhat lower than those in this study (Table 1); this might be due to the use of (error-free) latent variables in SEM in the current study, which prevents attenuation of the correlations.

Methodological issues

The numbers reported in Tables 2, 3 and 4 may seem relatively low, but we have to bear in mind that they are variances, i.e., squared (partial) correlations. Furthermore, the differences we found between subskills in their potential to explain the common variance between reading and writing cannot be caused by differences in the reliability of these subskill variables, because all analyses were conducted with latent variables, which can be considered error-free. Obviously, the SEM approach does not and cannot account for differences in validity. The speed measures are supposed to measure the level of processing fluency, and although the subprocesses measured may not be identical to the processes as they occur in actual reading and writing performance, we still have to recognize words and comprehend sentences when reading, and we have to come up with words for concepts and consider possible continuations of a sentence we have just started when writing. The overlap between the subprocesses measured and those targeted is such that problems in executing these subprocesses would show up in the measurements. However, the results show that the correlations with reading and writing were weak to moderate, and thus the potential to explain the correlation between reading and writing was very limited.

As for the knowledge resources, the measures were conventional tests with multiple-choice and gap-filling questions. Although one could question the (ecological) validity of these formats, the tests turned out to tap into linguistic and metacognitive knowledge that seems to be involved in both reading and writing, as the variables correlated strongly with the literacy abilities and were able to explain the communality between reading and writing to a large extent, if not completely. The knowledge tests were largely based on recognition rather than recall of the correct linguistic forms (Laufer & Goldstein, 2004). Nevertheless, the knowledge tests tended, in general, to correlate at least as highly with writing as with reading, suggesting that the linguistic and metacognitive knowledge tested was determinative of the scores in both reading and writing.

All tests but one were administered in the language of the literacy abilities concerned. Metacognitive knowledge was tested in Dutch only, but was used in the analyses of both Dutch and English. The correlations of metacognitive knowledge with Dutch and with English literacy were in the same range, so there does not seem to be a language bias in this test.

To summarize, in this study we took a different approach to the question of the relationship between reading and writing. We investigated what Shanahan (2016) calls the theoretical model, that is, the model of the "shared knowledge and cognitive substrata". We provided a more direct picture of the correlation between reading and writing and of the potential role of subskills in explaining this correlation. The question posed in the title of this article can be answered affirmatively: reading and writing seem to build on the same skills, especially linguistic and metacognitive knowledge resources. This study did not address the question of whether the reading-to-write or the writing-to-read view should prevail, but it is in line with the more interactive view. The two literacy abilities interact, and neither can be given priority over the other; both use, to a large extent, the same language knowledge resources, and students' reading and writing will most likely benefit from expanding these resources.

References

  • Abbott, R. D., Berninger, V. W., & Fayol, M. (2010). Longitudinal relationships of levels of language in writing and between writing and reading in grades 1 to 7. Journal of Educational Psychology, 102(2), 281–298. https://doi.org/10.1037/a0019318.

  • Acock, A. C. (1997). Working with missing values. Family Science Review, 10(1), 76–102.

  • Ahmed, Y., Wagner, R. K., & Lopez, D. (2014). Developmental relations between reading and writing at the word, sentence, and text levels: A latent change score analysis. Journal of Educational Psychology, 106(2), 419–434. https://doi.org/10.1037/a0035692.

  • Bereiter, C., & Scardamalia, M. (1987). The psychology of written communication. Hillsdale, NJ: Lawrence Erlbaum Associates.

  • Berninger, V. W., & Abbott, R. D. (2010). Listening comprehension, oral expression, reading comprehension, and written expression: Related yet unique language systems in grades 1, 3, 5, and 7. Journal of Educational Psychology, 102(3), 635–651. https://doi.org/10.1037/a0019319.

  • Berninger, V. W., Abbott, R. D., Abbott, S. P., Graham, S., & Richards, T. (2002). Writing and reading: Connections between language by hand and language by eye. Journal of Learning Disabilities, 35, 39–56. https://doi.org/10.1177/002221040203500104.

  • Berninger, V. W., Cartwright, A. C., Yates, C. M., Swanson, H. L., & Abbott, R. D. (1994). Developmental skills related to writing and reading acquisition in the intermediate grades. Shared and unique functional systems. Reading and Writing: An Interdisciplinary Journal, 6, 161–196. https://doi.org/10.1007/BF01026911.

  • Borsboom, D., Mellenbergh, G. J., & Van Heerden, J. (2004). The concept of validity. Psychological Review, 111(4), 1061–1071. https://doi.org/10.1037/0033-295X.111.4.1061.

  • Breetvelt, I., Van den Bergh, H., & Rijlaarsdam, G. (1994). Relations between writing processes and text quality: When and how? Cognition and Instruction, 12(2), 103–123.

  • Carver, R. P. (1990). Reading rate. A review of research and theory. San Diego: Academic Press.

  • Castley, A. (1998). Practical spelling. New York: Learning Express.

  • Chapelle, C. A., Enright, M. K., & Jamieson, J. M. (Eds.). (2011). Building a validity argument for the Test of English as a Foreign Language™. New York and London: Routledge.

  • Csapó, B., & Nikolov, M. (2009). The cognitive contribution to the development of proficiency in a foreign language. Learning and Individual Differences, 19(2), 209–218. https://doi.org/10.1016/j.lindif.2009.01.002.

  • Deane, P., Odendahl, N., Quinlan, T., Fowles, M., Welsh, C., & Bivens-Tatum, J. (2008). Cognitive models of writing: Writing proficiency as a complex integrated skill. Princeton, NJ: ETS (Research Report RR-08-55). https://doi.org/10.1002/j.2333-8504.2008.tb02141.x.

  • Emig, J. (1971). The composing processes of twelfth graders. Urbana, IL: National Council of Teachers of English.

  • Fitzgerald, J., & Shanahan, T. (2000). Reading and writing relations and their development. Educational Psychologist, 35(1), 39–50. https://doi.org/10.1207/S15326985EP3501_5.

  • Flower, L., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition and Communication, 32(4), 365–387. https://doi.org/10.2307/356600.

  • Grabe, W., & Kaplan, R. B. (1996). Theory and practice of writing. An applied linguistic perspective. London and New York: Longman.

  • Hayes, J. R. (1996). A new framework for understanding cognition and affect in writing. In C. M. Levy & S. Ransdell (Eds.), The science of writing: Theories, methods, individual differences and applications (pp. 1–27). Mahwah, NJ: Lawrence Erlbaum.

  • Hayes, J. R. (2012). Modeling and remodeling writing. Written Communication, 29(3), 369–388. https://doi.org/10.1177/0741088312451260.

  • Hayes, J. R., Flower, L., Schriver, K. A., Stratman, J. F., & Carey, L. (1987). Cognitive processes in revision. In S. Rosenberg (Ed.), Advances in applied psycholinguistics: Volume 2, reading, writing, and language learning (pp. 176–240). Cambridge/New York: Cambridge University Press.

  • Hoover, W. A., & Gough, P. B. (1990). The simple view of reading. Reading and Writing, 2(2), 127–160. https://doi.org/10.1007/BF00401799.

  • Hox, J. J. (1999). A review of current software for handling missing data. Kwantitatieve Methoden, 62, 123–138. [Quantitative Methods].

  • Jöreskog, K., & Sörbom, D. (1996). LISREL 8: User’s reference guide. Chicago, IL: Scientific Software International Inc.

  • Joshi, R. M., & Aaron, P. G. (2000). The component model of reading: simple view of reading made a little more complex. Reading Psychology, 21(2), 85–97. https://doi.org/10.1080/02702710050084428.

  • Juel, C., Griffith, P. L., & Gough, P. B. (1986). Acquisition of literacy: A longitudinal study of children in first and second grade. Journal of Educational Psychology, 78(4), 243–255. https://doi.org/10.1037/0022-0663.78.4.243.

  • Kellogg, R. T. (1994). The psychology of writing. New York: Oxford University Press.

  • Klauda, S. L., & Guthrie, J. T. (2008). Relationships of three components of reading fluency to reading comprehension. Journal of Educational Psychology, 100(2), 310–321. https://doi.org/10.1037/0022-0663.100.2.310.

  • Lai, S. A., Benjamin, R. G., Schwanenflugel, P. J., & Kuhn, M. R. (2014). The longitudinal relationship between reading fluency and reading comprehension skills in second-grade children. Reading & Writing Quarterly, 30(2), 116–138. https://doi.org/10.1080/10573569.2013.789785.

  • Laufer, B., & Goldstein, Z. (2004). Testing vocabulary knowledge: Size, strength, and computer adaptiveness. Language Learning, 54(3), 399–436. https://doi.org/10.1111/j.0023-8333.2004.00260.x.

  • MacArthur, C. A., & Graham, S. (2015). Writing research from a cognitive perspective. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (2nd ed., pp. 24–40). New York/London: The Guilford Press.

  • McCutchen, D. (2000). Knowledge, processing, and working memory: Implications for a theory of writing. Educational Psychologist, 35(1), 13–23. https://doi.org/10.1207/S15326985EP3501_3.

  • Muthén, B., Kaplan, D., & Hollis, M. (1987). On structural equation modeling with data that are not missing completely at random. Psychometrika, 52, 431–462. https://doi.org/10.1007/BF02294365.

  • Nicolás-Conesa, F. (2012). Development of mental models of writing in a foreign language context: dynamics of goals and beliefs. Murcia: University of Murcia (Doctoral Dissertation). Retrievable at http://www.tesisenred.net/.

  • O’Reilly, T., & Sheehan, K. M. (2009). Cognitively Based Assessment of, for and as learning: A framework for assessing reading competency. Princeton, NJ: ETS (Research Report RR-09-26). https://doi.org/10.1002/j.2333-8504.2009.tb02183.x.

  • Perfetti, C. A. (1985). Reading ability. New York: Oxford University Press.

  • Perfetti, C. A., Landi, N., & Oakhill, J. (2005). The acquisition of reading comprehension skill. In M. J. Snowling & C. Hulme (Eds.), The science of reading: A handbook (pp. 227–247). Malden, MA: Blackwell Publishing. https://doi.org/10.1002/9780470757642.

  • Preacher, K. J. (2006). Testing complex correlational hypotheses with structural equation models. Structural Equation Modeling, 13(4), 520–543. https://doi.org/10.1207/s15328007sem1304_2.

  • Schoonen, R. (2011). How language ability is assessed. In E. Hinkel (Ed.), Handbook of research in second language teaching and learning (Vol. II, pp. 701–716). New York and London: Routledge.

  • Schoonen, R. (2012). The validity and generalizability of writing scores: the effect of rater, task and language. In E. van Steendam, M. Tillema, G. Rijlaarsdam, & H. van den Bergh (Eds.), Measuring writing: Recent insights into theory, methodology and practices. Series in writing (Vol. 27, pp. 1–22). Leiden-Boston: Brill.

  • Schoonen, R., Van Gelderen, A., De Glopper, K., Hulstijn, J., Simis, A., Snellings, P., et al. (2003). First language and second language writing: the role of linguistic fluency, linguistic knowledge and metacognitive knowledge. Language Learning, 53(1), 165–202. https://doi.org/10.1111/1467-9922.00213.

  • Schoonen, R., Van Gelderen, A., De Glopper, K., Hulstijn, J., Snellings, P., Simis, A., et al. (2002). Linguistic knowledge, metacognitive knowledge and retrieval speed in L1, L2 and EFL writing. A structural equation modeling approach. In S. Ransdell & M.-L. Barbier (Eds.), New directions for research in L2 writing (pp. 101–122). Dordrecht, Netherlands: Kluwer Academic Publishers.

  • Schoonen, R., Van Gelderen, A., Stoel, R., Hulstijn, J., & De Glopper, K. (2011). Modeling the development of L1 and EFL writing proficiency of secondary-school students. Language Learning, 61(1), 31–79.

  • Shanahan, T. (2006). Relations among oral language, reading, and writing development. In C. A. MacArthur & J. Fitzgerald (Eds.), Handbook of writing research (pp. 171–183). New York and London: The Guilford Press.

  • Shanahan, T. (2009). Connecting reading and writing instruction for struggling learners. In G. A. Troia (Ed.), Instruction and assessment for struggling writers: Evidence-based practices (pp. 113–131). New York: The Guilford Press.

  • Shanahan, T. (2016). Relationships between reading and writing development. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (2nd ed., pp. 194–207). New York-London: The Guilford Press.

  • Shanahan, T., & Lomax, R. G. (1988). A developmental comparison of three theoretical models of the reading-writing relationship. Research in the Teaching of English, 22(2), 196–212.

  • Shiotsu, T., & Weir, C. J. (2007). The relative significance of syntactic knowledge and vocabulary breadth in the prediction of reading comprehension test performance. Language Testing, 24(1), 99–128. https://doi.org/10.1177/0265532207071513.

  • Stanovich, K. E. (1991). Word recognition: Changing perspectives. In R. Barr, M. L. Kamil, P. B. Mosenthal, & P. D. Pearson (Eds.), Handbook of Reading Research II (pp. 418–452). New York/London: Longman.

  • Tillema, M. (2012). Writing in first and second language. Empirical studies on text quality and writing processes. Utrecht: University of Utrecht (Doctoral Dissertation). Retrievable at http://dspace.library.uu.nl/handle/1874/241028.

  • Van Gelderen, A., Schoonen, R., De Glopper, K., Hulstijn, J., Simis, A., Snellings, P., et al. (2004). Linguistic knowledge, processing speed and metacognitive knowledge in first and second language reading comprehension. A componential analysis. Journal of Educational Psychology, 96(1), 19–30. https://doi.org/10.1037/0022-0663.96.1.19.

  • Van Gelderen, A., Schoonen, R., De Glopper, K., Hulstijn, J., Snellings, P., Simis, A., et al. (2003). Roles of linguistic knowledge, metacognitive knowledge and processing speed in L3, L2 and L1 reading comprehension: A structural equation modeling approach. International Journal of Bilingualism, 7(1), 7–25.

  • Van Gelderen, A., Schoonen, R., Stoel, R. D., De Glopper, K., & Hulstijn, J. (2007). Development of adolescent reading comprehension in Language 1 and Language 2: A longitudinal analysis of constituent components. Journal of Educational Psychology, 99(3), 477–491.

  • Vellutino, F. R., Tunmer, W. E., Jaccard, J. J., & Chen, R. (2007). Components of reading ability: multivariate evidence for a convergent skills model of reading development. Scientific Studies of Reading, 11(1), 3–32. https://doi.org/10.1080/10888430709336632.

  • Zamel, V. (1983). The composing processes of advanced ESL students: Six case studies. TESOL Quarterly, 17(2), 165–188. https://doi.org/10.2307/3586647.
