Write a research proposal based on four research articles about the effects of different types of feedback on ESL students.
Procedia – Social and Behavioral Sciences 123 (2014) 389–397
doi: 10.1016/j.sbspro.2014.01.1437
TTLC 2013
An Analysis of Written Feedback on ESL Students’ Writing
Kelly Tee Pei Leng*
Taylor’s Business School, Taylor’s University, Subang Jaya, Selangor, Malaysia
Abstract
This paper provides an analysis of written feedback on ESL students’ written assignment to shed light on how the feedback
acts as a type of written speech between the lecturer and student. It first looks at two sources of data: in-text feedback and overall
feedback written by the lecturer on the students’ written assignment. Looking at how language is used in its situational context,
the feedback was coded and a model for analysis was developed based on two primary roles of speech: directive and expressive.
Based on this analysis, the paper discusses the type(s) of feedback that benefit students the most. This study provides insights as
to how the student felt with each type of feedback. It also provides insights into the possibility of developing a taxonomy of good
feedback practices by considering the views of the giver and receiver of written feedback.
© 2013 The Authors. Published by Elsevier Ltd.
Selection and peer-review under responsibility of the Organizing Committee of TTLC2013.
Keywords: Written feedback; ESL students; taxonomy of good feedback
1. Introduction and literature review
Since the early 1980s, researchers and reviewers have been investigating responses to high school students’
and undergraduate students’ writing (Brannon & Knoblauch, 1982; Faigley & Witte, 1981; Hillocks, 1986; Ziv,
1984). These studies reported that written feedback provides a potential value in motivating students to revise their
draft (Leki, 1991; Saito, 1994; Zhang, 1995) and in improving their writing (Fathman & Whalley, 1990; Ferris,
1995; Ferris et al., 1997). As a result, written feedback is the most popular method that teachers use to interact and
communicate with students (Cohen & Cavalcanti, 1990; Fathman & Whalley, 1990; Ferris, 1995, 2002; Hyland &
Hyland, 2001). It has been suggested by Straub (2000) that teachers should create the feel of a conversation by
writing comments in complete sentences; by avoiding abstract, technical language and abbreviations; by relating
their comments back to specific words and paragraphs from the students’ text; and by treating student writing
seriously, as part of a real exchange.
Feedback can be viewed as an important process for the improvement of writing skills for students (Hyland,
1990; Hyland & Hyland, 2001). This is because written feedback contains heavy informational load which offers
suggestions to facilitate improvement and provides opportunities for interaction between teacher and student
(Hyland & Hyland, 2006). Feedback can be defined as writing extensive comments on students’ texts to provide a
reader response to students’ efforts and at the same time helping them improve and learn as writers (Hyland, 2003).
The teacher provides feedback to enable students to read and understand the problems and use it to improve future
writing. Thus, written feedback is used to teach skills that help students improve their writing. At the
same time, it is hoped that such feedback will assist students in producing written text with minimum errors and
maximum clarity.
For feedback to benefit students, it must be effective. Effective feedback is focused, clear, applicable, and
encouraging (Lindemann, 2001). When students are provided with this type of feedback, they are able to think
critically and self-regulate their own learning (Nicol & Macfarlane-Dick, 2006; Stracke & Kumar, 2010). Thus,
feedback acts as a compass which gives students a sense of direction and shows them that their writing goals are
achievable.
Feedback is particularly important to students because it lies at the heart of the student’s learning process and is
one of the most common and favoured methods used by teachers to maximise learning. However, little attention has
been given to the specific types of responses teachers give their students in relation to speech acts and the extent
to which students find these helpful. Therefore, this study investigates the types of feedback given and their
usefulness in terms of speech acts.
1.1. Theoretical framework
This study uses a combination of two frameworks of speech acts which are Speech Act Theory by Searle (1969)
and Language Functions by Holmes (2001). Holmes (2001) categorised language into six language functions,
which are: directive, expressive, referential, metalinguistic, poetic and phatic. Similarly, Searle (1969) also
categorised speech by its illocutionary acts and categorised these into five illocutionary acts, which are
representatives (assertives), directives, commissives, expressives, and declarations (performatives).
These two theories provide a clear justification for treating feedback as a form of communication between the
provider and the receiver of the feedback. Through this lens, the study suggests that providing useful and
effective feedback based on speech functions may enhance the communicative functions of feedback.
In order to provide effective feedback to students, lecturers need to understand what types of feedback are useful in
students’ writing and also students’ opinions of the different types of feedback.
1.2. Purpose of the study
The purpose of this study is to explore the types of feedback that are beneficial to students. Furthermore,
the study also investigated the students’ responses to the types of feedback they found beneficial, in
terms of speech function and how language is used in feedback. The questions that guided this study were as
follows:
What types of feedback did the students receive from their lecturer?
What were the students’ responses to the various types of feedback?
1.3. Limitations of study
The first limitation was that the study focused only on written feedback on ESL academic writing. Although
some of the results may be applicable to oral feedback, the findings and interpretations of this study should be
considered in the context of written feedback.
The second limitation in this study is the overlapping of categories in the coding of feedback types which
appears problematic in most studies that categorise types of feedback. This presents a challenge to any researcher
conducting a detailed study on the types of written feedback. In order to minimize this problem, the following steps
were taken: the feedback types were carefully coded using the frameworks of Holmes (2001) and Searle (1969), each
criterion was validated with members of a peer-debriefing group, and the coding was randomly
checked by two independent raters.
The third limitation of this research was that it did not take into account the writers’ revised work because the
research did not look at the gain score of the students and what changes had been done in their revised essays.
Instead it looked at the usefulness of the written feedback in terms of speech acts and aspects of writing. Thus, the
researcher could not compare between the first draft and the final draft in order to see the changes applied in the
students’ final draft based on the feedback.
2. Methodology
2.1. Context
The present study was conducted in a writing skills course at a private university in Selangor, Malaysia. The
course was a compulsory subject offered to undergraduate students, and this class was chosen because
students were asked to complete a written assignment (1000-1200 words) which involved drafting and revising
based on their lecturer’s feedback. The course ran for one semester of 15 weeks.
Throughout the course, students were exposed to different theoretical models of writing and had to compare and
contrast different written discourse systems before applying the principles of effective writing to enhance readability
in their written text by focusing on signaling, signposting, and topic strings.
2.2. Participants
The participants of this study were 15 Malaysian students of Malay, Chinese, and Indian ethnicity. The students
were of mixed gender and aged between 19 and 20 years. In terms of language, for some of the participants
English is their first language, while for the others English is their second language. The students were in the
first semester of their first year of study.
2.3. Data Collection
The data for this study were obtained from two sources: (1) written drafts and (2) interviews with the
students. These two sources are important in this study as they provided detailed information on the usefulness of
each type of feedback.
2.3.1. Written Drafts
The drafts of the research paper were collected from both lecturers once they had finished commenting on them
in week 10; copies of the research papers were made and returned to the respective lecturers within
two days. In the drafts, the lecturers provided students with written feedback on how to improve their
research paper. Two types of feedback were provided: in-text feedback and overall feedback. The in-text feedback
included all comments written by the lecturer in the text and it was mostly written in the margin of the text. The
feedback given was considered as spontaneous thoughts of the lecturers and it acted as a dialogue between the
students and their lecturers. The overall feedback took the form of a letter-like text in which both
lecturers summarized their main concerns and offered more general comments on the written draft. The in-text and
overall feedback was transcribed word for word in order to compile a comprehensive list of the lecturers’ comments.
2.3.2. Interviews
The interviews took place in week 16 of the semester, after the feedback had been compiled from the written drafts. Each
interview lasted between 20 and 30 minutes. The interviews were audio-recorded and field notes were taken by hand.
During the interviews, the students had their original research drafts with them while the researcher had photocopies
of them. This made it easier to discuss their responses to specific comments and cross-reference their revisions based
on the suggestions made by their lecturers. The interviews were later transcribed verbatim for analytical purposes.
3. Development of a Model for Feedback Analysis
The study was guided by the constant comparative method set out by Glaser and Strauss (1967) by considering
open, axial, and selective coding strategies (Strauss & Corbin, 1998). Analysis occurred at the same time as data
collection. The data from the written texts were arranged and coded into categories, and the feedback was
categorised according to speech act functions (Table 1).
First, the coding categories for the speech acts framework were identified through reading the written texts.
The main functions of the feedback types were derived from the speech act/language functions and the sub-
categories were adapted from earlier studies (see Ferris et al., 1997; Kumar & Stracke, 2007). The in-text and
overall feedback were read through individually to develop a system of categorization. In order to develop an
appropriate categorization, it took several rounds of individual categorization followed by intensive discussions with
two other post-graduate students and a senior lecturer until a consensus on an appropriate categorization model was
reached. The data were analysed in terms of what the comments did for the students; hence, it was appropriate to
code the feedback using two functions of speech: directive and expressive (Holmes, 2001; Searle,
1969).
Table 1. Feedback Categories for Speech Act Functions
Main Function   Subcategory     Example
Directive       Instruction     Preview your points here.
Directive       Clarification   How does this support your stand? Make it clear to your reader.
Expressive      Approval        Well supported with the literature.
Expressive      Disapproval     I’ve stopped reading here as I don’t see a flow of argument!
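To make the coding scheme concrete, the sketch below expresses Table 1 as a small data structure together with a toy keyword heuristic. It is illustrative only: the study coded the comments manually, and the keywords, function names, and example rules here are assumptions introduced for this sketch, not part of the original procedure.

```python
# Illustrative sketch only: the study coded feedback manually; this hypothetical
# keyword heuristic merely shows how the Table 1 scheme could be operationalised.
from dataclasses import dataclass

SCHEME = {
    ("directive", "instruction"),
    ("directive", "clarification"),
    ("expressive", "approval"),
    ("expressive", "disapproval"),
}

@dataclass
class CodedComment:
    text: str
    main_function: str   # directive | expressive
    subcategory: str     # instruction | clarification | approval | disapproval

def code_comment(text: str) -> CodedComment:
    """Assign a comment to one category of the speech-act scheme (toy heuristic)."""
    t = text.lower()
    if "?" in t or t.startswith(("how", "why", "what")):
        main, sub = "directive", "clarification"   # asks the writer for more information
    elif any(w in t for w in ("well", "good", "clear")):
        main, sub = "expressive", "approval"       # highlights a strength
    elif any(w in t for w in ("don't", "don’t", "not", "stopped reading")):
        main, sub = "expressive", "disapproval"    # highlights a weakness
    else:
        main, sub = "directive", "instruction"     # default: tells the writer what to do
    assert (main, sub) in SCHEME
    return CodedComment(text, main, sub)

print(code_comment("Preview your points here."))
print(code_comment("How does this support your stand?"))
```

In the actual study the assignment of each comment to a category was a matter of human judgement, cross-checked with peer debriefers and independent raters as described in section 1.3.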
4. Findings and discussions
4.1. Overview of the feedback
The findings from the written drafts indicate that the two forms of feedback commonly received by the
students were directive and expressive feedback. A total of 366 instances of feedback were identified in the students’
written drafts. The majority of the written feedback fell into the directive category (77%) (see Table 2). A directive
is an act intended to get the receiver of the message to do something (Holmes, 2001; Searle, 1969). The remaining
feedback fell into the expressive category (23%); an expressive is an act by which the speaker expresses his or her
feelings (Holmes, 2001; Searle, 1969).
Table 2. Distribution of Feedback Based on Speech Act Functions
Category     Number of feedback instances   Percentage (%)
Directive    280                            77
Expressive   86                             23
Total        366                            100
Note: Percentages may not sum exactly to 100 due to rounding. This applies to all tables in this paper that report
frequencies.
In this study, the students found directive feedback useful and preferred it to the other categories of
feedback. Directive feedback is specific and well focused. The feedback the students received was
mostly directive in nature, telling them exactly how to improve their writing: ‘Structure your argument –
heading/sub-heading to improve readability’ and ‘Preview your main points here’ are examples. It can be concluded
that the students were relatively unskilled writers and therefore valued explicit feedback. This finding concurs with
Ziv’s (1984) study, which found that students learning to write need specific directions from their teachers on how
to progress and meet their writing goals.
However, the findings of this study differ from what previous response theorists suggest as best practice
(Lunsford, 1997; Sommers, 1982; Straub, 1996, 2000). It has been suggested that teachers should write fewer
directive comments and embrace facilitative comments instead, because facilitative comments give students more
control, ownership, and responsibility (Lunsford, 1997; Sommers, 1982; Straub, 1996, 2000). This was not the case
with these students: the directive feedback provided them with developmental experiences, enabling them to
revise their essays and making them aware of the strengths and weaknesses of their writing.
4.2. Breakdown of the sub-categories of feedback
Table 3 shows the breakdown of the sub-categories of directive and expressive feedback. Four sub-categories of
feedback were evident in the data: directive-instruction, directive-clarification, expressive-approval, and
expressive-disapproval.
Table 3. Frequency of Sub-Categories of Feedback
Type of Feedback          Number of feedback instances   Percentage (%)
Directive-instruction     191                            52
Directive-clarification   89                             24
Expressive-disapproval    69                             19
Expressive-approval       17                             5
Total                     366                            100
Note: Percentages may not sum exactly to 100 due to rounding.
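As a quick arithmetic check of Tables 2 and 3, the short sketch below recomputes the reported percentages from the raw frequencies with whole-number rounding; the category keys are simply the labels from Table 1.

```python
# Minimal sketch: recomputing the percentages in Tables 2 and 3 from the raw
# frequencies, using the same whole-number rounding noted under the tables.
from collections import Counter

counts = Counter({
    ("directive", "instruction"): 191,
    ("directive", "clarification"): 89,
    ("expressive", "disapproval"): 69,
    ("expressive", "approval"): 17,
})
total = sum(counts.values())                       # 366 instances of feedback

for (main, sub), n in counts.most_common():        # Table 3: 52%, 24%, 19%, 5%
    print(f"{main}-{sub}: {n} ({round(100 * n / total)}%)")

main_totals = Counter()
for (main, _), n in counts.items():
    main_totals[main] += n
for main, n in main_totals.most_common():          # Table 2: directive 280 (77%), expressive 86 (23%)
    print(f"{main}: {n} ({round(100 * n / total)}%)")
```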
4.2.1. Directive: Instruction
The most commonly received feedback was directive-instruction feedback (52%; see Table 3). Instruction
feedback instructs students to make changes which are necessary for the text. The students found directive-instruction
useful in their revision as it provided them with a sense of direction: they knew exactly what
needed to be corrected. One of the students mentioned, “I feel very happy because my lecturer provides me a
way on how I can improve my writing when she said like, ‘tell me what Big Five means, then explain how it
concerns to the matter you described’. So he is like in a way trying to tell me how to revise what I have written
before and see whether the ideas are related to this particular paragraph.” This clearly shows that feedback
offers a sense of direction to the student (Hyland & Hyland, 2006). The students also mentioned that they knew
what and where they had gone wrong in their writing and how they could improve it through instruction feedback, as
one of them said, “She highlighted the things which are not right and told me how to correct the work”. This
supports Hattie and Timperley’s (2007) claim that a teacher who provides effective feedback is one who highlights
information about how the writer can progress with the task. It further supports Ogede’s (2002) view
that directive, specific comments save students from a “gloomy future” (p. 108). He also argues that directive
comments are effective because students need their teachers to share their knowledge about effective writing by
telling in clear, certain terms that “rigorous commentary holds the key to the needed remedial action… the instructor
cannot afford to leave the students with an impression that the suggestions offered to improve their writing are
optional” (p. 108).
4.2.2. Directive: Clarification
The second most common type of feedback was directive-clarification feedback (24%; see Table 3).
Clarification feedback consists of comments that seek further information from the students by asking for a clearer
explanation of ideas already mentioned in the paper. Directive-clarification feedback provided
specific directions to students on how to revise their essays. The writers understood what was being addressed in
clarification feedback and were clear on what they were supposed to do upon reading it.
This supports Straub’s (1997) study, which found that students preferred comments which are “specific, offer
direction for revision, and come across as help” (p. 112). Most clarification feedback begins with a question
followed by a short explanation of what was wrong with the sentence or paragraph. Examples of clarification
feedback are ‘How does this support your stand- make clear to the reader’ and ‘Why do you think all these are
effective – there are also researches who indicate the negative effects of group work’. The finding also supports
Lindemann’s (2001) claim that effective feedback should be focused, clear, applicable, and encouraging. Hyland and
Hyland (2006) mentioned that in order for improvement to take place, feedback should be loaded with information.
Thus, in line with Ryan’s (1997) view of lecturers’ feedback, it can be concluded that the feedback helped the
writers understand how well they were writing and how they might further develop their writing.
4.2.3. Expressive: Disapproval
Expressive-disapproval feedback was the third most commonly provided feedback (19%). The students in this study
valued disapproval feedback, which highlights the negative points of their essays. They welcomed disapproval
feedback because they found it constructive and it helped them improve their writing; it also increased
their self-confidence in their writing (Goldstein, 2004). One student mentioned that disapproval feedback “…
doesn’t affect me as I’m more concerned about what he thought about my paper” because she believed her lecturer
had the best interest of her writing in mind; hence, she viewed the comments as constructive to her rewriting. This
finding contrasts with the students in Weaver’s (2006) study, who reported that receiving too many negative
comments was demoralizing, while the students in Straub’s (1997) study believed the effect of a critical comment
depended on its tone. The students in the present study did not mind having problems in their writing pointed out;
they were simply against having them pointed out in highly judgmental, harsh, or authoritative ways. One of the
students pointed out that “His feedback is constructive, so to me this is not damaging” and that “this is not
something to be sensitive about because for me I take criticism positively. If it is good for me then I should be
able to accept it”. On the other hand, this finding supports Button’s (2002) study, which argued that students
appreciate and benefit from constructive criticism. In Button’s study, she found that her students benefitted from
constructive criticism as they consistently identified their best learning experiences as those that challenged them
beyond their current abilities. As a result, the students came to see feedback itself as a process of discovery, as
they were able to discover new meaning in disapproval feedback.
4.2.4. Expressive: Approval
Expressive-approval feedback was the least frequently received type of feedback (5%). Approval feedback
refers to feedback which highlights the strengths of the essay drafts. The students in this study valued approval
feedback because it gave them a dose of motivation in their rewriting. One of the students mentioned, “I
didn’t know that I could write, since this is my first semester. And I will remember the good things which I’ve done
in this paper and apply them for my future writing”. Approval feedback motivated the students in their revision and
showed them what was and was not working in their paper. A student highlighted, “Ok, this is like
a plus point …. and I’m quite glad that he actually pointed out not only the weaknesses on this paper but he also
pointed out the strength”, when she received approval feedback from her lecturer. This substantiates Bardine’s (1999)
view that students use positive feedback to help them select effective aspects of their text which they can model
in future writing. Bardine (1999) showed how positive feedback on students’ papers gave them the opportunity to see
what they were doing well and enabled them “to reproduce successful parts of papers in future drafts and essays”
(p. 7). When the students were able to produce successful drafts, it boosted their confidence and increased their
enjoyment of writing. This clearly shows that the feedback provided
“information about the gap between the actual level and the reference level of a system parameter which is used to
alter the gap in some way” (Ramaprasad, 1983, p. 4). It further supports Beedles and Samuels’s (2002) study, which
found that a few of their surveyed students considered praise helpful in their writing. Similarly, Gee (2006)
found that praise increased students’ confidence, pride, and enjoyment in their work. Praise
feedback can inspire and motivate writers to write better, as teachers often have the potential to motivate students
to revise their drafts (Leki, 1991) and improve their writing skills (Fathman & Whalley, 1990; Ferris, 1995).
5. Conclusion
The findings from this study clearly indicate that the written feedback provided to the students was helpful and
useful in their essay revision because the feedback was clear, direct, and loaded with information. Hence, the
feedback offered a sense of direction to the students (Hyland & Hyland, 2006). The feedback was also effective
because the students were able to attend to the revision of their second draft well, which further supports Hattie
and Timperley’s (2007) claim that effective feedback carrying the right load of information can help a
student in the revision process. The feedback was not only clear and effective; it also alerted the
students to their current writing skills and to how the feedback could further develop their writing (Ryan, 1997). The
students were able to advance with their essay revision because they were provided with constructive feedback,
which inspired them to revise better and, at the same time, built their self-confidence in writing (Goldstein, 2004).
Secondly, the element of motivation was also present in this study. Motivation is an important feature of
feedback in the concept of active learning (Butler, 1988). The lecturer’s feedback inspired and motivated the
students to write better because a lecturer often has the potential to motivate students to revise their drafts (Leki,
1991) and improve their writing skills (Fathman & Whalley, 1990; Ferris, 1995). This indicates that feedback and
motivation work hand in hand. In this study, the lecturer’s feedback played an important role in motivating and
encouraging the students to revise. The constructive feedback inspired them to write better revised drafts, thereby
increasing their self-confidence in their writing (Goldstein, 2004).
Lastly, the feedback also enhanced self-regulated learning (Nicol & Macfarlane-Dick, 2006). Self-regulated
learning seems to take place when the student receives feedback on a draft from the lecturer and is expected
to revise and make the relevant amendments based on the written feedback provided. The written feedback
gave the students new ideas and helped them understand what the lecturer expected of an essay that reflects their
ideas clearly. It should be noted that feedback offers a sense of direction to the writer (Hyland & Hyland, 2006).
Therefore, it can be argued that without well-directed feedback, the students might not have been able to comprehend
the feedback and achieve their writing goal of producing an improved version of their essay. It can be concluded that
the written feedback provided had a great impact on the students’ writing and also on their attitude towards writing
(Leki, 1990).
6. Implications
Three implications emerged from this study and they are based on what the students in this study found both
useful and lacking in the written feedback. The implications are to write enough information in the feedback, to
provide instruction feedback and to provide specific praise feedback.
Firstly, lecturers could write enough information in their comments. When lecturers give feedback, they should
“say enough for students to understand what you mean” (Lunsford, 1997, p. 103). In order for
feedback to be effective, lecturers must provide feedback which is loaded with information so that
students can respond and act on it (Hyland & Hyland, 2006).
Secondly, lecturers could provide instruction feedback when responding to students. This study found
that the writers liked directive-instruction feedback: they benefitted much from it and it gave them a sense of
direction (Hyland & Hyland, 2006), as students want to know exactly what is and is not
working in their paper (Ogede, 2002).
Thirdly, lecturers could provide approval feedback which is specific. As discovered in this study, some of the
students did not know why their lecturer praised their writing. Therefore, lecturers should provide
specific praise so that students know what they did well in the paper, can apply it to future writing, and gain
confidence in their writing (Straub, 1997).
In addition to the above implications, training in providing effective feedback should be offered
so that lecturers can give effective feedback to their students. Universities could provide lecturers
with workshops and talks on giving effective feedback. This study shows that written feedback
assisted the students in their essay revision and that they wanted written feedback which is specific and loaded
with information.
References
Bardine, B. A. (1999). Students’ perceptions of written teacher comments: What do they say about how we respond to them? High School
Journal, 82(4), 1-12.
Beedles, B., & Samuels, R. (2002). Comments in context: How students use and abuse instructor comments. In O. Ogede (Ed.), Teacher
commentary on student papers: Conventions, beliefs, and practices (pp. 11-20). Westport, CT: Bergin & Garvey.
Brannon, L., & Knoblauch, C. H. (1982). On students’ rights to their own texts: A model of teacher response. College Composition and
Communication, 33(2), 157-166.
Butler, D. L. (1988). Enhancing and undermining intrinsic motivation: the effects of task-involving and ego-involving evaluation on interest and
involvement. British Journal of Educational Psychology, 58, 1-14.
Button, M. D. (2002). Writing and relationship. In O. Ogede (Ed.), Teacher commentary on student papers: Conventions, beliefs, and practices
(pp. 57-62). Westport, CT: Bergin & Garvey.
Cohen, A. D., & Cavalcanti, M. C. (1990). Feedback on compositions: Lecturer and student verbal report. In B. Kroll (Ed.), Second Language
Writing (pp. 155-177). Cambridge, UK: Cambridge University Press.
Faigley, L., & Witte, S. (1981). Analysing revision. College Composition and Communication, 32, 400-414.
Fathman, A., & Whalley, E. (1990). Teacher response to student writing: Focus on form versus content. In B. Kroll (Ed.), Second language
writing: Research insights for the classroom (pp. 178-190). Cambridge: Cambridge University Press.
Ferris, D. R. (1995). Student reactions to teacher response in multiple-draft composition classrooms. TESOL Quarterly, 29(1), 33-53.
Ferris, D. R. (2002). Treatment of error in second language student writing. The University of Michigan Press.
Ferris, D. R., Pezone, S., Tade, C. R., & Tinti, S. (1997). Teacher commentary on student writing: Descriptions & implications. Journal of Second
Language Writing, 6(2), 155-182.
Gee, T. C. (2006). Students’ response to teacher comments. In R. Straub (Ed.), Key works on teacher response: An anthology (pp. 38-45).
Portsmouth, NH: Boynton/Cook.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. New York: Aldine De Gruyter.
Goldstein, L. M. (2004). Questions and answers about teacher written commentary and student revision: teachers and students working together.
Journal of Second Language Writing, 13(1), 63-80.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
Hillocks, G. (1986). Research on written composition: New directions for teaching. Urbana, IL: ERIC Clearinghouse on Reading and
Communication Skills and The National Conference on Research in English.
Holmes, J. (2001). Speech functions, politeness and cross-cultural communication. In G. Leech & M. Short (Eds.), An Introduction to
Sociolinguistics (2nd ed., pp. 258-283). Malaysia: Longman.
Hyland, F., & Hyland, K. (2001). Sugaring the pill: Praise and criticism in written feedback. Journal of Second Language Writing, 10(3), 185-212.
Hyland, K. (1990). Providing productive feedback. ELT Journal, 44(4), 279-285.
Hyland, K. (2003). Second language writing. Cambridge: Cambridge University Press.
Hyland, K., & Hyland, F. (2006). Feedback on second language students’ writing (Vol. 39). New York: Cambridge University Press.
Kumar, V., & Stracke, E. (2007). An analysis of written feedback on a PhD thesis. Teaching in Higher Education, 12(4), 461–470.
Leki, I. (1990). Coaching from the margins: Issues in written response. In B. Kroll (Ed.), Second Language Writing (pp. 57-68). Cambridge, UK:
Cambridge University Press.
Leki, I. (1991). The preferences of ESL students for error correction in college-level writing classes. Foreign Language Annals, 24, 203-218.
Lindemann, E. (2001). A rhetoric for writing teachers (4th ed.). New York: Oxford University.
Lunsford, R. F. (1997). When less is more: Principles for responding in the disciplines. New Directions for Teaching and Learning, 69, 91-104.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback
practice. Studies in Higher Education, 31(2), 199 – 218.
Ogede, O. (2002). Rigor, rigor, rigor, the rigor of death: A dose of discipline shot through teacher response to student writing. In O. Ogede (Ed.),
Teacher commentary on student papers: Conventions, beliefs, and practices (pp. 103-118). Westport, CT: Bergin & Garvey.
Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28, 4-13.
Ryan, K. (1997). Lecturer comments and student responses. Directions in Teaching and Learning, 69, 5-13.
Saito, H. (1994). Teachers’ practices and students’ preferences for feedback on second language writing: A case study of adult ESL learners.
TESL Canada Journal, 11, 46-70.
Searle, J. R. (1969). Speech acts. An essay in the philosophy of language. Cambridge: Cambridge University Press.
Sommers, N. (1982). Responding to student writing. College Composition and Communication, 33(2), 148-156.
Stracke, E., & Kumar, V. (2010). Feedback and self-regulated learning: Insights from supervisors’ and PhD examiners’ reports. Reflective
Practice, 1, 19-32.
Straub, R. (1996). The concept of control in teacher response: Defining the varieties of “directive” and “facilitative” commentary. College
Composition and Communication, 47(2), 223-251.
Straub, R. (1997). Students’ reactions to teacher comments: An exploratory study. Research in the Teaching of English, 31(1), 91-119.
Straub, R. (2000). The student, the text, and the classroom context: A case study of teacher response. Assessing Writing, 7(1), 23-55.
Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory (2nd ed.). Thousand
Oaks, CA: Sage.
Weaver, M. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education,
31(3), 379-394.
Zhang, S. (1995). Reexamining the affective advantage of peer feedback in the ESL writing class. Journal of Second Language Writing, 4, 209-
222.
Ziv, N. D. (1984). The effect of teacher comments on the writing of four college freshmen. In R. Beach & L. S. Bridwell (Eds.), New directions
in composition research (pp.362-380). New York: Guilford Press.
Teacher and student perceptions of second language writing feedback:
A survey of six college ESL classes and their teachers
A Scholarly Paper
Submitted in Partial Fulfillment of
The Master’s Degree in Second Language Studies
Department of Second Language Studies
University of Hawai‘i at Mānoa
First Reader: Lourdes Ortega
Second Reader: J.D. Brown
May, 2009
Abstract
Research on second language writing has focused on feedback practices and student revision
processes. The purpose of this study is to examine students’ and teachers’ feelings and thoughts
with regard to written feedback, and to compare these perceptions with teacher self-assessments.
Both teachers and students in university-level English as a second language (ESL) writing courses
were surveyed about their perceptions of teacher written feedback. Results indicated that they
showed no particular preference for any single type of feedback and that students were generally
satisfied with the type and amount of feedback that they were given. Additionally, both teachers
and students placed the burden of error correction on each other. These and other findings are
discussed in light of the context and suggest that teachers should be aware of their students’
perceptions when employing their feedback approaches.
Background for the Present Study
Providing effective feedback is one of the many challenges that any writing teacher faces.
In a second language classroom, feedback practices can be even more challenging; in addition to
organization and punctuation problems, grammar feedback is also a concern. Teachers and
students agree that teacher written feedback is a crucial part of the writing process (Cohen &
Cavalcanti, 1990). Teachers want to give feedback that will encourage and challenge students to
be better writers, but do not always know how the feedback that they are providing is perceived
by the students, or how effective it is. Since reading student work and giving feedback is a very
time-consuming process, teachers may feel frustrated when the feedback they offer is not
followed by the students. Even when the teacher’s system for giving feedback is clear and
consistent, oftentimes teachers do not know whether students understand their practices. This
study examines teachers’ perceptions of feedback in the form of error correction and follow-up
practices and compares these with students’ perceptions and beliefs about these practices. In a
survey of 47 students and six teachers in a university English as a second language setting, I
explored several questions about the feedback amount, type, beliefs, and the degree of
satisfaction. The purpose of this paper is to examine and compare the relationship between
students’ and teachers’ perceptions of written feedback in the second language classroom.
Does Feedback Matter?
There has been much debate among researchers on second language writing about the
effects of different kinds of feedback. One of the hottest issues in the past fifteen years has been
whether grammar feedback is either necessary or helpful for L2 learning. As one main opponent
of grammar feedback, Truscott (1996) concluded that all forms of error correction of L2 student
writing are ineffective and should be abandoned. Ferris (1999) countered Truscott’s argument by
delineating the ways that learners use feedback to improve their writing. While this debate is
interesting, most writing teachers give both grammar and content feedback to their students.
Whether or not grammar feedback is effective, students expect it and believe that it will help
their writing (Hyland, 1998; Casanave, 2003).
Other research investigated other aspects of feedback, such as the effects of manipulating
the type of feedback given by teachers (Bitchener, Young, & Cameron, 2005; Leki, 1991). ESL
writing teachers’ actual response practices were examined and compared with research on L1
writing teachers’ practices (Zamel, 1985). More recently, researchers have called into question
the methods for researching writing (Guénette, 2007; Truscott, 2007) and called for researchers
to be more exact in their methods.
Ferris (1997) introduced a new approach to research in this area. The approach made
connections between teacher feedback and the revisions the students made as a result. Ferris did
not manipulate the type of feedback given, but instead classified comments made by the teacher
according to length, functional type, and use of hedges. Revisions made by students were rated
according to whether they were substantive or minimal and also whether they had a positive or
negative effect. Ferris found that marginal requests for information and summary comments on
grammar appeared to lead to the most substantive revisions. Ashwell (2000) used Ferris’ model
to test Zamel’s (1985) hypothesis that two or more drafts are an important part of the writing
process as a whole. In using this method, Ashwell examined whether content followed by form is
the best way to provide feedback to students. He found that there is no significant overall
difference in papers that are given form feedback followed by content feedback as opposed to
content followed by form. All this research on the effectiveness of actual feedback practices begs
the equally important question of what the specific preferences might be of those receiving and
giving feedback in the classroom, namely students and teachers.
A New Perspective: Perceptions of Feedback
The perspective of writing students has been investigated in several ways such as
students’ preferences and reactions to feedback (Cohen, 1987; Cohen & Cavalcanti, 1990; Ferris,
1995). Studies on students’ perceptions of written feedback have shown that they have strong
opinions and preferences about the amount and type of feedback given by their teachers. Zhang
(1995) found that ESL students greatly value teacher written feedback and consistently rate it
more highly than alternative forms such as peer feedback and oral feedback in writing
conferences.
An important study by Cohen (1987) surveyed 217 students in a university setting on the
amount and the effectiveness of teacher-written feedback. He found that students prefer feedback
on local issues like sentence-level feedback such as grammar rather than global feedback such as
end comments. In a similar study, Ferris (1995) surveyed 155 students and added to Cohen’s
findings that students pay more attention to feedback given during the writing and revising
process rather than feedback given on a final draft. These findings show students’ strong
preference for local feedback and also demonstrate how much students use this feedback to
improve their writing.
In researching whether students understand feedback in the same way that the teacher
intended it, Hyland (2003) found that students often misunderstood their teachers’ comments or
suggestions. Hyland and Hyland (2001) investigated the role of praise and found that it was often
perceived by students as a way to soften criticism rather than to encourage them to continue
writing.
The above research on student preferences and perceptions about feedback has been the
main focus of research on L2 feedback perceptions. Teachers’ perceptions, in the form of self-
assessments or self-reports of feedback, are rarely studied, and only a few have been compared with
students’ perceptions. There are numerous variables and factors that affect feedback
practices, and recently there have been calls for more research to investigate feedback in terms of
comparing student perceptions with teacher self-assessments and actual teacher feedback
(Goldstein, 2001, 2006).
A seminal study that relates student and teacher feedback perceptions was conducted by
Cohen and Cavalcanti (1990). In examining teachers’ self-assessments with student perceptions
and actual written feedback in this study in a university EFL setting, they found a strong
relationship between teacher self-assessments and actual performance in all of the categories that
they examined (content, organization, vocabulary, grammar, and mechanics). In an innovative,
more recent study in an EFL context in Hong Kong, Lee (2003) compared teachers’ feedback
beliefs with teachers’ feedback practices. She found that although many teachers believe in
giving selective error correction feedback, most teachers surveyed still mark papers
comprehensively. Lee (2004) also compared teachers’ and students’ beliefs in Hong Kong. She
employed a similar approach to her first study in researching teacher beliefs, but added the extra
element of comparing teacher beliefs, attitudes, and perceptions to student beliefs, attitudes, and
perceptions. She found that both students and teachers in this context preferred comprehensive
marking and that teachers use only limited strategies in their feedback practices. Even more
recently, Montgomery and Baker (2007) used a similar approach to that of Cohen and Cavalcanti
(1990) but with a much larger sample size: while Cohen and Cavalcanti used only one teacher
and nine students, this study surveyed 98 students and ten teachers. They found that teachers’
perceptions of the amount of feedback that they give are generally lower than students’
perceptions. In investigating the relationship between the teachers’ beliefs and actual feedback
provided, they found, in agreement with Lee (2003) that teachers may not have provided
feedback in the way that they believed they should.
Purpose
The studies above have investigated several important areas of feedback and have laid the
foundation and opened the door for more research. Hyland (2006) encouraged research to “go
beyond the individual act of feedback itself to consider the factors that influence feedback
options and student responses” (p. 10). While there has been research on practices, types,
effectiveness, interpretation of feedback, and so on, few studies have been done about the
affective factors that influence feedback, namely the feelings of satisfaction with amount and
type. The present study seeks to build upon the previous research by examining how students
feel about the amount and type of feedback that they are getting and how teachers perceive their
students’ feelings. More precisely, I examine the relationship between teacher self-
assessments and student perceptions of teacher written feedback by addressing the following
questions:
1. How similar or different are students’ and teachers’ perceptions in regards to feedback amount
and type?
2. How content are students with the amount of feedback that they are receiving, and by
comparison do teachers believe that their students are satisfied or dissatisfied with their
feedback?
3. How favorably are various techniques for the delivery of feedback viewed by students?
4. Whose job is it to correct errors, according to students and teachers?
The answers to these questions will help teachers to better understand the effects of their
feedback on students. They may also help to inform teachers about which type of feedback is
more effective in their context.
Method
Context of the Present Study
This study takes place in an ESL context in a university English Language Institute
(ELI). Students at the ELI consist of international and immigrant students for whom English is
not their native language. The main purpose of the ELI is to provide English instruction to
facilitate these students’ academic studies. ELI teachers are mainly graduate-assistant instructors
chosen from MA and PhD candidates in this university’s Second Language Studies department.
ELI classes are semester-long and consist of 2.5 hours of instruction per week. The teachers and
students surveyed were currently teaching or enrolled in an ELI writing class. The survey took
place about three weeks before the end of the semester, so feedback practices were most likely
well-implemented by this point.
In the process of developing this research proposal, it was necessary to get approval from
the Director of the ELI in order to conduct a research project at the ELI. The steps for approval
included reading research that has already been completed at the ELI so as not to create an
overlap, and having the research proposal and the instruments (surveys) approved by both the
advising professor and the ELI director. This study was also approved by the university’s
Committee on Human Studies, which included submitting a summary of the proposed research,
the instruments (surveys), and signed approval of the advising professor.
There were seven writing classes being held at the ELI at the time the study was
conducted. All seven teachers elected to participate in the study. I came to each face-to-face class
during the last five minutes of instruction and explained the survey, then returned at the
beginning of the following class to collect the surveys. One of the participating classes was an
online class, and the survey was explained in an email. In analyzing the results from the courses,
I found that feedback practices in this online class were quite different from the other classes
surveyed. Moreover, no response was received from the teacher of the online course and few
responses were received from the students. Since an important aspect of this paper is to compare
the students’ perceptions with the teacher’s, I consider only the data collected from the six face-to-face ELI
writing classes.
The three writing classes (ELI 73, ELI 83 and ELI 100) surveyed contain students of
varying levels of proficiency, within a range of levels advanced enough to take university classes
(a score of 500 on the paper-based TOEFL is required to enter into the university). ELI 73
consists of a mix of undergraduate and graduate students. ELI 83 is an advanced course for
graduate students only, while ELI 100 is an advanced course for undergraduates. The classes also
have different course objectives. ELI 100 must be taken by non-native speakers of English as an
alternative to English 100, the required English course for undergraduate students at this
university.
Table 1
Writing Courses in the ELI¹
Intermediate: ELI 73
Advanced: ELI 83 (Graduate Students Only); ELI 100 (Undergraduate Students Only)
Participants
The participants in the present study include students and teachers from two ELI 100
classes, two ELI 83 classes, and two ELI 73 classes. The predominant first language of the ELI
students surveyed is Japanese, followed by Korean and Chinese. While students in ELI 100 are
fairly similar in age, the age range in graduate-level ELI 83 is a bit more diverse. Because
ELI 73 includes both graduate and undergraduate students, a wide range of ages is
represented in that class as well.
Table 2
Participants by Age and Language
Course   Number of Participating Students   Age Range   Median Age   Native Language Background
73       13                                 18-37       24.8         5 Japanese, 3 Korean, 3 Chinese, 1 Tibetan, 1 Arabic
83       19                                 23-34       27.7         6 Japanese, 8 Chinese, 3 Thai, 1 Vietnamese, 1 Bahasa Indonesia
100      15                                 19-24       20.8         8 Japanese, 3 Korean, 1 Chinese, 1 Cantonese, 1 Portuguese, 1 Swedish
All participating students and teachers provided informed consent (see Appendix A for
consent form). An important measure in this research was ensuring the confidentiality of students
and teachers and making sure that they knew their rights to participate or choose not to
participate with no penalty.
1 Adapted from ELI website: http://www.hawaii.edu/eli/students/newstudents.html
Survey Design
The data were elicited by means of a questionnaire based on a hybrid of the surveys used
in Cohen (1987), Ferris (1995), Montgomery and Baker (2007), and Lee (2004). The final
instruments are shown in Appendices B and C. The surveys focused on three areas: feedback
amount, feedback type, and feedback beliefs.
In their questionnaire, teachers were asked to self-assess how much of each type of
feedback (ideas/content, organization, vocabulary, grammar, and mechanics) they gave on
compositions throughout the past semester. They were also asked about their grammar correction
practices and whether students knew how to understand their markings.
The students were asked questions similar to those asked of their teachers in the survey. In addition, they
were also asked how much they consider their teachers’ comments on their essays, if they are
satisfied with the amount of feedback they receive, if the teacher uses a correction code, to what
degree they understand the teacher’s correction code, and whose job they feel it is to find and
correct errors.
One important element in these surveys is that the same questions are often asked
separately about 1st/2nd drafts and final drafts. Zamel (1985) called these drafts “cycles of
revision” (p. 95) and suggested having stages in the feedback process. It is now common practice
amongst writing teachers to require at least one or two drafts plus a final version in the
writing process. Therefore, a few questions are posed in two parts, for the students and teachers to
differentiate between feedback during the beginning or end of the cycle. Some teachers may
believe that feedback is more or less effective at certain points in the writing cycle, and may
provide different amounts and types of feedback respectively. It should also be noted that the
teachers’ survey had more questions about feedback beliefs, since the teachers decide what type
and how much feedback is appropriate for each class or for each individual student.
Data Collection and Analysis
Out of the 73 surveys distributed to these six classes, I received 47 in return, a response
rate of 64%. The high response rate might be attributed to the fact that in my role as the
researcher I approached the class in person to explain and distribute the survey during class time
and came to collect them at the beginning of the next class. The data was collected and entered
by two researchers, and cross-checked for accuracy. I will use descriptive statistics in presenting
the results of the surveys. In anticipation that the data from these classes would represent
different perspectives, the data will be compared as a whole, grouped by course, and as
single classes.
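As a purely hypothetical illustration of this descriptive-statistics step, the sketch below tallies returned surveys as a whole and by course; the record fields and the few example values are invented for the sketch and are not the actual survey data.

```python
# Hypothetical sketch of the descriptive-statistics step; the records below are
# invented placeholders, not the study's data (the paper reports 47 of 73 surveys
# returned, a 64% response rate).
from collections import defaultdict

SURVEYS_DISTRIBUTED = 73
responses = [
    {"course": "ELI 73", "satisfied": "yes"},
    {"course": "ELI 83", "satisfied": "somewhat"},
    {"course": "ELI 100", "satisfied": "yes"},
    # ... one record per returned survey
]

print(f"Response rate: {len(responses) / SURVEYS_DISTRIBUTED:.0%}")

by_course = defaultdict(list)
for r in responses:
    by_course[r["course"]].append(r)
for course, rows in sorted(by_course.items()):
    satisfied = sum(r["satisfied"] == "yes" for r in rows) / len(rows)
    print(f"{course}: {len(rows)} responses, {satisfied:.0%} answered 'yes' to satisfaction")
```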
The remainder of this paper is devoted to the presentation of my data analysis and to the
discussion of the findings related to my four research questions. As I present and discuss results,
I will highlight interesting findings and draw implications to pedagogy relative to feedback
practices. The paper will conclude with an acknowledgement of the limitations of the study and
some implications.
Results and Discussion
RQ 1: How similar or different are students’ and teachers’ perceptions in regards to feedback
amount and type?
Deciding feedback amount is an important part of the feedback process. As mentioned
above, Cohen (1987) found that students prefer more feedback in certain areas such as grammar
and less on global issues. In the present study, teachers were asked to self-assess how much
feedback they gave on compositions throughout the past semester. As shown in Appendix C, the
feedback was divided into types; ideas/content, organization, vocabulary, grammar, and
mechanics. They were asked to choose an amount for each type of feedback that was an average
of the feedback they generally gave to their students. Basically, teachers were asked to estimate
the total amount of feedback given on first and final drafts of their students’ compositions and
rank the amount of feedback on a Likert scale with choices of ‘‘none,’’ ‘‘a little,’’ ‘‘some,’’ and
‘‘a lot.’’ The descriptions were supplemented with percentages that clarified the categories: 0%,
30%, 70%, and 100%. For instance, if teachers thought that they commented on every
grammatical error in a paper, they would mark 100%; if they purposefully marked only some of
the errors, they would mark 70%. Students were also asked to evaluate their teacher's written
feedback using a similar response format (see Appendix B). The results are shown in Figures 1
and 2.
Figure 1
Feedback Perceptions, 1st or 2nd Drafts
[Bar chart comparing the percentage of students and teachers reporting each feedback amount (0%, 30%, 70%, 100%) in the categories Organization, Content/Ideas, Grammar, Vocabulary, and Mechanics. Y-axis: Percent.]
Figure 2
Feedback Perceptions, Final Drafts
[Bar chart comparing the percentage of students and teachers reporting each feedback amount (0%, 30%, 70%, 100%) in the categories Organization, Content/Ideas, Grammar, Vocabulary, and Mechanics. Y-axis: Percent.]
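The percentages plotted in Figures 1 and 2 could, in principle, be tabulated from raw survey responses along the following lines (a sketch only; the file and column names are assumptions for illustration and do not reflect the study's actual coding):

import pandas as pd

# The verbal Likert labels were clarified with percentages in the survey:
# none = 0%, a little = 30%, some = 70%, a lot = 100%.
LIKERT_TO_PERCENT = {"none": 0, "a little": 30, "some": 70, "a lot": 100}

# Hypothetical long-format file: one row per respondent and feedback
# category, with assumed columns 'role' ('student' or 'teacher'),
# 'category' (organization, content/ideas, grammar, vocabulary,
# mechanics) and 'rating' (the verbal label chosen).
df = pd.read_csv("feedback_amount_first_drafts.csv")
df["amount_pct"] = df["rating"].str.lower().map(LIKERT_TO_PERCENT)

# Share of respondents choosing each amount, per category and role,
# i.e. the quantities compared in Figures 1 and 2.
table = (df.groupby(["category", "role"])["amount_pct"]
           .value_counts(normalize=True)
           .mul(100)
           .round(1)
           .unstack("role"))
print(table)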
According to the results, on the 1st, 2nd, and final drafts, students report that they are
getting more feedback than teachers report giving in the areas of grammar, vocabulary, and
mechanics. For example, as shown in the 'mechanics' category of 1st and 2nd drafts, a majority of
students reported that they received "a lot (100%)" or "some (70%)" feedback, whereas all
teachers reported giving only "a little (30%)" or "no (0%)" feedback. Since teachers report
giving feedback selectively, there is clearly a mismatch in the perceptions of students and
teachers about the amount of feedback given and received. It is interesting to note that this
discrepancy in students' and teachers' perceptions is not apparent in the categories of
organization and content/ideas. When asked about these areas, teachers and students generally
agree on the amount of feedback that they are giving and receiving.
This finding is open to more than one explanation. It could mean that students think
they are getting feedback on everything when they are not: they assume that all
errors are marked, so that once they fix those errors their papers will be error free. However,
this explanation is unlikely, judging from other findings in the literature. Specifically,
Montgomery and Baker (2007) found that in many cases where teachers' perceptions were lower
than students' perceptions of written feedback, the teachers were underestimating the amount
they give rather than the students overestimating it. Likewise, in Lee's (2004) study, many
teachers reported that they gave "selective" feedback, but when actual feedback practices
were examined she found that they were marking comprehensively. Such findings suggest
that teachers should self-monitor their feedback practices, checking how much feedback they
give.
RQ 2: How content are students with the amount of feedback that they are receiving, and by
comparison do teachers believe that their students are satisfied or dissatisfied with their
feedback?
The survey shows that an overwhelming majority of students (74.5%) are satisfied with the
amount of feedback they are receiving, while a majority of teachers (80%) reported that their
students are only "somewhat" satisfied (Figure 3).
Figure 3
Comparing Perceptions of Satisfaction
[Bar chart comparing the percentage of students and teachers answering "Yes," "No," or "Somewhat."]
In analyzing student satisfaction by individual class, all classes had a
fairly high rate of satisfaction, ranging from 50% to 100%. Four of the six classes had high
response rates, and their satisfaction rates all fell within a small range, between 71% and 81% of
students reporting satisfaction. By contrast, the classes with the highest and the lowest
satisfaction scores were also the groups with the lowest response rates. These extreme and
atypical responses are likely to be related to the small number of respondents in these two groups.
It is encouraging that students are mostly satisfied with the amount of
feedback they are receiving. By comparison, however, more teachers felt that students were
only "somewhat" satisfied with the amount of feedback they gave. I attribute this to
the anxiety that many teachers feel about the effectiveness of their feedback practices and students'
perceptions of them. This finding reflects my impression while conducting the study
that the participating teachers seemed very concerned with the feelings and
progress of their students.
When asked what they would prefer the teacher to do, 75% of students elected that the
teacher give feedback on "all" errors, while only 21% preferred teachers to give "some"
feedback. This reveals that students have a strong preference for comprehensive feedback. This
finding contrasts with some recent research concerning global comments. Leki (2006) suggested
that students reported feeling that they were not receiving enough comments on global issues from
teachers. The present study suggests that these students feel that they are receiving enough
comments on global issues such as ideas, content, and organization, as most students reported
that they received "a lot" or "some" comments in these areas. One fundamental difference
between the present study and Leki (2006) is that Leki examined students' perceptions of
regular discipline classes, whereas I am examining practices in an ESL classroom. This may
explain some discrepancies in the two studies' findings.
When asked how much progress students were making this semester, majorities of both
the six teachers and their students reported that students were making "some" progress.
Specifically, 65% of students and 66% of teachers felt that students were making some progress.
The perceptions of students and teachers match and are relatively positive. This, combined
with the fact that most students are satisfied with the amount of feedback they get, indicates that
students are generally positive about their ELI writing classes.
RQ 3: How favorably are various techniques for the delivery of feedback viewed by students?
One of the primary motivations of this study was to ask which techniques teachers were
using and to draw correlations between practices and student satisfaction. I expected that this
would allow me to show underlying preferences for certain types of feedback over others,
essentially identifying which methods were preferable to both students and teachers.
Specifically, I examined the (self-reported) type of error correction a teacher uses, correction
codes, and a variety of other feedback follow-up methods, such as conferences and error
frequency charts, and compared these practices with the satisfaction level of the students in
individual classes and by course number.
Admittedly, the present design did not lend itself to uncovering direct connections
between self-reported teacher practices and the satisfaction level of students. Nevertheless, the
research uncovered that many different methods are used in these writing classrooms. Several
factors come into play when teachers choose how to implement their feedback practices. Most
teachers reported that they select the errors that they mark on an ad hoc, case-by-case
basis, taking other factors into consideration such as time constraints and student needs. All
(100%) of the teachers reported that all three of the following considerations affect their error feedback
techniques: students' requests, perception of students' needs, and the amount of time available.
Moreover, 100% of the teachers surveyed reported that they spend more than 20 minutes
marking each composition.
Teachers also responded to a variety of questions about the type of error correction they
use, their follow-up practices, and their beliefs about the types of feedback
practices that they implement. As shown in Table 3, which reflects how many teachers chose
each answer, error feedback practices vary greatly. Because teachers could indicate as many or
as few practices as they wanted, some teachers did not mark certain categories. It seems that
most teachers surveyed employ a variety of feedback practices.
Table 3
Teachers’ Error Correction Practices
How often do you use the following error feedback techniques?   Never or rarely   Sometimes   Often or always
I indicate errors and correct them 33% 50% 17%
I indicate errors, correct them and categorize them 40% 40% 20%
I indicate errors, but I don’t correct them 83% 17%
I indicate errors and categorize them but I don’t correct them 17% 50% 33%
I hint at the location of errors 60% 40%
I hint at the location of errors and categorize them 33% 33% 33%
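A tabulation of the kind shown in Table 3 could be produced from the six teacher surveys with a short script such as the one below (a hypothetical sketch; the file and column names are invented for illustration and are not taken from the actual instrument):

import pandas as pd

# Hypothetical file: one row per teacher survey, with assumed columns
# holding the frequency label ('never or rarely', 'sometimes',
# 'often or always') chosen for each error feedback technique
# (Question 12 of the teacher survey).
teachers = pd.read_csv("teacher_surveys.csv")
technique_cols = [
    "indicate_and_correct",
    "indicate_correct_categorize",
    "indicate_only",
    "indicate_and_categorize",
    "hint_location",
    "hint_and_categorize",
]

# Percentage of the six teachers choosing each frequency per technique,
# mirroring the layout of Table 3.
table3 = pd.DataFrame({
    col: teachers[col].value_counts(normalize=True).mul(100).round(0)
    for col in technique_cols
}).T.fillna(0)
print(table3)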
In addition to the error practices reported in Table 3, 50% of teachers reported using a
marking code to highlight the types of errors that their students are making. The two teachers
whose students reported the highest and the lowest levels of satisfaction both reported that they
regularly use a marking code and similar follow-up techniques, and they held closely matching beliefs.
Therefore, using a marking code does not in itself seem to raise students' satisfaction with
feedback.
Teachers also report that they use various follow-up methods with feedback, such as
student-teacher conferences, encouraging students to keep an error chart, and going over
common errors in class. Once again, these practices were used in different amounts by different
teachers. When asked about their beliefs, five out of six teachers reported that not all student
errors should be treated equally. These beliefs are consistent with the teachers' reports of using many
different types of error feedback and follow-up techniques. They probably vary their practices
depending on student needs, preferences, and error type.
This finding was unexpected because it contradicts Lee (2004), from whom part of
the survey used in this research was adopted. Whereas she found that teachers use only limited
strategies in their feedback practices, I found that teachers reported using various strategies. One
reason these findings differ may be that 80% of the teachers surveyed in Lee's study
were using a school-mandated correction code: they were told what to do and how to do it
when it came to feedback practices. In stark contrast, none of the teachers involved in the present
study were using a mandated system of feedback practices. These teachers were allowed to select
and implement their own methods, and therefore had a wide range of methodologies. Also, Lee's
study examined actual practices along with perceptions. If actual practices had been studied in this
case, the self-reported behaviors might have differed from actual practices.
The students consistently report satisfaction with the amount of feedback independent of
which feedback method their teacher practices. In fact, when the data were divided by the three
levels of ELI writing (mixed undergraduate and graduate low level; undergraduate advanced
level; and graduate advanced level), the proportion of students who were satisfied with their
teachers' feedback ranged between 66% and 84%, a high percentage. Of the students who
reported that their teacher uses a marking code, 74% said they prefer that the teacher use a
marking code, while only 11% indicated that they would not like their teacher to use a code. Of
the students who reported that their teacher does not use a marking code, 62% said that they
would not like their teacher to use a code, while 31% said that they would like a marking code.
This seems to indicate that students tend to be persuaded by the method that their teacher is
using. This is supported by Cohen and Cavalcanti (1990), who claimed that "learners'
expectations and preferences may derive from previous instructional experiences, experiences
that may not necessarily be beneficial for the development of writing" (p. 173).
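The conditional percentages reported here (74% versus 11%, and 62% versus 31%) amount to a row-normalised cross-tabulation of two survey items; a minimal sketch of that computation, with hypothetical column names, is given below:

import pandas as pd

# Assumed columns: 'teacher_uses_code' and 'wants_code' hold the yes/no
# answers to Questions 10 and 13 of the student survey.
df = pd.read_csv("student_surveys.csv")

# Row-normalised cross-tabulation: among students whose teacher does or
# does not use a marking code, what share would (not) like one?
crosstab = (pd.crosstab(df["teacher_uses_code"], df["wants_code"],
                        normalize="index")
            .mul(100)
            .round(1))
print(crosstab)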
RQ 4: Whose job is it to correct errors, according to students and teachers?
An intriguing finding of this study is that students and teachers place the burden of error
correction on each other. As Table 4 indicates, when asked about whose job it is to locate and
correct errors, a majority of students said that it was the teacher’s job to locate and correct errors,
while only 37% said that it is the student's job to locate and correct errors. On the other hand,
83% of the teachers disagreed that it is the teacher's job to locate and correct errors. All of the
teachers agreed or strongly agreed with the statement “students should learn to locate and correct
their own errors.”
Table 4
Error Correction Responsibility

Students (percentage agreeing)
It is mainly the teacher's job to locate and correct errors for students.   57%
It is mainly the student's job to locate and correct their own errors.      37%

Teachers
It is mainly the teacher's job to locate and correct errors for students.   83% disagreed
Students should learn to locate and correct their own errors.               100% agreed or strongly agreed
Students’ and teachers’ beliefs about who should locate and correct errors are quite
different, and indicate that teachers should be aware that students believe that error correction is
the teacher’s duty. If the teacher (for whatever reason) is not going to highlight and correct
errors, I suggest that they explain their beliefs and method to students so they can understand the
teacher’s practices.
Conclusion
This study presents some implications for pedagogy in ESL writing classes. The teachers
in this study felt that students were only somewhat satisfied with the amount of feedback they are
giving. The results of this study suggest that teachers need not have anxiety over feedback
amount and method type. As mentioned earlier, there were no specific methods found to be more
well-received by students. Therefore, teachers should stay with the practices that work for them
and not worry that students will not be satisfied. Furthermore, teachers should self-monitor their
feedback practices and occasionally review whether their error correction methods are aligned
with their beliefs. While a majority of teachers supported the idea that error correction should be
selective, it may be that these teachers still correct globally. Finally, teachers should be made
21
aware that a majority of students place the duty of error correction on their teachers. Once aware
of this, teachers may be able to counteract this perception in class by using more peer feedback
techniques or by helping the students to develop an autonomous review and correction cycle.
As with any study, there are several limitations in the present one. One of the limitations
of this study is that all responses are self-reported. In future research, it would be interesting to
see whether actual practices are consistent with self-reported practices. As stated above, Lee
(2004) found that teachers were less selective in their error correction than they self-reported that
they were. In other words, they believed that their error correction was selective, whereas in
actual practice, it was global. One question that arises from these findings is: are students
satisfied with feedback because they believe that they are receiving more than they are actually
given? Or are teachers giving comprehensive feedback in spite of their beliefs to give selective
feedback? These questions might be answered if actual feedback practices were examined, rather
than only self-reported estimates.
Another limitation is that there are unknown factors that may affect students' perceptions
of their teachers. It is possible that if students strongly like or dislike their teacher, they will feel
the same way about the feedback that the teacher gives. Furthermore, the ELI classes
surveyed do not necessarily have homogeneous populations. Some students are graduate
students while others are undergraduates, and they have probably had very different past
experiences with feedback. Also, some of the courses are pass/fail completion classes and others
are higher stakes, as students receive credit. As mentioned above, one of the major
limitations of this study is that the practices measured were all self-reported. In further research,
actual practices should be compared with self-reported practices. Other possibilities for further
research include examining not only students' satisfaction with feedback amount, but also other
types of feedback options such as peer review, student-teacher conferences, and self-correction.
Richards (1998) said that ‘‘rather than viewing the development of teaching skill as the
mastery of general principles and theories that have been determined by others, the acquisition of
teaching expertise is seen to be a process that involves the teacher in actively constructing a
personal and workable theory of teaching’’ (p. 65). This study was enlightening because the data
suggest that most students are satisfied with the amount, and generally, the type of feedback they
receive. This has eased my own anxiety about my feedback practices and helped me realize that
there are probably no “best” practices or methods when it comes to feedback methodology.
Additionally, I would be interested in the further examination of teacher beliefs in comparison
with actual practices, and student perceptions of follow-up methods of teacher feedback such as
student-teacher conferences. By better understanding some of these issues, teachers can design
and implement more effective methods in their classrooms and researchers can understand the
complex relationship between teachers and students in the process of feedback.
References
Ashwell, T. (2000). Patterns of teacher response to student writing in a multiple-draft
composition classroom: Is content feedback followed by form feedback the best
method? Journal of Second Language Writing, 9, 227–257.
Bitchener, J., Young, S. & Cameron, D. (2005). The effect of different types of corrective
feedback on ESL student writing, Journal of Second Language Writing, 14,
191–205.
Casanave, C. (2003). Controversies in second-language writing: Dilemmas and decisions in
research and instruction. Ann Arbor: University of Michigan Press.
Cohen, A. (1987). Student processing of feedback on their compositions. In A. L.
Wenden & J. Rubin (Eds.), Learner strategies in second language learning
(pp. 57–69). Englewood Cliffs, NJ: Prentice-Hall.
Cohen, A. & Cavalcanti, M. (1990). Feedback on compositions: Teacher and student
verbal reports. In M. Long & J. Richards (Series Eds.) & B. Kroll (Vol. Ed.),
Second language writing: Research insights for the classroom (3rd ed.). New
York: Cambridge University Press.
Ferris, D. (1995). Student reactions to teacher response in multiple-draft composition
classrooms. TESOL Quarterly, 29, 33–53.
Ferris, D. (1997). The influence of teacher commentary on student revision, TESOL
Quarterly, 31, 315–339.
Ferris, D. R. (1999). The case for grammar correction in L2 writing classes: A response
to Truscott (1996). Journal of Second Language Writing, 8(1), 1–11.
Goldstein, L. (2001). For Kyla: What does the research say about responding to ESL
writers? In T. Silva & P. Matsuda (Eds.), On second language writing (pp.
73–90). Mahwah, NJ: Lawrence Erlbaum Associates.
Goldstein, L. (2006). Feedback and revision in second language writing: Contextual,
teacher, and student variables. In K. Hyland & F. Hyland (Eds.), Feedback in
second language writing (pp. 185–205). Cambridge: Cambridge University Press.
Guénette, D. (2007). Is feedback pedagogically correct? Research design issues in
studies of feedback on writing. Journal of Second Language Writing, 16, 40–53.
Hyland, F. (1998). The impact of teacher written feedback on individual writers. Journal of
Second Language Writing, 7, 255–286.
Hyland, F., & Hyland, K. (2001). Sugaring the pill: Praise and criticism in written
feedback. Journal of Second Language Writing, 10, 185–212.
Hyland, F. (2003). Focusing on form: Student engagement with teacher feedback. System,
21, 217–230.
Lee, I. (2003). L2 writing teachers' perspectives, practices and problems regarding error
feedback. Assessing Writing: An International Journal, 8(3), 216–237.
Lee, I. (2004). Error correction in L2 secondary writing classrooms: The case of Hong
Kong. Journal of Second Language Writing, 13, 285-312.
Leki, I. (1991). The preferences of ESL students for error correction in college-level
writing classes. Foreign Language Annals, 24, 203–218.
Leki, I. (2006). "You cannot ignore": L2 graduate students' response to discipline-based written
feedback. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing (pp.
266–286). Cambridge: Cambridge University Press.
Montgomery, J. & Baker, W. (2007). Teacher-written feedback: Student perceptions,
teacher self-assessment, and actual teacher performance, Journal of Second
Language Writing, 16 (2), 82-99.
Richards, J. C. (1998). Beyond training. Cambridge, UK: Cambridge University Press.
Truscott, J. (1996). The case against grammar correction in L2 writing classes, Language
Learning, 46, 327–369.
Truscott, J. (2007). The effect of error correction on learners' ability to write accurately.
Journal of Second Language Writing, 16, 255–272.
Zamel, V. (1985). Responding to student writing, TESOL Quarterly, 19, 79–98.
Zhang, S. (1995). Re-examining the affective advantages of peer feedback in the ESL writing
class. Journal of Second Language Writing, 4, 209–222.
Appendix A: Consent Form
Agreement to participate in a survey of students' and teachers' perceptions of feedback
practices in the ELI.
Ann Johnstun
Primary Investigator
398-1156
This research project is being conducted as a component of the SLS 650 Second Language
Acquisition course. The purpose of the project is to survey teachers’ and students’ beliefs and
perceptions about written feedback in the second language classroom and compare the perceptions.
This study intends to help teachers better understand whether their feedback practices are useful to
the students. You are being asked to participate in this survey as a student or teacher of an ELI
writing course.
Participation in the project will consist of answering a short survey. The survey will focus on the
amount of comments you receive / give on written assignments and how useful they are to you. No
personal identifying information will be included with the research results. Completion of the survey,
including some background data questions, should take no more than 10 minutes. Others who will
participate in this study include other students and teachers currently in the ELI.
The investigator believes there is little or no risk to participating in this research project. Participating
in this research may have some direct benefits for you, as the results may benefit teachers and future
students of this program.
As compensation for time spent participating in the research project, you will receive a treat after the
survey.
Research data will be confidential to the extent allowed by law. Agencies with research oversight,
such as the UH Committee on Human Studies, have the authority to review research data. All
research records will be stored in a locked file in the primary investigator's office for the duration of
the research project and will be destroyed upon completion of the project.
Participation in this research project is completely voluntary. You are free to withdraw from
participation at any time during the duration of the project with no penalty, or loss of benefit to which
you would otherwise be entitled.
If you have any questions regarding this research project, please contact the researcher, Ann
Johnstun at 398-1156.
If you have any questions regarding your rights as a research participant, please contact the UH
Committee on Human Studies at (808)956-5007, or uhirb@hawaii.edu
Appendix B Student Survey
Answer the questions.
1. Where are you from? ______________________
2. What is your native language? _______________________
3. How old are you? _____________________
4. How long have you been in the USA? Years _______ Months ________
5. How long have you been studying at the ELI? Semesters _________
Choose the answer that describes what you think. Choose only one answer.
1. How much of each essay do you read over again when your teacher returns it to you?
1st or 2nd drafts
All of it Most of it Some of it None of it
Final drafts
All of it Most of it Some of it None of it
2. How many of your teacher’s comments and corrections do you think about carefully?
1st or 2nd drafts
All of it Most of it Some of it None of it
Final drafts
All of it Most of it Some of it None of it
3. How many of your teacher’s comments on your essay are about:
1st or 2nd drafts A lot Some A little None
Organization _______ _______ _______ _______
Content/Ideas _______ _______ _______ _______
Grammar _______ _______ _______ _______
Vocabulary _______ _______ _______ _______
Mechanics _______ _______ _______ _______
(punctuation and spelling)
Final drafts A lot Some A little None
Organization _______ _______ _______ _______
Content/Ideas _______ _______ _______ _______
Grammar _______ _______ _______ _______
Vocabulary _______ _______ _______ _______
Mechanics _______ _______ _______ _______
(punctuation and spelling)
Please circle the appropriate answers.
4. Are you satisfied with the overall amount of comments you receive?
a. yes
b. no
c. somewhat
d. I don’t know
5. Which of the following is true about your 1st or 2nd drafts?
a. My English teacher underlines / circles all my errors.
b. My English teacher underlines / circles some of my errors.
c. My English teacher does not underline / circle any of my errors.
d. I have no idea about the above.
6. Which of the following is true about your final draft?
a. My English teacher underlines / circles all my errors.
b. My English teacher underlines / circles some of my errors.
c. My English teacher does not underline / circle any of my errors.
d. I have no idea about the above.
7. Before / After marking your essays, does your teacher tell you what error types (e.g., verbs,
prepositions, spelling) he/she has selected to mark?
a. yes
b. no
8. Which of the following do you like best on your 1st or 2nd drafts?
a. My English teacher underlines / circles all my errors.
b. My English teacher underlines / circles some of my errors.
c. My English teacher does not underline / circle any of my errors.
9. Which of the following do you like best on your final draft?
a. My English teacher underlines / circles all my errors.
b. My English teacher underlines / circles some of my errors.
c. My English teacher does not underline / circle any of my errors.
10. Does your teacher use a correction code in marking your essays (i.e. using symbols like V. ,
Adj., etc., or using colors to highlight different errors)?
a. yes
b. no
If your answer to Question 10 is “Yes,” answer Question 11 and 12. If your answer is “No,” go to
Question 13.
11. What percentage of your teacher’s marking symbols (e.g., V, Adj, Voc, Sp) are you able to
follow and understand when you are correcting errors in your essays?
a. 76-100%
b. 51-75%
c. 26-50%
d. 0-25%
12. What percentage of errors are you able to correct with the help of your teacher’s marking
symbols?
a. 76-100%
b. 51-75%
c. 26-50%
d. 0-25%
13. Do you want your teacher to use a correction code in marking your essays?
a. yes
b. no
14. After your teacher has corrected errors in your essays, do you think you will make the same
errors again when you get a new writing assignment?
a. yes
b. no
15. Which of the following is true?
a. In this semester, I am making good progress in grammatical accuracy in writing.
b. In this semester, I am making some progress in grammatical accuracy in writing.
c. In this semester, I am making little progress in grammatical accuracy in writing.
d. In this semester, I am making no progress in grammatical accuracy in writing.
16. Which of the following do you agree with?
a. It is mainly the teacher’s job to locate and correct errors for students.
b. It is mainly the student’s job to locate and correct their own errors.
Appendix C: Teacher Survey
1. How long have you been teaching at the ELI? Semesters _______
2. Secondary Teaching experience:
Less than 5 years 5-10 years over 10 years
3. How many of your comments on students' essays are about:
1st or 2nd drafts    A lot (100%)    Some (70%)    A little (30%)    None (0%)
Organization _______ _______ _______ _______
Content/Ideas _______ _______ _______ _______
Grammar _______ _______ _______ _______
Vocabulary _______ _______ _______ _______
Mechanics _______ _______ _______ _______
(punctuation and spelling)
Final drafts A lot Some A little None
Organization _______ _______ _______ _______
Content/Ideas _______ _______ _______ _______
Grammar _______ _______ _______ _______
Vocabulary _______ _______ _______ _______
Mechanics _______ _______ _______ _______
(punctuation and spelling)
4. Do you think that your students are satisfied with the amount of comments you give?
a. yes
b. no
c. somewhat
d. I don’t know
5. Which of the statements below best describes your existing error feedback practice on
your students’ 1st or 2nd drafts?
a. I don’t mark students’ errors in writing.
b. I mark ALL students’ errors.
c. I mark students’ errors selectively.
6. Which of the statements below best describes your existing error feedback practice on
your students’ final drafts?
a. I don’t mark students’ errors in writing.
b. I mark ALL students’ errors.
c. I mark students’ errors selectively.
If your answer to Question 5 is “C,” answer Questions 7, 8, and 9. If you have not ticked
“C,” go to question 10.
7. Circle the amount of errors you mark.
a. About 1/3
b. About 2/3
c. More than 2/3
8. Which of the following best describes the major principles for error selection?
a. The selected errors are directly linked to grammar instruction in class – e.g. after I
have taught subject-verb agreement, I provide feedback on subject-verb
agreement errors.
b. The selected errors are related to students’ specific needs – e.g. knowing that
students are particularly weak in articles, I provide feedback on article errors.
c. The errors are selected on an ad hoc basis – i.e. I decide what errors to provide
feedback on while I am marking.
d. Others (please specify)
_________________________________________________________________
_______________________________________________________
9. Are your students aware of the type(s) of errors you select to provide feedback on?
a. Yes
b. No
10. Do you use a marking code for providing error feedback on student writing?
a. Yes
b. No
11. Does your school require you to use a marking code?
a. Yes
b. No
12. Rate the frequency with which you use each of the following error feedback
techniques according to the scale below.
How often do you use the following error feedback
techniques?
Never
or
rarely
Sometimes Often
or
always
a. I indicate (underline/circle) errors and correct them
– e.g. has went → gone.
b. I indicate (underline/circle) errors, correct them and
categorize them (with the help of a marking code) –
e.g. has went → gone (verb form).
c. I indicate (underline/circle) errors, but I don’t
correct them – e.g. has went.
d. I indicate (underline/circle) errors and categorize
them (with the help of a marking code), but I don’t
correct them – e.g. has went. (verb form)
e. I hint at the location of errors – e.g. by putting a
mark in the margin to indicate an error on a specific
line.
f. I hint at the location of errors and categorize them
(with the help of a marking code) – e.g. by writing
‘Prep’ in the margin to indicate a preposition error on
a specific line.
13. What factors influence the error feedback technique(s) you always/ often use?
Factors affecting the error feedback
techniques I always/ often use.
Yes or No?
a. Students’ request – i.e. students ask for it Yes / No
b. My perception of students’ needs Yes / No
c. The amount of time I have Yes / No
d. Others (please specify)
14. What do you usually do after you mark students’ compositions? You can check more
than one box.
What I usually do after marking students’
writing
Rarely Sometimes Often
a. I do not do anything
b. I hold a conference with each student/
some students
c. I make students correct errors in/ outside
class
d. I make students record their errors in an
error frequency chart.
e. I go through students’ common errors in
class
f. Others (please specify)
15. How much time approximately do you spend marking one composition?
a. Less than 10 minutes
b. 10 to 20 minutes
c. More than 20 minutes
16. How would you evaluate the overall effectiveness of your existing error feedback
practice on student progress in grammatical accuracy in writing during this semester?
My students are making
a. Good progress
b. Some progress
c. Little progress
d. No progress
17. Indicate the extent to which you agree with the following statements according to
the scale below.
To what extent do you agree with the
following statements?
Strongly
disagree
Disagree Agree Strongly
agree
a. There is no need for teachers to provide
feedback on student errors in writing
b. Teachers should provide feedback on
student errors selectively
c. It is the teacher’s job to locate errors and
provide corrections for students
d. Teachers should vary their error feedback
techniques according to the type of error
e. Coding errors with the help of a marking
code is a useful means of helping students
correct errors for themselves.
f. Marking codes should be easy for students to
follow and understand.
g. All student errors deserve equal attention.
h. Students should learn to locate and correct
their own errors.
i. Students should learn to analyze their own
errors.
18. Do you have any concerns and/ or problems providing error feedback on student writing?
Please elaborate.
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
______________________________________________________
Appendix D: Request for teachers’ permission
Dear ELI writing teacher:
My name is Ann Johnstun, and I am interested in conducting a survey that involves both you
as an ELI writing teacher, and your ELI students. The purpose of the project is to survey teachers’
and students’ beliefs and perceptions about written feedback in the second language classroom and
compare the perceptions.
Participation in the project will consist of answering a short survey for students and a slightly
longer survey for teachers. The survey will focus on the amount of comments you give on written
assignments and how useful they are to you. No personal identifying information will be included
with the research results. This survey seeks to answer the following research questions: what is the
relationship between teacher self-assessments and student perceptions of teacher written
feedback? What types of feedback are currently being used? Are students content with the type
and amount of feedback that they are receiving? Do teachers’ feedback practices reflect their
beliefs?
After the study is completed, I intend to share my results with you as a teacher so that
you might better understand how your feedback practices are perceived by your students. I will
be contacting you soon to find out whether or not you agree to participate in this research.
Participation in this research project is completely voluntary. You are free to withdraw from
participation at any time during the duration of the project.
Thank you,
Ann Johnstun
The Effects of Integrating Peer Feedback into University-Level ESL Writing Curriculum: A
Comparative Study in a Saudi Context
Grami Mohammad Ali Grami
Newcastle University
School of Education, Communication and Language Sciences
June 2010
Table of Contents
Abstract ………………………………………………………………..………………………………………………………… I
List of Abbreviations ………………………………………………………………………………………………………. II
List of Tables ………………………………………………………………………………………………………………….. III
List of Graphs ………………………………………………………………………………………………………………… IV
CHAPTER ONE: INTRODUCTION………………………………………………………………………………………. 1
1.1 Introduction …………………………………………………………………………………………………………….. 1
1.2 Rationale of the Study ……………………………………………………………………………………………. 2
Contribution to Present Research………………………………………………………………………….. 2
Limitations of Previous Research…………………………………………………………………………….. 3
1.3 Aims and Objectives ……………………………………………………………………………………………….. 4
1.4 General Interest of the Study ……………………………………………………………………………………. 5
1.5 Organisation of the Thesis ……………………………………………………………………………………….. 5
CHAPTER TWO: LITERATURE REVIEW ……………………………………………………………………………… 8
Overview of Chapter Two ……………………………………………………………………………………………….. 8
2.1.1 The Nature of Writing …………………………………………………………………………………………… 8
2.1.2 ESL Writing …………………………………………………………………………………………………………… 10
2.1.3 General Review of the Teaching Context in SA …………………………………………………….. 14
2.1.4 Learner’s Problems in the Saudi Context ……………………………………………………………… 17
Socio-cultural …………………………………………………………………………………………………… 17
Linguistic/pedagogical …………………………………………………………………………………….. 18
Legislative and administrative policy problems…………………………………………………. 19
2.2.1 Writing Approaches …………………………………………………………………………………………….. 20
The Product Approach ……………………………………………………………………………………… 21
The Process Approach ………………………………………………………………………………………. 24
The Genre Approach …………………………………………………………………………………………. 27
2.2.2 Feedback in Writing Classes …………………………………………………………………………………. 30
An Overview of Feedback in Writing…………………………………………………………………. 30
The Significance of Feedback………………………………………………………………………………. 30
Teacher-Written Feedback ………………………………………………………………………….……… 32
Peer Feedback ……………………………………………………………………………………………………. 35
Advantages and Disadvantages of Peer Feedback ……………………………………………… 36
Other Types of Feedback ……………………………………………………………………………………. 40
2.2.3 Introducing Peer Feedback to ESL Students …………………………………………………………… 40
2.2.4 Students’ Beliefs in Writing ……………………………………………………………………………………. 45
2.2.5 Writing Assessment ………………………………………………………………………………………………. 46
Assessment and Feedback …………………………………………………………………………………. 46
Electronic and online Means of Writing Assessment ………………………………………….. 47
2.3.1 Collaborative Learning …………………………………………………………………………………………… 50
2.3.2 Collaborative Writing ……………………………………………………………………………………………… 53
CHAPTER THREE: METHODOLOGY………………………..…………………………………………………………. 56
Overview of Chapter Three ………………………………………………..…………………………………………… 56
3.1.1 Research Gap and Research Questions ………………..……………………………………………….. 56
Research Gap ……………………………………………………………………………………………………. 56
Research Questions …………………………………………………………………………………………… 58
Research Sub-Questions …………………………………………………………………………………… 58
3.1.2 The Context of the Study ……………………………………………………………………………………….. 60
General Educational Background: EFL in the Saudi Context ……………………………….. 60
ESL in the Department of Foreign Languages, KAAU ………………………………………….. 60
3.1.3 Participants of the Study ……………………………………………………………………………………….. 61
3.2 Justification for Choosing Data Collection Tools ……………………………………………………… 64
3.2.1 Procedures of the Questionnaires ………………………………………………………………………….. 64
The Design and Development Stage: Points to Consider …………………………………….. 66
The Development of the Non-Standardised Questionnaire …………………………………. 71
The Pre-Pilot Study …………………………………………………………………………………………….. 72
The Pilot Study …………………………………………………………………………………………………… 74
3.2.2 The Writing Entry and Exit Tests …………………………………………………………………………….. 80
3.2.3 Interviews …………………………………………………………………………………………………………….… 81
Reflections on the Interviews …………………………………………………………………………… 82
3.2.4 Fieldwork and Empirical Study ………………………………………………………………………………. 84
Quasi-Experiment: Control Group and Experiment Group …………………………………. 84
The Design of the Writing Task ………………………………………………………………………….. 85
Peer Feedback Group Training ………………………………………………………………………….. 85
3.2.5 Methodological Issues …………………………………………………………………………………………… 86
Research Ethics ………………………………………………………………………………………………….. 86
Formal Procedures to Conduct the Empirical Study ……………………………………………. 87
Validity and Reliability ………………………………………………………………………………………. 88
Content Validity ………………………………………………………………………………………………… 89
Population Validity …………………………………………………………………………………………….. 89
Rating Written Tests ………………………………………………………………………………………….. 90
Triangulation …………………………………………………………………………………………………….. 91
3.3.1 Data Collection Procedures …………………………………………………………………………………… 91
Writing Tasks: Entry and Exit Tests ……………………………………………………………………. 95
The Treatment of Peer Feedback Group …………………………………………………………….. 97
Pre and Post-Experiment Questionnaires …………………………………………………………… 98
Treatment Group Interview ……………………………………………………………………………….. 100
3.3.2 Data Processing and Analysis …………………………………………………………………………………. 101
Writing Tasks Analysis ……………………………………………………………………………………….. 101
Questionnaires …………………………………………………………………………………………………… 103
Interviews ………………………………………………………………………………………………………….. 103
CHAPTER FOUR: RESULTS ………………………………………………………………………………………….……. 108
Overview of Chapter Four ………………………………………………………………………………………………. 108
4.1 Writing Tests Results ……………………………………………………………………………………………….. 108
4.1.1 Entry Test Results ………………………………………………………………………………………………….. 108
4.1.2 Exit Test …………………………………………………….…………………………………………………………… 112
4.2 Questionnaire Results ……………..……………………………………………………………………………….. 115
4.2.1 The Pre Experiment Questionnaire ……………….……………………………………………………….. 115
4.2.2 The Post-Experiment Questionnaire …………………………………………………………………….… 120
4.3 Results of the Interviews …………………………………………………………………………………………… 121
CHAPTER FIVE: DISCUSSION …………………………………………………………………………………………… 123
Overview of Chapter Five ……………………………………………………………………………………………….. 123
5.1 Students’ Perception on Different Types of Feedback ………………………………………………. 124
5.2 How Peer Feedback Helps Students Improve Writing Skills ………………………………………. 130
5.3 Students Experience in the Peer Feedback Group …………………………………………………….. 141
5.4 Shift of Attitudes towards Teacher and Peer Feedback ………………………………………………143
CHAPTER SIX: CONCLUSION ……………………………………………………………………………………………. 150
Overview of Chapter Six …………………………………………………………………………………………………. 150
6.1 Summary of the Study ………………………………………………………………………………………………. 150
6.2 Implications for Teaching …………………………………………………………………………………………. 152
6.3 Limitations of the Study ……………………………………………………………………………………………. 153
Methods ……………………………………………………………………………………………………………. 153
Time Factor ……………………………………………………………………………………………………….. 154
Access to Participants ………………………………………………………………………………………… 154
Scope of the Research ……………………………………………………………………………………….. 154
6.4 Recommendations for Future Research ………………………………………………………………….… 155
6.5 Self-Reflection …………………………………………………………………………………………………………… 157
References ……………………………….…………………………………………………………………………………….. 158
Appendices …………………………………………………………………………………………………………………….. 171
Appendix A ………………………………………….…………………………………………………………….. 171
Appendix B ………………………………………..……………………………………………………………….. 173
Appendix C ……………………………………..………………………………………………………………….. 174
Appendix D ………………………………………………………………………………………………………… 176
Appendix E ………………………………………………………………………………………………………… 179
Appendix F ………………………………………………………………………………………………………… 182
Appendix G ………………………………………………………………………………………………………… 183
Appendix H ………………………………………………………………………………………………………… 184
Appendix I …………………………………………………………………………………………………………. 197
Appendix J …………………………………………………………………………………………………………. 206
Appendix K ………………………………………………………………………………………………………… 213
Appendix L …………………………………………………………………………………………………………. 214
Appendix M ………………………………………………………………………………………………………. 215
Glossary ………………………………………………………………………………………………………………………….. 229
ABSTRACT
This project aims to investigate the effects of introducing peer feedback to a group of
university-level students in a context where teacher-fronted classes are considered
predominant. I carried out a three-phase, three-month-long project using various data
collection methods. The study first investigated students’ initial perceptions of peer
feedback and compared them to their perceptions after the experiment using semi-
structured questionnaires and individual interviews. The results of the first stage suggested
that students approved of teacher-written feedback, but were apprehensive about peer
feedback. The main objection to peer feedback was the fact that it was originated from
fellow students whose linguistic level was lower than that of the teachers. The second phase
of the project included members of an ESL class divided into two groups; the experimental
group, which jointly used teacher-written and peer feedback; and the control group, which
received only teacher-written feedback. Despite linguistic concerns, the overall perception
of peer feedback became more positive and students subsequently accepted this technique
as part of their ESL writing curriculum. The results suggest that peer feedback helped
students gain new skills and improved existing ones. The last phase was a comparative study
consisting of pre- and post-tests to measure the progress of students’ writing. Texts were
evaluated and given an overall grade based on various local and global issues, using a
holistic assessment approach. Students in both groups did considerably better in the exit
test. However, members of the peer feedback group outperformed the other group in every
aspect of writing investigated. The study concludes that the effect of peer feedback on
students' perceptions was profound. Students were impressed by the potential of
peer sessions for their ESL writing routines, which was reflected in their eagerness to
have more such sessions in the future. If students are properly trained to use peer
feedback, the benefits could be very significant; the study therefore recommends that
education policy makers and ESL writing teachers in Saudi Arabia make a greater effort to
introduce peer sessions into all ESL writing classes.
LIST OF ABBREVIATIONS
L1 First Language
L2 Second Language
ESL English as a Second Language
PF Peer Feedback
CLT Communicative Language Teaching
KAAU King Abdul Aziz University
χ² Chi-square
P-Value Probability, margin of error ranging from 0.00 to 1.00
SD Standard Deviation
SPSS Statistical Package for Social Sciences (software)
LIST OF TABLES
Table (1.1) IELTS Test Results 2008 ……………………………………………………………………………………………… 18
Table (1.2) The Main Features of Genre Approach ……………………………………………………………………… 28
Table (1.3) Feedback Methods …………………………………………………………………………………………………….. 33
Table (1.4) Recent Feedback Studies ………………….……………………………………………………………………….. 44
Table (1.5) Features to be considered in assessing writing ability ……………………………………………….. 46
Table (1.6) Traditional and Experiential Models of Education …………………………………………………….. 52
Table (2.1) Participants of the Study …………………………………………………………………………………………… 62
Table (2.2) Factsheet about Participants of the Pilot Study …………………………………………………………. 76
Table (2.3) Analysis of Texts …………….…………………………………………………………………………………………. 102
Table (4.1) Local Errors in the Entry Test ………………………………………………………………………………….….. 110
Table (4.2) Errors per 100 Words (Entry Test) ……………………………………………………………………….…….. 110
Table (4.3) Treatment Group Results (Peer Feedback Group) ………………………………………………….….. 113
Table (4.4) Errors per 100 Words PF Group Exit Test …………………………………………………………….…….. 113
Table (4.5) The Control Group Results (Teacher’s Feedback Only) ………………………………………………. 113
Table (4.6) Errors per 100 Words Control Group Exit Test …………………………………………………………… 114
Table (4.7) Number of Clause Relations in Texts by Treatment Group ……………………………………….. 114
Table (4.8) Number of Clause Relations in Texts by Control Group …………………………………………….. 114
Table (4.9) Students Beliefs of Teachers’ Comments ………………………………………….……………….…….. 115
Table (4.10) Students Beliefs regarding Autonomous Learning …………………………………………….……. 116
Table (4.11) Number of Previous Writing Courses*Students’ Beliefs………..………………………………… 117
Table (4.12) Chi-square ……………………………..……………………………………………………………………………….. 117
Tables (4.13, 4.14) Chi-Square Results ………………………………………………………………………………………… 119
LIST OF GRAPHS
Graph (1.1) An Example of a Product Approach Exercise ………………………………………………… 22
Graph (3.1) Data Collection Stages …………………………………………………………………………………. 92
Graph (3.2) A Node Example in nVivo ……………………………………………………………………………… 104
Graph (4.1) Histogram Chart of Texts’ Length ……………………….……………………………………….. 109
Graph (4.2) Overall Scores of the Entry Test …………………………………………………………………… 111
Graph (4.3) Beliefs regarding the Importance of Teachers’ Comments …………………………. 115
Graph (4.4) Attitudes towards Autonomous Learning …………………………………………………….. 116
Graph (4.5) Perception of Peer Feedback ………………………………………………………………………. 117
Graph (4.6) Students Preferences towards Feedback Attitudes …………………………………….. 119
CHAPTER ONE: INTRODUCTION
1.1 Introduction
Peer feedback can be a very useful collaborative activity in ESL writing classes.
Unfortunately, this type of feedback is rare in many non-Western teaching contexts,
where teacher-fronted classes remain dominant, despite the benefits reported in the
literature. Generally speaking, feedback in writing is a broad concept which can be
understood as any type of communication students receive that provides information
about their written tasks. Feedback nevertheless is not
limited to assessing students’ written work; more importantly, feedback in its
formative guise is an essential component in the ongoing process of learning how to
write, or how to acquire any other language skill for that matter, and hence plays an
immensely important role in writing development. However, the discussion in this
project is restricted to feedback in teaching writing only, and the term “feedback”
will henceforth be confined to this concept (Mendonça and Johnson, 1994; Ashwell,
2000; Hyland, 2001; and Ferris, 2002).
The research project introduces this relatively new concept to a Saudi institution.
The Saudi educational system in general has been seen as a context where more
traditional approaches to language learning are prevalent (Bersamina, 2009;
Almusa, 2003; Al-Hazmi, 2003; Al-Awad, 2002; Asiri, 1996). The study is also in
keeping with the tradition of studies that investigate whether training students to adopt
new concepts in ESL writing can be successful, including Al-Hazmi and Scholfield
(2008), Min (2006) and Miao et al. (2006).
A three-phase study was conducted in the English department of a Saudi university,
involving ESL writing students from two classes, to investigate the effect of
incorporating peer feedback sessions into their usual curriculum. Using different
data collection approaches, the study investigates how students perceive peer
and teacher-written feedback, how different treatments affect their actual writing,
and whether their opinions change following different treatments. The study also
investigates whether peer feedback can improve the writing skills and products of students
who give and receive peer feedback sessions in addition to their usual
intake of teacher-written feedback.
1.2 Rationale of the Study
Contribution to Present Research
The study investigates whether peer feedback has an effect on students' beliefs and
performance using a multistage data collection approach, which employs both
quantitative and qualitative measures. Representative members of a Saudi
university context were first asked about their beliefs on a range of issues
related to feedback in an ESL writing session, including teacher and peer
feedback. From a wider perspective, however, the study also investigates the
beliefs of ESL students at the university level regarding, in addition to
different feedback techniques, their preferences for the type of comments they
receive, what sort of errors they are concerned about in writing, what areas they
would like to improve (local versus global), what attitude comments should take
(praise, criticism or a combination of both) and the directness of feedback. The
research also uses a comparative study to measure the effects of training a group of
students to adopt the different learning techniques usually associated with peer
feedback sessions, as opposed to another group whose members are exposed only
to teacher feedback, to measure whether performance differs as a result of the type
of feedback students receive. The last of the data collection methods used is
semi-structured, individual interviews with selected members of the experiment
group, which act mainly as a complement to the findings of the post-experiment
questionnaire, as well as giving an in-depth insight into students' responses. Very
little research investigates whether training students to use peer feedback in their ESL
writing classes changes their perceptions not only of peer feedback but also of
other feedback types, including teacher-written feedback. Similarly, the combined use of
different methods may not be new in previous studies, but the way and timing in
which they were carried out here surely is. In other words, most previous studies that
jointly use questionnaires and interviews use them at the end of the experiment,
whereas in this study there were three data collection stages: before, during and
after the experiment.
Limitations of Previous Research
The current literature, discussed in more detail in chapter two, indicates a research
gap in two respects. Firstly, although the topic of feedback and the comparison
between various feedback techniques in ESL/EFL writing classes is not a new area of
research, I am aware of only three recent studies that compare the effects of peer
feedback on writing to those of teachers' written feedback, two of which were
conducted in different teaching contexts. These studies are Min (2006), whose
respondents were drawn from a university in South Taiwan; Miao, Badger and Zen
(2006), involving Chinese students; and a recently published paper by Al-Hazmi and
Scholfield (2008), which is in some ways similar to this research in terms of
topic and research population. The latter includes two treatment groups: peer
feedback with checklists, and checklists only. Secondly, the literature review
demonstrates the rarity of educational studies carried out in the Saudi
context, not only in ESL writing classes but in general. Moreover, none of these or
other studies compared students' beliefs about peer feedback and teacher-written
feedback before and after training students to use peer feedback sessions, which is
one of the questions the study investigates. More discussion regarding the research
gap is presented in section 3.1 in the methodology chapter.
1.3 Aims and Objectives
The overall aim of the project is to evaluate the success of integrating peer feedback
into ESL writing classes in terms of developing writing and social skills, and to
investigate if training students to use peer feedback would change their perceptions
of peer and teacher-written feedback techniques. The specific objectives are:
· To measure students’ preferences for different feedback techniques before and
after the peer sessions experiment.
· To divide an ESL writing class into a treatment group, which is trained to use peer
feedback in addition to teacher-written feedback; and a control group, which
receives only teacher-written feedback.
· To prepare the treatment group for peer feedback sessions including training
students to act as evaluators (givers) and receivers of feedback, as well as to use
the checklist provided by the teacher.
· To evaluate and compare students’ writing before and after the experiment by
means of entry and exit tests, including members of both the treatment and
control groups.
· To provide detailed evaluation reports for all participating texts as part of the
assessment process.
· Once written tasks are completed and assessed, to ascertain whether
the students in the treatment group (peer feedback) have different
perceptions of different feedback techniques.
· To find out if peer feedback sessions helped students improve their writing skills
using comparisons with the other group.
· To find out if peer feedback helped to develop social, cognitive, affective and
metalinguistic skills.
· To deduce implications for ESL writing teaching based on the findings of the
research.
1.4 General Interest of the Study
Writing has been described as a complex process even for the L1 learner, not to mention ESL student writers, who struggle with linguistic problems and have to deal with them in addition to other requirements (Leki & Carson, 1997; Hinkel, 2004; Ferris & Hedgcock, 2005). Saudi university-level ESL students are no exception to these difficulties. IELTS data (see table 1.1) show that the lowest mean score Saudi students received was in writing. From my personal experience as a teacher in Saudi Arabia, I have noticed that writing is indeed a problematic area for most students, even those whose major is English, and who therefore could be expected to do reasonably well. Many factors could have affected students' performance in writing, but for the purposes of this study I was more concerned with how students received comments on their texts, and how such feedback may have shaped their performance and beliefs. To assess students' progress with more precision than is usually possible using qualitative measures alone, a quantitative tool was also included in the form of two evaluated written tests. More detailed analysis of these issues is provided in the literature review chapter.
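Purely as an illustration of the kind of quantitative comparison intended, and not as the study's actual analysis (the tests and procedures are specified in the methodology chapter), the minimal sketch below assumes invented entry and exit scores for a treatment and a control group, and an independent-samples t-test on their gain scores:

```python
# Illustrative sketch only: the scores below are invented, and the choice of an
# independent-samples t-test on gain scores is an assumption for this example.
from scipy import stats

# Hypothetical writing-test scores (entry and exit) for each group.
treatment_entry = [11, 9, 12, 10, 13, 8, 11, 10]
treatment_exit = [14, 12, 15, 13, 15, 11, 13, 12]
control_entry = [10, 11, 9, 12, 10, 11, 9, 10]
control_exit = [11, 12, 10, 13, 10, 12, 10, 11]

# Gain = exit score minus entry score for each student.
treatment_gain = [post - pre for pre, post in zip(treatment_entry, treatment_exit)]
control_gain = [post - pre for pre, post in zip(control_entry, control_exit)]

# Compare the two groups' gains with an independent-samples t-test.
t_stat, p_value = stats.ttest_ind(treatment_gain, control_gain)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```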
1.5 Organisation of the Thesis
This thesis is arranged in the following six chapters:
Chapter One: Introduction
The current chapter, which includes an introduction, the rationale of the study, and the general interest of the study.
Chapter Two: Literature Review
This chapter, as the name suggests, reviews the various issues related to the topic of the study. The main issues covered are: ESL writing, teaching English in the Saudi context, different approaches to teaching writing, collaborative learning and writing, and different feedback techniques in writing classes.
Chapter Three: Methodology
The design and method of this study are presented in this chapter. It provides information about the procedures of data collection, the subjects, the materials used to assess students' writing, and the statistical tests used for the analyses. The proposed research question is also presented in chapter three.
Chapter Four: Results
Chapter four deals with the quantitative data obtained from the questionnaires and the writing tests, as well as the qualitative data obtained from the interviews and the open-ended questions in the questionnaires.
Chapter Five: Discussion
This chapter covers the findings of the previous chapter and relates them to previous studies. Attention then moves to the research questions, which I address in the light of the findings.
Chapter Six: Conclusion
This chapter contains a summary of the research undertaken and its implications for teaching ESL writing. Limitations of the study, suggestions for future research and a self-reflection are also presented in chapter six.
CHAPTER TWO: LITERATURE REVIEW
Overview of Chapter Two
The aim of this chapter is to examine the theoretical concepts underlying feedback, the common practice of responding to students' writing, including different writing approaches and their effects on the process of providing feedback, as well as the effects of L2 writing on ESL students' perceptions of that feedback. The chapter is
divided into three main parts. The first part looks at the general issues related to the
topic, which are the nature of writing, ESL writing, and ESL student writers, and
teaching English in general and writing in particular in the context of the study. The
second part deals with different writing approaches and how they affect different
feedback techniques, in addition to writing assessment and evaluation. Finally, the
last part looks at the issues of collaborative learning and writing, as they also provide
a theoretical framework in which peer feedback operates. Subsequent to this
comprehensive review of the relevant literature, attention is paid to identifying work
that still needs to be done, namely the research gap (see ‘research question’ in the
following chapter).
Part One: The Nature of Writing, ESL Writing and Teaching English in Saudi Arabia
2.1.1 The Nature of Writing
Based on the natural order hypothesis, writing is generally considered to be the language skill acquired last, but it is nevertheless as important as the rest. The skill of writing is especially important in academic settings, where most ESL teaching occurs. However, many researchers and scholars note that despite writing being a very important form of expression and communication, teaching it tends to be a much-neglected part of the language programme in both first and foreign languages (Dempsey et al., 2009; Badger & White, 2000; White & Arndt, 1991; Bailey et al., 1974). Writing has also been described by many researchers as a 'complicated cognitive task', because it is an activity that demands careful thought, discipline, and concentration, and is not just a simple, direct production of what the brain knows or can do at a particular moment (Widdowson, 1983; Smith, 1989; White, 1987).
Writing thus appears to be a challenging task, and researchers such as Widdowson
(1983) believe that most of us seem to have difficulty in setting our thoughts down
on paper.
This difficulty increases if English is not the writer's first language; learning to write in English as a second or third language poses its own additional problems. Hopkins (1989) mentions that for most non-native learners, writing is considered the most difficult skill to learn. Moreover, the task of writing in a second language is particularly demanding when students are required to produce high-quality output, as is the case in academic settings (McDonough & Shaw, 2003; Hopkins, 1989; Widdowson, 1983).
From a pedagogical perspective, different teaching methods have significant effects on developing students' writing skills. For instance, Piper (1989) pointed out that instruction has an effect on how learners write, in terms of written output, writing behaviours, and attitudes to writing. Different approaches have been adopted to teach writing in ESL/EFL classes. In Saudi Arabia (the target context of the study), as in many other places in the world, the dominant approaches used across teaching organisations are, in order of popularity, the product, process, and genre approaches. These approaches show obvious local variations from the way they are implemented in the West, with more reliance on 'traditional' ways of teaching, as discussed in later sections (see Bersamina, 2009; Almusa, 2003; Al-Hazmi, 2003; Al-Awad, 2002; Asiri, 1996). Descriptions of the writing approaches, their advantages and disadvantages, and the role of feedback in relation to each will be included later in this chapter.
2.1.2 ESL Writing
It has already been established that learning to write in English as a second or foreign language can be quite different from writing as a native speaker, and on many occasions even problematic. In fact, the literature on ESL writing, as Ferris and Hedgcock (2005), Hinkel (2004), and Zhang (1995) report, draws attention to various
and significant differences between L1 and L2 teaching contexts, which can generally
be attributed to the distinctive social and pedagogical features of each, in addition to
differences in linguistic competence and literacy skills of the students. For instance,
Leki and Carson (1997) believe that ESL writers experience writing differently from
their L1 counterparts. In fact, most non-native students (NNS), according to Hinkel
(2004), experience a great deal of difficulty, and even highly advanced and trained
NNS students exhibit numerous problems and shortfalls. Hinkel (2004) believes that
teaching ESL writing to NNS college- and university-level students is usually
academically bound. If NNS students are to succeed in attaining good grades and
achieving their educational objectives, the accuracy of their L2 writing needs to approximate that of NS students at a similar academic level. To put this difference into perspective, Johns (1997) found that many NNS students, after years of ESL training, often fail to recognise and appropriately use the conventions and features of academic written prose. These students were reportedly producing vague, confusing, rhetorically unstructured, and overly personal written texts. From an academic point of view, Thompson (1999), whose study, together with that of Dudley-Evans (1999), was described by Paltridge (2002) as the only ones to examine academic writing at doctoral level, highlights the increasing number of international students who are expected to write theses in English. Thompson (ibid) therefore calls for more work to establish the characteristics of the genre they are required to write.
Similarly, Ferris (2002) conducted a study which found that L2 students are
particularly concerned about their surface-level errors rather than more global issues
such as logic, rhetoric and ideas. This particular finding is consistent with the widely-held belief that responding to L2 students' writing is of great significance in teaching writing, and is taken seriously by writing teachers and pedagogy theorists alike. In order to explain why NNS students might focus more on local issues, Hinkel (2004) mentions that their writing lacks basic sentence-level features such as the proper use of hedging, modal verbs, pronouns, active and passive voice, and balanced generalisations and exemplifications. Hinkel therefore believes that NNS students are more concerned about these errors than their NS counterparts, which in practice means they focus more on grammatical errors than on wider global issues. As a possible
negative outcome of this view of NNS students lacking overall language proficiency,
especially writing skills, many NNS students may experience frustration and
alienation, which compounds their existing problems. Bearing this in mind, Ferris (2002) describes giving grammar feedback to such students as 'indispensable,' contrary to recommendations made by Truscott (1996, 2004 & 2007), who called for a complete ban on this type of feedback. Hyland and Hyland (2001) take a similar stance to Ferris, as they argue that providing written feedback to language students is one of the ESL writing teacher's most important practices. ESL student participants in Hyland and Hyland's study were reported to overwhelmingly desire the correction of their linguistic and logical errors, and they added that it is the teacher's responsibility to provide such feedback; in other words, teachers should focus equally on both types of errors. Ferris (2002) gives a possible explanation of such attitudes, noting that L2 writers are constantly aware of their linguistic limitations, and thus are more likely to focus on word- or sentence-level accuracy instead of more global issues (see above). The very notion of L2 students' preference for form feedback is further supported by Ellis et al. (2008), Bitchener (2008), Ashwell (2000), Hedgcock and Lefkowitz (1996), and others, who report that foreign language students exhibit positive attitudes to feedback that is distinctly form-focused. The aforementioned
studies, moreover, report that most ESL students value and expect feedback
concerning their linguistic errors. Hyland (2003: 178) clearly expresses this particular
idea:
Teacher-written response continues to play a central role in most L2 writing
classes. Many teachers do not feel that they have done justice to students’ efforts
until they have written substantial comments on their papers, justifying the grade
they have given and providing a reader reaction. Similarly, many students see
their teacher’s feedback as crucial to their improvement as writers.
For instance, when responding to the strong views against giving grammar feedback,
especially those expressed by Truscott (1996, 2004 & 2007), Ferris and Hedgcock
(1998: 139) note that “In fact, given the strong preferences that L2 writers have
expressed for receiving grammar feedback, its complete absence may actually be
upsetting and demotivating.”
As for ESL writing teachers' own position, recent research (e.g. Ferris & Hedgcock, 2005; Ferris, 2002; Hyland & Hyland, 2001) also shows that teachers themselves are very much concerned with students' surface-level errors. This focus on linguistic
accuracy probably originated from L2 students’ linguistic incompetence (see above),
but other pedagogical and social influences may still play a significant role. Another
explanation for teachers’ attitudes is provided by Hyland (2003) and Zamel (1985),
the latter of whom notes that ESL writing teachers perceive themselves more as
language teachers, rather than writing teachers. Similarly, Kepner (1991) refers to
the traditional view of achievement in L2 writing as mastery of the discrete surface
skills required for the production of an accurately-written document. In short, there
is plenty of research evidence showing that ESL students crave surface-level
correction, and believe in its effectiveness (Lee, 1997; Leki, 1991; Hendrickson,
1978). Ferris and Hedgcock (1998) note that ESL students have been reported to
prefer content feedback on early drafts and form feedback on later ones, a proposition that fits well with the relatively contemporary 'process approach' to writing.
It can be concluded that previous research findings clearly demonstrate that ESL
students want, appreciate, and apply the corrections they get from their teachers
(Zamel, 1985; Hyland & Hyland, 2001; Hyland, 1998; Ferris & Hedgcock, 1998; Ferris
& Roberts, 2001; Hinkel, 2004; Cohen, 1987; Leki, 1991). In short, ESL teachers feel
obliged to correct writing errors, and students want them to do so. Moreover, as L1 student writers usually have significantly fewer limitations in their linguistic competence, NS writers can focus on more theoretical, notional and abstract ideas. This is, by contrast, not the case with NNS learners, who are still struggling with their lower language proficiency; concerns regarding linguistic errors therefore still occupy a prominent status compared to their NS counterparts (Hyland & Hyland, 2001; Reid, 2000; Ferris & Hedgcock, 1998; Leki & Carson, 1997; Kepner, 1991; Radecki & Swales, 1988).
2.1.3 General Review of the Teaching Context in Saudi Arabia
This section examines broader aspects of the Saudi educational context and their impact on the ESL classroom. A more focused section addressing learners' problems in KSA, in addition to a more specific description of the teaching of English and English writing in Saudi Arabia (especially in the Department of European Languages (KAAU), where the empirical study took place), is included in the methodology chapter. This section investigates the cultural, social, pedagogical, and other aspects of Saudi society and its educational system that shape English teaching in Saudi Arabia.
It is essential to study the various components of the educational context in order to understand it properly, bearing in mind that the learning environment does not exist in a vacuum, and that surrounding environmental, social, and cultural influences have an effect. Not adequately considering all of these dimensions might negatively affect perceptions of the situation and undermine the tenability of plans and strategies devised for it. In order to understand the problems of Saudi learners, it is reasonable first to understand the wider Saudi educational context as a whole. After all, many researchers argue that it is important to understand the whole in order to understand a part, by seeing other pieces of evidence that might affect that specific part (e.g. Holloway & Jefferson, 2000). In this section, an introduction to the Saudi educational context is offered, drawing both on the accounts of Western researchers and expatriate teachers and on the work of local researchers, or researchers from Saudi Arabia conducting studies overseas, despite the fact that the available resources, including similar studies in the Saudi context and publications by the Ministry of Education, are indeed very scarce.
To elaborate upon the importance of context, Bruthiaux (2002) and Holliday (1994) both agree that simply 'knowing' about a particular culture is not enough to understand an educational context. Educators and researchers need to perceive and comprehend the culture of the classroom itself as a unit, as well as the wider surrounding context. Holliday (1994: 161) states:
…it is not possible to generalise about the precise nature of a particular
classroom culture, or the other cultures which influence it, or the form
which this influence takes. This means that the process of learning about
these things is not a matter just for theorists and university researchers—
not something that teachers can get from the literature. It is something that
has to be worked through in the situation in which teaching and learning
have to take place.
Bearing in mind the previous argument, some Western researchers, scholars, and expatriate teachers (including McKay, 1992; Gray, 2000; Whitfield & Pollard, 1998) took a deeply critical stance regarding the educational context in Saudi Arabia, describing it as a rigid, deeply religious one, where tradition plays a very dominant role in every aspect of life, including education and educational policies. According to them, the interference of religion is manifested in the 'segregation' between male and female students, as well as in the process of selecting suitable classroom materials, which are, in their view, based not so much on students' needs as on their conformity to strict religious mores. For example, McKay (1992) mentions that topics containing themes of relationships other than family and friendship are quickly deleted from textbooks for the sake of not alienating the students. She goes further and claims that any reference to music will soon be removed from textbooks in accordance with the rulings of the dominant religious sect in Saudi Arabia. Moreover, Gray (2000) claims that Saudi Arabia has gone to the 'extreme' of producing English educational materials with almost no reference to English-speaking cultures. Another concern here is the fact that pre-communicative-era practices, comprising content-focused, teacher-dependent learning styles, are still dominant in public schools (Whitfield & Pollard, 1998). This view, although shared by some other researchers, depicts a negative picture of a closed society implementing very strict rules, but it is the view of outsiders looking in, and it therefore does not take account of the voice of Saudis themselves. These criticisms are usually based on the short ethnographic experiences of these expatriate researchers, and are usually accompanied by predetermined stereotypical concepts, possibly derived from reading accounts written by the same sources (i.e. other expatriates).
The complex sociological construct of Saudi society makes policy decisions taken by the government not only acceptable to the majority of Saudi people, but also recommended, as reported in Al-Eid (2000). If we consider the date on which McKay (1992) published her observations, it becomes almost evident that little change has been achieved since.
McKay (1992) claims that one negative trait of Saudi students is their heavy reliance on personal relationships. Although such a trait seems to lie outside the classroom context, being a wholly social dimension of Saudi culture, it actually has an influence on students' educational progress. She mentions that an expatriate teacher in Saudi Arabia named Joy claimed that the amount of homework she could assign her students was severely affected by the fact that students devote a good deal of time to visiting friends and relatives, leaving less time for homework; a criticism that can only be valid for the type of Saudi students Joy dealt with. It is, however, difficult to draw valid conclusions from these few accounts, but they can serve as indicators of the teaching problems there.
2.1.4 Learners’ Problems in the Saudi Context
The reported problems of ESL/EFL in the Saudi context are divided into three main
categories: 1) socio-cultural problems; 2) linguistic and pedagogical problems; and 3)
legislative and administrative policy problems. Again, it must be stressed that getting
enough information about this particular context was a challenging task; many of the
references cited were unpublished theses, which were collected from two British
universities visited during this research. The table below is taken directly from Cambridge ESOL Research Notes and shows the scale of the problem.
Table (1.1) IELTS Test Performance 2008 (from Cambridge ESOL: Research Notes, Issue 36 / May 2009)
Socio-cultural problems
These include a tendency towards teacher-centred approaches (although this particular problem can overlap with other problems of an administrative nature),
overreliance on teachers as the main and sometimes the only reliable source of
knowledge, and students’ heavy reliance on personal contacts and mitigating
circumstances to justify their low performance, even in strict professional and
educational settings, a problem that McKay (1992) explicitly cited in her account of
the Saudi context, as mentioned previously. Moreover, a very conspicuous problem
is insufficient opportunities for average Saudi learners to use English in an authentic
situation. Syed (2003) noted that local learners see no concrete links between
English language ability and their communicative needs, and teachers doubt if their
students use English beyond the classroom in any meaningful communication.
Failure to perceive communicative aspects of English leads to other problems,
including students’ lack of motivation, a problem that has been described as serious
by Al-Eid (2000) and Al-Malki (1996), and subsequently failure in basic
communicative skills, as Syed (2003) concluded. The last two problems may also
interfere with the following category of problems. However, as far as ESL writing is
concerned, the available data shows that there are serious problems with Saudi
students' writing. The IELTS test performance figures for 2008, for instance, show that Saudi students scored their lowest mean band in writing (4.83 out of a possible 9) compared to the other language skills (5.17, 4.97 and 5.81 in listening, reading and speaking respectively).
Linguistic/pedagogical problems
These problems interrelate with the other two categories, and include factors such as students' underachievement in the classroom and low English proficiency levels, particularly in L2 writing. Other studies report that the educational system follows a largely top-down approach, with audio-lingual methods and memorisation regarded as common practices in the classroom. As far as L2 writing is concerned, Saudi students' poor ESL writing has been widely reported in studies including Bersamina (2009) and Al-Eid (2000). This point is also evident in table (2.2) in the methodology chapter, which shows a tendency to score lower in writing than in other skills, and consequently a lower overall score. This finding is in line with the results of the IELTS Test Performance 2008 shown in table (1.1) above, which shows a lower mean writing score for Saudi students than for the other skills. Other problems include reliance on rote learning and memorisation, and outdated curricula and methodologies (Bersamina, 2009; Syed, 2003; Khuwaileh & Shoumali, 2000; Al-Eid, 2000).
Legislative and administrative policy problems
These can include insufficient support systems, a lack of qualified English teachers, and the absence of proper teacher training programmes, as mentioned in Bersamina (2009), Al-Hazmi (2003) and Al-Awad (2002). It has already been discussed that dependence on high-stakes testing and the predominance of traditional teaching approaches are not uncommon in this context, all of which can be attributed to current educational policies. A lack of sufficient qualified teachers is still a serious problem, despite the government's efforts to recruit more expatriate teachers. For example, according to Al-Hazmi (2003), more than 1,300 non-Saudi teachers were recruited in 2001 alone; Bersamina (2009) adds that the majority of them come from neighbouring countries such as Egypt, Jordan and Sudan. However, there are socio-cultural and pedagogical issues involved with expatriate teachers, because even if they use a 'contextually-situated pedagogy', their limited knowledge of the local socio-cultural communities and languages could create a linguistic and cultural barrier between them and their learners. Another problem
associated with contracted expatriate teachers is that they are less motivated to
actively engage with existing systems, and they have little impetus to innovate or
initiate change (Syed, 2003; Al-Hazmi, 2003; Al-Awad, 2002; Shaw, 1997).
Part Two: Writing Approaches, Feedback in Writing and Writing Assessment
2.2.1 Writing Approaches
Before the discussion moves on to the different ESL writing approaches, two important points need to be clarified. Firstly, the main reason for including this section is to investigate the relationship between different writing approaches and different feedback techniques, especially the process and post-process approaches, as explained below. Secondly, the three main approaches discussed below are interrelated and, in many cases, a clear-cut definition of each is very hard to establish. This section, however, briefly reviews the most popular writing approaches, as presented in the relevant literature. They will be discussed in turn, in the general chronological order of their appearance. Although some of the following approaches might have been in the ELT field for a relatively long time, it is still difficult to brand them as 'old-fashioned' or 'obsolete,' for the simple reason that they still play a significant role in many current ELT writing curricula worldwide, although different writing approaches have gained various levels of prominence at different times. For instance, Badger and White (2000) and Tribble (1996) mention that the product and process approaches have dominated much EFL writing teaching, while the genre approach has gained prominence in the last ten years. Another important point to consider is that each of these approaches has its strengths and weaknesses, but together they complement each other (Badger & White, 2003; McDonough & Shaw, 2003; White & Arndt, 1991).
The Product Approach
Many researchers, including Yan (2005), Nunan (1999), and Richards (1990), believe
that this approach is perhaps the most traditional among the widely-used L2 writing
approaches. From a historical perspective, Ferris and Hedgcock (2004), Silva (1990), Raimes (1983), and Flower and Hayes (1980) trace this approach back to the audio-lingual method of second language teaching that appeared in the 1950s and early 1960s, in which writing was used essentially to reinforce oral patterns and to check learners' correct application of grammatical rules. Product approaches focus on the final product of the student writers; Richards (1990) mentions that because this approach essentially focuses on the ability to produce correct texts, or "products", it is hence called the "product approach."
Graph (1.1) A Typical Example of a Product Approach Exercise: “The way to Donald’s house” (Byrne, 1979: 25)
The product approach aims to make learners imitate a model text for the purpose of producing a correct piece of writing through dependence on the (typical) text given, as graph (1.1) above demonstrates (McDonough & Shaw, 2003; Badger & White, 2003).
This approach, according to Pincas (1982) and Badger and White (2000), focuses on
teaching students linguistic knowledge, by which they mean grammatical accuracy,
vocabulary, punctuation, and spelling. For example, students might be asked to
transform a text which is in the past simple into the present simple, or to change the
plural subjects in the model text into singular ones. However, to be more specific,
the main features of this approach can be summarised as follows:
1. Learners have specific writing needs.
2. The goal of a product approach programme is to focus on patterns and
forms of the written text found in educational, institutional, and/or
personal contexts.
3. The rhetorical patterns and grammatical rules are presented in model
compositions that students can follow.
4. Grammatical skills and correct sentence structures are very important.
5. Error treatment can be achieved with the help of writing models.
6. The mechanics of writing such as handwriting, vocabulary use,
capitalization, and spelling are also taught.
7. The role of the teacher can be seen as a proof-reader or an editor.
McDonough and Shaw (2003) also mention that the role of the teacher is
to judge the finished work.
(Accumulated from: Yan, 2005; McDonough & Shaw, 2003; Badger & White, 2003; Nunan, 1999;
Richards, 1990; Silva, 1990; Hedge, 1988; and Flower & Hayes, 1980).
The product approach is seen to offer many advantages, such as improving learners' grammatical accuracy, especially with lower-level students, and enhancing learners' stock of vocabulary (Zamel, 1983; Raimes, 1991; McDonough & Shaw, 2003).
Nevertheless, this approach has also been criticised for several reasons. For example,
it does not allow much of a role for the planning of a text, nor for other process skills
(Badger & White, 2000). Moreover, students might become frustrated and de-
motivated when they compare their writing with better models. It has also been
claimed that using the same form regardless of content will have the effect of
“stultifying and inhibiting writers rather than empowering them or liberating them”
(Escholz, 1980: 24). Hairston (1982) also argues that adopting this approach in
teaching will not encourage students to practise writing, because it does not show
them how writing works in real-life situations. He contends that teaching students
the best way to write requires initiating them into a real way (i.e. an authentic
situation where there is a real need for writing texts) to produce correct writing,
which requires more than providing them with a set of rules. With this approach,
feedback either from the teacher or from peers is not possible except on the final
product, i.e. after students have completely finished their written tasks. Finally, Yan (2005) agrees that the product approach ignores the actual process used by students, or any writers, to produce a piece of writing. The approach therefore requires constant
error correction, and this practice in turn affects students’ motivation and self-
esteem in the long run.
The Process Approach
This approach has generally been regarded as a reaction against product-based
approaches, where the focus has shifted from the final product to the underlying
processes of writing that enable writers to produce written texts. This approach sees
writing primarily as the exercise of linguistic skills and writing development as an
unconscious process that occurs when teachers facilitate the exercise of writing skills
(Badger & White, 2003; Gee, 1997; Uzawa, 1996; Zhang, 1995; and Keh, 1990).
The links between peer feedback and the process approach are obvious. Berg (1999), Zhang (1995) and Keh (1990), for instance, believe that peer response is actually part of the process approach to teaching writing, and that feedback in its various forms is a
fundamental element of this approach. Many tasks involved in peer review sessions
are in fact applications of the process approach.
From a historical perspective, this approach can be traced back to the late 1970s, and specifically to Zamel (1976), following the work of the cognitive psychologists who proposed a model of the composing processes involved in writing with three central elements: planning, translating, and reviewing. This approach represents a shift from the mere analysis of written texts to studies that address writing processes. It is interesting to note that the process approach has made a huge impact on writing pedagogy, and since 1980 syllabi and textbooks in many parts of the world have incorporated this approach as an integral part of teaching (Ivanic, 2004; Gee, 1997; Uzawa, 1996; White & Arndt, 1991; Flower & Hayes, 1980).
According to Liu and Hansen (2002) and Zamel (1983), this approach focuses on the
composing process, which views writing not as a product-oriented activity, focusing
only on the final product, but rather as a nonlinear, exploratory, and generative
process, whereby writers discover and reformulate their ideas as they attempt to
approximate meaning. This approach gives learners the opportunity to practise activities
usually referred to as linguistic skills such as pre-writing, brainstorming, drafting, and
editing, with less focus on linguistic knowledge aspects such as grammar (Badger &
White, 2003; Tribble, 1996; White & Arndt, 1991; Hedge, 1988; Raimes, 1985; Zamel,
1983).
The process approach also gives students the opportunity to understand the
importance of the various skills involved in writing, and recognises that what
learners bring to the writing classroom contributes to the development of writing
ability, as Badger and White (2000) assert. According to White and Arndt (1991) and McDonough and Shaw (2003), the process writing approach comprises several main stages, which are cyclical and interrelated. White and Arndt roughly divide them into pre-writing and actual writing activities, whereas McDonough and Shaw divide them into pre-writing, drafting and redrafting, editing, and a pre-final version. The shortened list of the main processes as envisaged by McDonough and Shaw (2003), Tribble (1996), and White and Arndt (1991) is as follows:
Flow Chart (1.1) The Shortened List of Writing Processes. From Tribble (1996: 39)
The full list, however, usually includes the following six processes: 1) generating ideas, which is the starting point and possibly the most difficult and inhibiting step; 2) focusing, which means realising the focal idea and viewpoint of the writing, which should be closely connected to the writer's purpose in writing; 3) structuring, which means arranging factual and linguistic information; 4) drafting, where attention moves towards the reader, and the writer starts to think of how best to organise information and ideas for them, how to attract their attention through the opening, and how to end with a sense of completion; 5) evaluating, which requires developing criteria for evaluation by looking for grammatical and rhetorical mistakes; and finally, 6) re-viewing, which comes as the last stage in process writing, when writers see their text gradually evolving into a form which is more or less final.
This approach, according to Ivanic (2004) and Flower and Hayes (1980), has been praised by teachers and policy makers alike because it contains certain sets of elements which can be taught explicitly, and because it has an inherent sequence. However, as with the product approach, the process approach has been subject to criticism. Badger and White (2000) believe that it does not give students sufficient input, particularly in terms of linguistic knowledge, to enable them to write successfully. Horowitz (1986) also believes that using process writing in the classroom will leave students unprepared for writing exams. He also argues that it will give them a false perception of how their writing will be evaluated at university level. Ivanic (2004) moreover mentions that aspects of writing and writing processes might not be easy to assess, meaning that assessment will usually be reserved for the final product. More importantly, the process approach does not differentiate between text type, context, and purpose for writing.
With regard to feedback techniques, it is important to highlight the relationship
between process writing and feedback in general, and peer feedback in particular, as
this approach enables and even encourages students to work collaboratively in
groups (Hyland & Hyland, 2006; Badger & White, 2000). Liu and Hansen (2002)
similarly recognise the relationship between feedback and process writing, and they
assume that the former supports the latter, especially during the drafting and
revision stages, and hence process writing enables students to get multiple feedback
opportunities (e.g. from teacher, peers and self) across various drafts. This should certainly help to improve students' subsequent drafts. Cohen (1990) further explains
that the writing process in this approach usually passes through several rounds of
peer editing and self-assessment before it reaches the teacher for assessment,
making this approach a favourable one when training students to use peer feedback.
The Genre Approach
People who share the same profession tend to employ a special language which is used more or less exclusively by them; the genre approach builds on this notion. Hyland (2007) mentions that this approach is an outcome of the communicative language teaching approach which emerged in the 1970s. It has also been described by Badger and White (2000) as a newcomer to ELT, one that focuses mainly on teaching this type of language.
The main focus of this approach, according to Muncie (2002), is on the reader and on
the conventions a piece of writing needs to follow in order to be successfully
accepted by its readership. Ivanic (2004) and Badger and White (2000) believe that this approach again focuses on writing as a product, and in some ways is an extension of the product approach, but with attention paid to how this product is shaped according to different events and different kinds of writing. This approach
therefore includes the social aspects of the writing event, and makes broad
distinctions between narrative, descriptive, expository, and argumentative writing.
In the field of ELT, Dudley-Evans (1994) notes the similarities between the product and genre approaches, and outlines the three main stages of the genre approach: firstly,
teachers present students with a model of a particular genre; secondly, students
then perform tasks to generate structures expressing that genre; and finally, drawing
on the previous stages, they produce a short piece of writing. Hyland (2007)
summarises the main features of the genre approach as follows:
Explicit: Makes clear what is to be learnt to facilitate the acquisition of writing skills
Systematic: Provides a coherent framework for focusing on both language and contexts
Needs-based: Ensures that course objectives and content are derived from students' needs
Supportive: Gives teachers a central role in scaffolding students' learning and creativity
Empowering: Provides access to the patterns and possibilities of variation in valued texts
Critical: Provides the resources for students to understand and challenge valued discourses
Consciousness-raising: Increases teachers' awareness of texts to confidently advise students on writing
Table (1.2) Main Features of the Genre Approach (after Hyland, 2007)
Many advantages have been associated with the genre approach. Johns (2003: 198)
for instance believes that individuals who are familiar with common genres create
shortcuts to the successful processing and production of written texts. He gives the
example of a person who writes a letter to an editor, or a memo, or a political brief
within a certain culture, and who will be able to use this prior knowledge to produce:
… a second socially-accepted text from the same genre. Thus, teaching within a
framework that draws explicit attention to genres provides students a concrete
opportunity to acquire knowledge that they can use in undertaking writing tasks
beyond the course in which such teaching occurs.
Furthermore, applying this approach acknowledges that writing takes place in a social situation, shows students how real writers organise their texts, promotes flexible thinking, and, in the long run, encourages informed creativity, since students need to learn the rules before they can transcend them (Badger & White, 2000; Al-Eid, 2000; Kay & Dudley-Evans, 1998). It is also possible, by employing this approach, to engage in peer feedback activities before giving the teacher the final draft. On the other hand, experts are also aware of possible drawbacks. Badger and White (2000) believe that it may lead teachers to undervalue the skills needed to produce a text, and to see students largely as passive learners. Kay and Dudley-Evans (1998: 311) further criticise this approach as "restrictive, especially in the hands of unimaginative teachers, and this is likely to lead to lack of creativity and demotivation in the learners. It could become boring and stereotyped if overdone or done incorrectly." Like the process approach, the genre approach recognises feedback as a key element in writing classes where, according to Hyland and Hyland (2006), teachers can build on learners' confidence and literacy resources to participate in the target communities.
From the previous discussion of the literature, it can be concluded that no one
approach to teaching writing is superior to the others. Therefore, it is better for
writing teachers to consider a variety or a mix of approaches, their underlying
assumptions, and the practice that each philosophy generates, as Badger and White
(2000) and Raimes (1991) recommend. Asiri (1997) similarly suggests that an
integration of different approaches, taking into account the different types of
students, their processes and purposes of writing, their needs, their readers, their
writing contexts and the whole academic and social settings of the writing activity,
could give the most satisfactory results.
2.2.2 Feedback in Writing
An Overview of Feedback in Writing
This section begins with a brief discussion about feedback in general, which
progressively develops into a more detailed argument. According to Kepner (1991:
141), the term “feedback” in its broad context (as generally used in the ESL
literature) could be defined as “any procedure used to inform a learner whether an
instructional response is right or wrong." However, this abstract definition might not be suitable for this study, because writing, as seen by Asiri (1997: 5), is a creative activity, and therefore it is not enough to confine feedback merely to informing the writer that his or her responses are right or wrong. Thus, for the purpose of this research, Freedman's (1987: 5) comprehensive definition will be adopted, which includes different aspects of feedback (i.e. teacher feedback, conferencing, and peer feedback), with the exception of self-correction, which is not within the scope of this study.
She states that feedback on students’ writing “includes all reactions to
writing, formal or informal, written or oral, from teacher or peer, to a draft or a final
version. It can also occur in reaction to talking about intended pieces of writing, the
talk being considered a writing act. It can be explicit or less explicit.” This study
examines the efficacy of two commonly-used techniques of feedback in teaching
writing: teacher feedback and peer feedback, bearing in mind that peer feedback is
still considered a novel concept in the Saudi educational context, as explained below.
The Significance of Feedback
The importance of feedback has been acknowledged by many researchers and
experts, who recognise its important role in increasing learners’ achievements, and
its central role in writing development. Many studies such as Ferris (2002), Hyland
and Hyland (2001) and Ashwell (2000) suggest that feedback is beneficial for both
beginners and expert writers, because it makes them evaluate their writing and
notice possible points of weakness. These studies contend that feedback helps students by creating the motive for doing something different in the next draft; thoughtful comments create the motive for revising. Without comments from their teachers or their peers, student writers would revise in a piecemeal way, and without comments from readers, students assume that their writing has communicated the intended meaning, and hence see no need to revise the substance of their text. Feedback also makes students realise the level of their performance, and shows them how to improve it to a satisfactory level. Furthermore, not providing students with feedback may cause confusion, leaving them unaware of the aspects of their writing that need to be reconsidered, and thus causing their efforts to be misdirected, as mentioned in the previous section on the nature of ESL writing (Miao et al., 2006; Hyland, 2003; Ferris, 2002; Hyland & Hyland, 2001; Ashwell, 2000; Hedge, 1988; Zellermayer, 1989; Robb et al., 1986; Freedman, 1987; Cardelle & Corno, 1981). Feedback is helpful not only for the students who receive it; the literature also suggests that feedback is important for teachers as well, because it gives them the opportunity to diagnose and assess the problematic issues
in learners’ writing, and allows them to create a supportive teaching environment
(Hyland & Hyland, 2001; Miao et al., 2006). However, as Gibbs and Simpson (2002)
mention, feedback needs to meet certain criteria, such as the need to be specific and
to focus on learning and process, rather than on students themselves, in order to be
effective.
Teacher-Written Feedback
This type of feedback is probably the most traditional and commonly-used technique of responding to students' writing in every teaching context, where writing teachers are usually the sole providers of comments to their students. Despite the emphasis on
alternative feedback techniques including oral responses and peer feedback, Hyland
and Hyland (2006) believe that teacher-written feedback still plays a central role in
L2 writing classes. Research on teacher-written feedback falls into two main categories: the first looks into teachers' actual performance and self-assessment,
while the other looks at the topic from the students’ perspective (Montgomery &
Baker, 2007; Hyland & Hyland, 2007; Ferris & Hedgcock, 2005; Chandler, 2003;
Ferris, 1995 & 2002). As far as the first category is concerned, teachers’ feedback can
take the form of praise (positive comments), criticism (negative comments), or
suggestions (constructive criticism) (Hyland & Hyland, 2001). Different techniques
can be employed to deliver these, such as providing a written commentary, which is
generally considered to be the most widely-used form among teachers. Ferris and
Hedgcock (2005) believe that comments normally take the form of marginal or
terminal comments. However, according to Hyland (1990 & 2003), teachers
sometimes provide their students with an audio recorded commentary. Some even
prefer to provide feedback via compact discs or e-mails, which is described by
Hyland (2003) as electronic commentary. Regardless of the form teacher feedback takes, these techniques usually fall into two general categories:
1. Direct feedback (explicit/overt) – using this format, teachers tend to give precise corrections or structural notes on students' mistakes.
2. Indirect feedback (covert) – in which teachers give students indications that they
have made mistakes.
There are also many techniques that can be used to indicate errors, such as:
a) Marginal error feedback: in which the margin is used to indicate the number of mistakes
in each line.
b) Coded error feedback: in which a coding system is adopted to indicate the mistake such
as abbreviations or symbols.
c) Uncoded error feedback: whereby the mistakes are underlined or circled without
mentioning the type of mistake made.
(Accumulated from: Ferris, 2002; Lee, 1997; Enginarlar, 1993; Robb et al., 1986).
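To make the coded option concrete, here is a purely hypothetical sketch; the codes and the small helper below are invented for illustration and are not drawn from any of the studies cited:

```python
# Hypothetical coded error feedback scheme; the codes are invented for illustration.
ERROR_CODES = {
    "SP": "spelling",
    "WW": "wrong word",
    "VT": "verb tense",
    "WO": "word order",
    "ART": "article",
}

def annotate(phrase: str, code: str) -> str:
    # Attach a coded mark to a student's phrase, as a teacher might when using
    # coded (indirect) feedback: the error type is signalled, but the correction
    # itself is left to the student.
    return f"{phrase} [{code}: {ERROR_CODES[code]}]"

# Example: flagging a tense error without correcting it.
print(annotate("Yesterday I go to the library", "VT"))
# -> Yesterday I go to the library [VT: verb tense]
```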
The following table shows the directness of various types of teacher feedback, where
the first item (correction) represents direct feedback, and the subsequent items
represent variations of indirect feedback:
Table (1.3) Feedback Methods. From Robb et al., (1986: 87).
Another aspect of teacher-written feedback that has also been thoroughly
investigated is the distinction between comments on local issues, also known as
form feedback, and global issues (content feedback).
As for the other category of research, students' perceptions of teacher-written feedback, research shows that students, like their teachers, feel that this feedback is an important part of the writing process. This is especially true of ESL students, who, despite the reported undesired effects of teacher-written feedback, think that it could improve not only their writing but their L2 grammar as well (Montgomery & Baker, 2007; Ferris, 2002 & 1995; Hyland, 1998; Hedgcock & Lefkowitz, 1994). One interesting finding of studies such as Ferris (1995) and Ware and O'Dowd (2008) is that ESL students want their teachers to focus more on local issues than on global ones, a fact that should be carefully considered when responding to these students' writing, which, as Ware and O'Dowd put it, can be achieved by striking a balance between fluency and linguistic accuracy. However, the question of whether L2 teachers should focus on local issues is the subject of heated debate, which must be set aside for now (cf. Truscott, 1996, 2004 & 2007; Ferris, 2004; Goldstein, 2004).
Teachers' comments on linguistic errors in writing have been the subject of severe criticism by Truscott (1996, 2004 & 2007), who suggests that grammar correction is not only useless, unsystematic, and arbitrary, but can also harm students' subsequent writing and compromise their overall achievement. He suggests that acquiring grammatical patterns is a very complex process in which teachers should never intervene; any attempts are, according to him, a waste of teachers' and students' valuable time and effort. Many subsequent studies have tried to refute Truscott's conclusion and defended the use of grammar feedback in ESL writing classes. For instance, Ferris (1999: 2) mentions that his ideas are "premature and overly strong." She, along with other researchers including Lee (1997), Ashwell (2000), and Chandler (2003), believes that students cannot be left without any guidance; errors that go unnoticed can become fossilised, and, given that students expect correction from their teachers, they also believe that it is the teachers' responsibility to provide such feedback. Other criticisms mentioned in Ferris (2006) and Reid (1993) include feedback not being text-specific, being incorrect, not addressing the issues it intends to address, and a mismatch between the feedback students want or expect and what is actually given.
Peer Feedback
Peer feedback, which is also known in the literature as ‘peer review’ (Mangelsdorf,
1992), ‘peer editing’ (Daniels & Zemelman, 1985; and Keh, 1990), ‘peer evaluation’
(Keh, 1990; and Chaudron, 1984), ‘peer critique’ (Keh, 1990; and Hvitfeldt, 1986),
‘peer commentary’ (Connor & Asenavage, 1994) and ‘peer response’ (Urzua, 1987;
Keh, 1990; Di Pardo & Freedman, 1992; Nelson & Murphy, 1993; Liu & Hansen, 2002;
Ferris & Hedgcock, 2005), can be defined as the:
use of learners as sources of information and interactants for each other in such
a way that learners assume roles and responsibilities normally taken on by a
formally trained teacher, tutor, or editor in commenting on and critiquing each
other’s drafts in both written and oral formats in the process of writing. (Liu &
Hansen, 2002: 1)
According to other experts, such as Pol et al. (2008), Rollinson (2005) and Topping (1998, 2000), peer feedback can also be defined as an educational arrangement in which students comment on their fellow students' work for formative or summative purposes. Storch (2004) reported that peer feedback rests on a strong theoretical and pedagogical basis: in terms of the former, it follows the social constructivist view of learning, and as far as pedagogy is concerned, it reinstates the concept of the communicative approach to language learning. Storch also believes that, despite these strong bases, the use of peer feedback in the classroom is quite limited. Not only is the use of peer feedback limited in classroom settings; peer feedback research is also especially limited in ESL/EFL settings. However, as Saito and Fujita (2004) suggest, a large body of research into peer assessment has been conducted in various areas of psychology and mainstream education. The findings suggest that peer response is indeed consistent, and can be used as a reliable assessment tool in schools.
Peer feedback takes many forms and serves many purposes. It has already been
mentioned that it can be employed in the form of conferencing, in the form of
written as well as oral comments, or both simultaneously. This ‘flexibility’ is another
useful aspect of peer feedback (Mooko, 1996; Hyland, 2003; Rollinson, 2005). Peer
feedback can also take many formats, some of the most common being: 1) assigning groups of two, three, or four students and asking them to exchange their first drafts and comment on each other's drafts before producing final versions; 2) having students read their own essays aloud, or getting a colleague to read them instead, while the other students listen and provide feedback, either written or oral, on the work they have just heard; and 3) not restricting feedback to the time after students have written their essays, since it is possible for students to use this type of feedback at the pre-writing stage, by asking other students to comment on each other's outlines or to carry out a brainstorming session (Hyland, 2003).
Advantages and Disadvantages of Peer Feedback
Many studies have recommended the use of peer feedback in ESL writing classes for its valuable social, cognitive, affective and metalinguistic benefits (Lundstrom and Baker, 2009; Pol et al., 2008; Min, 2008; Rollinson, 2005; Storch, 2004; Saito & Fujita, 2004; Hinkel, 2004; Ferris, 2003; Yarrow & Topping, 2001; Hyland, 2000; Reid, 2000; Ferris & Hedgcock, 1998; Zhang, 1995; Mendonça & Johnson, 1994; Jacobs, 1989; and Chaudron, 1984). Yarrow and Topping (2001), for instance, mention that peer
interaction is of great value, and the method is recognized by many educational
organizations, as evidenced by recommendations by the Scottish Office Education
Department. Hyland (2000) also adds that peer feedback encourages more student participation in the classroom, giving students more control and making them less passively teacher-dependent. Ferris and Hedgcock (2005), Saito and Fujita (2004), Storch (2004) and Ferris (2003) add that peer feedback helps learners become more self-aware, in the sense that they notice the gap between how they and others perceive their writing. It thus facilitates the development of analytical and critical reading and writing skills, enhances self-reflection and self-expression, promotes a sense of co-ownership, encourages students to contribute to decision-making, and, finally, fosters reflective thinking. As for the collaborative component
of peer feedback, Yarrow and Topping (2001: 262) confirm that peer feedback plays
a significant role in “increased engagement and time spent on-task, immediacy and
individualisation of help, goal specification, explaining, prevention of information
processing overload, prompting, modelling and reinforcement.” The literature also
suggests that peer feedback is more authentic and honest than a teacher's response, that it gives students the opportunity to realize that other students experience similar difficulties to their own, and that it can lead to less writing apprehension and more confidence. Peer feedback can also help develop learners' editing skills and establish a social context for writing. More importantly, peer feedback internalizes the notion of 'audience' in the minds of student writers, because it provides students with a more realistic and tangible audience than their teacher, which in turn assists them in producing 'reader-oriented' texts (Lundstrom and Baker, 2009; Hinkel, 2004; Storch, 2004; Hyland, 2000; Reid, 2000; Ferris & Hedgcock, 1998; and Chaudron, 1984). Lundstrom and Baker (2009), in a recent study, also revealed that peer feedback can be as beneficial to the students who provide it as to those who receive it, if not more so.
On the other hand, Ferris and Min (2008), Hedgcock (2005), Rollinson (2005), Hinkel
(2004), Saito and Fujita (2004), and Hyland (2002) also believe that ESL students will
always question the purposes and advantages of this technique which is particularly
true with students who are accustomed to teacher-fronted classroom. The main
criticism is that they instinctively feel that a better writer such as their teacher is the
one who is qualified to provide them with useful comments, so there is arguably the
preference issue, which can act as a barrier to the success of peer sessions. In fact,
some students might view receiving comments from colleagues whose English is at
the same or even at a lower level than theirs as not being a valid alternative to the
‘real deal’, and hence they might resist group-centred peer review activities. Hyland
(2000) mentions that this is not necessarily a bad thing, as students can make ‘active
decisions,’ by which she means they can choose which comments to accept and
which ones to reject; another way of giving students more control in the classroom.
Other studies such as Min (2008) claim that peer feedback makes only a marginal
difference in students’ writing, but other types of feedback have been accused of
exactly the same outcome, including teachers’ comments, yet teachers, as well as
students, feel that feedback is an integral part of any ESL writing class. Hinkel (2004),
citing a study by Carson and Nelson (1994), also mentions that some students found
it difficult to provide honest feedback because they prioritized positive group
relations rather than improving their writing. Another issue with peer feedback was
mentioned in Hyland (2002), who says that both NS and NNS students perceived
revision as error correction, and hence were culturally uncomfortable because they
felt that error correction criticizes people. Hyland (2000) mentions that there are
other cross-cultural issues involved in peer feedback, especially if students are from
a large variety of cultural and educational backgrounds. These issues include conflict
or at least high levels of discomfort among members of the peer feedback group.
She then recommended that more longitudinal and naturalistic research be carried out
in order to better understand these issues and find solutions. In some cases it was
found that incorporating peer feedback could weaken students’ writing. However,
despite all these criticisms, feedback in general is still highly appreciated, especially
by NNS students (see NS vs. NNS section). Storch (2004) also found that most peer
responses focused on product rather than the processes of writing, and many
students in L2 contexts focused on sentence-level errors (local errors) rather than on
the content and ideas (global errors), a finding earlier noted by ESL teachers
themselves, as Jacobs (1989) reports. Jacobs in fact mentions that students
themselves might experience difficulties in peer sessions resulting from their limited
knowledge of ESL writing. Saito and Fujita (2004) additionally report that a number
of studies indicate that there are a number of biases associated with peer feedback
including friendship, reference (teachers using different criteria from students),
purpose (development vs. grading), feedback (effects of negative feedback on future
performance), and collusive (lack of differentiation) bias. However, the researchers
admit that these biases can be found in most rating techniques, including teacher
and peer feedback, and the focus should be on how to minimize them.
Other Types of Feedback: Conferencing, Self-Correction and Keeping Logs
In addition to teacher’s written feedback and peer feedback, Bitchener et al. (2005),
Ferris and Hedgcock (2004), Hyland (2000, 2003), Ferris (2002), Keh (1990), and
Zamel (1985) also add teacher-student face-to-face conferencing, self-correction,
and keeping error logs as other valid techniques of feedback. In conferencing the
teacher and the students negotiate the meaning of a text through a dialogue. Like
the two previous techniques, conferencing has its advantages and disadvantages, all
of which have been thoroughly investigated by these researchers and many others.
The other two types are self-explanatory. However, these techniques will not be
investigated thoroughly here because, first of all, they are not among the techniques
that will be used in the empirical study, and, secondly, the available research into
these types is insufficient.
2.2.3 Introducing Peer Feedback to ESL Students
Although many researchers stress the significance of peer feedback in ESL writing
classes (e.g. Habeshaw et al., 1986; Ferris, 1997; Berg, 1999; Hyland, 2000; Ulicsak,
2004; Rollinson, 2005; Ferris & Hedgcock, 2005), many ESL tutors still find
themselves reluctant to introduce peer feedback in their ESL writing classes. Such
reluctance, according to Saito and Fujita (2004), might be based on fears that the
results could be unreliable, students could be resentful, and the experience may be
chaotic. It is important to differentiate between the concepts of ‘feedback’ and
‘assessment’: the former refers to any procedure used to inform learners whether
their instructional response is right or wrong, with the purpose of improving learners’
skills, and is hence part of the learning process (see section 2.2.2), while the latter
usually happens after teaching and learning are over and is concerned with
awarding marks. Another distinction is between formative and summative assessments
(see section 5.1) because feedback is an intrinsic part of formative assessment but it
might or might not be part of summative assessment. It is also important to note
that working in groups is not an intrinsic skill but rather a learned one and that,
according to Ulicsak (2004) and Rollinson (2005), teachers have to create an
environment that supports students in collaborating with each other. In order to
minimize or even avoid undesired results, careful planning and implementation of
peer feedback techniques are required. Lundstrom and Baker (2009), Min (2006),
Saito and Fujita (2004) and Habeshaw et al. (1986) suggest a number of broad
principles for preparing and applying peer feedback in the university context, all of
which depend on the unique needs of the students involved. University students
should start peer assessment as early as possible in the first term, before they are set
in their ways, because students are more willing to try peer feedback and peer
assessment at early stages, which do not usually contribute to students’ final results.
It is also recommended at the early stages of peer feedback to start with small tasks,
as little as just one element of assessment, in order to make students feel that they
are not taking a great risk. Moreover, peer feedback tasks in early stages have to be
relatively easy, and, when students are asked to comment on their peers’ scripts
and/or assess them, clear marking criteria and guidelines should be explained and
introduced. Students must be given a clear rationale for peer feedback, and
procedures to be followed. A possible scenario to achieve this would be to get
students to agree to the procedures and then ask them to adhere to them. It is also
recommended to get students to practice peer feedback before they provide actual
feedback and assessment that affect grades. The teacher must provide responses to
students’ peer feedback, which in turn helps enforce proper standards. Finally,
teachers are encouraged to have a positive attitude towards students’ efforts, and to
use anonymous scripts for peer feedback and assessment, in order to make students
feel less exposed and to overcome subjectivity. Saito and Fujita (2004) also
recommend that teachers set out clear criteria, foster understanding of goals and
limits, and develop familiarity with the instrument.
In order to structure a successful peer feedback exercise, Berg (1999) specifies the
following points, and recommends that teachers consider them when applying peer
feedback: 1) having a comfortable classroom atmosphere; 2) the role of the peer
response in the writing process should be made clear; 3) students must
acknowledge the role of peer feedback in academic writing, and they should also
recognize that even the most successful professional writers benefit from peer
comments; 4) responses should be anonymous, noting the main idea of the
anonymous text in some detail, along with ambiguities as well as obvious flaws in
organization, support, unity, grammar and spelling – in other words, students should
focus on rhetoric-level aspects rather than ‘cosmetic’ sentence-level errors; 5)
opinions expressed in peer responses have to be appropriate in terms of the
vocabulary and expressions used – general comments such as ‘your writing is bad’
should be avoided, and alternatives such as ‘you need to provide more clarification
here’ should be used; 6) students
should use a support tool, such as Berg’s (1999) response sheet, to help them
comment on specific areas of writing; 7) groups of students can benefit from each
other’s collaborative writing projects and from responses to these projects; and
finally, 8) students when engaged in collaborative writing projects should be
introduced to revision strategies and guidelines. Habeshaw et al. (1986) also add the
following points: 1) teachers should brief their students on the procedures of peer
feedback, and provide them with detailed information about different stages of the
process and time allocated for each stage, and students must be encouraged to ask
for clarification when needed; 2) students should be reminded of peer response
criteria, and teachers are encouraged to provide students with copies or handouts of
the criteria; 3) the process of providing peer feedback should be organized, and each
script should be marked by at least three students; 4) teachers must introduce
‘safeguard’ techniques to avoid bias or any undesired influences on feedback; 5)
teachers and students should agree on a marking scheme should peer feedback
contribute to grading; and finally, 6) students should reflect on their experience to
identify problems and suggest solutions. Teachers, on the other hand, should
organize the process and report the findings back.
Because peer feedback involves group work, it can be seen as a collaborative
learning practice (see sections 2.3.1 and 2.3.2). One important distinction has to be
made between pair and group work, as noted by McDonough and Shaw (2003), as
they obviously reflect different social patterns. Pair work also requires little
organization on the part of the teacher, whereas a group is by its very nature a more
complex structure.
The following table summarises feedback studies in ESL writing as they appear in the
literature review. I was interested in a number of issues when I created this table,
including who was involved in each study and how the researchers evaluated
students’ writing. Another important issue that will be discussed in the following
chapter is the location in which participants were studying ESL writing.
STUDY | PARTICIPANTS / LENGTH OF STUDY | TYPE OF WRITING EVALUATED | TREATMENT GROUPS
Lundstrom and Baker (2009) | 92 students in 9 writing classes at the ELC, Brigham Young University | Pre- and post-writing tests | 1) Control group: receivers of PF (n=46); 2) Experimental group: givers of PF (n=44)
Ellis et al. (2008) | 49 Japanese university students | Pre-test, immediate post-test and delayed post-test | 1) Focused corrective feedback (n=18); 2) Unfocused corrective feedback (n=18); 3) Control group (n=13)
Ware and O’Dowd (2008) | 98 students from the US, Spain and Chile | Monolingual online exchange and a telecollaborative project | 1) E-tutoring (Phase 1, n=13; Phase 2, n=28); 2) E-partnering (Phase 1, n=13; Phase 2, n=44)
Al-Hazmi and Scholfield (2007) | 51 Saudi ESL university-level students | Pre- and post-tests; choice of 3 tasks: expository, comparative and argumentative | 1) Peer feedback and checklist group; 2) Checklist-only group
Miao et al. (2006) | 79 Chinese university-level students / 3-round multi-draft tasks | An argumentative, technology-orientated essay | 1) Teacher feedback class (n=41); 2) Peer feedback class (n=38)
Min (2006) | 18 Taiwanese university students / one semester, continuing from Min (2005) | 2 expository essays (pre- and post-experiment) | Peer feedback training group: each student received 4 hours of in-class training and a 1-hour reviewer-teacher conference
Bitchener et al. (2005) | 53 ESOL immigrant students / 12 weeks | Four 250-word writing tasks at weeks 2, 4, 8 and 12 respectively | 1) Full-time class (direct feedback + 5 minutes teacher-student conferencing), n=19; 2) 10 hrs/wk group (direct feedback only), n=17; 3) 4 hrs/wk group (no feedback), n=17
Storch (2004) | 23 ESL students at an Australian university / 4 weeks | Data commentary text | 18 students worked in pairs and were interviewed individually; their interaction as they worked collaboratively was tape-recorded
Peterson (2003) | 33 Grade 7–8 multiethnic students in a Canadian school / 2+ years | A narrative composition that takes five weeks to complete | 1) Informal peer interaction; 2) Guided peer feedback using checklists; 3) Formal peer response group
Ashwell (2000) | 60 Japanese EFL students / one 3-draft essay | 3-draft essay | 1) Control: no feedback; 2) Content then form; 3) Form then content; 4) Content and form simultaneously
Berg (1999) | 46 Level 3 and 4 students / 2 terms | 2 assignments (pre-peer response drafts and post-peer response drafts) | 1) Trained peer response (n=24); 2) Untrained (n=22)
Kepner (1991) | 60 intermediate students / one semester | One journal entry of not more than 200 words | 1) Surface-level error correction; 2) Message-related comments only
Robb et al. (1986) | 134 Japanese EFL students / one year | Pre-test and 4 narrative compositions | 1) Correction of all errors with explanations (direct feedback); 2) Coded correction; 3) Uncoded (highlighted); 4) Marginal with number of errors per line
Table (1.4) Recent Feedback Studies
2.2.4 Students’ Beliefs in Writing
Studies such as Li (2007), Joyce (2006), Wu (2006), White & Bruning (2005), Lavelle &
Zuercher (1999), and Geisler-Brenstein & Cercy (1991), which investigate students’
beliefs in writing, usually focus on one or more of the following areas: students’
conceptions of writing, attitudes about themselves as writers, the need for personal
expression in writing, and, ultimately, the relationship between students’ beliefs and
their learning outcomes.
Students’ beliefs are also affected to some extent by different writing approaches. For
example, the way students revise their texts in process writing differs according to
their level, as Lavelle and Zuercher (1999) report: ‘elaborative revisionists’ use
writing as a way of changing their thinking, in contrast to writers at lower levels,
who report that writing is a painful experience in this regard.
Another theme emerges from Joyce (2006) and Wu (2006), both of whom discovered
that many students not only believed they did not write well, but also that they could
not obtain the tools needed to learn how to write. This negative self-efficacy belief
affected the quality of both their writing and their attitudes towards writing.
Finally, with regard to the relation between beliefs and performance, Wu (2006) and
White & Bruning (2005) give further support to the theory that students’ beliefs do
affect their choice of writing processes and strategies: students with negative beliefs
score low on organisation and overall writing quality, while students with more
positive beliefs score high in both areas.
2.2.5 Writing Assessment
The main purpose of including this section about writing assessment and evaluation
methods is to help design a reliable writing assessment tool to be implemented in
the following empirical stage, which is referred to as ‘writing tests, stage 3,’ and is
discussed in detail later in the methodology chapter.
Assessment and Feedback
‘Assessment’ is different from ‘feedback,’ even if these concepts are very similar and
interrelated at some points. The main focus of this research project is to evaluate the
effect of two different types of feedback, which explains why ‘writing assessment’ as
a technique would be used in the data collection phase to help evaluate the
effectiveness of the different types of feedback. Cohen (1994) believes that assessing
writing abilities can be a real challenge because there are numerous features in
writing that can be included in the actual process of evaluation. These features
include:
· Content: depth and breadth of coverage
· Rhetorical structure: clarity and unity of the thesis
· Organization: sense of pattern for the development of ideas
· Register: appropriateness of level of formality
· Style: sense of control and grace
· Economy: efficiency of language use
· Accuracy of meaning: selection and use of vocabulary
· Appropriateness of language conventions: grammar, spelling, punctuation
· Reader’s understanding: inclusion of sufficient information to allow meaning to be
conveyed
· Reader’s acceptance: efforts made in the text to solicit the reader’s agreement, if so
desired
Table (1.5) Features to be considered in assessing writing ability, from: Cohen (1994: 307)
Cohen admits that only some of these ‘dimensions’ are evaluated in any given
assessment of writing ability. There are some genuine factors that limit the number
of features to be considered in assessment, including the time available for
assessment, the cost of assessment, the relevance of the dimension to the given task,
and the ease of assessing that dimension.
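To make the idea of weighting a subset of these dimensions more concrete, the short Python sketch below shows one possible way of combining per-dimension band scores into an overall mark. The choice of dimensions, the weights and the 0 to 5 band scale are illustrative assumptions only; they are not taken from Cohen (1994) or from the present study.

# Illustrative only: the weights, band scale and sample scores are assumptions.
DIMENSION_WEIGHTS = {
    "content": 0.25,
    "organization": 0.20,
    "rhetorical_structure": 0.15,
    "accuracy_of_meaning": 0.15,
    "language_conventions": 0.15,
    "register_and_style": 0.10,
}

def overall_score(band_scores):
    """Combine per-dimension band scores (0-5) into a weighted overall score."""
    missing = set(DIMENSION_WEIGHTS) - set(band_scores)
    if missing:
        raise ValueError(f"Missing dimensions: {sorted(missing)}")
    return sum(DIMENSION_WEIGHTS[d] * band_scores[d] for d in DIMENSION_WEIGHTS)

sample = {"content": 4.0, "organization": 3.5, "rhetorical_structure": 3.0,
          "accuracy_of_meaning": 4.0, "language_conventions": 3.0,
          "register_and_style": 3.5}
print(round(overall_score(sample), 2))  # prints 3.55

Restricting the dictionary to the dimensions actually assessed mirrors Cohen’s point that only some dimensions are evaluated in any given assessment.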
According to Cohen (1994: 20), the authenticity of writing tasks can be improved by
means of some or all of the following:
1. Having a choice of interesting topics that are purposeful.
2. Clearly stating that planning is an essential part of the task, and, if required,
outlining the project.
3. Providing explicit information regarding the grading criteria.
As for the first recommendation, most topics discussed were part of the curriculum
but because the textbook was especially designed for ESL students, I would argue
that most of the topics were of relevance to the participants of the study regardless
of students’ context. The pre- and post-test topics were, respectively, a comparison
between city and country life and a discussion of why students would choose a
specific university. The two remaining recommendations are self-explanatory.
Electronic and Online Means of Writing Assessment
The reasons for including this section are, first of all, to acknowledge the existence of
alternative ways of writing assessment and, second, to explain why they have become
very popular in educational technology research. I did not, however, use any of the
online assessment tools in this research project, for various reasons, including the
relatively small number of participants in my writing tests and the shortcomings of
these programmes, which can easily be avoided using conventional ways of
assessment.
The emergence of easily accessible online assessment programmes such as
DIALANG, ACTFL Writing Proficiency Scale and ETS CRITERION, is a serious attempt to
integrate new technology into the field of ESL writing, a field which until recently has
not benefited as much from current technological advances in language education as
other language skills according to experts like Alderson and Huhta (2005), and Luoma
and Tarnanen (2003). Despite all shortcomings in earlier or even current versions of
writing assessment programmes, they can still provide numerous advantages for
both teachers and learners alike. For example, the available research shows that
using automated assessment programmes can save language teachers plenty of
time and effort that would otherwise be spent on counting errors and providing
detailed feedback, a problem aggravated by large writing classes or by learners of
low writing proficiency. These applications can also provide students with more
frequent assessment opportunities, enabling further testing and feedback as well as
informing them about points of weakness they still need to work on, beyond what
would be possible with teachers alone. Successive studies in ESL writing and
feedback show a very positive attitude among students towards receiving more
feedback, regardless of how beneficial this feedback is. Students can also benefit from
the fact that they are no longer tied to a specific location and time to complete their
tests, allowing greater flexibility and a freer environment.
With more recent developments in these programmes, it is now possible to have
adaptive, customized tests where the software draws writing tests from a pool of
items. One immediate positive effect of this feature is that students can be given
different topics to write about, which is very helpful in situations where, for
example, pre- and post-tests or multiple attempts are required. It also minimizes the
chances of cheating, as students will be allocated different topics to write about.
Moreover, by reconfiguring the settings of the software, teachers can also choose
the items they want their students to focus on, and they can still impose their own
criteria when responding to students’ writing, maintaining the humanistic aspect of
the process. Writing assessment programmes can perform basic tasks, such as
identifying individuals who require special attention and establishing fundamental
knowledge of subjects, much faster and with greater accuracy. As these programmes
are designed to generate statistical data, they can act as valuable sources of data for
teacher researchers.
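As a purely illustrative sketch of the item-pool idea described above (the prompts, student identifiers and number of attempts are invented, and no particular assessment programme is implied), a random allocation of topics could look like this in Python:

import random

# Invented prompt pool; real programmes would draw on much larger item banks.
PROMPT_POOL = [
    "Compare life in the city with life in the countryside.",
    "Explain why you would choose a particular university.",
    "Discuss the advantages and disadvantages of online learning.",
    "Describe a person who has influenced your writing.",
]

def allocate_prompts(student_ids, pool, attempts=2, seed=42):
    """Give each student `attempts` distinct prompts drawn at random from the pool."""
    rng = random.Random(seed)
    return {sid: rng.sample(pool, attempts) for sid in student_ids}

for sid, prompts in allocate_prompts(["S01", "S02", "S03"], PROMPT_POOL).items():
    print(sid, "->", prompts)

Because each student draws different prompts for successive attempts, the same pool supports pre- and post-testing while reducing the chances of cheating.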
Despite the sophistication these programmes have reached lately, there are still
possible flaws. Some disadvantages of using these online assessment programmes
include, first of all, the arduous task of training and familiarizing students with them,
which could prove exhausting, time-consuming and, in the case of commercial
versions such as ETS’s CRITERION, financially expensive. Moreover, using automated
assessment tools assumes by default the availability of the necessary technical
infrastructure, which might not be the case everywhere. Other technical issues can
also prove problematic, such as malfunctions, interference and usability issues, and
Internet disruptions. Although these programmes can reduce or even remove the
boundaries of time and location, they can also mean the absence of instructors, so
students will not always be able to consult their instructors when they have a
problem, an issue that becomes especially acute when it comes to international,
self-assessment programmes like DIALANG (Alderson, 2000; Alderson & Huhta,
2005). The issue of quality assurance has also been raised: many reports claim that
the feedback produced by these programmes is not always trustworthy, credible and
reliable, especially with regard to the organization and content aspects of the
written work.
Part Three: Collaborative Learning and Writing
2.3.1 Collaborative Learning
As previously mentioned, this section has been included because peer feedback is
considered by many researchers and experts in the field of ESL writing to be a
collaborative activity, and it is therefore essential to understand the theoretical
framework of collaborative activities to help better understand this type of
feedback. Such an understanding should also prove fundamental when it comes to
the application of such a technique in the context of the empirical study as shall be
seen in the following chapter.
Ulicsak (2004), McWham et al. (2003), Nunan (1992), Kohonen (1989), Kohonen
(1992) and Gaillet (1992), among many other experts, mention that collaborative
learning and teaching have emerged as significant concepts within the field of
language education. McWham et al. (2003) for example mention that college and
university students are increasingly being asked to work co-operatively and learn
collaboratively. These concepts are based on a vast pool of scientific, well-developed
philosophical perspectives and research traditions which include “humanistic
education, experiential learning, systemic-functional linguistics, and
psycholinguistically motivated classroom-oriented research” (Nunan, 1992: 1). That
is in addition to the recent emphasis on teamwork in the business sector as
McWham et al. (2003) stress. Again, according to Nunan (1992) and McWham et al.
(2003), there are several reasons for having collaborative learning in language
education. At the tertiary level, these reasons include a diverse student
population who need to develop ways of learning together, the increased emphasis
on learner-driven approaches such as peer learning, and student projects that often
require a team approach. Additionally, teachers might want to experiment with
alternative ways of organizing teaching and learning, students might be more
concerned with promoting a philosophy of cooperation rather than competition,
researchers might want to create an environment in which learners, teachers and
researchers themselves are teaching and learning from each other in an equitable
way, and, last but not least, curriculum designers might want to find ways to
incorporate principles of learner-centredness into their programmes. McWham et
al. add that research has shown that group learning leads to academic and cognitive
benefits: it helps promote learning and achievement and the development of critical
thinking skills, and aids in the development of social skills such as communication,
presentation, problem-solving, leadership, delegation and organization. Another
important application of collaborative learning and joint assessment as mentioned
by Dunworth (2007) is inter-professional education which is an emerging concept in
social work.
Kohonen (1992) argues that the whole concept of collaborative learning is a
reflection of recent developments in second language learning, where the focus
has shifted away from ‘traditional behaviourist’ models, which conceive teaching as
the transmission of knowledge, towards ‘experiential’ models, whereby teaching is
seen as the transformation of existing or partly understood knowledge, based on
constructivist views of learning. The following table (ibid) briefly illustrates the main
differences between language learning approaches perceived according to the
behaviourist and constructivist models.
Dimension | Traditional Model: Behaviorism | Experiential Model: Constructivism
View of learning | Transmission of knowledge | Transformation of knowledge
Power relation | Emphasis on teacher’s authority | Teacher as a ‘learner among learners’
Teacher’s role | Providing mainly frontal instruction; professionalism as individual autonomy | Facilitating learning (largely in small groups); collaborative professionalism
Learner’s role | Relatively passive recipient of information; mainly individual work | Active participation, largely in cooperative small groups
View of knowledge | Presented as ‘certain’; ‘application’, ‘problem-solving’ |
View of curriculum | Static; hierarchical grading of subject matter, predefined contents | Dynamic; looser organization of subject matter, including open parts and integration
Learning experiences | Knowledge of facts, concepts and skills; focus on content and product | Emphasis on process: learning skills, self-inquiry, social and communication skills
Control of process | Mainly teacher-structured learning | Emphasis on learner: self-directed learning
Motivation | Mainly extrinsic | Mainly intrinsic
Evaluation | Product-oriented: achievement testing; criterion-referencing (and norm-referencing) | Process-oriented: reflection on process, self-assessment; criterion-referencing
Table (1.6) Traditional and Experiential Models of Education: A Comparison (Kohonen, 1992: 31)
Collaborative learning has many objectives which include establishing ‘positive
interdependence’ among the members in the group so learners work together for
mutual benefits, encouraging a sense of joint responsibility where learners care
about each other’s success as well as their own, and creating a feeling of social
support. Together, these goals help learners develop higher self-esteem and self-
confidence as well as academic achievement (Nunan, 1992; Kohonen, 1992). In
order for language learners to perform successfully in collaborative work, Kohonen
(1992: 34 – 35) mentions five important factors these learners should possess. They
are:
1. Positive interdependence, a sense of working together for a common goal
and caring about each other’s learning.
2. Individual accountability, whereby every team member feels in charge of
their own and their teammates’ learning and makes an active contribution to
the group. Thus, there is no ‘hitchhiking’ or ‘freeloading’ for anyone in a
team – everyone pulls their weight.
3. Abundant verbal, face-to-face interaction, where learners explain, argue,
elaborate and link current material with what they have learned previously.
4. Sufficient social skills, including an explicit teaching of appropriate
leadership, communication, trust and conflict resolution skills so that the
team can function effectively.
5. Team reflection, whereby the team periodically assesses what it has
learned, how well its members are working together and how they might do better
as a learning team.
Finally, Slavin (1983: 128) summarises the literature and reviews the arguments
presented over collaborative learning:
… the research done to the present has shown enough positive effects of
cooperative learning, on a variety of outcomes, to force us to re-examine
traditional instructional practices. We can no longer ignore the potential
power of the peer group, perhaps the one remaining free resource for
improving schools. We can no longer see the class as 30 or more individuals
whose only interactions are unstructured or off-task. On the other hand, at
least for achievement, we now know that simply allowing students to work
together is unlikely to capture the power of peer group to motivate students to
perform.
2.3.2 Collaborative Writing
It has already been mentioned that the focus on collaborative learning has steadily
increased in language classrooms especially in the course of the last few decades.
This interest becomes very evident in one of its significant applications, collaborative
writing, which will be the focus of this section.
Collaborative writing is an increasingly widespread activity in ESL writing classes as
well as in professional writing contexts where two or more writers work together to
produce a shared piece of writing. To put this fact into perspective, Ede and Lunsford
(1990) mention that 85% of the documents produced in offices and universities had
at least two authors. The literature indicates that collaborative and cooperative
learning has become part of most curricula at all levels of education. Teachers
routinely assign students small-group tasks that involve giving and taking feedback
and working together to accomplish a common purpose (Gaillet, 1992).
The popularity of collaborative writing exercises among ESL educators and
curriculum designers alike can be explained not only by means of recent empirical
findings but also because of the many theoretical, empirical and practical advantages
it offers over individual writing. Nunan (1992) for instance mentions that the recent
empirical work in literacy instruction has supported the theoretically-motivated
arguments in favour of cooperative learning. With regard to its advantages,
collaborative writing, according to Noël and Robert (2003), can save time and effort,
is more likely to produce more viewpoints and ideas, and can also ensure that
subsections of professional papers are written by experts in the field. Nunan (1992)
reflects on a case study in which a group of learners was involved collaboratively in
programme planning and implementation, and he then mentions the following
advantages of collaborative learning: students learn about learning and so learn
better; collaborative learning encourages them to increase their awareness about
language, about self and hence about learning; it helps students develop
metacommunicative as well as communicative skills; it helps students to confront
and come to terms with the conflict between individual needs and group needs, both
in social and procedural terms and in linguistic and content terms; it helps
students realize that content and method are inextricably linked; and, finally, it helps
them recognize decision-making tasks themselves as genuine communicative
activities. In a wider context and in more practical terms, collaborative learning
entails students working together to achieve common learning goals and it stands in
contrast with competitive learning (although they can coexist in ESL contexts).
Murray (1992) believes that in order to prepare ESL students for authentic situations,
they must experience collaborative writing by means of incorporating collaborative
learning strategies into ESL writing classes. Murray argues that if we understand how
native speaker participants collaborate, we will then be able to determine effective
ways of using collaborative writing in the ESL classroom. Roughly speaking,
collaborative writing can be divided into two types: paper-based interactions and
oral-based discussion. The former is more associated with editing and publishing
settings and it addresses actual writing itself not the processes involved in
developing the text. It is important to note the social dimension of collaborative
writing as Murray (1992: 103) mentions, “Collaborative writing was essentially a
social process through which writers looked for areas of shared understanding.”
CHAPTER THREE: METHODOLOGY
Overview of Chapter Three
The methodology chapter is divided into three main parts. The first looks into the
research question, the context of the study and the research population. The second
is more substantial and investigates the theoretical bases upon which the
methodological framework was built. This necessitates explaining the data collection
methods and how they were designed and developed, in addition to other
methodological concerns such as the validity of the research area and research
ethics. Finally, the last part looks at how the collected data were processed and
analysed, which tools were used in the analysis process, and how the data were
represented.
PART ONE: RESEARCH QUESTION, CONTEXT AND RESEARCH POPULATION
3.1.1 Research Gap and Research Questions
Research Gap
The previous chapter shows that most peer feedback studies in the literature
investigate one or more of the following issues: students’ perceptions of peer
feedback and obstacles that could affect its progress (Miao et al., 2006; and Storch,
2004), training students in peer feedback sessions (Min, 2006; Peterson, 2003; and
Berg, 1999), how peer feedback activities should be executed (Bitchener et al.,
2005), types of errors addressed in peer comments (Ashwell, 2000; and Kepner,
1991) and how feedback could affect students’ subsequent writing in the short and
long run (Ellis et al., 2008). Many studies employ the pre-test/post-test technique to
assess the progress of students’ writing before and after the experiment (Lundstrom
and Baker, 2009; Ellis et al., 2008; Al-Hazmi and Scholfield, 2007; Min, 2006; and
Berg, 1999). Most studies also compared peer feedback to teacher-written feedback
and in some cases other types of feedback such as conferencing (Miao et al., 2006;
and Bitchener et al., 2005). As far as the educational context is concerned, most of
these studies were carried out in Asia. For example, Ellis et al. (2008), Ashwell
(2000) and Robb et al. (1986) did their studies in Japan, Miao et al. (2006) in China,
and Min (2006) in Taiwan. The only published study carried out in a Saudi context
was that of Al-Hazmi and Scholfield (2007), which included 51 ESL university-level
students divided into two groups, one which used peer feedback and checklists and
the other which used checklists only.
The review of the literature clearly shows that, first of all, peer feedback research in
the Saudi context is very scarce, and, secondly, although many studies followed the
pre-test, post-test method to evaluate students’ performance before and after an
experiment, a very limited number of studies investigated if students’ perception of
peer feedback could have changed as a result of the experiment. Although this study
does not attempt by itself to establish a relationship between students’ performance
and their beliefs, a field which could benefit from more investigation, it can
nevertheless recommend a template for future research where such a relationship
could be thoroughly investigated.
Research Questions
With regard to the research gap already established in the literature review and
summarised in the previous section, the research questions are:
1. How can the integration of peer feedback as a collaborative/communicative
learning technique into ESL writing classes help improve students’ writing
skills?
2. To what extent does peer feedback help learners improve their skills when
compared with students who receive only teacher-written feedback?
Research Sub-Questions: Testing Variables and Rationale
In order to answer the above main research questions, the following sub-questions
will be investigated:
1. What are Saudi ESL university-level students’ initial perceptions of teacher-written
feedback and peer feedback?
2. Will peer feedback help students gain new writing skills and improve existing ones?
3. How do these students feel about the integration of peer feedback into ESL writing
classes?
4. Will students’ initial perceptions of different feedback techniques change by the end
of the experiment?
The first and the last sub-questions investigate how ESL students perceive the
various techniques of feedback, and they aim to reveal Saudi adult ESL students’
preferences, attitudes, and beliefs, and whether these students are going to modify their
views as they are introduced to the non-traditional techniques of collaborative
learning. The reason why the researcher is interested in ESL students’ points of view
is that their beliefs and preferences have been reported to have a significant
influence over their current and subsequent performance when they learn ESL
writing, as reported by researchers such as Kepner (1991) and Ferris (2002). The
researcher also aims to investigate if students’ beliefs and preferences will have their
impact on the level of acceptance of peer feedback by respondents, who will be
involved in the quasi-experimental study. In order to collect the necessary data for the
first and last sub-questions, the researcher planned to use purpose-built, non-
standardized, semi-structured questionnaires that will be discussed in detail below.
As the second sub-question has a more practical nature, the researcher planned a
quasi-experiment which involved entry and exit writing tests to assess students’
performance before and after the treatment. The purpose was to discover if there
would be any difference in the results of the experimental group and the control
group. The researcher carried out fieldwork which extended for a whole semester
and involved actual teaching in the institute these ESL students were attending. The
results should give the researcher strong evidence to decide if the group trained to
use peer feedback performed differently from the control group. The working
hypothesis is that students in the experimental group would outperform their
counterparts in the control group; the null hypothesis is that no significant difference
in their performance would be recorded; and a further possibility is that the
experimental group would perform less well than the control group.
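By way of illustration, the comparison between the two groups’ writing gains could be carried out with an independent-samples t-test, as sketched below. The gain scores are invented, and the study does not specify at this point which statistical test is used; the sketch simply shows how the null hypothesis of no difference between the groups would be examined.

from scipy import stats

# Gain = exit-test score minus entry-test score for each student (invented values).
experimental_gains = [4, 6, 3, 5, 7, 4, 5, 6]   # peer feedback (treatment) group
control_gains      = [2, 3, 1, 4, 2, 3, 2, 3]   # teacher-written feedback only

t_stat, p_value = stats.ttest_ind(experimental_gains, control_gains, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would count against the null hypothesis that the two groups'
# gains do not differ significantly.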
Finally, for the third sub-question, the researcher used a task-based, semi-structured
interview to supplement the data gathered from questionnaires and to give an in-
depth insight into the subject matter. This qualitative method helps the researcher
better understand the processes involved in the actual application of peer feedback
during the experimental phase, as well as offering a better opportunity for
respondents to elaborate on their answers in the questionnaire. Furthermore, the
multi-methodological triangulation achieved by applying both quantitative and
qualitative measures serves the purpose of validating the results, where data
produced by one tool could be cross-checked against data produced by the other
tool (see section 3.2.5 of this chapter). Triangulation is also a valid technique to
check the consistency of the data gathered (Bryman, 2004; Cohen et al., 2000 &
2007). In fact, the interviews gave respondents more space to comment on their
beliefs and experience. Discussion of the data collection methods, validity, reliability
and other equally important issues continues in the following sections.
3.1.2 The Context of the Study
General Educational Background: EFL in the Saudi Context
A brief section about teaching English in Saudi Arabia has already been included in
the literature review chapter. This part, however, is slightly different from sections
(2.1.3) and (2.1.4) in the literature review because it tackles issues more
connected to the research population actually involved in the study rather than
general statements about teaching ESL in SA. This part therefore contains detailed
descriptions of the participants of the study.
ESL in the Department of Foreign Languages, KAAU
Although all students who join the department are expected to have successfully
completed at least six years of formal education learning EFL as a requirement (see
previous section), few of them actually achieve satisfactory results in their entrance
exams when joining a Saudi university (Asiri, 1996; Alhazmi, 1998; Grami, 2004). As a
result, the department has integrated obligatory basic remedial English courses for
low-achievers in grammar, reading and vocabulary, speaking and listening, and
writing, before embarking on advanced courses in either linguistics or English
literature. Although there is no English placement test on graduation, the
information provided by the Department suggests that most students show a good
level of progress, and many of those who took English level exams such as TOEFL
have supported this assertion. Unfortunately, exact figures are not available.
Although this might not always be possible, the English department endeavours to
graduate students with sufficient language proficiency, both written and spoken. All
graduates are also expected to achieve a good level in academic English.
For writing and composition, the Department requires all students to successfully
complete four compulsory courses in writing. The textbooks normally used for
teaching the two introductory writing courses (coded LANE 213 & 216) are
Interactions I and II respectively.
3.1.3 Participants of the Study
Bearing the research question in mind, this study targets ESL students at
intermediate to high-intermediate levels with various mastery levels of ESL writing
techniques and skills. Due to the absence of official records of students’ proficiency,
the researcher considered the option of targeting students who have successfully
completed at least one semester in the department as a plausible, easily accessible
measure of their level.
Statistic | Students’ level in the university | Students’ age | Number of completed ESL writing courses
N (valid) | 73 | 73 | 73
N (missing) | 0 | 0 | 0
Mean | n/a* | 20.58 | n/a*
Std. deviation | n/a* | 1.499 | n/a*
Minimum | 1 | 19 | 1
Maximum | 5 | 27 | 4
Table (2.1) Participants of the Study (*n/a means not applicable)
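The descriptive statistics reported in Table 2.1 can be reproduced with very simple code. The sketch below is illustrative only: the three example records are invented, whereas the real data set contains 73 participants.

from statistics import mean, stdev

# Invented example records; the real study has n = 73.
participants = [
    {"level": 1, "age": 20, "writing_courses": 1},
    {"level": 2, "age": 21, "writing_courses": 2},
    {"level": 3, "age": 19, "writing_courses": 1},
]

ages = [p["age"] for p in participants]
print("N =", len(ages))
print("Mean age =", round(mean(ages), 2))
print("SD of age =", round(stdev(ages), 3))
print("Min/Max age =", min(ages), max(ages))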
The participants of the first stage of the project (n=73) were all male students, and
were all registered in an ESL writing course at KAAU. Their ages ranged from 19 to
22 (93.2%), averaging 20.5 years, with only 7 students aged above 22. As for their
level in the university, most of the students were in their first or second year (61.6%),
31.5% were in their third or fourth year, and five more students were beyond the
fourth. The majority of students chose English as their first preference in the
university (77.8%), while the remaining 16 students had other first options but
eventually had to register in the English department for various reasons. Most
students completed one or more courses in English writing before they registered in
216, rendering them, on paper at least, at levels above beginner.
The University’s policy states that all students must decide on three majors they are
interested in, arranged according to their level of preference. Students are then
allocated one of their chosen majors, depending on how far factors such as their GPA
satisfy the departments’ requirements. (Reasons why the English Department might
not be a student’s first choice include the following: some students do not have the
prerequisite type of education to study in their first-choice department; external
reasons, such as better job opportunities for English graduates, lead some students to
choose English instead of their initial first choice; and the quota system in place in the
faculty sometimes appoints students to departments other than their first choice.)
Other variables included the
students’ type of formal education (private or public), years of learning English prior
to the university, and number of successfully completed writing courses in the
department (if applicable).
The participants of the subsequent stages of the research project were all drawn
from these 73 students following a progressive research design. With regard to
students’ proficiency level, I used the writing level of students (from both writing
tests, entry and exit), years of learning English in formal education, and additional
language remedial courses taken, where applicable, as indicators of proficiency levels, as there
were unfortunately no official records of students’ proficiency levels held in the
department (e.g. TWE or IELTS writing scores).
PART TWO: THE DESIGN AND DEVELOPMENT OF TOOLS
A multi-strategy research design was used in this study, whereby different data
collection methods were employed to gather the necessary data during three different
stages; the tools included pre-test and post-test writing tasks, pre- and post-experiment
questionnaires, and interviews with members of the treatment group. The first
questionnaire helped obtain a general idea of students’ perceptions of various
types of feedback, and the following stages of data collection made it possible to see whether
students’ perceptions were likely to change by the end of the experiment. What
students thought of feedback strategies, as well as of the introduction of peer
feedback, is captured in the subsequent questionnaire and interviews, while
the writing tasks help track students’ progress and improvement in their writing. This
section mainly discusses the theoretical background on which these tools were
developed. The procedures taken to conduct the study and then analyse the results
will be mentioned in a later section.
3.2 Justification for Choosing Data Collection Tools
This project follows a tradition of studies that employed the pre-test/post-test
technique, including Lundstrom and Baker (2009), Ellis et al. (2008), Al-Hazmi and
Scholfield (2007), Min (2006) and many others, to compare students’ progress over
a period of time, usually one in which an experiment is carried out, with or without
different treatment groups.
Semi-structured questionnaires were used in the first stage of data collection
because of the relatively large number of potential subjects (n=155). However, as the
number of participants in the subsequent stages was considerably smaller, more
qualitative means of collecting data were used, including more open-ended
questionnaires and interviews.
3.2.1 Procedures of the Questionnaires
McDonough and McDonough (1997), Clough and Nutbrown (2007), Gillham (2000),
and Cohen et al. (2000) among other experts believe that questionnaires are a very
popular data collection method in educational research. There are numerous factors
that can lead to a researcher choosing questionnaires to collect data from students,
which naturally apply to this research project, including: a) questionnaires tend to be
more reliable as they are anonymous; b) they encourage greater honesty from
respondents; c) they save the researcher’s and participants’ time and effort (they are
more economical); and d) they can be used in both small-scale and large-scale studies
(Seliger & Shohamy, 1989; Cohen et al., 2000; McDonough & McDonough, 1997).
Mertens (1998) also mentions that questionnaires allow the collection of data from a
larger number of people than is generally possible when using quasi-experimental or
experimental designs. However, experts also point out that questionnaires have
some disadvantages. For instance, Mertens (1998) pointed out that questionnaires
rely on individuals’ self-reports of their knowledge, attitudes, or behaviours, thus the
validity of information is contingent on the honesty and perspective of the
respondent. Cohen et al. (2000) also believe that questionnaires might have the
following disadvantages: a) the percentage of returns is often too low; b) if only
closed items are used they may lack coverage or authenticity; c) if only open items
are used, respondents may be unwilling to write their answers.
It is therefore very important for researchers to strike a balance between the
advantages and disadvantages. In order to minimize these disadvantages, the
researcher distributed the questionnaire to the targeted students during one of their
classes, so the return rate was likely to be higher than if it was distributed by mail. To
address the lack of coverage and authenticity associated with closed questions,
there was a secondary interview with some selected students, with less-structured
questions and further opportunities to elaborate on answers to items in the
questionnaire. This was expected to minimise any undesired negative effects
including lack of coverage. Other suggestions were taken from Cohen et al. (2000:
129), who suggested that the researcher needs to pilot questionnaires and refine
their content, wording, and length accordingly, and to make them appropriate to the
targeted sample (the students), as shall be seen below.
The Design and Development Stage: Points to Consider
Generally speaking, there are some considerations involved in the process of
developing any data collection method. Mertens (1998) mentions the following steps
to develop a data collection instrument:
1. Define the objectives of the instrument.
2. Identify the intended respondents.
3. Review existing measures.
4. Develop an item pool, i.e. resources for draft items, new measurement
devices, adapting existing tools and/or adopting tools.
It is also very important to think of an appropriate title for the instrument, because
this is the first thing a respondent will see, especially if the instrument is a
questionnaire. Many researchers (e.g. McDonough & McDonough, 1997; Cohen et
al., 2000; Walliman, 2001; Mertens, 1998) have all stressed the importance of having
a cover letter that contains the title and an introductory paragraph attached to the
questionnaire, especially for ones to be distributed by mail, where respondents
usually have little chance to ask the researcher for clarification.
Mertens (1998) and Cohen et al. (2000) also mention that it is equally important to
reassure participants of privacy and confidentiality in the questionnaire, especially
when a survey asks questions of a sensitive nature; such assurances were expressed
clearly in the body of the questionnaires and by the instructors themselves. Other
important considerations include ensuring that the questionnaire is written in a
language easily understandable to the intended respondents, and including
instructions on how to complete the questionnaire. The researcher also consulted
other questionnaires from previous studies that investigated similar issues, such as
Race et al. (2004) and Ferris (1995). No items were duplicated, because the
questionnaire was specifically designed for the purpose of this study, but many ideas
were adapted when required. In other words, the questionnaire was designed with
Cohen’s (1987) questionnaire in mind (later used by Ferris, 1995; and Min, 2006) but
the questions used were chosen to fit the purpose of the study.
The survey was conducted in two stages: a) the pre-experiment stage, when
participating ESL student writers were asked about their beliefs, preferences, and
attitudes regarding both traditional teachers’ written feedback, and the relatively
new concept of peer feedback; and b) the post-experiment stage, when students
involved in the experimental group were asked to report their beliefs in writing,
preferences, and attitudes, to find out whether exposure to both techniques in general,
and training to adopt peer feedback in particular, influenced their perceptions. The
researcher used Likert scale questions to determine students’ attitudes.
A number of concerns are usually involved with questionnaires that contain items of
attitude scales and self-report measures. Bell (2005), Cohen et al. (2000, 2007), and
Wallace (1998) identify three major problematic aspects usually associated with
questionnaires and interviews. They are:
1. Subjectivity:
This basically means ascertaining the truth of the respondents’ reply. The researcher
is therefore advised to spot responses that might have indicated exaggeration,
consciously or unconsciously, such as students claiming they study longer than they
actually do. Brown and Rodgers (2002) refer to the same aspect as ‘prestigious
questions’. The subjectivity of questionnaires and interviews also requires a clear
distinction between ‘opinions’ and ‘truth’, as they are not necessarily
interchangeable notions. However, if teacher respondents all agreed that a course
book is very poor, then this book is unlikely to contribute much to an effective
teaching programme. The researcher needs to be realistic and sensible about
evaluating data presented through questionnaires and interviews. Moreover, the
researcher needs to employ common sense when applying a questionnaire, paying
attention to issues such as the quality of the source and possible hidden
motivations; this is especially true in small-scale action research, where the researcher
knows the subjects and is therefore better placed to evaluate the resulting data.
2. Sampling:
This problematic aspect deals with how representative a sample is of a larger
population. Sampling, according to education research experts such as Cohen et al.
(2000 & 2007), Bell (2005), Walliman (2001), and Wallace (1998), is a very complex
process. Comments and guidelines provided by these experts however were strictly
observed when choosing a representative sample for the sake of this study. A simple
random sampling technique was used in the first questionnaire because, to my
knowledge, the research population was homogenous in most aspects, including
linguistic background, age group, gender, educational level and proficiency (c.f.
section 3.2.5.3 Validity & Reliability). In the second stage of the research however, a
‘cluster sampling’ procedure was followed, which Walliman (2001) describes as cases
forming clusters by sharing one or more characteristics while the sample is otherwise
homogeneous. In the case of the PF and control groups, the only observed factor that
differentiated the two groups was the type of treatment they received. Other types
of random sampling, including systematic sampling and simple and proportional
stratified sampling, were disregarded because they were not applicable to the
research population. Non-random sampling techniques were overlooked altogether
because they tend to provide a weak basis for generalisation (Bell, 2005).
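A minimal sketch of the two sampling steps described above is given below, with invented student identifiers and group sizes; it is not a faithful reproduction of the study’s procedure, only an illustration of random selection followed by allocation to two treatment clusters.

import random

rng = random.Random(7)
population = [f"S{i:03d}" for i in range(1, 156)]   # the 155 potential respondents

# Stage 1: simple random sample for the first questionnaire.
questionnaire_sample = rng.sample(population, 73)

# Stage 2: allocate a smaller cohort (size invented) to the PF and control groups.
cohort = rng.sample(questionnaire_sample, 40)
rng.shuffle(cohort)
pf_group, control_group = cohort[:20], cohort[20:]
print(len(questionnaire_sample), len(pf_group), len(control_group))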
3. Intrusiveness:
This is the third problem associated with questionnaires and interviews. These
techniques can be described as intrusive in terms of the time consumed to answer
the question, the unwillingness of respondents to answer questions, stemming from
their belief that their responses will benefit only the researcher and not themselves,
or from the fact that there is no immediate feedback, as in the case with different
types of questionnaires such as ‘rate yourself’. Moreover, questions asked during
interviews can be threatening in several respects, especially in terms of the time needed,
the possibility of awkward or personal questions, and anxieties resulting from
speculations on how the results will be presented and used. All these concerns are
carefully examined in the ethical considerations section.
There are yet more specific issues that have to be avoided in order to produce a
sound non-standardised questionnaire, as mentioned in Brown and Rodgers (2002:
143), which include:
1. Overly-long items
2. Unclear or ambiguous items
3. Negative items
4. Incomplete items
5. Overlapping choices in items
6. Items across two pages
7. Double-barrelled items
8. Loaded word items
9. Absolute word items
10. Leading items
11. Prestige items (exaggeration in response, Wallace, 1998)
12. Embarrassing items
13. Biased items
14. Items at the wrong level of language
15. Items that respondents are incompetent to answer
16. Assuming that everyone has an answer to all items
17. Making respondents answer items that don’t apply
18. Irrelevant items
19. Writing superfluous information into items
The questionnaire that will be used in the first stage of data collection is divided into
three main parts (see appendices C, D and E). The first section asks students general
questions about their age, educational background, courses they have taken and
suchlike. The second section asks more specific questions about teachers’ written
feedback in the form of a tendency scale to measure attitudes. The third section asks
similar questions to the previous section, but with regard to peer feedback. The last
two sections should reveal students’ conceptions of the different types of feedback,
which is the subject of investigation in this research project. As the main purpose of
the questionnaire is to investigate students' beliefs about writing, most questions are
in Likert-scale format which, according to Cohen et al. (2000 & 2007), combines the
opportunity for a flexible response with the ability to determine frequencies,
correlations, and other forms of quantitative analysis. In
other words, these rating-scale items offer a measure of opinion, quantity, and
quality, and are therefore very suitable for collecting data for this research project.
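To illustrate how Likert-scale items lend themselves to the frequencies and other quantitative summaries mentioned above, here is a small Python sketch; the item responses are invented for illustration and do not come from the study's data.

```python
from collections import Counter
from statistics import mean

# Hypothetical responses to one Likert item (1 = strongly disagree ... 5 = strongly agree)
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4]

frequencies = Counter(responses)          # how many respondents chose each scale point
item_mean = mean(responses)               # central tendency of the item
percentages = {point: 100 * count / len(responses)
               for point, count in sorted(frequencies.items())}

print(frequencies, round(item_mean, 2), percentages)
```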
The Development of the Non-Standardised Questionnaire
Bearing in mind that the questionnaire was intentionally non-standardised, it was
extremely important to achieve certain standards to render it valid. For instance, the
questionnaire had to be fairly easy to use, simple and undemanding, especially in its
electronic format. A questionnaire should also be written in a way that does not
intimidate respondents, either linguistically or through technical complexity. Even
though purpose comes first, the questionnaire should also appear attractive, easy to
read and follow, and easy to answer. Mertens (1998) and Cohen et al. (2000 & 2007)
recommend that survey designers make questionnaires attractive by using coloured
ink, coloured paper, and different type styles. In this project it was
decided that items and pages should also be numbered, a brief instruction should be
included (see appendices A and B), examples should be given before any item that
might be confusing, and the questions should be organised in a logical sequence so
that related items are grouped together, beginning with interesting, non-
threatening, factual questions, while the most important questions should not be left
until the end.
All of these features help achieve user-friendliness, a very important
characteristic of credible questionnaires. The early draft of the questionnaire
underwent numerous editing processes, and was regularly reviewed in the light of
relevant educational research handbooks and references, such as McDonough and
McDonough (1997), Wallace (1998), Cohen et al. (2000) and Brown and Rodgers
(2002), and it also underwent trialling and piloting, as explained later. Moreover, the advice of
other researchers currently working in the field of education was sought prior to the
pilot study stage.
The Pre-Pilot Study
This was an important step in the process of developing the questionnaire. The
purpose of the pre-pilot study was basically to consult other well-informed
researchers in the field about the data collection tools to be used. This process is
known in the literature as the pre-pilot or the trialling stage. The opinions and
comments of twelve research students working in the field of education were
gathered via an opinion questionnaire specifically designed for this purpose. The
opinion questionnaire was also produced in an electronic MS-Word format, which enabled
me to send it via e-mail to more participants than would be possible using only
conventional means, regardless of their geographical locations. It contained
closed items along with an unrestricted space for further comments. However, to
help get helpful yet specific responses, prompts addressing three major aspects of
the non-standardised questionnaire were included. These aspects are the layout and
appearance, the nature of the items involved, in terms of both content and type (i.e.
dichotomous, multiple choice, scale questions etc.), and the time needed for
completion. The guidelines and points to consider mentioned by Brown and Rodgers
(2002) were also included. (See appendix A)
The pre-pilot study revealed some interesting findings about both the content and
the appearance of the questionnaire. For instance, three of the subjects located
some minor errors in grammar, organisation and/or typography, which
were all rectified accordingly. Almost half of the subjects had concerns about
some of the questions asked; their main concern was that these questions did
not necessarily apply to the targeted respondents, and therefore could not be
answered. As a result, these questions were rephrased to avoid asking for
information respondents could not be expected to have. A similar number of
subjects believed that the researcher should have included more questions,
especially ones about students’ past experiences with teachers. In fact, the
researcher had intentionally left a margin for students' further comments, but it seemed
that students would benefit from some prompts to comment on their past experiences, and these
were included in the edited version of the questionnaire. Most of the researchers
also believed that it would be a good idea to have the questionnaire in Arabic
instead (i.e. the L1 of the target research population). An Arabic version of the
questionnaire, according to one of the researchers, would be more convenient for
those students whose English proficiency might be lower than others', and for
freshmen, should they be included.
The researcher was particularly concerned about the time factor. Surveys that take
a very long time to complete are very likely to deter respondents from completing
them, or to lead to them being filled in hastily and inaccurately (Cohen et al., 2000;
Mertens, 1998; Brown & Rodgers, 2002; McDonough & McDonough, 1997). The
researcher initially set a
maximum time for completion of around 30 minutes. Although most of the
participants in the pre-pilot study took between 20 to 30 minutes to complete the
questionnaire, the researcher was more interested in knowing why three of them took
more than the maximum of 30 minutes. In fact, one indicated that it had taken him more
than an hour to complete the whole questionnaire properly. His
main criticism concerned the open-ended questions as, according to him, writing a text
as an answer is very time-consuming. The researcher therefore decided to keep
these questions, but only as optional ones, so that respondents do not have to answer
them all (see appendices 3, 4 and 5).
The Pilot Study
This was the last stage of developing the non-standardised questionnaire. Mertens
(1998: 117) explains the function of piloting a questionnaire: "you try it out with a
small sample to your intended group of respondents." Piloting is in many respects
very similar to trialling, and a close inspection reveals that both have the ultimate
purpose of getting feedback that helps produce a better data collection tool. The
main difference, however, lies in the source of the feedback each is likely to produce:
in the pre-piloting stage more experienced participants were the ones offering their
views, while in the piloting stage participants who are likely to represent the
research population did so, practically getting involved in
a study very similar to the actual one. Bell (2007), Cohen et al. (2000, 2007) and
Mertens (1998) mention that piloting is a very important step
towards validating any data collection tool and that it has many advantages. They mention
that everything about a questionnaire should be piloted; nothing should be
excluded, not even the typeface or the quality of the paper. Piloting increases the
reliability, validity and practicality of the questionnaire. Additionally, piloting a
questionnaire serves many functions including:
§ To check the clarity of the questionnaire items.
§ To gain feedback on the validity of the questionnaire items.
§ To eliminate ambiguities or difficulties in wording.
§ To gain feedback on the type of question (i.e. rating scale, multiple choice, etc.)
and its format.
§ To gain feedback on the attractiveness and appearance of the questionnaire.
§ To gain feedback on the layout, sectionalizing, numbering and itemizing of the
questionnaire.
§ To check the time taken to complete the questionnaire.
§ To check whether the questions are too long or too short.
§ To identify redundant questions.
§ To identify commonly misunderstood or non-completed items.
Some procedures were identified to conduct the pilot study properly. They are
described below in chronological order.
1. Identifying a representative sample
Because the initial pilot study was set to take place in the UK, identifying a
representative sample of ESL students was a crucial step to ensure that they
resembled the target population in the English Department, KAAU. The variables that
needed to be controlled were gender, age, level of education and linguistic
proficiency. However, one main factor that might affect the results was that these
students were studying in the UK, and hence in a different learning context. They
were therefore very likely to have been exposed to different teaching styles and
approaches than they would have been in their home country. In order to minimise any
unwanted influence, these students were asked to reflect on their experiences back
in Saudi Arabia rather than their experiences in the UK.
2. Communications and contacts
The researcher had to use all possible means to approach as many students as
possible. These means included personal contacts, formal communications and
correspondences. Despite extensive communications and correspondences, the
number of available potential students who were willing to participate at the time
the study was conducted was relatively small. Nevertheless, the researcher believes
that the available number was sufficient for the pilot study to proceed in both
questionnaires and interviews. Table (2.2) shows information about
the participating students, including their number, age, level of
education and, when available, their linguistic proficiency test results. The table also
shows complementary information, including how long they had been studying in
the UK and how much longer they were planning to stay, along with their
academic majors and the institutions where they would be pursuing their degrees.
Age | Last Degree Obtained/Institute | Current Language Institute | Length of Stay in the UK/Planning to Stay (Months) | Degree Pursued/Major | University | IELTS/TOEFL Score | Writing Score: IELTS (or) TWE
19 | High School/Private Secondary School, Makkah, SA | INTO Newcastle | 14/48 | BA/Law | Newcastle | IELTS 6.5 | IELTS 5.5
28 | MA/King Khalid University, Abha, SA | Durham Language Centre | 24/48 | PhD/Pedagogy | Durham | IELTS 6.0 | IELTS 5.5
32 | BA/KAAU, Jeddah, SA | Hull Summer School | 12/12 | MA/International Business Law | Hull | IELTS 5.5 | IELTS 4.5
33 | BSc/Saud University, Riyadh, SA | n/a | 23/02 | MSc/Chemical Engineering | Newcastle | IELTS 6.5 | IELTS 6.0
24 | MSc/Umm Al-Qura University, Makkah, SA | INTO Newcastle | 10/14 | MSc/Architecture | Newcastle | IELTS 5.0 | IELTS 4.0
27 | BSc/Umm Al-Qura University, Makkah, SA | Newcastle University | 18/04 | MSc/Mechanical Engineering | Newcastle | IELTS 6.5 | IELTS 6.0
22 | BSc/Riyadh College of Technology | n/a | 00/12 | MSc/Nano-electronics | Liverpool | IELTS 6.0 | IELTS 5.5
24 | BA/KAAU, Jeddah, SA | n/a | 00/60 | MA and PhD/TESOL | Essex | TOEFL 603 | TWE 4.0
Table (2.2) Factsheet about Participants in the Pilot Study
3. Informed consent and briefing
Kent (2000) and Burton (2000) stress that the informed consent of participants is an
important ethical aspect of social research. Kent (2000) mentions that a written
consent form can be used to guarantee the actual consent of participants. As
recommended by research experts, such as Cohen et al. (2000), Kent (2000), and
Mertens (1998), the informed consent of all students involved in the pilot study was
obtained, on the assurance of a strict policy regarding anonymity and privacy.
Additionally, participants were briefed about the stated goals of the research
project, the purpose of the pilot study, and what the researcher expected them to
do. Instructions on how to complete the pilot study were also included, and further
clarifications were provided in their respective sections. (See also section 3.2.5.1)
4. Piloting the Questionnaire
As in the previous pre-pilot study section, there were certain points of interest to the
researcher at this stage. The main purpose of the pilot study, as
mentioned by many experts, including Cohen et al. (2000, 2007), Bell (2007),
Mertens (1998) and McDonough and McDonough (1997), is to make sure that the
tool designed to collect data is suitable for use on a larger scale. The smaller pilot
population should be able to spot any inconveniences, vagueness of contents,
and/or any other problems with the data collection method. The pilot study’s
smaller group therefore has to be as representative of the actual research
population as possible. Due to limitations in time and resources, the decision was
made to carry out the pilot study with Saudi language students currently enrolled in
academic institutes or language centres across the UK, to roughly represent ESL
Saudi learners. Fortunately, there are a substantial number of Saudi students
studying in Tyne and Wear; many of them are either enrolled in language remedial
courses, or are registered in foundation year programmes prior to their courses, a
fact that makes them in many aspects possible representatives of the actual research
population. Participants in the pilot study were asked specific questions about the
newly-designed Arabic version of the questionnaire, which had been
recommended in the previous pre-piloting stage. Most questions concerned
how suitable the items were, how long the questionnaire took to complete, whether there were
any concepts that required further clarification, and finally whether students had any
further comments or questions. Other visual components of the questionnaire
were also investigated including the electronic layout, the colour scheme, and the
font type and size used.
First of all, the majority of students involved indicated that they had a good
command of computer skills, a positive trait when it comes to dealing with
the electronic format of the questionnaire. When students were asked about their
opinion regarding which version they preferred, Arabic or English, the majority
unsurprisingly expressed that the Arabic version was easier to understand and was
hence more convenient. Reasons included saving time and effort, which echoed
opinions mentioned earlier by researchers in the pre-piloting study. Students also
believed that it was easier to follow the questions and comment on some items in
Arabic rather than English.
With regard to the time factor, most students actually completed the
survey within the target time limit, set at around 30 minutes. Previous amendments,
including making open-ended questions that require writing texts optional when
possible, helped reduce the time taken to complete the questionnaire from around
one hour, as reported by a respondent in the trialling study, to a more reasonable
and realistic time target of about half an hour. It was subsequently decided that open-ended
questions should be kept to a minimum to save respondents' time and to avoid
deterring them from responding adequately and effectively to all items of the
questionnaire.
This small-scale pilot study also revealed some interesting correlations. For example,
it was found that the more skilled the respondent was in computer use, the less time
he required to complete the questionnaire in its electronic format. This association
was strong (r = -0.889) and statistically significant (p = 0.003). It is therefore
important to make sure that students possess the necessary
computer skills prior to the commencement of the actual study, should they opt
for the electronic format of the questionnaire.
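An association of the kind reported above (a correlation of -0.889 with p = 0.003) can be computed with standard correlation routines. The sketch below uses scipy's Pearson correlation on invented skill ratings and completion times; the numbers are purely illustrative and will not reproduce the pilot figures.

```python
from scipy.stats import pearsonr

# Hypothetical pilot data: self-rated computer skill (1-5) and minutes taken
# to complete the electronic questionnaire for eight respondents.
skill   = [5, 4, 5, 3, 2, 4, 3, 1]
minutes = [18, 22, 19, 27, 33, 23, 29, 38]

r, p = pearsonr(skill, minutes)   # negative r: higher skill, less time needed
print(f"r = {r:.3f}, p = {p:.3f}")
```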
Generally speaking, students were also happy with the content of the questionnaire,
i.e. its items and the options of answers provided. They also believed that the
explanations provided for the more technical terms used, such as ‘autonomous
learning’ and ‘writing processes’ were adequate and very helpful. Some students
have actually come across these terms when they were studying applied linguistics,
which made it easier for them to navigate through the survey. No major changes
were required as far as the contents of the questionnaire and additional information
are concerned. Most of the students involved believed that the electronic format of
the questionnaire, the use of tools such as scroll boxes for multiple-question items,
text boxes for open-ended questions, and tick boxes for dichotomous items, made it
easier and faster for them to respond to the different items of the questionnaire
effectively and easily. One commented that, unlike with a traditional pen-and-paper
questionnaire, changing or correcting answers is no problem at all, provided that the
respondent has the required basic computer skills. However, it is
important to note that although all students involved in the pilot study exhibited adequate
knowledge of computer use, an essential requirement for completing the questionnaire
in its electronic format, the same could not be assumed of all subjects of
the actual study. Finally, as far as visual aspects are concerned, students involved in
the pilot study approved of the way the survey was presented, including font types
and sizes, colour schemes, tables, graphs and supplementary information; hence
no changes were needed.
3.2.2 The Writing Entry and Exit Tests
Writing tests, as already discussed, should help yield the essential data required for
analysis of the effectiveness of different feedback techniques. However, many
experts in educational research (e.g. Cohen et al., 2004; Gall et al., 1996) stress the
fact that the use of tests in research raises a number of ethical concerns. For
instance, many researchers have reported that individuals may suffer from anxiety in
testing situations. It is therefore the researcher’s responsibility to elicit participants’
best performance, while minimizing their anxiety if they plan to use a test as part of
the data collection process. This task will be involved in phase 3 of data collection,
and will be discussed in detail in a later section. The evaluated pieces of writing were
new writing tasks rather than text revisions, which was especially important for the exit test.
Both content and grammar errors were addressed, as shown in the following
results chapter.
3.2.3 Interviews
Interviews were the last stage of data collection and were supposed to supplement
and give an in-depth account of data already generated by the second questionnaire.
Most research manuals mention that interviews and questionnaires are two widely
accepted methods for collecting data in educational research, and extensive
reviews of interviews give a clear idea of how they function best in this situation (e.g.
Gillham, 2000; Cohen et al., 2000 & 2007; Hollway & Jefferson, 2000; Tierney &
Dilley, 2001; Houtkoop-Steenstra, 2004; Denscombe, 2007; Clough & Nutbrown,
2007).
One important step towards developing the questions in the interviews is what
Gillham (2000) calls ‘trialling the interview questions,’ which, despite many
similarities, is different from ‘piloting’, a more advanced and mature level. In fact,
trialling in a way resembles what has been already described in the earlier
questionnaire section as the pre-pilot study, in the sense that both were early stages
in developing data collection methods for the inexperienced researcher. Eventually,
having reviewed all the available interviewing options and the unique needs of this
project, the researcher imagined a scenario of how the interviews would be
conducted and what issues were to be included. The scenario was shown to two
research students who commented on the prompts, timing, topics and execution.
The interviews subsequently took a semi-structured, one-to-one format to best meet
the requirements of the study. Interviews also observed a more inductive logic, as
opposed to deductive logic, whereby theories and cognitive principles would emerge
from the data, or in other words moving from the specific to the general. Research
methods literature suggests that inductive logic is more suitable for arguments
based on experience or observation, as is the case here (Gillham, 2000; Cohen et
al., 2007). This rough representation of the actual interviews then underwent a
piloting scheme similar to the questionnaires with three students from the same
sample in table (2.2), though much less formal.
Having conducted the pilot study and reviewed the literature of interviews in
educational research (Gillham, 2000; Cohen et al., 2000 & 2007; Tierney & Dilley,
2001; Denscombe, 2007), students in the PF group were asked to participate in the
interviews (see section 3.3.1.4 for more details). To observe research ethics, student
interviewees’ informed consent was confirmed using the form shown in appendix (K)
which was taken from Kent (2000).
Reflections on the Interviews
My interviewees were all students; according to Tierney and Dilley (2001),
interviewing students is of great significance because it includes them and their views in the
learning process. They also predict a change in the way interviews are
conducted and in the type of respondents included in educational research. In fact,
they take the inclusion of students in research as an example of this change, because
until the early 20th century students' views were largely ignored.
Before I started interviewing students, I had to consult manuals in
educational research (including Gillham, 2000; Cohen et al., 2000 & 2007; Hollway &
Jefferson, 2000; Tierney & Dilley, 2001; Houtkoop-Steenstra, 2004; Denscombe,
2007; Clough & Nutbrown, 2007) to review various types of interviews and to figure
out the best possible way of interviewing the participants of this study. Important
procedures including trialling interview questions and considering prompts, timing
and topics to be discussed were also part of the preparation stage. (See section
3.2.3)
As far as the experience itself is concerned, I must admit that this was not an entirely
new experience because I carried out a smaller-scale study involving interviewing
participants some five years earlier in the same institution. Nevertheless, as research
experts stress, each interview is different and the ones I had to conduct for this
study were no exception. Careful preparation plays an important role in the success
of the event, but I was also aware that interviewing skills, such as
the ability to prompt questions and to control the discussion in a smooth and
timely manner, are equally important traits of any interviewer. Being a novice
interviewer myself, I acknowledge that these skills come, in no small part, with
experience rather than reading and training, and I therefore believe there is still
some margin for me to improve my interviewing skills.
Most interviews stayed within the boundaries of what I had expected beforehand in
terms of topics discussed and time allocated. However, one particular subject that
kept emerging was that of students' level in English, which was not what I was mainly
trying to focus on at the time. Nevertheless, when a student wanted to raise this
issue, I had a moral obligation to listen to him and record his thoughts. I
noted students' views on this matter in the study when possible.
In all, I have learned how to respect the ethics of educational research including
students’ privacy and trying to present their ideas in their words when translating
the interviews. I have also learned how to balance what I – as a researcher – want to
investigate with what issues students want to raise within the available time limit.
Using prompts, eliciting stories, and asking follow-up questions while trying to keep the
interview interesting are important skills that I may have started to learn but
want to develop further.
3.2.4 Fieldwork and Empirical Study
Quasi-Experiment: Control and Experimental Groups
Gall et al. (1996) and Cohen et al. (2000, 2007) highlight a number of issues involved
in dealing with the inclusion of an experimental group and a control group in a study. The
participants are subjected to different treatment conditions and thus, by design, are not
treated equally. The treatment group is likely to receive special training, while the
control group receives either nothing or a conventional programme. In this research
project, the experiment group will be trained to adopt the relatively new peer
feedback technique in their writing sessions, while the control group will receive
normal teaching sessions and feedback from their language teachers. Some
researchers suggest that the control group subjects will be treated unfairly by not
receiving special training, and thus will not benefit from the perceived advantages of
the training programme. However, subjects of the control group can benefit from
the perceived advantages of the special training once the data collection stage is
completed.
The Design of the Writing Task
Two issues were addressed when the researcher decided to include writing tests as
data collection tools: what topics to choose, and what assessment
procedures to follow. As for the former, it was an easy decision because on both
occasions the topics students were asked to write about were predetermined by the
textbook in hand (see appendix L). For the latter, however, the researcher applied a
number of scientific measures to ensure that the assessment was conducted in a
way that first of all provided the necessary information required in this research
project, and secondly gave a fair and accurate grade to the respondents.
Peer Feedback Group Training
In order to prepare the students for the upcoming task, and to better qualify
them to engage actively in peer feedback sessions, an extensive induction week was
dedicated to familiarising them with those sessions. More
details about the significance of this procedure and what points to consider have
been discussed in section 2.2.2.6 (introducing peer feedback) in the literature review
chapter. Preparation procedures followed similar examples by Lundstrom and Baker
(2009), Min (2006), Rollinson (2005), Hansen and Liu (2005), and Berg (1999). They
included the tasks of briefing students about collaborative activities, forming groups,
introducing the types of activities and methods to be used, and introducing
checklists. Students were also given better access to the researcher than just during
fixed formal office hours (i.e. via e-mails and more office hours during that week), in
case they had queries or other issues before they began peer sessions. Part of the
briefing procedure included informing students about different types of peer
responses, as reported in the literature, which are prescriptive, interpretive, and
collaborative (Min, 2008; Lockhart & Ng, 1995). They were also made aware of the
different types of errors they would be dealing with, which, in crude terms, are local
issues as compared to global ones. Finally, the tone of their comments was also
brought to students' attention, which basically requires balancing praise and
criticism at both ends of the scale.
However, as Lockhart and Ng (1995) maintain, peer training should be a constant
development process, hence the researcher repeatedly encouraged students to raise
any issues via e-mails or face-to-face meetings as they progressed in their writing
class. Students’ performances were closely-monitored, and if issues that could affect
peer response were identified, they were addressed as soon as possible.
3.2.5 Methodological Issues
Research Ethics
Like all scientific research, this research project rigorously follows ethical
considerations throughout. It is especially
important to adhere to such considerations when dealing with human
participants. All of these ethical issues could be grouped under this heading, but in
order to make ethical concerns easier to spot, they are presented in the designated
sections of the data collection methods, along
with recommended solutions to minimise possible negative effects.
Generally speaking, the data collection methods (questionnaires and interviews) are
always considered an intrusion into the lives of the respondents in terms of the
time taken to complete the task, the level of sensitivity of the questions, and/or the
possible invasion of privacy (Cohen et al., 2000 & 2007; Denscombe, 2007).
It is therefore very important to assure the privacy and anonymity of participants
involved in the study whenever possible. Participants should provide their informed
consent before participating in the study, a principle the researcher tried to
adhere to throughout the research.
Formal Procedures to Conduct the Empirical Study
One of the formalities of the research project was to get formal approvals from both
the educational body where the study was conducted, and the sponsor of the
research. From an administrative perspective, the researcher was required to
obtain formal consent from the English Department, KAAU, where the study was
planned to take place, before conducting the actual study. The formal procedures
generally take a considerable amount of time, but fortunately the researcher had
contacts in the department who were willing to speed up this process. The researcher
also needed the approval of the sponsor, which usually involves similarly
complicated bureaucratic procedures and lengthy correspondence
prior to conducting the study away from the University.
Validity and Reliability
The validity and reliability aspects of any data collection method used are of great
significance to the findings of any scientific research. Moreover, validity and
reliability serve as guarantees of the trustworthiness of results drawn from participants' performances.
Weir (2005) mentions that the educational bodies that provide language-testing
services, such as Cambridge ESOL and Educational Testing Service (ETS) TOEFL have
seriously and constantly addressed the reliability and validity aspects of their tests.
They have also started addressing the legitimacy of the socio-cognitive elements of
validity as much as they devoted attention to other reliability aspects. Weir (ibid:
11) declares that “the provision of any satisfactory evidence of validity is
indisputably necessary for any serious test.” The concept of validity has been of
great concern to language researchers. Messick (1992) and Moss (1992), as
mentioned in Mertens (1998), argue that validity is the most essential consideration
in test evaluation. According to Messick (1992: 742), validity in its broader context
can be defined as “nothing less than an evaluative summary of both the evidence for
and the actual – as well as potential – consequences of score interpretation and
use.” However, the more conventional definition of the validity of an instrument
according to Mertens (1998: 292) is “the extent to which [the instrument] measures
what it was intended to measure.” Additionally, Kelly (1927: 14), cited in Weir
(2005), noted "The problem of validity is that of whether a test really measures what
it purports to measure." Lado (1961: 321), cited in Weir (2005), similarly comments
“Does a test measure what it is supposed to measure? If it does, it is valid.” It can be
concluded from the previous quotations that validity of data collection methods
depends on the accuracy of their measurements.
Content Validity
Mertens (1998: 294) mentions that "Content validity is especially important in
studies that purport to compare two (or more) different curricula, teaching
strategies, or school placements. If all students are taking the same test but all the
students were not exposed to the same information, the test is not equally content
valid for all the groups.” This study actually investigates two different treatments of
ESL students where the control group receives typical teaching while the experiment
group is introduced to modern teaching methods, namely collaborative learning, to
prompt them to produce peer feedback.
Population Validity
Gall et al. (1994) mention that one of the criteria for judging experiments is
population validity. By definition, population validity is “the extent to which the
results of an experiment can be generalized from the sample that participated in it to
a larger group of individuals, that is, a population." (Gall et al., 1994: 217) The
concept of population validity is closely related to the process of sampling in
different types of quantitative research. In this research project, the researcher
selected the sample randomly to correspond with the defined population for which
the generalization of results is required. The sample should be sufficient in size,
which in turn reduces the probability of having different characteristics from the
population from which it was drawn. The sampling error in the case of the first
questionnaire should be very low, and in the case of the subsequent tools almost nil,
because all of the participants were included.
Rating Written Tests
Scoring procedures for the writing assessments followed recommendations by Weigle
(2002), the analytic assessment-based rating procedure used by Lundstrom and Baker
(2009), and the grading rubric used by Paulus (1999), to ensure the reliability and
validity of the rating practice. This includes defining the rating scale and ensuring
that raters use the scale appropriately and consistently. Rating followed an 'analytic
scoring approach' which, compared to the other two approaches commonly referred
to in the literature ('primary trait scoring' and 'holistic scale'), looks at the scripts
in terms of a range of features including, in my case, content, organisation, cohesion,
vocabulary, grammar, and mechanics, in addition to the final overall score. In terms
of reliability, Weigle (2002) mentions that an analytic scale is more reliable than a
holistic scale. Additionally, this type of assessment is more suitable for L2 writers, as
different writing abilities develop at different rates. On the negative side, an
analytical approach is usually more time-consuming and expensive, but in my case it
was possible to implement this measure primarily because of the small number of
participating papers involved. Even with a higher number of papers, modern
electronic programmes that quantify and categorise different errors would ease the
performance of an analytic scale rating.
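For illustration, the following minimal Python sketch shows one way an analytic scale of this kind could be recorded and combined into an overall band; the trait names follow the list above, but the band values and the simple averaging rule are hypothetical and are not the actual rubric used in this study.

```python
# Hypothetical analytic rating for one script: each trait scored on a 1-6 band.
traits = {
    "content": 4,
    "organisation": 3,
    "cohesion": 4,
    "vocabulary": 3,
    "grammar": 3,
    "mechanics": 4,
}

# One simple way to derive an overall band: the (rounded) mean of the traits.
overall = round(sum(traits.values()) / len(traits))
print(traits, "overall band:", overall)
```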
As a reliability measure, all essays were graded by two experienced raters, the
researcher and another writing teacher in the department, and the different overall
scores were then averaged where possible. In most cases, the difference in the scores did
not exceed one point; in the few cases where the difference was greater than
one point, the two raters discussed the disputed aspects of the script
before agreeing on a grade.
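The two-rater procedure just described (averaging overall scores and discussing any gap of more than one point) can be expressed as a short routine. The sketch below is only an illustration under that assumption; the threshold and the example score pairs are invented.

```python
def reconcile(score_a: float, score_b: float, threshold: float = 1.0):
    """Average two raters' overall scores; flag scripts whose scores differ
    by more than the agreed threshold so the raters can discuss them."""
    if abs(score_a - score_b) > threshold:
        return None, True          # no automatic average; discussion required
    return (score_a + score_b) / 2, False

# Hypothetical pairs of overall scores on a 6-point scale
pairs = [(4, 4.5), (3, 5), (5, 5)]
for a, b in pairs:
    avg, needs_discussion = reconcile(a, b)
    print(a, b, avg, "discuss" if needs_discussion else "averaged")
```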
Triangulation
Many experts in education research, including Cohen et al. (2000, 2007), Clough and
Nutbrown (2007), Weir (2005), and Gillham (2000) regard triangulation as an
important step towards validating the results of a study. In this study,
methodological triangulation was assured by having a number of different
quantitative and qualitative data collection methods. As has been mentioned,
triangulation helps minimise the drawbacks of employing single-method research.
Findings from different methods mutually reinforce each other. In the case of this
research project, methodological triangulation was achieved by using different data
collection methods: quantitative in the case of pre- and post-tests and the
questionnaires; and qualitative as far as interviews and open-ended items of the
questionnaires were concerned.
PART THREE: DATA COLLECTION AND DATA ANALYSIS
3.3.1 Data Collection Procedures
In this section, the procedures performed at every stage of the data collection
process are briefly described. This is followed by a description of the methods and
tools used to analyse the data. The following graph gives a visual overview of who
was involved at each stage, followed by more specific sections on each stage.
Graph (3.1) Data Collection Tools and Stages. Stage One: pre-experiment questionnaire (n=73). Stage Two: experiment group (n=12) and control group (n=14), entry and exit tests. Stage Three: post-experiment questionnaire (n=12) and interviews (n=6).
The researcher sought the cooperation of the English department in a Saudi
university, particularly from instructors who teach writing courses there. All students
registered in writing classes were contacted via their respective instructors for the
first questionnaire and were asked for their voluntary participation in the study.
Students were assured that the information they provide would be made available
only to the researcher and for the purpose of the study. As for the experiment,
students who registered for the course LANE216 were divided into two groups. The
teacher had no influence on which group a student chose; students
chose their sections according to their preferred class start time. Out
of the 35 registered students, 16 chose section AA (which later became the
experiment group) and 17 chose section AB (the control group). Some
students from both groups eventually dropped the course, so section AA ended up
with 11 students while 14 completed the course in the other section.
Students in the experiment group received feedback from two sources which were
the teacher and their peers. There were six peer feedback sessions in total, each lasting
between 20 and 30 minutes. Students were divided into groups of four and
the members of each group were assigned by the teacher. The nomination of group
members was mainly driven by students' levels in writing; in other words, each
group consisted of students of various writing abilities. Their level in writing was
determined by both their scores in the entry writing test and their marks in previous
writing courses. Members of the groups played different roles in different sessions.
In each session, two students wrote texts while the other two provided
comments on their peers' writing after discussing each text as a group. In the
next session, the two who had provided feedback did the writing, and the procedure was
similar to that of the previous session. Most sessions lasted between 20 and 30 minutes,
including the time required to write the short texts.
In every session, the teacher handed out checklists to the students whose role was
to provide feedback. Filling out the checklist was not a requirement and no marks
were assigned to this task but students nevertheless were encouraged to follow the
guidelines in order to keep their comments consistent with what was expected in the
course. The checklist also provided evaluators with a platform on which they could
justify their decisions about their peers' writing. The checklists also provided
material for discussion within the groups. Both local and global errors were looked at in
every session, although students reported that they focused more on linguistic errors
whether they were giving or receiving feedback.
The exit test for both groups was the product of individual work, and students did not
receive feedback from either their peers or their instructors. This was a marked task and
students were aware of this. The second questionnaire was more open-ended
compared to the first and involved all registered students in the experiment group.
Students were urged to reflect on their own ESL writing and to give honest opinions.
The subsequent interviews were individual, one-to-one sessions that lasted between 20 and
30 minutes each. They were all conducted shortly after the exit test and included
students from the experiment group. To make the interviews as natural and relaxing
as possible, they were carried out in Arabic (see section 3.2.3). Presenting the
interview data was one particular area of interest, especially given the absence of advice
on what to do with translated scripts, as in this study. I therefore decided to
conduct the interviews in the language students preferred, i.e. Arabic, record them,
translate them, and then show the scripts to another teacher along with the
audiotapes to verify the accuracy of the translation. I also sent the translated scripts
to the respective students via e-mail. It must be said that my role as a teacher-researcher
could have influenced my interpretation of the data, which is why I sought an
alternative view from another teacher in the department when assessing
students' writing and when translating students' interview scripts. Student
interviewees were also contacted via e-mail with my interpretations of their
answers. More detailed sections on each tool follow.
Writing Tasks: Entry and Exit Tests
Having acquired permission from the educational authorities, I travelled back to
Saudi Arabia, where I taught 60-minute composition classes which all of the subjects
of the experimental group attended. These were taught three days a week for
about two months, totalling just over 20 classes. The classes started on 12th
January 2008 (the working week begins on Saturdays in Saudi Arabia). In these
classes the students were introduced to peer feedback techniques, as well as the
typical teaching methods that they and their counterparts in the control group were
exposed to by default. Students of both the control and the experimental group
were distributed across two sections of the same module (code-named LANE 216, Sections
AA and AB). However, to sideline any undesired interference in the class, a
decision was made not to make the students aware that a research project was
in progress until a later stage of the research, when some of them were interviewed
about their experience. At the start of the project, I was introduced to the students
as their teacher. My duties as a teacher included all the usual teaching workload,
such as planning classroom activities, grading the students' assessed work, deciding
which topics to cover, and providing feedback. Teaching was frequently
monitored by another teacher in the Department whose role was to take over the job
when I finished my study. The textbook recommended by the Department was
Interactions II Writing, Middle East Edition, which was used with both sections, the
experimental and the control group, during the project.
The pretest was conducted during the first week of the course, when students of
both sections (i.e. AA and AB, n=35), in line with the first chapter of the textbook,
were asked to write an argumentative paragraph discussing what makes them
choose a specific university, either locally or overseas (see appendix L). They were
notified that this was not an assessed task but one aimed at identifying any writing
problems they might have. The students were also told that they could consult
their dictionaries and textbooks if they wished but they could not exchange ideas or
consult one another during the test. Students were also given the chance to receive
detailed comments on their paragraphs, either in printed form or via e-mails if they
preferred. The comments covered both form and content issues and another writing
instructor reviewed them before handing them back to the students (see appendix
H). As the entry test was conducted using pen and paper, the researcher typed all of
the participating texts in MS-Word format to enable him to respond to errors more
effectively using colour, underlining and strikethrough, while the auto-correction
function was disabled to preserve the actual writing of students, and to ensure that
every error was accounted for (see sections 4.1.1 and 4.1.2). Taking Weigle’s (2002)
different types of assessment, Cohen’s (1994) list of writing features to be included
in assessment (see table 1.5 in the literature review chapter), and Jacob et al.’s
(1981) ‘ESL Composition Profile’ into consideration, specific types of errors were
identified and were used for assessment purposes, as well as for measuring any
changes between this task and the forthcoming exit test. These factors included
content, rhetorical organization, and organization from a 'content' perspective, and
spelling, grammar, punctuation and run-on sentences as far as 'form' was
concerned, which constitutes an 'analytic scoring approach' as defined by Weigle
(2002). The content comments provided by the researcher were qualitative in
nature, and hence might occasionally be inconsistent, so both I and the other
teacher had to reach a joint decision. In order to minimize any possible interference
caused by bias or subjectivity on the part of the assessor, the other teacher reviewed
and approved the comments I provided. Local errors, on the other hand, were easier
to identify and account for in a quantifiable way.
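Because the entry-test results were intended to be compared with the forthcoming exit test, a paired comparison of the two sets of scores is one natural follow-up analysis. The sketch below applies a Wilcoxon signed-rank test (scipy) to invented score pairs; it only illustrates the kind of test that could be run on such data and does not reproduce the study's actual computation, which was carried out in SPSS.

```python
from scipy.stats import wilcoxon

# Hypothetical overall writing scores (6-point scale) for the same students
# on the entry test and on the exit test.
entry = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3, 4]
exit_ = [4, 3, 4, 4, 3, 3, 5, 4, 3, 4, 4]

stat, p = wilcoxon(entry, exit_)   # paired, non-parametric comparison
print(f"W = {stat:.1f}, p = {p:.3f}")
```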
As for the exit test, students from both groups were told in advance that this was an
assessed writing task that would be part of their overall score. More time was given
to complete the task, i.e. 30 minutes compared to 20 minutes for the entry test. The
question was again taken from the textbook and was again mainly argumentative
(see appendix L). It required students to decide which was better, living in a small
town or in a big city, and it clearly required them to support their argument with
proper examples, reasons, and evidence.
The Treatment of Peer Feedback Group
When the students who registered for LANE 216 had been distributed into two
sections, the researcher randomly chose section AA as the experimental group, while
the other section, AB, was taken by another instructor from the Department, and
was considered to be the control group. It is important to note that the choice of
sections was left to the students themselves and the only difference between the
two sections was the starting time for each class (i.e. students were not chosen
based on their age, proficiency or any other factor that might later affect their
performance). It is also worth mentioning that, because of the Department's
policy, students were permitted to drop the course during the first six weeks of the
semester, and some students from both sections did so.
It has already been mentioned that the researcher and the instructor of the other
section had to cover the same material and meet the same course objectives,
although how each instructor did that was left to them. This included choosing the
teaching methods and approaches. The core reading recommended by the
Department was Interactions II Writing (see section 4.2.2 ‘The Design of the Writing
Task’), but the choice of any supplementary materials was again left to the
instructor. These were two important factors that the researcher exploited to
integrate peer feedback into the experimental group.
The peer feedback group (the experimental group) received special training as a part of the
research project. For example, their peer-reviewed exercises were completed with
the help of Race et al.’s (2004) peer assessment grid. Students were also trained to
provide feedback using a checklist (see appendix G) that was adopted from Miao et
al. (2006), Min (2006), and Peterson (2003). The use of the checklist in the peer
feedback group is a common practice in ESL writing classrooms (Hyland, 2000).
Although some studies have raised questions about the use of checklists in peer
feedback activities (c.f. Al-Hazmi & Scholfield, 2008), arguing that they actually impose
the teacher's agenda on the students' responses, students at lower levels will
certainly need some guidance, which, in this case, came in the form of a checklist.
Pre and Post-Experiment Questionnaires
As already stated, there were two different sets of questionnaires. The first was
distributed to a wider research population of KAAU ESL students. This comprised
155 students, all of whom were attending or had attended a writing course. Of
these, 76 replied, 3 of which were rejected on reliability grounds. The first
questionnaire was carried out at an early stage of the study and was more closed in
nature. The other involved participants from the experimental group (n=14, none of
whom were rejected) and, because of the limited number of subjects, more
qualitative open-ended questions were used. The second questionnaire was
conducted towards the end of the experiment. The criteria for choosing subjects for
both questionnaires were straightforward and simple: for the first questionnaire, as
already explained, every student in the English Department who was registered in at
least one specialized ESL writing course was a potential subject, while only
participants from the treatment group were involved in the other questionnaire. The
researcher, with the help of two instructors from the Department, distributed the
first questionnaire in both formats, conventional paper-based and electronic,
whichever the students preferred. Out of the 155 students approached, 73 completed
and handed back the questionnaire, 35 using pen and paper, while the remaining 38
e-mailed them back. The first questionnaire was more comprehensive and
addressed a range of issues mostly related to the subject of the study, teacher and
peer feedback in ESL writing classes. The questionnaire items came in different
forms including the Likert scale, dichotomous and multiple-choice questions. The
questionnaire was non-standardised, structured, and it was in Arabic, mostly to
incorporate the recommendations of other researchers who viewed the early
version of the questionnaire. As the researcher was aware that some concepts were
probably new to the students, especially those who had recently registered on a
writing course, detailed definitions and explanations were provided to accompany
the questionnaire in both formats.
As for the second questionnaire, it was concise and focused on the topic of the
research, namely the students' experience of collaborative writing and peer
feedback. In other words, no questions apart from those on peer feedback and
teachers' comments were included. As noted already, because of the limited
number of participants, more qualitative measures were used by means of more
open-ended questions. The second questionnaire was designed to serve two
purposes: 1) to report on any difference in attitude towards both teachers’ and
students’ peer correction, as compared to the findings of the first questionnaire; and
2) to find out more about students’ experience of incorporating peer feedback and
collaborative writing, and how they performed and responded to each other during
the experiment, an aspect which was further investigated using interviews with
selected representatives from the group.
Treatment Group Interviews
As already stated, the interviews were used in conjunction with
the post-experiment questionnaire to supplement the findings and to provide in-
depth insight into the data. Qualitative data generated by interviews provides the
depth of understanding questionnaires may lack (Cohen et al., 2000 & 2007; Tierney
& Dilley, 2001). To some extent, these interviews compensated for possible
shortcomings of the questionnaires, mainly the inability to ask
follow-up questions; the interviews were less structured, and hence offered more
opportunity to explain and discuss various issues. As far as participants
were concerned, representative students were selected from the PF group based on
the results of their exit test. All students were essentially asked similar questions
about the same topics, bearing in mind the flexibility required in these
interviews. All interviews took place in the Department, and all were conducted
shortly after the exit test and the second questionnaire. Interviews lasted between
15 and 25 minutes, were conducted in Arabic, and were then translated into English. The translation was
double-checked and endorsed by a research student of a similar background, to
eliminate any misrepresentation of the intended meaning in the original interviews.
3.3.2 Data Processing and Analysis
This section reports on the processing of data collected in the study and the analysis
tools used. As with the preceding section, this section is merely descriptive. The
interpretation and inferences of the data are presented in the following chapter.
Writing Tasks
As mentioned in a previous section, following Weigle’s (2002) analytic scoring
approach, the researcher identified specific categories of errors, both local and
global, in order to respond to students’ compositions equally and consistently. The
analysis also considered Cohen’s (1994) list of errors, and Jacob et al.’s (1981) ESL
Composition Profile, and has incorporated a modified ETS CRITERION model of
assessment which uses a six-point holistic score report and diagnostic feedback (see
section 2.2.3.2 in the literature review).
Each type of error was assigned a different colour, including missing and redundant
items; ‘square brackets’ and ‘strikethrough’ were used to indicate these items
respectively (see the example below).
[indentation] Small town is the best please to live in. That [is] because you obtien healthy environment, more
secure [security] and you don’t need to use transportation alot. In this easy [essay] I will discuss why is living in
small town is good choise. In my opinion [,] living in [a] small town is the good oprtonity to healthy air. That [is]
because [in] the small twon usualy there [are] no factories or crowed[s] of cars in it. In addition, the small town
usualy [has] all the services is close to you. Therefore[,] you don’t have to use the transportation alot. Moreover,
the small town is more secuor comper [compared] to big twon. For example, Hull twon is more secuor than
London. In conclusion, small twon is the great please to live for many reason[s] [:] healthy environment, more
secuor, and all the servies are close to you any time [anytime] without using the transportation.
Content, Rhetorical Structure and Organisation:
Extended piece of writing that can be shortened if repeated ideas were omitted
Three valid reasons why a small town is a better place to be, but repetition can be omitted
The flow of ideas is good but there are many occasions where unnecessary repetitions are committed
Language Conventions:
Type of Error | Recurrence
Spelling | 12
Grammar/Vocabulary | 13
Punctuation/Capitalization | 4
Run-on Sentences | None
Word-count: 144
Overall Score: 3/6 (Acceptable)
Table (2.3) Analyses of a Writing Text
Other variables recorded included word-count and the overall score of each text. As for
the global issues, including content, rhetoric, and organization, the researcher gave
students comments, endorsed by another experienced ESL writing
teacher, which dealt with these issues. It must be said that the overall grade was not
necessarily an accurate measurement; rather, it aimed to reflect the writing quality in
the light of both global and local issues as seen by both raters, although more
attention was focused on the former. The quantitative data from both writing tasks
were processed using SPSS 15.0, and the results that emerged are shown in the
results chapter that follows.
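The local-error tallies of the kind shown in Table (2.3) can also be produced semi-automatically from the annotated script. The sketch below is a deliberately naive illustration: the hand-coded error list, the category labels, and the bracket-counting rule are all hypothetical and are not the procedure actually used in the study.

```python
from collections import Counter
import re

# Hypothetical record of local errors marked in one script, coded by hand
# while annotating (category, original form, correction).
marked_errors = [
    ("spelling", "please", "place"),
    ("spelling", "twon", "town"),
    ("grammar/vocabulary", "the good oprtonity", "a good opportunity"),
    ("punctuation/capitalization", "", ","),
]

counts = Counter(category for category, _, _ in marked_errors)

# Square-bracketed insertions (missing items) can also be counted directly
# from the annotated text itself.
annotated = "Small town is the best please to live in. That [is] because ..."
missing_items = len(re.findall(r"\[[^\]]+\]", annotated))

print(counts, "bracketed insertions:", missing_items)
```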
Questionnaires
I used SPSS 15.0 to help analyse and process the data. SPSS was used to obtain
percentages, means, associations, and reliability values from a descriptive point of
view, in addition to other quantitative measures, including parametric and non-
parametric tests. The unstructured comments from the student subjects were limited
in number (only 10 out of 73 wrote useful comments). However, as the second
questionnaire was more open-ended and qualitative in nature, descriptive values are
less meaningful for it, and they are used in the discussion chapter as indicators
rather than proof. I compiled and categorised the qualitative comments of the
second questionnaire to complement the results of the interviews.
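Since SPSS 15.0 is named only as the processing tool, the following sketch shows the same kind of descriptive and reliability output (item means, percentages of agreement, and a Cronbach's alpha estimate) computed with pandas on an invented response matrix; the figures are purely illustrative and do not reproduce the study's results.

```python
import pandas as pd

# Hypothetical responses of six students to four Likert items (1-5).
data = pd.DataFrame(
    [[4, 5, 4, 3], [3, 4, 4, 4], [5, 5, 4, 5], [2, 3, 3, 2], [4, 4, 5, 4], [3, 3, 4, 3]],
    columns=["item1", "item2", "item3", "item4"],
)

means = data.mean()                              # item means
percent_agree = (data >= 4).mean() * 100         # % choosing 'agree' or above

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classic formula: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

print(means.round(2), percent_agree.round(1), round(cronbach_alpha(data), 3))
```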
Interviews
I used NVivo 7.0 and 8.0 to process and analyse the qualitative data obtained from
the interviews. NVivo is qualitative data analysis software designed for researchers
working with text-based information. NVivo helps to organise the data, speeds up
the qualitative analysis and, most importantly, improves the traceability of the
analysis. The programme uses what it calls 'nodes', which are codes the researcher
finds significant during the analysis process, a very important tool when it comes to
the inductive elements of the data. The following graph shows an example of how a
response by an interviewee fits into a new 'node', which in this case is coded as
'abuse'. I used NVivo in a similar manner, with both predetermined categories and
ones created later using inductive logic.
Graph (3.2) Example of an NVivo node (from Wadsworth CENGAGE Learning: cengage.com)
As already established, the interviews were designed to supplement and give an in-
depth insight into the results of the second questionnaire. The results of the
interviews were also compared against qualitative results of other tools used (i.e.
content comments from writing tasks and unstructured comments from the
questionnaires) when possible.
As the interviews were intentionally less structured than the preceding
questionnaire, the data gathered was expectedly qualitative in nature and hence
qualitative modes of analysis were used. These measures were identified and
developed by following recommendations of Corbin and Strauss (2008), Clough and
Nutbrown (2007), Cohen et al. (2000 and 2007), Gubrium and Holstein (2001) and
Gillham (2000).
The interviews were conducted and recorded in Arabic for reasons including
convenience and time saving, and were then translated into English and transcribed.
The audio files and translated scripts were given to a fellow researcher to check and
verify the accuracy and consistency of the translation process. The translated text
files were also sent to the interviewees who had provided their e-mail addresses,
enabling them, as an additional validation measure, to confirm that their responses
had been documented as accurately as possible. The written scripts were then
uploaded to the qualitative analysis software, NVivo 7.0 and 8.0, to help code and
categorise the responses as well as to identify emerging themes (see appendix J:
NVivo Output). According to Corbin and Strauss (2008), coding is the process of first
combing the data for themes, ideas, and categories, and then, in the light of these
codes, labelling similar passages of text with the appropriate code. Codes can be
based on themes, topics, ideas, concepts, terms, phrases
and/or keywords. In this project, however, coding the interviews took a more 'a
priori' approach, which basically means investigating issues already identified by the
researcher rather than letting them emerge from the data, the opposite approach
being known as grounded theory.³ This decision was made because of two factors:
1) as has been mentioned earlier, these interviews were in essence a stage following
the questionnaires, with the interviews acting as a complement to the findings of the
latter; and 2) the number of interviews was relatively small.

³ Grounded theory in social sciences refers to the generation of theory from data; the first step in
grounded theory-driven research is to collect data.
As for the objectivist/heuristic distinction in code words, the analysis was more
heuristic in nature, yet recognised, to a certain extent, the objectivist end of the
scale. This usually means that the code words used in the analysis are primarily
signposts or flags rather than a condensed representation of facts described in the
data, as Seidel and Kelle (1995) explained. A more heuristic approach can help in
recognising the data and giving different views, resulting in better opportunities for
analysis and inspection.
However, it is important to strike a balance between a purely objectivist stance,
which places such demands on the code words that it often becomes difficult to
achieve, and a purely heuristic stance, which requires some level of confidence in
the code words in order to be effective. Therefore, an 'in-between' approach seemed
the best option.
Having taken all of the above into consideration, a number of codes were identified
prior to the analysis process. They are: 1) approval of peer feedback; 2) concerns
about peer feedback; 3) procedures and construction of the sessions; 4)
recommendations and suggestions for improvement; and 5) attitudes towards the
teacher's comments. Each of these includes a number of sub-categories of related
codes, as follows. The first code can be defined as any utterance that suggests a
positive attitude towards the newly-introduced peer feedback sessions. Its sub-
categories include positive effects of peer feedback on ESL writing in terms of
grammatical accuracy and logic, and certain learning and social skills that can be
improved by peer feedback. It also looks into any changes in attitude towards peer
feedback before and after the experiment. The second, by contrast, includes all
statements that indicate a negative attitude towards the sessions. This code includes
the sub-categories of challenges that can hinder the success of the peer feedback
experiment and any undesired effects of peer feedback on ESL writing or on
educational and social skills. The third category looks at the organization of peer
feedback sessions and how they were carried out. Two sub-categories were
identified, which are: a) the procedures the sessions followed; and b) the nature of
the comments provided by peers during these sessions. The fourth category is largely
self-explanatory and includes suggestions by students for future development, which
might come partly as a response to any possible shortcomings of the peer feedback
sessions (i.e. the second code in this analysis). The last category involves all ideas
regarding feedback and instructions provided by the teacher, including the peer
feedback checklist used in the related sessions.
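Purely as an illustration of the a priori logic described above (and not of how NVivo itself was operated), a minimal coding pass could be sketched as follows; the keyword lists and the two sample utterances are hypothetical stand-ins, not the material actually coded.

# A minimal a priori coding sketch (illustrative only): each predetermined code is
# matched against interview utterances via hypothetical keyword lists.
codes = {
    "approval of peer feedback": ["useful", "better", "liked"],
    "concerns about peer feedback": ["useless", "embarrass", "not qualified"],
    "procedures and construction of the sessions": ["checklist", "session", "groups"],
    "recommendations and suggestions": ["should", "suggest", "recommend"],
    "attitudes towards teacher's comments": ["teacher", "instructor"],
}

utterances = [
    "My friends seem to be better aware of my mistakes.",        # hypothetical lines
    "I think the teacher should still check our final drafts.",
]

# Label each utterance with every code whose keywords appear in it.
for text in utterances:
    matched = [code for code, keywords in codes.items()
               if any(keyword in text.lower() for keyword in keywords)]
    print(f"{text!r} -> {matched}")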
CHAPTER FOUR: RESULTS
Overview of Chapter Four
This chapter presents the results as they emerged from the data collection tools,
namely the questionnaires (pre- and post-experiment), the writing tasks (entry and
exit tests), and finally the interviews with members of the peer feedback group. No
interpretation of the results is included here, as that has been saved for the following
chapter, the discussion. The decision to keep these two chapters separate was made
mainly in order to maintain a clear distinction between what was found and how
the findings relate to the study and to previous research.
4.1 Writing Tests Results
There were three separate sets of results from the writing tasks. The first included
the writing texts of participants from both the treatment and control groups,
otherwise known as LANE 216 sections AA and AB respectively, and is considered
the entry test for both groups. The second set included the writing tasks of the
treatment group only; it was carried out shortly after the subjects had been involved
in the experiment. Finally, the last writing task included the writing texts of the
control group only, and it was carried out almost simultaneously with that of the
treatment group (see the Procedures section in the Methodology chapter).
4.1.1 Entry Test Results
The entry test results were as follows. The total number of participating texts was 35,
distributed between the two groups: 16 for the treatment group and 19 for the
control group (some students from both sections eventually dropped the course). On
average, texts were 46 words long, but with a high SD of 15.5, rendering this mean
not very representative. In fact, papers ranged between 29 and 102 words long,
which shows that individual texts could differ considerably from the mean value,
especially at the longer end of the scale. Nevertheless, despite that discrepancy, most
texts were between 30 and 60 words long, as the histogram below demonstrates.
Students were actually expected to write texts of around 150 words (see appendix L),
but it is safe to say that all texts fell below this limit. Text length did not count
towards the overall score; it served as a guideline rather than a requirement.
Graph (4.1) Histogram Chart of Texts’ Length of the First Task
As far as local issues are concerned, the most commonly occurring type of error was
grammatical (including subject-verb agreement, tenses, plural -s, and word choice),
of which 204 were recorded (the term 'grammatical errors' was used loosely to cover
errors such as incorrect word choice and redundant or missing words). That equals
about 5.8 errors per text, though with a high standard deviation of 3.58, reflecting
the fact that many students committed considerably more grammatical errors than
others. For example, three texts alone accounted for a total of forty grammatical
errors, rendering the mean value less representative.
N Minimum Maximum Sum Mean Std. Deviation
SPELLING 35 0 12 98 2.80 3.85
GRAMMAR 35 0 15 204 5.83 3.585
PUNCTUATION 35 0 13 109 3.11 2.709
RUN-ON SENTENCES 35 0 4 31 .89 1.078
Table (4.1) Local Errors in the Entry Test (per text)
The other types of errors recorded were (arranged according to the frequency of
their recurrence): punctuation (n=109), spelling (n=98) and run-on sentences (a run-
on sentence being a sentence consisting of two independent clauses joined with no
punctuation or conjunction) (n=31). Once again, the high SD values for all these types
of errors show that the texts varied widely in their level of accuracy, as Table 4.1
above shows.
I have also adopted a basic measure of errors per 100 words, to be used in
combination with the average number of errors per text for comparison purposes
later in the discussion chapter. The purpose of this measure is to give a more
balanced representation of the data than is possible when only errors per text are
used, bearing in mind the variance in text lengths. The following table presents the
different types of errors per 100 words in the entry test.
TYPE OF ERROR        OCCURRENCE PER 100 WORDS
GRAMMATICAL          12.8
PUNCTUATION          6.84
SPELLING             6.15
RUN-ON SENTENCES     1.94
TOTAL                27.4
Table (4.2) Errors per 100 Words (Entry Test)
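As a quick arithmetic check of this measure (a sketch only): the error totals below are those reported in Table (4.1), while the total word count is approximated here as 35 texts x 46 words (the mean length), so the figures come out marginally different from Table (4.2), which was based on the exact total.

# Errors per 100 words = 100 * (total errors of a given type) / (total words written).
error_totals = {"grammatical": 204, "punctuation": 109, "spelling": 98, "run-on sentences": 31}
total_words = 35 * 46  # approximation from the mean text length

for error_type, count in error_totals.items():
    per_100 = 100 * count / total_words
    print(f"{error_type}: {per_100:.2f} per 100 words")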
However, as for global issues (rhetoric, organisation and logic), texts were jointly
assessed and commented on by the researcher and another experienced language
teacher from the department. The comments were intended to achieve two
purposes: 1) to inform students about the level of their writing and 2) to justify the
overall grade given (see appendix H: entry test). There were six different grades used
to assess students' writing: 1 (very poor), 2 (poor), 3 (acceptable), 4 (good), 5 (very
good), and 6 (exceptionally good). For more information on the choice of this grading
rubric, please refer to section (3.3.2.1) in the methodology chapter. Most texts, using
the criteria set by the researcher and endorsed by the language teacher, were given
marks of 2 (n=12) or 3 (n=14). The mean value of the entry test was 2.23, with a
standard deviation of 0.84.
Graph (4.2) Overall Scores of the Entry Test
As far as qualitative comments are concerned, most students were given a
combination of encouraging comments (praise) and constructive criticism by both
the researcher and the writing teacher (see table 2.3 in the methodology chapter
and appendix H). The combined use of praise and criticism was largely because I
followed Hyland and Hyland's (2002) recommendations on feedback attitudes.
However, when a text was really poor, by which I mean it scored 2 or less overall,
most comments were written to justify this score on the one hand and to show
students which areas of their writing needed improvement on the other (see
examples 2, 5, 11, 15, 16, 27, 28 and 32 in appendix H: entry test).
examples ‘2, 5, 11, 15, 16, 27, 28 and 32’ of the appendix H: entry test). When a poor
score was recorded there was usually one or more of the following problems in the
texts: absence of a clear theme/topic sentence, absence or inappropriate use of
transition words, illogical transfer of ideas, irrelevant and inconsistent ideas,
incorrect use of vocabulary/idioms, incomplete sentences, and in some occasions
the higher than usual rate of linguistic errors especially when excessive errors hinder
the transmission of intended ideas.
4.1.2 Results of Exit Test
As far as the linguistic aspects of the exit test are concerned, the results show that
members of the PF group wrote texts of 97 words long on average, with a relatively
high SD of 24.2 due to variations between individual texts. In other words, texts
differed considerably in length, ranging between 63 and 144 words per paper.
Students were expected to write texts of 100-150 words, so some texts may have
fallen short in terms of length (see appendix L). This guideline should perhaps have
been made a requirement in order to make students stick to it, possibly by making
text length a contributor to the overall score.
N Minimum Maximum Mean Std. Deviation
WORD-COUNT 11 63 144 97.45 24.246
SPELLING ERRORS 11 0 13 2.27 3.797
GRAMMATICAL ERRORS 11 0 14 5.64 5.334
PUNCTUATION ERRORS 11 0 4 .91 1.375
RUN-ON SENTENCES 11 0 1 .09 .302
OVERALL SCORE 11 3 5 4.00 .775
Table (4.3) PF Group Local Errors (per text)
The linguistic (local) errors recorded, according to their recurrence per paper, were:
grammatical (5.6), spelling (2.2), punctuation (0.9) and almost no run-on sentences. It
is noteworthy that the minimum number of every type of error is nil, as the table
above shows, which means that many papers did not actually contain certain types
of errors at all. To be more precise, 10, 6, and 4 papers contained no run-on
sentences, punctuation errors and spelling errors respectively. The average overall
grade the PF group achieved was 4 (out of 6) with an SD of 0.77, which shows that
the result is somewhat more consistent than that of the other group (as shall be seen
shortly). In fact, the majority of papers received an overall grade of either 4 out of 6
(n=5) or 5 (n=3).
TYPE OF ERROR        OCCURRENCE PER 100 WORDS
GRAMMATICAL          5.78
PUNCTUATION          0.93
SPELLING             2.33
RUN-ON SENTENCES     0.09
TOTAL                9.13
Table (4.4) Errors per 100 Words (PF Group Exit Test)
The other measure used, errors per 100 words, tells a similar story as to which errors
were more prevalent. Again, grammatical errors were the most commonly recorded,
at roughly 6 errors in every 100 words. Apart from that, the remaining types of
errors occurred at much lower frequency rates, as table (4.4) above shows. The
average for all the different types of errors combined in the PF group exit test stands
at a total of just over 9 per 100 words.
N Minimum Maximum Mean Std. Deviation
WORD-COUNT 14 81 150 109.22 23.481
SPELLING ERRORS 14 1 7 3.29 2.367
GRAMMATICAL ERRORS 14 4 25 9.43 7.165
PUNCTUATION ERRORS 14 1 14 4.71 4.921
RUN-ON SENTENCES 14 0 1 .14 .3633
OVERALL SCORE 14 2 5 3.64 1.082
Table (4.5) Control Group Local Errors (per text)
Inspecting the same language issues as for the previous group, members of the
control group wrote texts of 109 words long on average in their exit writing task.
Texts ranged between 81 and 150 words long, with a slightly lower SD of 23
compared to that of the PF group, which means the dispersion of results was smaller.
The most common types of errors, arranged according to their average per passage,
were: grammatical errors (9), punctuation (5), spelling (2) and run-on sentences
(insignificant).
TYPE OF ERROR        OCCURRENCE PER 100 WORDS
GRAMMATICAL          8.61
PUNCTUATION          4.30
SPELLING             3.00
RUN-ON SENTENCES     0.09
TOTAL                16.01
Table (4.6) Errors per 100 Words (Control Group Exit Test)
Their overall grade averaged 3.64 (out of 6) with an SD of 1.08, meaning the spread
of grades was wider than that of their counterparts in the PF group. It must be noted
that, as far as grammatical errors are concerned, two passages shared 47 errors
between them, which partially explains the relatively high SD value. Another
measure taken to compare the performance of the two groups was a 'clause
complexity analysis', which can be found in appendix (M). The findings were as
follows:
Clause Relation Paratactic Hypotactic
Elaboration 7 [0.63] 0
Extension 2 [0.11] 0
Enhancement 3 [0.27] 10 [0.91]
Table (4.7) Number of Clause Relations in Texts by Treatment Group (n = 11) [per text]
Clause Relation Paratactic Hypotactic
Elaboration 7 [0.50] 0
Extension 15 [1.1] 1 [0.1]
Enhancement 4 [0.29] 12 [0.86]*
Table (4.8) Number of Clause Relations in Texts by Control Group (n = 14) [per text]
*One text contains three Hypotactic Enhancement relations
4.2 Questionnaire Results
4.2.1 The Pre Experiment Questionnaire
Using SPSS, the following results were obtained:
                          Frequency   Percent   Valid Percent   Cumulative Percent
Valid  NOT SURE           25          34.2      34.2            34.2
       IMPORTANT          26          35.6      35.6            69.9
       ALWAYS IMPORTANT   22          30.1      30.1            100.0
       TOTAL              73          100.0     100.0
Table (4.9) Students Beliefs of Teachers’ Comments
When students were asked how important they thought the comments provided by
their teachers were in general, the results were as follows: none of them described
the comments as unimportant, and 65.7% thought that the comments were either
important or very important, with a mean of 3.96 and a standard deviation of 0.8
(answers were scored on a scale where 5 stands for 'always important' and 1 for
'very unimportant'), as the table above and the graph below demonstrate.
Graph (4.3) Students Beliefs regarding the Importance of Teachers’ Comments
Students were also asked how useful they thought peer feedback was (which is
different from asking how important it is, in the sense that one question asks about
the general concept while the other looks at the issue from a practical point of view).
                      Frequency   Percent   Valid Percent   Cumulative Percent
Valid  very useless   11          15.1      15.7            15.7
       useless        16          21.9      22.9            38.6
       not sure       25          34.2      35.7            74.3
       useful         15          20.5      21.4            95.7
       very useful    3           4.1       4.3             100.0
       Total          70          95.9      100.0
Missing               3           4.1
Total                 73          100.0
Table (4.10) Students Beliefs regarding Usefulness of Autonomous Learning
Their responses to this question were more diverse than those for the previous
question: 38.6% believed it to be either useless or very useless, in comparison to
24.6% who believed that peer feedback was useful or very useful, while 34.2% of the
respondents did not have an opinion. The mean value was 2.76, with a relatively
high standard deviation of 1.09. The table above and the graph below demonstrate
these results. Only three students did not answer this question (shown in the table as
'missing'), which means the remaining 70 students responded.
Graph (4.4) Students Beliefs Regarding Autonomous Learning
When students were asked about their beliefs regarding two unconventional
learning techniques, namely 'autonomous learning' and 'peer feedback', their
responses were similar in terms of not having an opinion about them as 27 and 28
students were not sure about their usefulness respectively. However, a very small
number found peer feedback useful or very useful (10% in total as shown in graph
4.5 below) compared to a slightly higher percentage (18%) when it comes to
autonomous learning.
Graph (4.5) Perception of Peer Feedback
On the few occasions when testing the effects of different factors on other variables
was possible, nonparametric tests, more specifically chi-square (χ²) tests, were used
instead of parametric measures, because the questions concerned did not test or
measure the subjects, as opposed to data in the form of scores or measurements, for
which parametric tests would have been more appropriate. Another reason for
avoiding parametric tests is that they are more likely to generate Type I errors than
nonparametric tests, especially when used with data that do not meet parametric
assumptions (Kranzler, 2007). Accordingly, a number of association tests were
carried out to measure the effects of different factors on students' perceptions of
both teacher and peer feedback, but no significant results were obtained, as can be
seen later. For example, I tried to find out if students who had
passed more ESL writing courses perceived peer feedback differently. The cross
tabulation revealed the following:
                                 Students beliefs regarding the usefulness of PF and CL
Successfully completed           VERY       USELESS   NEITHER USELESS   USEFUL   VERY      Total
ESL writing courses              USELESS              NOR USEFUL                 USEFUL
None             count           2          1         4                 1        0         8
                 expected count  1.5        2.3       3.2               .9       .2        8.0
One course       count           8          8         17                2        0         35
                 expected count  6.4        9.9       13.8              3.9      1.0       35.0
Two courses      count           1          8         3                 3        1         16
                 expected count  2.9        4.5       6.3               1.8      .5        16.0
More than two    count           2          4         4                 1        1         12
courses          expected count  2.2        3.4       4.7               1.4      .3        12.0
Total            count           13         20        28                7        2         71
                 expected count  13.0       20.0      28.0              7.0      2.0       71.0

Table (4.11) Number of Previous Writing Courses * Students' Beliefs (crosstabulation)

                              Value       df    Asymp. Sig. (2-sided)
Pearson Chi-Square            16.481(a)   12    .170
Likelihood Ratio              19.317      12    .081
Linear-by-Linear Association  .092        1     .762
N of Valid Cases              71
(a) 15 cells (75.0%) have an expected count of less than 5. The minimum expected count is .34.

Table (4.12) Chi-square results for the crosstabulation in Table (4.11)

The chi-square results should be treated cautiously due to the presence of 15 cells
with an expected count of less than 5 and the high p-value of 0.17. Unfortunately,
attempts to recode the variables so that options (1, 2) and (4, 5) were merged to
indicate 'useless' and 'useful' respectively did not remove all of the problematic cells.
Although the chi-square value itself is relatively high, since p > 0.05 we cannot reject
the null hypothesis (H0).
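For illustration only, the kind of chi-square association test reported in Tables (4.11) and (4.12) can be sketched in a few lines of code (Python with NumPy and SciPy assumed). The observed counts are taken from the crosstabulation above; because of possible rounding or transcription differences in the reproduced table, the output may not match the SPSS figures exactly.

import numpy as np
from scipy.stats import chi2_contingency

# Observed counts from Table (4.11): rows are the number of completed ESL writing
# courses (none, one, two, more than two); columns run from 'very useless' to 'very useful'.
observed = np.array([
    [2, 1,  4, 1, 0],
    [8, 8, 17, 2, 0],
    [1, 8,  3, 3, 1],
    [2, 4,  4, 1, 1],
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")

# SPSS's warning about small expected counts can be checked directly.
small = (expected < 5).sum()
print(f"{small} of {expected.size} cells have an expected count below 5")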
Graph (4.6) Students Preference of Feedback Attitudes: praise and encouragement 11.43%; criticism 42.86%; both praise and criticism 45.71%
Similar nonparametric association tests were carried out to measure the effect of
variables such as 'level in the university' (χ² = 14.7, p = 0.55), 'age' (χ² = 21.05,
p = 0.63), and 'first choice of major in the university' (χ² = 3.35, p = 0.50) on
students' perception of peer feedback. As with the previous chi-square result, the
null hypotheses (H0) in all these tests cannot be rejected because of the high
p-values. However, the actual count of students whose first choice was an English
major exceeded the expected values, which suggests that they held a more positive
attitude towards peer feedback, while those whose first choice was not English did
exactly the opposite, i.e. their actual count on the negative side exceeded the
expected values.
Original Results                 Value      df    Asymp. Sig. (2-sided)
Pearson Chi-Square               1.134(a)   4     .889
Likelihood Ratio                 1.142      4     .888
Linear-by-Linear Association     .003       1     .955
N of Valid Cases                 71
(a) 4 cells (40.0%) have an expected count of less than 5. The minimum expected count is .99.

Recoded Results                  Value      df    Asymp. Sig. (2-sided)
Pearson Chi-Square               .988(a)    2     .610
Likelihood Ratio                 .992       2     .609
Linear-by-Linear Association     .013       1     .910
N of Valid Cases                 71
(a) 1 cell (16.7%) has an expected count of less than 5. The minimum expected count is 4.93.
Tables (4.13, 4.14) Chi-Square Unfamiliarity with PF * their Perception (Before and after recoding)
The chi-square results here show that the null hypothesis cannot be rejected on
either occasion (i.e. before or after recoding the data) because, first, the p-value is
extremely high (0.95 and 0.91 respectively) and, second, at least one cell with an
expected value of less than 5 remains even after attempting to remove problematic
cells by recoding options (1, 2) and (3, 4) to indicate 'useless' and 'useful'
respectively. The χ² value itself is not of great significance anyway (1.1 and 0.98), so
we cannot assume that students' unfamiliarity with peer feedback affected their
perception of it here.
4.2.2 The Post-Experiment Questionnaire (Peer Feedback Group)
This questionnaire was much less comprehensive and involved a considerably
smaller number of subjects than the previous one. It was designed in conjunction
with the interviews, partly to measure any change in attitudes towards specific
feedback techniques since the previous questionnaire, and partly to avoid
redundancy: questions that did not lend themselves to meaningful comparisons, and
could therefore stand on their own, had already been covered either in the first
questionnaire or via other data collection tools such as the entry/exit tests and the
interviews (see the methodology chapter). It must be noted that, due to the small
number of participants at this stage, the statistical results should be treated as
indicators rather than solid facts, and they will be used alongside the qualitative
results for comparison purposes in the following discussion chapter.
The qualitative data gathered from the post-experiment questionnaire indicate that
many students were still unsure about the usefulness of peer feedback even after
they had been trained and involved in peer sessions. However, a more substantial
number of students now believed that peer feedback was useful or even very useful.
Students also reported that most of the comments they received from their peers
addressed local issues (grammar, spelling, punctuation), while a smaller number
received a combination of both local and global feedback (by global I refer to wider
issues in writing, such as logic and ideas, as classified by Ferris and Hedgcock, 1998).
However, when students were asked how they responded to their peers' errors,
local issues were of great concern to them, as all students claimed that they had
looked at them at one point or another. Global concerns, on the other hand, were of
less importance: almost all students said that they paid little attention to them when
responding to their peers' writing, with only one student who thought he had.
4.3 Results of the Interviews
This brief section looks mainly at the qualitative results of the interviews as
generated using NVivio 7 and 8. The results at this stage tell very little apart from the
categorization and coding procedures which have been discussed in details in the
methodology chapter. However, meaningful interpretations of the results should be
saved for the following chapter: discussion. Following a rough order of the categories
identified and based on the special arrangements of the software applied to help
122
analyse data, the early results of the interviews were as follows: 12 references had
been recorded which suggested improved learning and social skills. A further 19
indicated positive attitudes towards peers’ comments. On the contrary, 9 references
recorded indicated difficulties in implementing peer feedback and four more suggest
undesired results of peer feedback. Other responses of interest recorded included
these related to how peer feedback sessions were carried out (16) and the type and
attitude of comments in peer feedback sessions (9). As for teacher’s feedback (to be
compared to peer feedback), the responses indicating approval (5) and disapproval
(3) have been coded. (See appendix J: NVivo results)
CHAPTER FIVE: DISCUSSION
Overview of Chapter Five
This is the last main chapter of this project. Having reviewed the literature, set out
the methodology, collected and analysed the data, and finally established the
results, attention now turns to the interpretation of the data and to connecting
these findings with those of previous studies in this field. The research questions will
be progressively addressed in the process, and sensible recommendations for both
ESL teaching and future research will be established. Chapters four and five are
closely connected, and there will be many references throughout this chapter to the
previous one as interpretations of the raw results emerge.
In general, most of the findings of the study are in line with those of the majority of
similar studies in almost every aspect investigated. The results show that, as far as
feedback in general is concerned, more feedback and training in writing sessions was
beneficial to the students regardless of its source, whether teachers or peers, and
whether conventional or innovative measures were used. In fact, this particular
result supports the stance of Ferris (1999, 2003 & 2007), Ashwell (2000), Chandler
(2003), and many others, who argue against the idea advocated mainly by Truscott
(1996 & 2004) that correction should be avoided because it is useless, if not
counter-productive. It was also found that controlled peer feedback did help students
write better, in terms of both grammar and content, along with developing many
essential social and cognitive skills, including greater classroom participation, active
engagement in communicative language exercises, responding to others' texts in a
controlled and useful manner, the ability to argue and defend ideas, and, last but not
least, the ability to address a particular audience, in comparison to the outcomes of
the other group, which relied only on teachers' comments. These findings are in
accordance with studies such as Min (2008), Rollinson (2005), Storch (2004), Saito
and Fujita (2004), Hinkel (2004), Ferris (2003), Yarrow and Topping (2001), Hyland
(2000), Reid (2000), and Ferris and Hedgcock (1998).
5.1 Students’ Perception on Different Types of Feedback
This part looks into the first of the research sub-questions (see section 3.1.2). Before
we proceed to discuss students' beliefs about this learning process, it should be
noted that there is an overlap between this and the last of the research questions
(cf. section 5.4), because both look at students' beliefs regarding different feedback
techniques at some stage. However, it was necessary to separate them, as this
question looks into the preferences of ESL students regarding feedback in general,
the rate of feedback they receive, the attitude of criticism or comments they prefer,
the areas of writing they want feedback to focus on, and the directness of
corrections, and not merely teacher and peer feedback, as is the case in the other
section. Moreover, at this stage I am also interested in students' initial beliefs
concerning the different feedback techniques, to see whether such beliefs could have
affected their performance in the following stages, hence the results of both
questionnaires may be required. Whether these beliefs changed as students in the
treatment group were exposed to peer feedback training and engaged in PF sessions
is, however, the focal point of the other section.
The investigations conducted to answer the first of the four sub-questions went
through two different stages: the first targeted all students who had taken, or were
about to take, a writing course in the department, and is the main source of
information in this part; the second stage included only those who were members of
the PF group and is analysed only minimally at this point. Data were collected using
a combination of closed and open-ended questions, with more quantitative items in
the first questionnaire and more open-ended, qualitative questions in the second.
The approach used to gather data was mainly quantitative on the first occasion,
given the relatively large number of students approached at that stage, yet a
qualitative aspect of their responses was still captured through the open-ended
items. Having collected the necessary data, the descriptive data on different
preferences and beliefs, and eventually the comparisons between the two stages,
were processed and analysed.
The descriptive statistics of the first questionnaire show that, as far as attitudes
towards teacher comments were concerned, the majority of students preferred a
combination of 'constructive criticism' and 'praise', or simply 'constructive criticism'
alone, rather than mere praise in a formative assessment (as compared to
summative assessment⁴, for which students expectedly preferred more praise and
encouragement: n=26 in the latter, compared to only 8 in the former). In fact, only
11.4% of students preferred their work to be merely praised by their teachers (see
graph 4.6), a result which could be affected by the age factor, as all of the students
involved were mature university-level students.

⁴ Formative assessment generally refers to comments given while students are revising their texts, with the
purpose of improving and accelerating learning (Sadler, 1998). Summative assessment, on the other hand, refers
to comments on the final version of students' texts and refers only to failure or success, or to how students
compare with their peers (Nicol & Macfarlane-Dick, 2006).

It can be argued that because of
students’ level and age, the majority were willing to accept criticism as long as they
were convinced that this was going to help them become better writers. In other
words, they were more concerned about possible points of weaknesses so they can
work on them, than with what they were already good at. However, no direct
comparisons were immediately possible, by which I mean investigating the beliefs of
students from other levels, linguistic backgrounds, or, for that matter, those of their
female counterparts. If the same finding is to be compared to other studies in the
literature, the study of Hyland and Hyland (2001) investigated the preferences of
many male and female students from different age groups, linguistic and authentic
backgrounds, which generated the diversity of their findings as to which type of
feedback students preferred. One assumption from this result is that ESL student
writers would not be very much negatively affected by the attitude of feedback they
receive, regardless of where it comes from, as long as it highlights their
shortcomings. Therefore, students were asked to focus on their peers’ errors more
than on praising their good points during the training week and the following
sessions, because these were ongoing developmental exercises, not a final marking
practice (i.e. they were formative not summative).
Due to the comparative nature of the study, most obviously in the fourth research
question, a decision was made that the items in students' responses that involve
beliefs and attitudes should be identified and categorised in both questionnaires.
Given the topic of this project, the most prominent categories were naturally
students' beliefs concerning teachers' feedback compared with their preferences
regarding peer feedback in ESL writing classes. It should be noted, however, that
asking students about peer feedback in the first questionnaire might not yield
enough informed replies because, bearing in mind the distinctly traditional methods
of learning most EFL students are used to in Saudi Arabia, it could be a totally novel
idea to some. To be more precise, half of the subjects who returned the first
questionnaire had never been involved in peer sessions prior to the experiment (see
section 4.2.1, the pre-experiment questionnaire). However, the notions of
autonomous/collaborative learning and peer feedback in writing classes had been
thoroughly clarified, explained, and exemplified as much as possible, not only in the
supplementary information included in the copies of the questionnaire given to
potential subjects, but also by the instructors who were monitoring the process,
including myself, as time and resources permitted (see Index: 1st Questionnaire), to
make sure that students had at least some idea about the subject. In fact, the
general impression of this research population is an invaluable source of
information. The data also gave an important insight into how students initially
perceived different learning approaches, how that might affect their performance,
whether these preferences and beliefs would change according to the different
treatments they received, and how that would be reflected in their actual writing,
notwithstanding the aforementioned precautions.
By inspecting the descriptive results of the first questionnaire, it becomes obvious
that, as far as teacher written feedback is concerned, the overwhelming majority of
students held very strong views in favour of this type of feedback (see graph 4.3 and
table 4.9 in the results chapter). In fact, not a single student described teacher
feedback as either (2) unimportant or (1) very unimportant (on a five-point Likert
scale), which simply means that, despite the shortcomings of this type of feedback
reported in studies such as Truscott (1996, 2004 & 2007), students would still like to
see more comments from teachers on their written work. More importantly, 65.7%
of students believed that such feedback is either (4) important or (5) very important,
a result which gives a clear answer as to how much ESL student writers valued their
teachers' comments. Again, building on the evidence of these results, it can be
argued with a high level of certainty that this finding supports those of the majority
of similar studies, most of which reported how much ESL students appreciate
teacher feedback in particular, as compared to other sources such as peer feedback
(Montgomery & Baker, 2007; Ferris, 1995 & 2002; Hyland, 1998; Hedgcock &
Lefkowitz, 1994; Chaudron, 1984).
By moving to the other major theme of the study, the descriptive results of the first
questionnaire show that as far as peer feedback was concerned, graph 4.5 confirms
the assumption that student writers were very uncertain about, and even
disapproved of, this type of feedback. In fact, 33 out of the 71 valid cases reported that peer feedback
was either ‘useless’ or ‘very useless’, as compared to only 10 students who thought
that peer feedback was useful/very useful (a ratio of over 3:1). This finding at that
early stage of the study simply reiterates the assumption that most students had a
negative attitude in general towards peer feedback, and when compared with the
earlier results of teachers’ feedback, it becomes evident that the latter was much
more desired than the former. By following a similar analogy, students in the first
questionnaire can be described as having more diverse attitudes towards peer
feedback when compared to their consistent beliefs regarding teacher feedback. In
fact, the mode of 3 signals that they were generally unsure about how useful peer
feedback could be, a result which at that stage was expected, given that as many as
37 students (or just over half of the research population) had never been
involved in peer feedback sessions before. The other significant result is that,
whether or not students were familiar with PF exercises, their general impression
was one of suspicion: they were not only unsure of its usefulness but also claimed
that such exercises could yield negative results. If we look at graph 4.5 in the results
chapter, we find that about half of the students (46.5%) believed that PF was either
'useless' or 'very useless'. When we combine this number with that of those who
were unsure, we are left with only 10 students (14.1%) who thought that peer
feedback could actually be 'useful' or 'very useful'. In
other words, students at that stage were definitely not in favour of peer feedback,
whereas their responses towards teacher feedback showed a much more
positive attitude towards it. These results make it possible to assume with
confidence that students were not eager to substitute their ‘traditional’ way of
learning, which in this case comes in the form of teacher feedback, with a more
unconventional, innovative way of learning, represented here by peer feedback.
Many possible reasons why students thought that peer feedback might not suit their
learning needs were identified, including the following (arranged in descending
order, according to how strongly students thought they had an impact): fellow
students did not possess the necessary linguistic skills to provide feedback (69% of
the subjects thought so); students were not qualified to give comments (53%);
students would not take the matter seriously (43%); correcting peers' scripts could
embarrass some students (32%); students would not accept corrections from their
peers (23%); and, finally, it is the teachers' responsibility to provide feedback (21%).
Linguistic ability frequently seems to be of paramount importance to ESL students,
including subjects of this study, who questioned PF techniques mainly because they
believed that the linguistic level of their peers was lower than that of their teachers,
which supports the findings of many previous studies, including Ashwell (2000),
Ferris (2002), Hinkel (2004), Ellis et al. (2008), Bitchener (2008), and many others
(see section 1.2 in the literature review).
So, the investigations into students' beliefs about different types of feedback can be
summarised as follows: while the overwhelming majority of students in the first
questionnaire (pre-experiment) reported that they believed teachers’ feedback was
a very important source of knowledge, there were some promising results as to how
they perceived the notion of collaborative learning, which includes peer feedback
exercises. These positive attitudes were further enhanced by training and actively
engaging a group of students to incorporate peer feedback sessions into their typical
writing classes.
5.2 How Can Peer Feedback Help Students Improve Writing Skills
To answer the second research question, which asks whether PF helps students to
improve their existing writing skills and gain new ones, how, and to what extent, it is
logical to conduct a comparative analysis of how students fared in their writing tests
before and after the experiment. The assessment procedures followed what Black
and Wiliam (1998) describe as the four essential elements of effective assessment
and feedback, which are: 1) establishing a recognized and measurable standard; 2) a
means of identifying student performance in relation to that standard; 3) a means of
comparing the two levels; and 4) a way to apply this information to alter the gap.
More details about how the
writing tests were conducted are available in the methodology chapter section 3.3.2.
It must be acknowledged that the entry test, as it was administered, does not provide
solid baseline data, because students did not follow the word-length guideline,
resulting in possible differences in the evaluation of performance, and because of
variations in proficiency levels, with some texts showing a far greater number of
errors than others. Nevertheless, the results of the entry test can be used as an
indicator of students’ common errors in writing. The results can be compared to
those of the exit test but with caution given the way in which the entry test was
administered.
As far as the writing tasks are concerned, both groups (control and treatment)
showed significant improvement in their performances from their corresponding
results in the earlier entry test. On average, members of the PF group scored a much
lower number of errors per 100 words in every type investigated; the scores show a
significant drop from 12.8 to less than 6 in grammar, 6.15 to 2.33 in spelling, and
more substantially in punctuation and run-on sentences, which dropped from 6.84 to
0.93 and from 1.94 to 0.09 respectively. The total number of errors significantly dropped from
a massive 27.4 to just 9.13 as a result. This result shows a significant improvement in
the level of accuracy but it must be treated with caution due to the limited number
of participating papers in the writing tests. An important question arises, which is
whether such a dramatic improvement in terms of local issues can be attributed,
wholly or partially, to peer feedback sessions. In order to address this question, it is
logical to see how the other group performed, given that both groups performed
under similar circumstances, with only the addition of peer sessions to the PF group.
The control group, for its part, also showed improvements in its exit test compared
with the results of the entry test. In fact, the control group performed better in the
exit test in all four linguistic aspects investigated, although, it has to be said, not as
well as the PF group. Firstly, here is a summary of how the control group improved
on its entry test (using the same measure of errors per 100 words): for grammatical
errors, there was a significant drop from 12.8 to 8.61. This number should be treated
with caution because two participating texts shared 47 errors between them, which
explains the high standard deviation of 7.16 shown in table (4.5); despite being a
significant improvement, it was still not as large as that of the PF group. The overall
average of errors per 100 words for the control group stands at around 16, compared
to more than 27 in the entry test; however, the PF group, as already seen, has a much
lower average of around 9. The greatest contributors to the control group's higher
average were spelling and punctuation errors, for which the improvement was much
less marked than in the PF group. Again, as with the treatment group, these results
should be treated as indicators rather than solid facts because of the limited number
of participating texts. One option was to ignore these results altogether, but, despite
the relatively small number of participating texts, the analytic assessment approach
could be used in larger-scale follow-up studies, resulting in more meaningful findings.
Other intervening factors might also have affected the overall result of the PF group,
including the different type of discussion in the classroom, additional access to
tutorial time and the use of supplementary materials.
The language results of the treatment group also show that its members wrote
shorter but more accurate texts compared to their counterparts in the control
group. Far fewer errors in all aspects investigated were recorded in the PF group exit
test. However, spelling mistakes in the treatment group were more prevalent in
some papers than in others, and given that one paper for instance had 13 misspelled
words, while some others do not have a single error, the mean could have been
distorted as a result. The high SD of 3.79 confirms this assumption. The PF group
nevertheless did considerably better in grammar and punctuation compared to the
other group (with 5.64 and 0.91 for the PF group, compared to 10.11 and 7.11 for
the control group).
As far as qualitative measures are concerned, the PF group achieved a better overall
grade, reaching a mean score of 4 compared to 3.64 for the other group, and the PF
group also had a much more consistent result owing to its lower SD. The overall
grade looks not only at language aspects but at wider global issues as well, including
ideas, logic and organization; hence the higher score also indicates greater
achievement in this area. Both assessors noted that the work of the PF group dealt
with more advanced ideas, was better organized, and contained more well-
developed arguments, and that the scope of the issues discussed followed a more
logical progression. In organizational terms, the sentences and paragraphs were also
better constructed. Despite the fact that the PF group wrote less on average, their
writing was reported to be more focused and to the point, with less redundant or
unnecessary information.
Finally, we look at the global issues of the writing tests, where comparisons between
the two groups were made accordingly. Issues such as rhetoric, logic, supporting
examples, and sufficient explanations were of interest. The results of the entry test
were diverse, as seen in the results chapter. On the negative side, many papers
showed numerous instances of chaotic and confused ideas, incorrect word choice,
very basic sentences both grammatically and rhetorically with few or no transition
words, incorrect use of articles, weak rhetorical structure, excessive use of the
conjunction 'and' (a trait common to many Arab learners; see Aljamhoor, 2001),
unclear genre (comparative, argumentative), missing essential components of a
paragraph (e.g. a central theme, topic sentence and concluding sentence), and scarce
or even absent supporting evidence and examples. On the other hand, there were
some good points, though in a considerably smaller number of papers, including a
smooth flow of ideas, some good examples and evidence, good argumentation and,
occasionally, good transition of ideas.
In comparison, however, when we inspect the comments given to the PF group texts,
it becomes apparent that these students wrote more consistent texts: the gathered
evidence shows that PF group members provided better explanations and reasons to
support their claims. Similarly, the examples they provided were closely related to
the subject discussed. Many texts showed a good logical progression of ideas and a
convincing discourse from the most important issues to lesser ones. The PF group
texts in
general seemed well connected, due to good use of transitional words and phrases,
an issue emphasized throughout the course. On the other hand, there were rare
instances of unnecessary repetitions, which were very limited indeed, and were
quite possibly related to specific individuals rather than indicating a systematic
problem with the group. Other problems noticed included over-general topic
sentences and incorrect word-choice. The control group performed fairly well in this
field as well, compared to their corresponding results in the entry test. A close
inspection of the comments given indicates that some good examples were provided
to support the argument, which is a noticeable improvement from the previous test.
Similarly, in terms of rhetoric and organization, there was a significant improvement
since the entry test, but on both occasions the PF group fared considerably better.
Despite the fact that many texts from the control group provided more examples to
support their argument, these examples were not as directly connected to the
central theme as the ones provided by the other group. There were also some
confused and unclear sentences in many of the texts of the control group, in which
the rate of repetition was more apparent than in those of the PF group. Transition words
and phrases were a real concern in many scripts from the control group, resulting in
weak rhetorical structure, a problem that was far less prevalent in the PF group’s
texts.
The clause complexity analysis performed on texts produced by members of both
groups does not show much difference between them, apart from Extension clauses,
which were used more often by members of the control group (see tables 4.7 and
4.8 in the results chapter). Again, even that result has to be treated with caution
because of the limited number of participating papers, and despite the interesting
nature of the analysis I have decided not to focus too much on these results for now.
Clause complexity analysis can yield better results when a larger number of papers is
involved.
Given the results of both tests, it is now possible to address the second research
question and argue that peer feedback sessions did in fact play a significant role in
helping students write better, not only in drafts that had been jointly revised but also
in later texts, at least within the short time span during which it was possible to
investigate the phenomenon in this research project. An interesting particularity
noticed about the PF group texts is that they were shorter than those of the control
group. In other words, the PF group wrote shorter but more accurate texts. However,
it must be said that the length of papers was not a determining factor in assessing
students' writing and was never treated as a fundamental issue. Instructions about
the expected length of texts were available, but they were clearly meant as a
guideline rather than a determining factor in the overall grade. The two most
important concerns, as already mentioned, were how organized and coherent
students' papers were in terms of logic and ideas, and how accurate they were from
a linguistic perspective.
It must be said that the results of the entry test were remarkably poor. In addition to
a lack of feedback, revision opportunities and training, there were arguably many
factors that might have contributed to the less than satisfactory performance
including: the timing of the test, which was at the beginning of the term; the fact
that some low-achieving students who initially registered on the course
subsequently dropped out; and unfamiliarity with the requirements of the course,
teachers, other students, course objectives, expected workload, and the nature of
the writing tasks. As alluded to earlier, all or some of these factors could have
affected the result to some extent, but more research might be required to indicate
the key agents with certainty (c.f. section 6.4 ‘recommendations for future
research’).
It has already been mentioned that the results of the writing tests show that peer
feedback helped students improve their ESL writing. However, to engage more fully
with the second research question, the investigation should go beyond the results of
the tests and include the results of the questionnaires and interviews as well. Such
inclusion gives a more humanistic perspective on the PF experiment, and one thing
that makes the following discussion different from the one above is that it aims to
find out more about how PF sessions helped students write better as the students
themselves see it, which, in addition to the results of their actual performance
discussed previously, should give a better understanding of how and to what extent
such a technique worked.
Starting with the first questionnaire, the pattern of results shown in Table (4.11) (in
the results chapter) indicates that there is a clear difference between the expected
and the observed values. For instance, the positive end of the scale ('useful' and
'very useful') has fewer observed counts than expected, which indicates that
students who had passed fewer writing courses had a less positive attitude
towards peer feedback. Another remarkable finding is that only students who had
passed one writing course outnumbered the expected value. Those who had passed
more than one seemed to have a less favourable attitude. The negative options
(‘useless’ and ‘very useless’) also show that most students had a more negative
attitude than expected, which, in the case of those who passed two courses, is very
noticeable. More observed values can also be found under ‘neither useless nor
useful’, with students who were still in their first writing course indicating that they
may prefer not to discuss something they are not well-informed about. The chi-
square test, however, tells us that these results are not reliable, for two reasons.
Firstly, there are 15 cells with expected counts of less than 5, rendering the results
void. Secondly, since the significance value (p-value) is much higher than 0.05, the
probability of error is very high. Recoding the values did not help much either, as
there were still cells with expected counts of less than 5. There might be a trend with regard to
students’ responses but unfortunately it is not statistically proven. The descriptive
results of the questionnaire show that 42.8% of students seemed willing to receive
only constructive criticism as feedback, in comparison to a slightly higher percentage
(45.7%) who would prefer a combination of both praise and criticism, a finding which
correlates closely with that of Hyland and Hyland (2002)
reporting on ESL writing students. It is possible that students at this stage were not
looking for approval as much as ways of improvement. Another interesting factor
that might have affected students’ response is their gender, or to be more specific,
personal traits associated with their gender. Male students, as reported in Hyland
and Hyland’s (2002) study, tended to have similar attitudes towards constructive
139
criticism. In other words, they were less concerned about social approval or
encouragement than their female counterparts, but unfortunately due to constraints
of access (see the limitations section) resulting in the absence of the female voice in
this study, a significant aspect of the question remains unanswered.
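To make the statistical reasoning above concrete, the following is a minimal sketch of
a chi-square test of independence on an invented contingency table of writing courses
passed against rated usefulness of peer feedback. The counts, the category labels and
the use of Python’s SciPy library are assumptions for illustration only (the thesis
itself reports its statistics through SPSS output); the sketch simply shows how
expected counts below 5 and a p-value above 0.05 lead to the kind of conclusion drawn
above.

```python
# Minimal illustration only: a chi-square test of independence between the number
# of writing courses passed (rows) and the rated usefulness of peer feedback
# (columns), mirroring the kind of table discussed above. All counts are invented.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: 0, 1, 2, 3 writing courses passed
# Columns: very useless, useless, neither, useful, very useful
observed = np.array([
    [4, 6, 8, 2, 1],
    [3, 7, 5, 6, 2],
    [5, 8, 4, 3, 1],
    [2, 3, 3, 1, 1],
])

chi2, p_value, dof, expected = chi2_contingency(observed)

# Rule of thumb referred to in the text: the test is unreliable when many cells
# have expected counts below 5.
small_cells = int((expected < 5).sum())

print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
print(f"cells with expected count < 5: {small_cells} of {expected.size}")
if small_cells > 0 or p_value > 0.05:
    print("No statistically reliable association can be claimed.")
```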
As far as the interviews are concerned, the results show that all interviewees had
very positive attitudes towards peer feedback sessions. One respondent described the
experience by saying ‘I have a more important role in the classroom than just
attending and listening’, and another commented on the novelty of the idea: ‘it was a
good concept using different ways of learning.’ On the whole, most respondents had a
good experience, and the comments received from colleagues were considered useful. For
example, one respondent observed that ‘students have more time per paper than a
teacher so they can write longer and more detailed comments’, and another said ‘my
friends seem to be better aware of my mistakes.’
Three respondents commented on the concept of alternative ways of learning, and they
believed that this was a valid as well as interesting and exciting approach in writing
classes. Students were particularly happy that they had more opportunities to discuss
their writing problems with each other, as opposed to the limited chances available
when teachers were the only ones in charge. Two interviewees believed that because
they could play a greater role in decision making, and because they were not simply
passive receivers of what teachers had to say, classes were far more interesting, a
point that is in line with the findings of similar studies (including Lundstrom and
Baker, 2009; Hinkel, 2004; Storch, 2004; Hyland, 2000; Reid, 2000; Ferris & Hedgcock,
1998). The following excerpt illustrates their point of view: ‘the classes become more
exciting to me than just listening to what the teacher says’. Another student believed
that he benefited a great deal from comments given to him by peers whose linguistic
ability was considerably better than his: ‘Good students have better ideas and are
well-informed about the subject being discussed’, which brings us yet again to the
issue of which errors students were concerned about. In this case, it became apparent
that the utmost concern of ESL students was once again their linguistic errors.
Students’ mixed levels were again commented on by another interviewee, who thought
that good students were the ones capable of producing ideas and well-informed
judgements when it comes to feedback. Another issue that, encouragingly, students were
aware of is that of intended readership, as one interviewee commented: “… I very much
liked the idea that I can now understand how other students perceived my writing, I
mean if they understand the meaning I intended to convey then my writing should have
been clear enough.”
When interviewees were asked how they benefited from these sessions, most of them were
happy with a particular characteristic of collaborative writing sessions, namely that
they could now express and defend their opinions more freely, as well as being able to
discuss the comments they received from their peers. For instance, one interviewee
commented: ‘I very much liked the idea that I can now realise how other students
perceive my writing. I mean if they understand the meaning I intended to convey.’
These skills ultimately enhance students’ ability to think critically and to exercise
judgement. Other skills of similar importance that, according to the interviewees,
were developed include their communication abilities and the ability to be an active
member of a group.
So, given the combined results of the writing tests, questionnaires and interviews, it
can be argued that peer feedback does indeed help students improve many writing
skills, not only in terms of linguistic achievement but also in terms of social,
sociocultural, cognitive and affective skills. It also made them aware of the
importance of collaborative learning and subsequently changed their beliefs about peer
and teacher-written feedback.
5.3 Students’ Experience in the Peer Feedback Group
Bearing in mind that the third research question looks into students’ experience with
peer feedback from their own perspective, a more qualitative measure was used to
gather and analyse the data, which, in this case, consisted of individual, one-to-one
interviews with members of the PF group. As noted earlier in the methodology chapter,
one purpose of the interviews was to complement the findings of the second
questionnaire, which also involved members of the peer feedback group. In this
section, however, the discussion will rely mostly on the findings of the interviews,
for the aforementioned reason.
The collective results of the interviews and the second questionnaire showed that all
participants, regardless of their score in the exit test, had more positive attitudes
towards peer feedback by the end of the experiment than at the time of the first
questionnaire. The interviewees, for instance, reported that many learning and social
skills had been progressively developed as a result of engaging in collaborative
learning activities in the form of peer feedback sessions, especially in terms of
autonomous learning. One participant said ‘I really developed [the] skill of defending
and arguing my ideas in a scientific and systematic way.’ This finding is also in line
with the findings of similar studies, including Min (2006) and Miao et al. (2006). For
example, from a social point of view, students reported that they
could express their own ideas more openly and freely, with less apprehension than
was usually possible if they were to do the same with teachers as already seen in the
previous section. They could also give their own opinions and recommendations to
their peers, a role which was to some students a new experience in the sense that
they were doing a task that until recently had been exclusively performed by their
teachers. A less teacher-centred classroom and greater student participation are two
essential components of modern teaching approaches, which encourage students to take
more responsibility for their own learning. As the previous section reveals, students’
overall perception of peer feedback was indeed very positive, even in a culture which
gives great authority to teachers.
However, as might be expected, students raised some concerns about peer feedback, and
it should not be surprising that most of these were yet again related to their peers’
level in English. For example, one participant in the second questionnaire mentioned
that he did not expect a colleague to correct linguistic errors if that colleague’s
level was around or below his own, ‘[students] are at around my level in English so I
don’t expect them to correct all language errors’, although he did not explain on what
basis he judged his colleagues’ proficiency levels. Another believed that he might
even get incorrect comments from his peers, but he was also aware that he could still
benefit from discussing these comments with them. One interviewee claimed that,
because of peers’ supposed incompetence, the feedback he received was not always
reliable. Some students also commented on the social boundaries that might hinder
giving honest feedback; one interviewee thought that it was very difficult for him to
criticise someone’s writing if he did not know him. In
fact, most of these concerns have been reported in similar studies, such as Min
(2008), Ferris and Hedgcock (2005), Rollinson (2005), Hinkel (2004), Saito and Fujita
(2004), and Hyland (2002), which suggests that they are naturally occurring phenomena
when students work with each other.
Despite these concerns, the fact remains that negative or uncertain comments about the
peer feedback experience were far less common than approving ones, which in turn
suggests that the overall impression was very positive indeed. To summarise, then,
students’ overall perception of their experience was very positive, despite the few
concerns regarding the execution of these sessions and the linguistic level of their
peers. Teachers, however, must acknowledge these concerns and discuss them explicitly
with their students during classroom training.
5.4 Shift of Attitudes towards Teacher-Written and Peer Feedback
This question has been partially addressed in section 4.1 (above), which looks at
students’ beliefs and attitudes in a much broader sense but does not compare them with
those held after the experiment. In this section, however, I am more interested in
tracing this shift of attitudes in greater detail, based on the data analysis and
results, which should eventually help explain it. As stated earlier, this change of
attitude, especially towards peer feedback, was remarkable in the sense that it
happened in a relatively short time.
The discussion will be largely based on the combined results of questionnaires and
interviews, as well as the findings of previous studies. In fact, the results seem to
support the findings of the majority of previous research, for example Hinkel (2004),
Hyland (2003), Ferris (2002), Ashwell (2000), and Hedgcock and Lefkowitz (1996),
which mainly report that ESL students prefer teachers’ comments to those of their
peers on the grounds of reliability, teachers’ level of experience and, more
importantly, teachers’ language proficiency compared to their peers’, regardless of
the style and manner in which the comments are delivered. One interviewee, for
example, mentioned that ‘the teacher knows better because students can make errors
themselves.’ Despite students’ preference for teacher-written feedback, the majority
of students in this study were aware of the educational, social and extra-curricular
skills they had improved as a result of engaging in peer sessions, and this experience
in turn positively affected their perception of peer feedback. They were aware of the
importance of skills such as the ability to critically assess others’ work and to
defend their own ideas, both of which were essential components of the peer feedback
exercises. The overall impression is that, despite students’ initial reluctance to
regard peer response as equal to their teachers’, the general idea gradually became
accepted, and most students were happy to engage in more of the same in future writing
classes.
Analysing the results of the two questionnaires (pre- and post-experiment) regarding
students’ beliefs about teacher-written feedback, it can be seen that the overwhelming
majority of students in the first questionnaire (i.e. pre-experiment) held very strong
views in favour of teacher-written feedback, as already seen in section 4.1, which, as
far as the literature is concerned, was largely to be expected. Inspection of table
4.7 and graph 4.3 in the results chapter shows that not a single student described
teacher feedback as either ‘unimportant’ or ‘very unimportant’; on the contrary, over
65% of them described it as ‘important’ or ‘very important’.
By the same token, students in the first questionnaire had more diverse attitudes
towards peer feedback, compared with their consistently positive beliefs about
teacher-written feedback. In fact, the mode of 3 signals that they were mostly unsure
about how useful peer feedback could be, a result which at that stage was largely to
be expected, given that as many as 37 students, or just over half of the research
population, had never been involved in peer feedback sessions before. The other
significant result is that, despite their unfamiliarity with PF exercises, students’
general impression was not merely one of uncertainty: they were not only unsure how
useful such exercises were, but also believed they could yield negative results. If we
look at graph (4.5) in the results chapter, we find that nearly half of the students
(46.5%) believed that peer feedback was either useless or very useless. When we
combine this figure with that of the students who were unsure about peer feedback, we
are left with only 10 students (14.1%) who thought that peer feedback could actually
be useful (a rough sketch of this arithmetic follows this paragraph). In other words,
students at that stage were definitely not in favour of peer feedback, and their
responses towards teacher feedback, in contrast, show a much more positive attitude.
These results make it
possible to assume that students at that stage were not eager to substitute their
‘traditional’ way of learning, which in this case takes the form of teacher feedback,
with a more unconventional one represented here by peer feedback. Several possible
reasons why students thought that peer feedback might not suit their learning needs
were identified. Arranged according to how strongly students thought they had an
impact, they were: fellow students did not possess the necessary linguistic skills to
provide feedback (69% of the subjects thought so); students were not qualified to give
comments (53%); students would not take the matter seriously (43%); and, to a lesser
degree, correcting peers’ scripts could embarrass some students (32%) and students
would not accept corrections from their peers (23%). The least commonly cited
deterrent was the belief that it was the teacher’s responsibility to provide feedback
(21%). Linguistic ability once again seems to be of paramount importance to ESL
students, including the subjects of this study, who questioned PF techniques mainly
because they believed that the linguistic level of their peers was lower than that of
their teachers, which supports the findings of many previous studies in the
literature, including Chaudron (1984), Ashwell (2000), Ferris (2002), Hinkel (2004),
Ellis et al. (2008), Bitchener (2008), and others.
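As a rough illustration of the arithmetic referred to in the preceding paragraph, the
following sketch reconstructs approximate response counts from the reported
percentages for the first questionnaire. The sample size of 71 is an assumption made
only because it is consistent with ‘37 students, or just over half’ of the research
population; none of the derived counts come from the study’s actual data set.

```python
# Hypothetical reconstruction of first-questionnaire counts from the reported
# percentages; the sample size of 71 is assumed purely for illustration.
sample_size = 71

negative = round(0.465 * sample_size)       # 'useless' or 'very useless'  -> 33
positive = round(0.141 * sample_size)       # 'useful' or 'very useful'    -> 10
unsure = sample_size - negative - positive  # 'neither useless nor useful' -> 28

print(f"negative: {negative}, unsure: {unsure}, positive: {positive}")

# Because the 46.5% of negative answers are spread over two scale points while
# the 'unsure' answers all sit on the single middle point, a modal response of 3
# (the midpoint of the five-point scale) is consistent with these figures.
```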
The descriptive results of the questionnaire reveal that most students were engaged in
peer feedback sessions at least five times during the writing class, with at least
four opportunities for the remaining few. This is possibly not a very extended
experience but, given that carefully designed orientation sessions were provided to
students before they took part, along with the constant presence of the instructor to
guide them throughout the different stages, the experience should be effective and of
some value, to say the least. Most students carried out both tasks involved in the
sessions: responding to their peers’ scripts, and receiving and discussing comments on
their own writing.
The results of the second questionnaire tell a completely different story about peer
feedback compared to the previous one. It becomes evident from the qualitative
results of both questionnaires and interviews that students had a much more
positive attitude towards the usefulness of peer feedback, with around 42% of the
sample believing that it could be ‘useful’ or even ‘very useful’.
As far as what type of corrections they provided is concerned, the majority of
students believed that they focused very little on global issues. Surface errors, on
the other hand, were of more concern, and almost all of the students interviewed or
involved in the questionnaire were concerned with issues such as grammar, word choice,
punctuation and spelling. On both occasions, however, students did not report that
they ‘never’ or ‘always’ looked at a specific category of errors, which means that
even on the second occasion students still looked at linguistic errors at one point or
another when responding to their peers’ writing. They were also asked a similar
question about the type of comments they received from their peers, so that this could
be compared with what they provided; the majority again reported that the comments
received concerned grammar, spelling and punctuation, with only two students receiving
comments on global issues as well. This finding gives more evidence to support the
view that ESL students are more concerned about their linguistic performance than
about other writing skills, despite attempts to shift the focus from local issues
towards wider global ones. Such a finding is in line with those of earlier studies
such as Ellis et al. (2008), Bitchener (2008), Min (2006), Hinkel (2004) and Ferris
(2002 & 1995).
It is also interesting to note that the majority of students thought that peer feedback
can be a reliable or even a very reliable source of information, which was definitely
not the case at the beginning of the experiment, when attitudes towards peer
feedback were gauged to be quite the opposite. A very plausible explanation for such a
difference in attitudes is that all respondents to the second questionnaire had been
involved in peer feedback sessions at least four times, in addition to the orientation
programme, whereas over half of the subjects of the first questionnaire had never
taken part in one. Once students had been trained and had engaged in peer sessions,
they presumably came to realise the objectives and potential benefits of such
sessions. Claiming that the comments they received from their peers were
reliable naturally presupposes that students accepted this type of feedback, and they
were more likely to have made positive changes to their writing in response to the
peer feedback received. It was noticed that as confidence grew in peer feedback as a
valid source of comments, the importance accorded to teachers’ comments
correspondingly declined somewhat. This shift is a remarkable change of attitude,
given the short time in which the experiment was conducted and the strongly
entrenched traditional educational experiences of the students.
However, despite this change of attitudes towards peer feedback, teacher-written
feedback was still of greater value to these students, which was to be expected given
that they were ESL students who aimed to improve not only their writing skills but
their English as well. The available evidence in the literature points to a similar
conclusion in studies such as Montgomery & Baker (2007), Ferris (2002 & 1995), Hyland
(1998), and Hedgcock & Lefkowitz (1994).
Bearing all the above arguments in mind, it can be argued with confidence that
students’ belief in peer feedback grew by the end of the course, somewhat at the
expense of their confidence in teacher-written feedback. An important factor in this
change was that they were trained to incorporate peer feedback into their writing
classes and were therefore able to assess it at close range, as opposed to other
students whose opinions might be based largely on rationalisation and preconceived
ideas of it. Extensive training is a very important factor in peer feedback, and it
can have a direct, positive impact on students’ revision types and the quality of
their texts.
CHAPTER SIX: CONCLUSION
Overview of Chapter Six
This is the last chapter of this project. It comprises a summary of the present study,
implications for ESL teaching, limitations and recommendations for future research.
6.1 Summary of the Study
Peer feedback is a very effective tool in ESL writing classes, even in contexts where
more traditional views of learning and teaching are widespread. Having established
this, it should also be noted that the degree of success largely depends on factors
such as the type and extent of training students receive, their beliefs and
perceptions, and the level of teacher intervention. Peer feedback is in many respects
a collaborative skill that requires a degree of student interaction throughout peer
sessions, which is why it has been widely associated with writing approaches such as
the process and genre approaches. In fact, there are numerous advantages to
integrating peer feedback into ESL writing classes. It was found that peer feedback
improved not only the final product but also many other skills, including the ability
to work with other learners in a group spirit.
The findings of this study give further support to the widespread, oft-cited view in
the literature that ESL student writers in particular expect, value and appreciate
feedback on their writing regardless of its source (Montgomery and Baker, 2007; Miao
et al., 2006; Ferris, 2002 and 1995; Hyland, 1998; Hedgcock and Lefkowitz, 1994). In
other words, ESL students tend to believe that the more feedback they receive, the
more chances they have to develop their writing skills. However, teachers’ feedback
was still the most desired type of feedback among the L2 writers, even when they were
trained to use other, non-conventional types of feedback, which in the case of this
study was peer feedback. This preference is based largely on students’ assumption that
their peers might not be as qualified as their teachers to provide comments, owing to
many factors, especially linguistic proficiency and experience.
Despite their apparent preference for teacher-written feedback, the overwhelming
majority of students eventually developed positive attitudes towards peer feedback and
peer writing sessions once they had taken part in the experiment; probably not as
positive as their attitudes towards teacher-written feedback, but positive enough to
be considered effective. Students were also well aware of teachers’ limited time to
respond to each and every error in their writing, hence the need for feedback from
other sources, including peers and electronic software programmes. It was found that
students’ acceptance of peer feedback was largely affected by the type of training
they received; that is, when they were trained to use peer feedback, their attitude
towards it became more positive.
As far as types of errors were concerned, students were more worried about linguistic
errors than about wider global issues, a finding which does not come as a surprise, as
most ESL students in previous studies exhibited a similar attitude (Ellis et al.,
2008; Montgomery and Baker, 2007; Miao et al., 2006; Hinkel, 2004; Ferris, 2002 and
1995).
The results of the exit test show a very significant improvement in the writing quality
of students who were trained to use peer feedback compared to the writing of the
other group (an illustrative sketch of this kind of between-group comparison follows
this paragraph). Students in the peer feedback group also reported that they gained
additional skills beyond L2 writing, including the ability to work in a group, the
development of critical thinking, greater learner autonomy, and the ability to defend
their ideas. Students also benefited from the less formal atmosphere when working
with their peers, which helped them discuss and exchange ideas more freely and
openly.
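Purely as an illustration of the kind of between-group comparison mentioned above, the
following minimal sketch runs an independent-samples t-test on invented exit-test
scores for a peer feedback group and a comparison group. The scores, the group sizes
and the choice of test are assumptions for illustration, not the study’s actual
procedure or data, which are reported in the results chapter.

```python
# Hypothetical illustration of a between-group comparison of exit-test scores:
# an independent-samples t-test on invented marks for a peer feedback (PF) group
# and a comparison group. All numbers and the choice of test are assumptions.
from scipy.stats import ttest_ind

pf_group = [72, 68, 75, 80, 66, 71, 77, 74, 69, 73]          # invented exit scores
comparison_group = [61, 65, 58, 70, 63, 60, 66, 62, 64, 59]  # invented exit scores

t_stat, p_value = ttest_ind(pf_group, comparison_group)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference between the two groups is statistically significant.")
```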
6.2 Implications for Teaching
Based on the findings of this study, it is recommended that peer feedback be
integrated into all ESL writing classes from as early a stage as possible. Obviously,
because of students’ lack of experience of it in pre-university education, extra
training sessions are required to familiarise them with this technique, including the
different tasks and roles expected of them during these sessions, which could be
significantly different from what they are used to in teacher-centred approaches. This
recommendation is in line with most previous studies, including Chaudron (1984),
Jacobs et al. (1998), Miao et al. (2006), and Ellis et al. (2008).
Another finding of interest, which could have serious implications, was that of
students’ preoccupation with their local errors, which came at the expense of other
types of errors, a finding also reported in similar studies such as Mendonça and
Johnson (1994) and Leki (1990). Obviously, local errors need to be addressed at one
stage or another as students progress through the writing course, but they are
nevertheless no more important than other types of errors which students tend to
ignore. A more balanced approach is required in which both types, local and global,
are equally and consistently addressed. Even in peer feedback sessions, when the
teacher’s level of intervention is minimised, proper training and tools such as
checklists should help students to focus on global issues.
6.3 Limitations of the Study
There are inevitably limitations in this study that need to be acknowledged. They fall
into four main categories, depending on their source.
Methods
The study used three data collection methods: writing tests (entry and exit tests),
questionnaires (pre- and post-experiment), and interviews, as mentioned in the
methodology chapter. The literature on classroom research suggests that other methods
can also be used to collect data, including classroom observation and think-aloud
protocols. These two tools can be very useful for observing and documenting what
students actually do during feedback sessions; despite their value to studies of this
kind, they were not utilised here because this aspect was not among the issues
investigated. The writing tests were designed to investigate students’ progress in the
short term, but given the time limit, it was not possible to assess their performance
in the long run.
Time Factor
A time limit affects almost every research project, this one included. The real
shortage of time was experienced during the data collection stage, as the research was
bound by the fixed start and end dates of the term. The time limit inevitably affected
the choice of data collection tools; in other words, time-consuming tools such as
think-aloud protocols and classroom observation were replaced with more time-efficient
ones.
Access to Participants
The study involved ESL students from one university in Saudi Arabia, making it
difficult to generalise the findings to the wider context of ESL teaching and learning
regionally and globally. Social constraints also meant that it was difficult to include
female students, even from the same institution, because they are taught
separately, and this constraint had to be borne in mind when the data collection plan
was designed.
Scope of the Research
The study compared and assessed two techniques of feedback in ESL writing classes,
whereas other feedback techniques such as conferencing and self-assessment also
exist. Teacher-written feedback was chosen as an example of a teacher-centred
approach to ESL writing teaching, to be compared against a more modern approach
in the form of peer feedback. The latter was of particular interest because it
requires certain learning and social skills on the students’ part, and because it is a
product of collaborative learning, another aspect neglected by most traditional
teaching methods.
Electronic writing assessment programmes and online applications such as ETS
CRITERION and DIALANG, which could foster peer feedback exercises, were also left
outside the scope of this study. Such programmes are chiefly helpful in classes with a
large number of students or in distance-learning situations, where students from
different parts of the world can review each other’s writing and exchange ideas and
comments online; since neither of these two scenarios applied to this study, they were
excluded.
6.4 Recommendations for Future Research
It is worthwhile to consider carrying out more extensive research that includes other
possible factors likely to affect the final results. Such a study could include the
effects of gender, age, linguistic level, nationality, and linguistic background, in
addition to the factors already investigated. It is also possible to involve a wider
range of students in the project, especially by avoiding a gender-specific sample.
This was complicated in the present study by the particular educational policies set
by the government (i.e. the issue is more complicated than mere access). A possible
way to work within such restrictions is to develop contacts in the female sections who
can act on behalf of the researcher, including carrying out the usual teaching load,
distributing questionnaires, and interviewing students. In geographical
terms, participants can be drawn from a wider linguistic and demographic context to
help generalize the findings of the research.
It could be equally important to investigate how students interact and perform
during peer feedback sessions, which means including data collection tools like
classroom observation and think-aloud protocols. Such a study would shed more light on
students’ actual performance during peer sessions and give further insight into how
specific skills are developed. Useful information could be extracted from students’
interaction with each other in a way that makes it possible to observe and assess that
interaction, and subsequently to recommend what the ideal type of interaction should
look like. As far as assessment procedures are concerned, a
study which incorporates electronic means of writing assessment and then
investigates and evaluates their effect on student writing would be highly
recommended. Moreover, the assessment of both writing tests was a tedious and
time-consuming task. As already mentioned, there are electronic tools that should
help to take some or most of this burden off the teachers, especially when a
considerably larger number of participants are included (see section 2.2.5). Finally,
from a methodological point of view, a longitudinal study which is capable of
assessing students’ development in the long run, as well as capturing any progress in
their writing over an extended period of time, is highly recommended. The current
literature suggests that few, if any, previous studies of this kind exist.
In general, this proposed study should maximise the generalisability of research
findings to ESL writing classes across Saudi Arabia and the wider ESL context. It could
also yield more interesting results about students’ interaction and its effect on
writing in both the short and the long run, in a way that helps to develop better
classroom teaching and better instructions for students.
6.5 Self-Reflection
This is presumably the largest single piece of academic work I have carried out so far
and it surely had its impact on me academically and personally. It has also been a
demanding yet immensely interesting project for me. As a result of that, I have every
reason to believe that my research skills have considerably developed and I would
argue that I am better prepared now for future research than when I first enrolled in
the PhD programme. Such acquired skills should also be transferable to other research
fields in addition to ESL writing; this is not to undermine the important position
which ESL writing occupies, but to stress the significance of other fields of
research. These fields include, but are not limited to, developing collaborative
learning environments, the use of technology in education, and teaching in non-Western
countries. Similarly, the use of data analysis tools such as SPSS and NVivo, together
with an awareness of quantitative and qualitative measures in educational research,
are two treasured skills I would probably rely upon in upcoming projects. Statistically
speaking, I am confident that my ability to read and understand various charts and
figures has considerably improved thanks to my research project.
References
Aleid, S. (2000) The Use of Pictures and Drawings in Teaching English Paragraph
Writing in Saudi Arabia Schools, King Saud University: unpublished MA thesis.
Al-Hazmi, S. H. (2003) ‘EFL Teacher Preparation Programs in Saudi Arabia: Trends and
Challenges’, In TESOL Quarterly, Vol. 37 No. 2, pp 341 – 344.
Al-Hazmi, S. H. and Scholfield, P. (2007) ‘Enforced Revision with Checklist and Peer
Feedback in EFL Writing: The Example of Saudi University Students’, In Scientific
Journal of King Faisal University, Vol. 8 No. 2, pp 223 – 261.
Aljamhoor, A. (2001) ‘A Cross-Cultural Analysis of Written Discourse of Arabic-
Speaking Learners of English’, In Journal of King Saud University, Vol. 13, pp 25 –
44.
Alderson, J. C. (2000) ‘Technology in Testing: The Present and the Future’, In System,
Vol. 28 No.4, pp 593 – 603.
Alderson, J. C. and Huhta, A. (2005) ‘The Development of a Suite of Computer-Based
Diagnostic Tests Based on the Common European Framework’, In Language
Testing, Vol. 22 No. 3, pp 301 – 320.
Ashwell, T. (2000) ‘Patterns of Teacher Response to Student Writing in a Multiple-
Draft Composition Classroom: Is Content Feedback Followed by Form Feedback
the Best Method?’, In Journal of Second Language Writing, Vol. 9 No.3, pp 227 –
257.
Asiri, I. M. (1996) University’s EFL Teachers’ Feedback on Compositions and Students’
Reactions, University of Essex: unpublished PhD Thesis.
Atkinson, D. (2003) ‘L2 Writing in the Postprocess Era: Introduction’, In Journal of
Second Language Writing, Vol. 12 No. 1, pp 3 – 15.
Badger, R. and White, G. (2000) ‘A process genre approach to teaching writing’, In
ELT Journal, Vol. 54 No. 2, pp 153 – 160.
Bell, J. (2005) Doing Your Research Project: A Guide for First-Time Researchers in
Education, Health and Social Sciences, 4th Edition, Maidenhead: Open University Press.
Berg, C. E. (1999) ‘The Effects of Trained Peer Response on ESL Students Revision
Types and Writing Quality’, In Journal of Second Language Writing, Vol. 8, pp
215 – 237.
Berger, V. (1990) ‘The Effects of Peer and Self-Feedback’, In California Teachers of
English to Speakers of Other Languages Journal, Vol. 3, 21 – 35.
Bersamina, F. V. (2009) ‘English as Second Language (ESL) Learners in Saudi Arabia’,
In Associated Content Society,
11 March 2009.
Bitchener, J. (2008) ‘Evidence in Support of Written Corrective Feedback’, In Journal
of Second Language Writing, Vol. 17 No. 1, pp 102 – 118.
Bitchener, J., Young, S. and Cameron, D. (2004) ‘The Effect of Different Types of
Corrective Feedback on ESL Student Writing’, In Journal of Second Language
Writing, Vol. 14 No. 3, pp 191 – 205.
Black, P. and Wiliam, D. (1998) ‘Assessment and Classroom Learning’, In Assessment
in Education, Vol. 5 No. 1, pp 7 – 71.
Blaxter, L., Hughes, C. and Tight, M. (2006) How to Research, 3rd Edition, Maidenhead:
Open University Press.
Bruthiaux, P. (2002) ‘Hold Your Courses: Language Education, Language Choice and
Economic Development’, In TESOL Quarterly, Vol. 36 No. 3, pp 275 – 296.
Burton, D. (2000) Research Training for Social Scientists: A Handbook for
Postgraduate Research, London: SAGE Publication.
Brown, J. D. and Rodgers, T. S. (2002) Doing Second Language Research, Oxford:
Oxford University Press.
Bryman, A. (2004) Social Research Methods, 2nd Edition, Oxford University Press.
Cardelle, M., & Corno, L. (1981) ‘Effects on second language learning of variations in
written feedback on homework assignments’, In TESOL Quarterly, Vol. 15 No.3,
pp 251–261.
Chandler, J. (2003) ‘The Efficacy of Various Kinds of Error Feedback for Improvement
in the Accuracy and Fluency of L2 Student Writing’, In Journal of Second Language
Writing, Vol. 12, pp 267 – 296.
Chaudron, C. (1984) ‘Effects of Feedback on Revisions’, In RELC Journal, Vol. 15, pp 1
– 14.
Clough, P. and Nutbrown, C. (2007) A Student’s Guide to Methodology: Justifying
Enquiry, 2nd Edition, London: SAGE Publications.
Cohen, A. D. (1987) ‘Students Processing of Feedback on Their Compositions’, In
Wenden, A. and Rubin, J. (eds.) Learning Strategies in Language Learning,
Prentice Hall International, pp 57 – 69.
Cohen, A. D. (1990) Language Learning: Insights for Learners, Teachers, and
Researchers. Heinle and Heinle Publishers: Boston.
Cohen, A. D. (1994) Assessing Language Ability in the Classroom, 2nd Edition, Boston:
Heinle and Heinle.
Cohen, L., Manion, L. and Morrison, K. (2000) Research Methods in Education, 5th
Edition, London: Routledge Falmer.
Cohen, L., Manion, L. and Morrison, K. (2007) Research Methods in Education, 6th
Edition, London: Routledge Falmer.
Connor, U. and Asenvage, K. (1994) ‘Peer Response Groups in ESL Writing Classes:
How Much Impact on Revision?’, In Journal of Second Language Writing, Vol. 3
No. 3, pp 257 – 276.
Cumming, A. (2003) ‘Experienced ESL / EFL Writing Instructors’ Conceptualization of
Their Teaching: Curriculum Options and Their Implications’, In Kroll, B. (ed.),
Second Language Writing: Research Insights for the Classroom, Cambridge:
Cambridge University Press, pp 71 – 92.
Dempsey, M. S., PytlikZillig, L. M. and Bruning, R. H. (2009) ‘Helping Preservice
Teachers Learn to Assess Writing: Practice and Feedback in a Web-Based
Environment’, In Assessing Writing, Vol. 14, pp 38 – 61.
Di Pardo, A., and Freedman, S.W. (1988) ‘Peer Response Groups in the Writing
Classroom: Theoretic Foundations and New Directions’, In Review of Educational
Research, Vol. 58 No. 2, pp 119 – 149.
Dudley-Evans, A. (1994) ‘Genre analysis: an approach to text analysis for ESP’, In M.
Coulthard (ed.) Advances in Written Text Analysis, London: Routledge Falmer.
Dunworth, M. (2007) ‘Joint Assessment in Inter-Professional Education: A
Consideration of Some of the Difficulties’, In Social Work Education, Vol. 26 No.
4, pp 414 – 422.
Ede, L. and Lunsford, A. (1990) Singular Texts / Plural Authors: Perspective on
Collaborative Writing, Illinois: Southern Illinois University Press.
Ellis, R., Sheen, Y., Murakami, M. and Takashima, H. (2008) ‘The Effects of Focused
and Unfocused Corrective Feedback in an English as a Foreign Language
Context’, In System, Vol. 36, pp 353 – 371.
Enginarlar, H. (1993) ‘Student Response to Teacher Feedback in EFL Writing’, In
System, Vol. 21, pp 193 – 204.
Escholz, P. A. (1980) ‘The prose models approach: using products in the process.’ In
T. R. Donovan and B. W. McClelland (eds.) Eight Approaches to Teaching
Composition, National Council of Teachers of English.
Ferris, D. (1995) ’Student Reactions to Teacher Response in Multiple-Draft
Composition Classrooms’, In TESOL Quarterly, Vol. 29 No. 1, pp 34 – 54.
Ferris, D. (1997) ‘The Influence of Teachers’ Commentary on Students Revisions’, In
TESOL Quarterly, Vol. 31 No. 2, pp 315 – 339.
Ferris, D. (1999) ‘The Case for Grammar Correction in L2 Writing Classes: A Response
to Truscott (1996)’, In Journal of Second Language Writing, Vol. 8 No. 1, pp 1 –
11.
Ferris, D. (2002) Treatment of Error in Second Language Student Writing, The
University of Michigan Press.
Ferris, D. (2007) ‘Preparing Teachers to Respond to Student Writing’, In Journal of
Second Language Writing, Vol. 16 No. 3, pp 165 – 193.
Ferris, D. and Hedgcock, J. S. (1998) Teaching ESL Composition: Purpose, Process, and
Practice, Lawrence Erlbaum Associates.
Ferris, D. and Hedgcock, J. (2004) Teaching ESL Composition: Purpose, Process, and
Practice, 2nd Edition, Mahwah, NJ: Lawrence Erlbaum Associates.
Ferris, D. R. and Hedgcock, J. S. (2005) Teaching ESL Composition: Purpose, Process
and Practice, 2nd Edition, New Jersey: Lawrence Erlbaum Associates Publishers.
Ferris, D., Chaney, S. J., Komura, K., Roberts, B. J. and McKee, S. (2000)
‘Perspectives, Problems, and Practices in Treating Written Error’, Colloquium
presented at the International TESOL Convention, Vancouver, March 14 – 18, 2000.
Ferris, D. and Roberts, B. (2001) ‘Error Feedback in L2 Writing Classes: How Explicit
Does It Need to Be?’, In Journal of Second Language Writing, Vol. 10, pp 161 –
184.
Freedman, S. W. (1987) Response to student writing (Research Report No. 23),
National Council of Teachers of English: Urbana, IL.
Gall, M. D., Borg, W. R. and Gall, J. P. (1996) Educational Research: An Introduction,
6th Edition, New York: Longman Publishers.
Gee, S. (1997) ‘Teaching writing a genre-based approach’, In Review of English
Language Teaching, Vol. 62, pp 24 – 40.
Gibbs, G. and Simpson, C. (2002) ‘How Assessment Influences Student Learning: A
Conceptual Overview’, In Student Support Research Group, 42/ 2002, Centre for
Higher Education Practice: The Open University. Available at
January 2007.
Gillham, B. (2000) The Research Interview, London: Continuum.
Gillies, R. M. and Ashman, A. F. (eds.) (2003) Co-operative Learning, London:
Routledge Falmer.
Gray, J. (2000) ‘The ELT Course Book as Cultural Artifact: How Teachers Censor and
Adapt’, In ELT Journal, Vol. 45 No. 3.
Gubrium, J. F. and Holstein, J. A. (eds.) (2001) Handbook of Interview Research:
Context and Method, London: SAGE Publications.
Guénette, D. (2007) ‘Is Feedback Pedagogically Correct? Research Design Issues in
Studies of Feedback on Writing’, In Journal of Second Language Writing, Vol. 16
No. 1, pp 40 – 53.
Habeshaw, S. Gibbs, G. and Habeshaw, T. (1986) 53 Interesting Ways to Assess Your
Students, Bristol: Technical and Educational Services.
Hairston, M. (1982) ‘The winds of change: Thomas Kuhn and the revolution in the
teaching of writing’. In College Composition and Communication, Vol. 33, pp 76 –
88.
Hedgcock, J. and Lefkowitz, N. (1992) ‘Collaborative Oral/Aural Revision in Foreign
Language Writing Instruction’, In Journal of Second Language Writing, Vol. 4, pp
51 – 70.
Hedgcock, J. and Lefkowitz, N. (1996) ‘Some Input on Input: Two Analyses of Student
Response to Expert Feedback in L2 Writing’, In The Modern Language Journal,
Vol. 80 No. 3, pp 287 – 308.
Hedge, T. (1988) Writing, Oxford University Press.
Hendrickson, J. (1978) ‘Error Correction in Foreign Language Teaching: Recent
Theory, Research, and Practice’, In Modern Language Journal, Vol. 62, pp 387 –
398.
Hinkel, E. (2004) Teaching Academic ESL Writing: Practical Techniques in Vocabulary
and Grammar, Lawrence Erlbaum Associates.
Hollway, W. and Jefferson, T. (2000) Doing Qualitative Research Differently: free
association, narrative and the interview method, London: SAGE Publications.
Horowitz, D. (1986) ‘Process, not Product: Less than meets the eye’, TESOL
Quarterly, Vol. 20, No. 1, 141-44.
Houtkoop-Steenstra, H. (2004) Interaction and the Standardized Survey Interview:
The Living Questionnaire, Cambridge: Cambridge University Press.
Hyland, F. (1998) ‘The Impact of Teacher Written Feedback on Individual Writers’, In
Journal of Second Language Writing, Vol. 7 No. 3, pp 255 – 286.
Hyland, F. (2000) ‘ESL Writers and Feedback: Giving More Autonomy to Students’, In
Language Teaching Research, Vol. 4 No. 4, pp 33 – 54.
Hyland, F. and Hyland, K. (2001) ‘Sugaring the Pill: Praise and Criticism in Written
Feedback’, Journal of Second Language Writing, Vol. 10, 185 – 212.
Hyland, K. (1990) ‘Providing Productive Feedback’, In ELT Journal, Vol. 74, 279 – 85.
Hyland, K. (2002) Teaching and Researching Writing, Pearson: London.
Hyland, K. (2003) Second Language Writing, Cambridge University Press: Cambridge.
Hyland, K. (2007) ‘Genre Pedagogy: Language, Literacy and L2 Writing Instruction’, In
Journal of Second Language Writing, Vol. 16 No. 3, pp 148 – 164.
Hyland, K. and Hyland, F. (2006) ‘Feedback on Second Language Students’ Writing’,
In Language Teaching, Vol. 39, pp 83 – 101.
Hvitfeldt, C. (1986) ‘Guided Peer Critique in ESL Writing at the College Level’, Paper
presented at the Annual Meeting of the Japan Association of Language Teachers
International Conference on Language Teaching and Learning. Seirei Gakuen,
Hamamatsu, Japan. November 22 – 24, 1986. (ERIC ED 282438).
Jacobs, G. (1989) ‘Miscorrection in Peer Feedback in Writing Class’, In RELC Journal,
Vol. 20 No. 4, pp 68 – 76
Jacobs, G. M., Curtis, A., Braine, G. and Huang, S. Y. (1998) ‘Feedback on Student
Writing: Taking the Middle Path’, In Journal of Second Language Writing, Vol. 7
No. 3, pp 307 – 317.
Johns, A. M. (2003) ‘Genre and ESL/ EFL Composition Instruction’, In Kroll, B. (ed.)
Exploring the Dynamics of Second Language Writing, Cambridge University
Press: Cambridge, pp 195 – 217.
Joyce, A. (2006) ‘Understanding Writing Beliefs of Advanced Writing Students’, In
Dissertation Abstracts International, A: The Humanities and Social Sciences, Vol.
67 No. 6.
Keats, D. M. (2000) Interviewing: Practical Guide for Students and Professionals,
Buckingham: Open University Press.
Keh, C. (1990) ‘Feedback in the Writing Process: a Model and Methods for
Implementation’, In ELT Journal, Vol. 44 No. 4, pp 294 – 304.
Kent, G. (2000) ‘Informed Consent’, In Burton, D. (ed.) Research Training for Social
Scientists: A Handbook for Postgraduate Researchers, London: SAGE
Publications, pp 81 – 87.
Kepner, C. G. (1991) ‘An Experiment in the Relationship of Types of Written
Feedback to the Development of Second-Language Writing Skills’, In The Modern
Language Journal, Vol. 75, pp 305 – 315.
Khuwaileh, A. A. and Shoumali, A. (2000) ‘Writing Errors: A Study of the Writing
Ability of Arab Learners of Academic English and Arabic at University’, In Journal
of Language, Culture and Curriculum, Vol. 13 No. 2, pp 174 – 184.
Kohonen, V. (1992) ‘Experiential Language Learning: Second Language Learning as
Cooperative Learner Education’, In Nunan, D. (ed.) Collaborative Language
Learning and Teaching, Cambridge: Cambridge University Press, pp 14 – 39.
Kranzler, J. H. (2007) Statistics for the Terrified, 4th Edition, London: Pearson
Prentice Hall.
Kvale, S. and Brinkmann, S. (2008) Interviews: Learning the Craft of Qualitative
Research Interviewing, 2nd Edition, Thousand Oaks, California: SAGE Publications.
Lavelle, E. and Zuercher, N. (1999) University Students’ Beliefs about Writing and
Writing Approaches, Edwardsville: Department of Educational Leadership,
Southern Illinois University.
Lee, I. (1997) ‘ESL Learners’ Performance in Error Correction in Writing’, In System,
Vol. 25 No. 4, pp 465 – 477.
Leki, I. (1990) ‘Coaching from the Margins: Issues in Written Response’, In Kroll, B.
(ed.) Second Language Writing: Research Insights for the Classroom, Cambridge:
Cambridge University Press, pp 57 – 68.
Leki, I. and Carson, J. (1997) ‘ “Completely Different Worlds”: EAP and the Writing
Experiences of ESL Students in University Courses’, In TESOL Quarterly, Vol. 31
No. 1, pp 39 – 69.
Li, X. (2007) ‘Identity and Beliefs in ESL Writing: From Product and Processes’, In TESL
Canada Journal, Vol. 25 No. 1, pp 41 – 64.
Liu, J. and Hansen, J. (2002) Peer response in second language writing classrooms,
The University of Michigan Press: Michigan.
Lundstrom, K. and Baker, W. (2009) ‘To Give is Better than to Receive: The Benefits
of Peer Review to the Reviewer’s Own Writing’, In Journal of Second Language
Writing, Vol. 18 No. 1, pp 30 – 43.
Luoma, S. and Tarnanen, M. (2003) ‘Creating a Self-Rating Instrument for Second
Language Writing: From Idea to Implementation’, In Language Testing, Vol. 20
No. 4, pp 440 – 465.
McDonough and McDonough (1997) Research Methods for English language
Teachers, London: Arnold.
McKay, S. L. (1992), Teaching English Overseas: An Introduction, Oxford: Oxford
University Press.
McWham, K., Schnackenberg, H., Sclater, J. and Abrami, P. C. (2003) ‘From Co-
operation to Collaboration: Helping Students Become Collaborative Learners’, In
Gillies, R. M. and Ashman, A. F. (eds.) Co-operative Learning, London: Routledge
Falmer, pp 69 – 86.
Mendonça, C. O. & Johnson, K. E. (1994) ‘Peer Review Negotiations: Revision
Activities in ESL Writing Instruction’, In TESOL Quarterly, Vol. 28 No. 4, pp 745 –
769.
Mertens, D. M. (1998) Research Methods in Education and Psychology: Integrating
Diversity with Quantitative and Qualitative Approaches, London: SAGE
Publications.
Messick, S. (1995) ‘Validity of Psychological Assessment’, In American Psychologist,
Vol. 50 No. 9, pp 741 – 749.
Miao, Y., Badger, R. and Zhen, Y. (2006) ‘A Comparative Study of Peer and Teacher
Feedback in a Chinese EFL Writing Class’, In Journal of Second Language Writing,
Vol. 15, pp 179 – 200.
Min, H. (2006) ‘The Effects of Trained Peer Review on EFL Students’ Revision Types
and Writing Quality’, In Journal of Second Language Writing, Vol. 15 No. 2, pp
118 – 141.
Min, H. (2008) ‘Reviewers Stances and Writer Perceptions in EFL Peer Review
Training’, In English for Specific Purposes, Vol. 27, pp 285 – 305.
Montgomery and Baker (2007) ‘Teacher-Written Feedback: Student Perceptions,
Teacher Self-Assessment, and Actual Teacher Performance’, In Journal of Second
Language Writing, Vol. 16 No. 2, pp 82 – 99.
Mooko, T. (1996) An Investigation into the Impact of Guided Peer Feedback and
Guided Self Assessment on the Quality of Composition Written by Secondary
School Students in Botswana, University of Essex: Unpublished PhD Thesis.
Muncie, J. (2002). ‘Finding a Place for Grammar in EFL Composition Classes’, In ELT
Journal, Vol. 56 No.2, pp 180 – 186.
Murray, D. E. (1992) ‘Collaborative Writing as a Literacy Event: Implications for ESL
Instructions’, In Nunan D. (ed.) Collaborative Language Learning and Teaching,
Cambridge: Cambridge University Press, pp 100 – 117.
Murray, R. (2006) How to Write a Thesis, 2nd Edition, Maidenhead, Berkshire: Open
University Press.
Nelson, G. L. and Murphy, J. M. (1993) ‘Peer response groups: Do L2 writers use peer
comments in revising their drafts?’, In Journal of Second Language Writing, Vol.
27, pp 135 – 142.
Nicol, D. J. and Macfarlane-Dick, D. (2006) ‘Formative Assessment and Self-Regulated
Learning: A Model and Seven Principles of Good Feedback Practice’, In Studies in
Higher Education, Vol. 31 No. 2, 199 – 218.
Noël, S. and Robert J. (2003) ‘How the Web is Used to Support Collaborative
Writing’, In Behaviour and Information Journal, Vol. 22 No. 4, pp 245 – 265.
Nunan, D. (ed.) (1992) Collaborative Language Learning and Teaching, Cambridge:
Cambridge University Press.
Nunan, D. (1999) Second Language Teaching and Learning, Boston: Heinle and
Heinle Publishers.
Race, P., Brown, S. and Smith, B. (2004) 500 Tips on Assessment, London: Routledge
Falmer.
Raimes, A. (1985) ‘What unskilled ESL students do as they write: A classroom study
of composing’, In TESOL Quarterly, Vol. 19, pp 229 – 258.
Raimes, A. (1991) ‘Out of the woods emerging traditions in the teaching of writing’,
In TESOL Quarterly, Vol. 25 No. 3, pp 407-30.
Robb, T., Ross, S., Shortreed, I. (1986) ‘Salience of Feedback on Error and Its Effect on
EFL Writing Quality’, In TESOL Quarterly, Vol. 20 No. 1, pp 82 – 94.
Rollinson, P. (2005) ‘Using Peer Feedback in the ESL Writing Class’, in ELT Journal,
Vol. 59 No. 1, pp 23 – 30.
Paltridge, B. (2002) ‘Thesis and Dissertation Writing: An Examination of Published
Advice and Actual Practice’, In English for Specific Purposes, Vol. 21 No. 2, pp 125
– 143.
Paulus, T. M. (1999) ‘The Effect of Peer and Teacher Feedback on Student Writing’, In
Journal of Second Language Writing, Vol. 8, pp 265 – 289.
Peterson, S. (2003) ‘Peer Response and Students’ Revisions of Their Narrative
Writing’, In Educational Studies in Language and Literature, No. 3, pp 239 – 272.
Pincas, A. (1982) Teaching English Writing. London: Macmillan.
Pol, J. Berg, B. A. M., Admiraal, W. F. and Simons, P. J. R. (2008) ‘The Nature,
Reception and Use of Online Peer Feedback in Higher Education’, In Journal of
Computer and Education, Vol. 51, pp 1804 – 1817.
Radecki, P. M. and Swales, J. M. (1988) ‘ESL Student Reaction to Written Comments
on Their Written Work’, In System, Vol. 16 No. 3, pp 355 – 365.
Reid, J. (2000) ‘Responding to ESL Students’ Texts: The Myths of Appropriation’, In
Silva, T. and Matsuda, P. K. (eds.) Landmark Essays on ESL Writing, Hermagoras
Press, pp 209 – 224.
Richards, J. C. (1990) The Language Teaching Matrix, Cambridge: Cambridge
University Press.
Sachs, R. and Polio, C. (2007) ‘Learners’ Uses of Two Types of Written Feedback on an
L2 Writing Revision’, In Studies in Second Language Acquisition, Vol. 29 No. 1, pp
67 – 100.
Sadler, D. R. (1998) ‘Formative Assessment: Revisiting the Territory’, In Assessment in
Education, Vol. 5 No. 1, pp 77 – 84.
Saito, H. (1994) ‘Teachers’ Practices and Students’ Preferences for Feedback on
Second Language Writing: a Case Study of Adult ESL Learners’, In TESL Canada
Journal, Vol. 11 No. 2, pp. 46 – 70.
Saito, H. and Fujita, T. (2004) ‘Characteristics and User Acceptance of Peer Rating in
EFL Writing Classroom’, In Language Teaching Research, Vol. 8 No.1, pp 31 – 54.
Seidel, J. and Kelle, K. U. (1995) ‘Different Functions of Coding in the Analysis of
Data’, In Kelle, K. U. (ed.), Computer Aided Qualitative Data Analysis: Theory,
Methods and Practice, CA: Sage Publications.
Slavin, R. E. (1983) Cooperative Learning, New York: Longman.
Sommers, N. (1982) ‘Responding to Student Writing’, In College Composition and
Communication, Vol. 33 No.2, pp 148 – 156.
Storch, N. (2004) ‘Writing: Product, Process, and Students’ Reflections’, In Journal of
Second Language Writing, Vol. 14 No. 3, pp 153 – 173.
Thompson, P. (1999) ‘Exploring the Contexts of Writing: Interviews with PhD
Supervisors’, In Thompson, P. (ed.) Issues in EAP Writing Research and
Instruction, Reading: University of Reading.
Tierney, W. G. and Dilley, P. (2001) ‘Interviewing in Education’, In Gubrium, J. F. and
Holstein, J. A. (eds.) Handbook of Interview Research: Context and Method,
London: SAGE Publications, pp 453 – 472.
Topping, K. J. (1998) ‘Peer Assessment between Students in College and University’,
In Review of Educational Research, Vol. 68 No. 3, pp 249 – 267.
Topping, K. J. (2000) Peer Assisted Learning: A Practical Guide for Teachers,
Cambridge, MA: Brookline Books.
Tribble, C. (1996) Writing, Oxford: Oxford University Press.
Truscott, J. (1996) ‘The Case against Grammar Correction in L2 Writing Classes’, In
Language Learning, Vol. 46 No. 2, pp 327 – 369.
Truscott, J. (2004) ‘Evidence and conjecture on the effects of correction: A response
to Chandler’, In Journal of Second Language Writing, Vol. 13, pp 337 – 343.
Truscott, J. (2007) ‘The Effect of Error Correction on Learners’ Ability to Write
Accurately’, In Journal of Second Language Writing, Vol. 16 No. 4, pp 255 – 272.
Ulicsak, M. H. (2004) ‘ ‘How did it know we weren’t talking?’: An Investigation into
the Impact of Self-Assessment and Feedback in a Group Activity’, In Journal of
Computer Assisted Learning, Vol. 20, pp 205 – 211.
Urzua, C. (1987) ‘ “You stopped too soon”: second language children composing and
revising’, In TESOL Quarterly, Vol. 21 No.2, pp 297 – 302.
Uzawa, K. (1996) ‘Second Language Learners’ Processes of L1 Writing, L2 Writing,
and Translation from L1 to L2’, In Journal of Second language Writing, Vol. 5 No.
3, pp 271 – 294.
Wallace, M. J. (1998) Action Research for Language Teachers, Cambridge: Cambridge
University Press.
Walliman, N. (2001) Your Research Project: A step-by-step guide for the first time
researcher, London: SAGE Publications.
Ware, P. D. and O’Dowd, R. (2008) ‘Peer Feedback on Language Form in
Tellecollaboration’, In Language Learning and Technology, Vol. 12 No. 1, pp 43 –
63.
Weigle, S. C. (2002) Assessing Writing, Cambridge: Cambridge University Press
Weir, C. J. (2005) Language Testing and Validation: An Evidence-Based Approach,
New York: Palgrave Macmillan.
Whitfield, B. and Pollard, J. (1998) ‘Awareness Raising in the Saudi Arabian
Classroom’, In Richards, J. C. (ed.), Teaching in Action, Alexandria: TESOL, pp 143
– 149.
White, R. and Arndt, V. (1991) Process Writing, London: Longman
White, M. J. and Bruning, R. (2005) ‘Implicit Writing Beliefs and their Relation to the
Writing Quality’, In Contemporary Educational Psychology, Vol. 30 No. 2, pp 166
– 189.
Wu, S. R. (2006) ‘A Comparison of Learners’ Beliefs about Writing in Their First and
Second Languages: Taiwanese Junior College Business-Major Students Studying
English’, In Dissertation Abstracts International, A: The Humanities and Social
Sciences, Vol. 64 No.2.
Yan, G. (2005) ‘A Process Genre Model for Teaching Writing’, In English Teaching
Forum, Vol. 43 No. 3, pp 18 – 26.
Yarrow, F. and Topping K. J. (2001) ‘Collaborative Learning: The Effects of
Metacognitive Prompting and Structured Peer Interaction’, In British Journal of
Educational Psychology, Vol. 71, pp 261 – 282.
Zamel, V. (1976) ‘Teaching composition in the ESL classroom: What we can learn
from research in the teaching of English’, In TESOL Quarterly, Vol. 10, pp 67 – 76.
Zamel, V. (1983) ‘The composing process of advanced ESL students: Six case studies’,
In TESOL Quarterly, Vol. 17, pp 165–187.
Zellermayer, M. (1989) ‘The Study of Teachers’ Written Feedback of Students’
Writing: Changes in Theoretical Considerations and The Expansion of Research
Contexts.’, In Instructional Science, Vol. 18, pp 145 – 165.
Zhang, S. (1995) ‘Reexamining the Affective Advantage of Peer Feedback in the ESL
Writing Class’, In Journal of Second Language Writing, Vol. 4 No. 3, pp. 209 –222.
Appendix (A) Pre-Pilot Study Checklist
Dear Colleague,
Please find attached a copy of the computer-based questionnaire I intend to use to collect data from ESL students. This is for my PhD
project which investigates the effectiveness of two feedback techniques used in ESL writing classes. Please have a thorough look at
it, try to answer it (as if you were an ESL student) and then give me your valuable opinion via my e-mail G.M.Grami@ncl.ac.uk. You
may find the following points to consider helpful (2 pages):
1- Layout
– Do you think that the layout of the questionnaire is user-friendly (tables, font type, font size, colour scheme … etc)?
Yes Needs Improvement
What improvements are needed? Type in box
– Do you find it easy to answer the questions using textboxes (e.g. the name question), drop boxes (i.e. Please choose one:), and
check boxes (Yes/No)?
Yes Needs Improvement
What improvements are needed? Type in box
– Are there any spelling, grammatical, organizational or typographical errors?
Yes: what are they? Type in box
No
2- Questions
– Have you got reservations about any of the asked questions?
Yes: which one(s) Type in box
No
– Do you think that the questions asked are enough?
Yes
No (what else shall I include in the questionnaire?) Type in box
– Are there any questions you didn’t understand/completely understand?
Yes: which one(s) Type in box
No
– Are there any superfluous questions?
Yes: which one(s) Type in box
No
– Do you think it would be more convenient if the questionnaire were in Arabic instead?
Yes (why) Type in box
No, the language used is simple and easy to understand.
3- Time Allocated
– How long did it take you to answer the questionnaire?
Less than 20 mins. 20 – 30 mins. over 30 mins.
Any further recommendations? Type in box
4- Finally, have you noticed any of the following issues? Or have they been properly avoided?
Overly long items Yes No
Unclear or ambiguous items Yes No
Negative items Yes No
Overlapping choices in items (e.g. my best place is: a) Jeddah b) Saudi Arabia, where Jeddah overlaps with Saudi Arabia) Yes No
Items across two pages Yes No
Double-barreled items (i.e. asking about two things simultaneously) Yes No
Loaded word items (emotional words such as naturally) Yes No
Absolute word items (such as ‘all’, ‘every’, ‘always’ and ‘never’) Yes No
Leading items (i.e. suggesting the answer) Yes No
Prestige items (making students answer in a way they believe presents them in a better light, e.g. students claim they read more than they actually do) Yes No
Embarrassing items Yes No
Biased items (questions that indicate bias or prejudice against a specific group of people) Yes No
Items at the wrong level of language Yes No
Items that respondents are incompetent to answer Yes No
Assuming that everyone has an answer to all items Yes No
Making respondents answer items that don’t apply Yes No
Irrelevant items Yes No
Writing superfluous information into items Yes No
From Brown and Rodgers (2002), adapted from Brown (1997)
(Thanks, Grami)
Appendix (B) Cover Page and Instructions for the Computerised Questionnaire
TEACHER FEEDBACK VERSUS PEER FEEDBACK
PLEASE READ ALL INSTRUCTIONS BEFORE ANSWERING THE QUESTIONNAIRE
This is an electronic, MS-Word based questionnaire that normally requires basic knowledge in using the keyboard and
the mouse to complete. Please have a look at the questionnaire and try to familiarize yourself with its items and
layout.
DO NOT ANSWER NOW. Read the remaining instructions first.
Some items require you to type information. In order to do so, click on the grey text box and start typing your answer.
However, most items are in multiple-choice form. In order to answer the question, simply click on the drop box
usually titled ‘Please choose one:’ to display a list of available options then click on the most appropriate answer.
When applicable, you can give further information/comments to the option you chose. For example, you can
comment on an item, provide an explanation to your answer, elaborate on your choice … etc. In order to do so,
simply click on the grey text box and type your information.
Please note that the layout of the questionnaire is very sensitive. For example, if you opt to provide extra information, the
containing cell will automatically adapt its shape to enclose your answer. This technical procedure might distort the
original format of the questionnaire, resulting in some items running across two pages. Similarly, some options of the
multiple-choice items are considerably longer than others; choosing them is likely to produce
similar consequences. There is nothing wrong with the format distortion itself but one possible negative effect is that
the questionnaire will become harder to follow. In order to minimize any undesired effects, please do familiarize
yourself with the questionnaire in its original form BEFORE answering it.
Thank You
Appendix (C) First Questionnaire: Arabic Online Version
[English rendering of the Arabic questionnaire items]
Part One: General Information
General background information:
Name (optional): Type in box
1- Year of study at university: First
2- Age: Under 18 years old
Information about your education in the stages before university, from primary school onwards (or from the preparatory year):
3- Type of education: Private school
4- Number of years you studied English before joining university: Less than 6 years
5- Degree of focus on writing skills during those years: A little
6- Was majoring in English your first choice at university? Yes / No
7- How many English writing courses have you passed in the Department (including grammar courses if they contained writing exercises)? Choose the correct answer
7-1 How much have you benefited academically from these courses? Choose the correct answer
8- Were the methods of teaching English in the Department different from those of the school stages before university? Choose the correct answer
If your answer is 'yes', can you explain further (syllabus, teaching method ...)? Type in box
9- How do you rate yourself as a learner of English? Choose the correct answer
10- How do you rate your English writing skills? Choose the correct answer
11- In your opinion, how effective is autonomous learning* in English writing classes? Choose the correct answer
* Autonomous learning is a recent concept in language pedagogy based on involving the student in the learning process, or relying on oneself in learning, so that the student can identify his weak points and find solutions himself instead of depending entirely on the teacher.
Part Two: Questions about the corrections and comments given by English writing teachers
12- How important are the corrections and comments written by writing teachers in English classes? Choose the correct answer
13- In general, how closely do you follow the comments the language teachers write for you when you revise the piece you have written? Choose the correct answer
14- From your experience, how much have the comments written by language teachers helped you to improve your language level in general and your writing in particular? Choose the correct answer
15- What points would you like the writing teacher to pay most attention to when correcting your paper? Choose the correct answer; if you choose 'other', write the points in the box
16- How frequently do you receive corrections and comments from the writing teacher? Choose the correct answer
17- 'The corrections and comments recommended by the writing teacher are easy to understand.' How true is this statement? Choose the correct answer
18- Is it possible to carry out the corrections recommended by the writing teacher? Choose the correct answer
19- What attitude would you like the language teacher to take towards your writing when commenting on it during the revision of your paper? Choose the correct answer
20- What attitude would you like the language teacher to take towards your writing when commenting on the final draft? Choose the correct answer
From your experience, how do English writing classes in the Department differ from their counterparts in the earlier stages of education in terms of:
21- The teacher's control over proceedings: Choose the correct answer
22- The role the student plays in decision-making: Choose the correct answer
2-13 Encouraging autonomous learning and learning through peer participation (Group Work): Choose the correct answer
2-15 The points attended to when revising writing: Choose the correct answer
Part Three: Corrections and comments from your fellow students
Have you ever worked with your classmates in English classes to correct each other's writing and exchange opinions about it? Yes / No
Do you think it is a positive thing for students to take part in correcting and commenting on each other's work? Yes / No
What reasons could cause the idea of correcting classmates' papers to fail? (Choose one or more of the following reasons)
Because students are not prepared to correct each other's papers, owing to a lack of the training needed to do so
Because students do not have the necessary language abilities
Because students will not accept comments and corrections from their classmates
Because students will not take the matter seriously
Because correcting classmates' papers may embarrass some students
Because correcting papers is the teacher's duty, not the student's
Other; state it (type in box)
Please now save the file and send it by e-mail to
G.M.Grami@ncl.ac.uk
Many thanks for your valuable participation. Be assured that the information will be treated in complete confidence, will be used only for research purposes, and will be seen by no one other than the researcher. If you would like to get in touch or have any queries, you can contact the e-mail address above.
Appendix (D) First Questionnaire: Arabic Paper-Based Version
[English rendering of the Arabic questionnaire items]
Part One: General information about you and your education
1- Year of study at university (place a √ next to the correct answer):
First    Second    Third    Fourth    Beyond fourth
2- Age group:
Under 18 years old    18 – 24    Over 24
Information about your education in the stages before university, from primary school onwards (or from the preparatory year):
3- Type of education:
Private school    Public school    Both public and private schools
4- Number of years you studied English before joining university:
Less than 6 years    6 – 8 years    More than 8 years
5- Degree of focus on writing skills during those years:
A little    Moderate    A lot
6- Was majoring in English your first choice at university?
Yes    No
7- How many English writing courses have you passed in the Department?
None    One course    Two courses    More than two courses
8- Were the methods of teaching English in the Department different from those of the school stages before university?
Yes    No
Can you explain further (syllabus, teaching method, topics ...)? (optional)
…………………………………………………………………………………………………………………………
9- How do you rate yourself as a learner of English in general?
Poor    Average    Good    Very good    Excellent
10- How do you rate your English writing skills in particular?
Poor    Average    Good    Very good    Excellent
11- In your opinion, how effective is autonomous learning* in English writing classes?
Useless    Somewhat useful    Very useful
* Autonomous learning is a recent concept in language pedagogy based on involving the student in the learning process, or relying on oneself in learning, so that the student can identify his weak points and find solutions himself instead of depending entirely on the teacher.
Part Two: Questions about the corrections and comments written by the English language teacher
12- How important are the corrections and comments written by writing teachers in English classes?
Not important    Sometimes important    Always important
13- In general, how closely do you follow the corrections the language teacher writes for you when you revise the piece you have written?
Rarely    Sometimes    Always
14- From your experience, how much have the corrections written by the language teacher helped you to improve your language level in general and your writing in particular?
Not useful    Sometimes useful    Always useful
15- What points would you like the writing teacher to pay most attention to when correcting your paper?
Grammar and organisation    Ideas and logic    All of the above
16- How frequently do you receive corrections and comments from the writing teacher?
Rarely corrects    On some exercises    On all exercises
17- 'The corrections and comments recommended by the writing teacher are easy to understand.' How true is this statement?
Not true    Sometimes    Always true
18- Is it possible to carry out and apply the corrections recommended by the writing teacher?
Rarely    Sometimes    Always
19- What attitude would you like the language teacher to take towards your writing when commenting on it during the revision of your paper?
Praise and encouragement    Criticism and correction    All of the above    Not sure
20- What attitude would you like the language teacher to take towards your writing when commenting on the final draft?
Praise and encouragement    Criticism and correction    All of the above    Not sure
From your experience, how do English writing classes in the Department differ from their counterparts in the earlier stages of education in terms of:
21- The teacher's control over proceedings:
The university teacher controls more    The university teacher controls less    No difference
22- The role the student plays in decision-making:
The university student has more say    The university student has less say    No difference
23- Encouraging autonomous learning and learning through peer participation (Group Work):
More at university    More in the earlier stages    No difference
24- The points attended to when revising writing:
More on grammar, organisation and punctuation    More on ideas, rhetoric and logic    No difference
Part Three: Corrections and comments from your fellow students
25- Have you ever worked with your classmates in English classes to correct each other's writing and exchange opinions about it?
Yes    No
26- Do you think it is a positive thing for students to take part in correcting, commenting on and giving opinions about each other's work?
Yes    No
27- What reasons could cause the idea of correcting classmates' papers to fail? (Choose one or more of the following reasons):
Because students are not prepared to correct each other's papers, owing to a lack of the training needed to do so
Because students do not have the necessary language abilities
Because students will not accept comments and corrections from their classmates
Because students will not take the matter seriously
Because correcting classmates' papers may embarrass some students
Because correcting papers is the teacher's duty, not the student's
Other; state it: ……………………………………………………………………………………
28- Do you have any other comments, opinions or ideas you would like to add?
………………………………………………………………………………………………………………………………
………………………………………………………………………………………………………………………………
Thank you for your valuable participation. Be assured that all information will be treated confidentially and will be seen by no one other than me.
If you wish, you can contact me by e-mail: G.M.Grami@ncl.ac.uk
Appendix (E) First Questionnaire English Version
Part One: General Information
1- Background Information:
Name (optional): Write in box
Age: Below 18
Year of Study in the University: First
2- Educational Background:
Type of Formal Education: Public School
Years Learning English Prior to University: Less than 3 Years
Focus on Writing Skills During School Years: 1- Little
Was English Major your first choice? Yes No
1.1 How many English writing courses have you
successfully completed in the Department (including
Grammar and Composition courses LANE 101 and LANE
103)?
Please choose one:
1.2 How much do you think you have benefited from
these English writing courses?
Please choose one:
1.3 How different are the Department’s writing courses
from their counterparts in formal education?
Please write your answer in box (e.g. better textbooks,
different writing tasks … etc):
1.4 Rate yourself as an English learner in general
Please choose one:
1.5 Rate yourself as an ESL writer
Please choose one:
1.6 In your opinion, how beneficial is autonomous
learning in writing classes?
Please choose one:
Part Two: Teacher’s Feedback
2.1 How important is teacher’s feedback in writing
classes?
Please choose one:
Why? (optional, type your
answer in box):
2.2 How often do you follow teacher’s comments on
your revisions?
Please choose one:
Why? (optional, type your
answer in box):
2.3 From your experience, how useful is teacher’s
feedback in improving your English composition?
Please choose one:
2.4 How much attention do you pay to teacher’s
comments on your compositions?
Please choose one:
Why? (optional, type your
answer in box):
2.5 What aspect(s) of your writing do you expect your
teacher to comment on?
Please choose one:
If other please specify? (type
your answer in box):
2.6 How frequently do you receive comments from your
teacher in writing classes?
Please choose one:
2.7 Are teacher’s comments easy to understand?
Please choose one:
Why? (optional, type your
answer in box):
2.8 Are teacher’s comments applicable?
Please choose one:
Why? (optional, type your
answer in box):
2.9 What attitude do you prefer in your teacher's
comments on your revisions?
Please choose one:
Why? (optional, type your
answer in box):
2.10 What attitude do you prefer in your teacher's
comments on your final draft?
Please choose one:
Why? (optional, type your
answer in box):
From your experience, how different are the ESL writing courses in the Department from writing courses in formal education in terms of:
2.11 Teachers’ Control Please choose one: How? (optional, write your answer in box)
2.12 Students’ Involvement in
Decision Making
Please choose one: How? (optional, write your answer in box)
2.13 Encouraging self-learning
(i.e. Interactions I/II)
Please choose one:
How? (optional, write your answer in box)
2.14 Nature of Writing Tasks Please choose one: How? (optional, write your answer in box)
2.15 Focus of Subsequent
Revisions
Please choose one:
Can you give examples of errors treated by your
ESL writing instructor? (write your answer in
box)
Part Three: Peer Feedback
3.1 Would you like to see more student
involvement in the ESL writing classes?
Please choose one:
Why? (optional, write your answer in
box)
3.2 How frequently have you been
involved in group work in ESL writing
classes?
Please choose one:
3.3 How frequently have you been involved in pair
work in ESL writing classes?
Please choose one:
3.4 Do your ESL writing curricula
(Interactions I/II) encourage self-
learning?
Please choose one:
Would you like to explain more?
(Optional, type in box)
3.5 Does your ESL writing curriculum
(Interactions I/II) encourage group
work?
Please choose one:
Would you like to explain more?
(Optional, type in box)
Please read the following statements and decide how strongly you agree or disagree with them:
3.6 Collaborative learning³ is an important aspect of ESL classes.
Please choose one:
3.7 Autonomous learning⁴ is an important aspect of ESL writing classes.
Please choose one:
3.8 Teachers are the most credible source of feedback in ESL writing
classes.
Please choose one:
3.9 I expect teacher’s comments on all my writing tasks.
Please choose one:
3.10 Teacher’s feedback helps me improve my ESL writing skills.
Please choose one:
3.11 Peer colleagues are capable of providing credible comments on
my writing.
Please choose one:
3.12 I would like to see more peer feedback on my compositions.
Please choose one:
3.13 I can spot most of the writing mistakes in my colleagues'
compositions.
Please choose one:
3.14 I can learn how to write in English by myself using proper
writing material
Please choose one:
Finally, from your own perspective, please indicate how important the following aspects are in ESL writing courses:
4.1 ESL Writing Teachers Please choose one: Explain why? (Optional, type in box):
4.2 The Writing Textbook
Please choose one:
Explain why? (Optional, type in box):
4.3 Collaborative Learning Please choose one: Explain why? (Optional, type in box):
4.4 Individual/autonomous learning
Please choose one:
Explain why? (Optional, type in box):
³ Collaborative Learning: learners working together on a task.
⁴ Independent Learning: self-learning, where a student studies without the help of a teacher or other students.
Appendix (F) Second Questionnaire: Peer Feedback Group
PEER FEEDBACK GROUP QUESTIONNAIRE
1- How many times have you been involved in peer/group work in this course? (Circle the right answer)
1- Never
2- 1 – 5 times
3- More than 5 times
2- What was your role?
1- Writing the text and receiving comments
2- Reading your colleague’s text and providing feedback
3- All of the above
Can you explain in detail how you completed your tasks during the sessions?
3- How useful were the comments received from your peers? Did the comments suggest improvements? Did you
discuss them with your colleagues?
4- What were the comments received from your colleagues about?
1- Global Issues: Logic, organisation of ideas and rhetoric
2- Local Issues: Grammar, spelling and mechanics
3- All of the above
Can you explain more with examples if possible?
5- How much global feedback have you received (organisation, logic … etc)?
1 None 2 A Little 3 Some 4 A lot
6- How much local feedback have you received (grammar, spelling and punctuation)?
1 None 2 A Little 3 Some 4 A lot
7- To what extent do you think that peer feedback is a useful technique in ESL writing classes?
8- How easy was peer feedback to understand and apply?
9- Do you believe that peer feedback helps you become an independent learner? How (not)?
10- How reliable is peer feedback as a source of information?
11- Would you recommend integrating peer feedback in future ESL writing classes?
1- Yes 2- No 3- Not Sure
12- Can you explain why you chose your previous answer?
Have you got any further comments? If so, please write them and continue overleaf if required.
Thank you for your participation.
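The closed-ended items above (for example, the frequency-of-peer-work item and the Yes/No/Not Sure recommendation item) lend themselves to simple frequency counts during analysis. The short Python sketch below illustrates one possible way of tallying such responses; the item keys and the sample responses are illustrative assumptions only, as the thesis does not prescribe any particular coding scheme.

    # Illustrative sketch only: tally option frequencies for closed-ended
    # items of the peer-feedback questionnaire (Appendix F). The item keys
    # and the responses below are hypothetical; option labels are taken
    # from the questionnaire wording above.
    from collections import Counter

    responses = [
        {"q1_peer_work": "1 - 5 times", "q11_recommend": "Yes"},
        {"q1_peer_work": "More than 5 times", "q11_recommend": "Yes"},
        {"q1_peer_work": "1 - 5 times", "q11_recommend": "Not Sure"},
    ]

    def tally(item):
        # Count how many respondents chose each option for one item.
        return Counter(r[item] for r in responses)

    for item in ("q1_peer_work", "q11_recommend"):
        print(item, dict(tally(item)))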
Appendix (G) Peer Review Checklist
Peer Review Checklist
Your Name:
The Writer’s:
– What is the topic of the paragraph?
– Did the writer start with the topic sentence? Yes No
– Is the topic sentence appropriate? Yes No
– How many main ideas are there? One Two More than two
– Are there any examples or explanations to support the main idea?
Yes No
– Did the writer use connecting/transition words (e.g. and, in addition … etc)?
Frequently Infrequently Rarely Never
– How many grammatical or spelling mistakes are there?
Many Some Few None
– Did the writer arrange his ideas in a logical way? Yes No
– What is your overall evaluation of the paragraph?
Excellent Needs Improvement Poor
– Have you got any recommendations for improvement? What did you like/dislike about the paragraph
(e.g. ideas, vocabulary, good supporting examples … etc)?
Continue overleaf if necessary
Appendix (H) Entry Test
1
“Studying abroad”
[indentation] Iam studying at king Abdulaziz Univ. and I hope to complete my education abroad, when I finish a bechlory. I wish to
study at [the] United States. Especially at San francisco, because I like that state, and I like [the] Golden Gate, there. You know?! Iam
really interest for that day, because I know; I will enjoy [it] there, and I will get my advantages.
Content, Rhetorical Structure and Organisation:
Some occasions of inappropriate vocabulary/expression use.
Basic sentences with no connection/transition words.
There is a logical transition of ideas, but due to repeated linguistic and mechanical errors it is hard to notice.
Language Conventions:
2
I study at aspecific university because I like aspecific university and because I like [to] live near from my family and my friends
and a specific unvi. has a seam my ideas and my opinions but if I go to studing abroad I well meet a diffirent ideas and opinions and
culture and diffirent custems and diffirent live.
Content, Rhetorical Structure and Organisation:
Chaotic in many aspects and really hard to follow.
Excessive and incorrect use of the conjunction ‘and’
Although a few reasons for choosing a specific university are mentioned, the writer does not actually declare his
opinion regarding them, i.e. whether he is in favour of them or not. Probably intended as a comparative text, but it certainly does not follow the conventions of one.
Language Conventions:
Error tallies (language conventions) for texts 1 and 2 above, in order:
Text 1: Spelling 1; Grammar/Vocabulary 5; Punctuation/Capitalization 13; Run-on sentences: none; Word count: 64; Overall score: 3/6 (Acceptable).
Text 2: Spelling 6; Grammar/Vocabulary 4; Punctuation/Capitalization 6; Run-on sentences: 2; Word count: 57; Overall score: 2/6 (Poor).
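Each text in this appendix carries a small marking sheet: error recurrences by type, a word count and a holistic score out of six. Purely as a reading aid, the Python sketch below shows one way such a sheet could be stored and summarised electronically; the field names follow the tallies above, and the score-to-band labels are simply those that occur in this appendix (1/6 Very Poor up to 5/6 Very Good). The thesis gives no scoring formula, so nothing here should be read as the marking procedure actually used.

    # Illustrative sketch only: hold one marking sheet from the entry/exit
    # writing tests and print a compact summary. Field names mirror the
    # tallies above; band labels are those observed in this appendix.
    from dataclasses import dataclass

    BANDS = {1: "Very Poor", 2: "Poor", 3: "Acceptable", 4: "Good", 5: "Very Good"}

    @dataclass
    class MarkingSheet:
        spelling: int
        grammar_vocabulary: int
        punctuation_capitalization: int
        run_on_sentences: int
        word_count: int
        overall_score: int  # holistic score out of 6

        def summary(self) -> str:
            band = BANDS.get(self.overall_score, "unlabelled")
            return (f"Spelling {self.spelling}; Grammar/Vocabulary "
                    f"{self.grammar_vocabulary}; Punctuation/Capitalization "
                    f"{self.punctuation_capitalization}; Run-ons "
                    f"{self.run_on_sentences}; {self.word_count} words; "
                    f"{self.overall_score}/6 ({band})")

    # Text 1 of the entry test, as tallied above.
    print(MarkingSheet(1, 5, 13, 0, 64, 3).summary())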
3
[indentation] The reason to study at [a] specific university is to learn more English language and Know the Culter of [the/a]
country. becuse culter is part for [of] learn language, So if you travil to Study English language you must know alot of
words becuse you will leav with a family that they don’t know any thing about your language, So that will make you study
more words that’s will help you and learn you in same time.
Content, Rhetorical Structure and Organisation:
Very basic argument and only two unclear ideas to support it
Simple sentences in terms of grammatical construction and length
Connectors such as ‘because’ and ‘so’ have been used but not always correctly
Language Conventions:
4
The main idea of this topic is the studying abroad or study outside for for example in the United Kingdom or the united state[s]
of America ,its very important because American people they have learn [knowledge] and good educational system.
Content, Rhetorical Structure and Organisation:
The main idea, logical development, and topic sentence are all missing
Very weak rhetorical structure
Very short paragraph consisting of two run-on sentences
Two good reasons to study abroad have been mentioned
Language Conventions:
5
Studying abroad
Studying out side your cantry is very importing because thire [is] [the] best editcation and you can learning more sensers
[sciences?]. After gradowet[ing] from your collage[,] [you] can contunue high editcation and I think some specif find in your univ.
Content, Rhetorical Structure and Organisation:
The main idea, logical development, and topic sentence are all missing
Chaotic in almost every aspect
Very short paragraph, but as far as content is concerned it is hard to consider it as a connected text
Language Conventions:
Error tallies (language conventions) for texts 3 to 5 above, in order:
Text 3: Spelling 8; Grammar/Vocabulary 5; Punctuation/Capitalization 8; Run-on sentences: 2; Word count: 74; Overall score: 2/6 (Poor).
Text 4: Spelling none; Grammar/Vocabulary 2; Punctuation/Capitalization 2; Run-on sentences: 2; Word count: 39; Overall score: 2/6 (Poor).
Text 5: Spelling 12; Grammar/Vocabulary 5; Punctuation/Capitalization 2; Run-on sentences: none; Word count: 36; Overall score: 1/6 (Very Poor).
6
Studing abroad
In the last afew years much of [many] students [are] planning to studing abroad, espically in the Kingdom of Saudia
Arabiathin king to Study abroad because the Study broad is much better for many reasons, liKe to want to improve the skills of
language espically when study as a student in [the] English section, many people plans to study in America and in England which the
English as [is] the first language (main languagies English language) [.] Many students start to chose the best University or college to
study in it, because these University or collage gives them the best education and help[s] them to improve themselves.
Content, Rhetorical Structure and Organisation:
Not enough information can be drawn from this paragraph (despite it being longer than average in terms of word count)
Not enough supporting ideas and examples were given
Due to poor linguistic level, the text becomes really difficult to read
Sporadically, the writer tries to connect his ideas using the likes of ‘such as’, ‘especially’, ‘because’ … etc.
Language Conventions:
7
Studying abrod
Studing abroud is important for Undergraduate[s], [and to] for aquire some Skills. That skill make[s] students stronger
from [than if they] study inside his [their] country. In addition [,] the Student can gets anew and Ulttulase [utilise] on make
international relationship.
Content, Rhetorical Structure and Organisation:
A very short paragraph yet shows some good aspects
The argument is better than average and connectors such as 'in addition' have been used
Several incorrect uses of prepositions and pronouns
Language Conventions:
8
[Indentation] I have many reasons to make study at [a] university[.] First reson [is] famous [the reputation/fame of the] university[,]
and high class [prestige][,] and have it [having] a good teacher [,] and good room for study [,] and there many servise[s] in university
and many parking [spaces].
Content, Rhetorical Structure and Organisation:
Many sensible reasons have been mentioned to support the idea
Very weak rhetorical structure
Excessive and incorrect use of the conjunction word ‘and’
Error tallies (language conventions) for texts 6 and 7 above, in order:
Text 6: Spelling 9; Grammar/Vocabulary 13; Punctuation/Capitalization 7; Run-on sentences: 3; Word count: 102; Overall score: 2/6 (Poor).
Text 7: Spelling 4; Grammar/Vocabulary 7; Punctuation/Capitalization 6; Run-on sentences: none; Word count: 33; Overall score: 3/6 (Acceptable).
Language Conventions:
9
[Indentation] I think that studying abroad [is] better from [than] here because if I was there I live with language or mother language
far of my language and may be [experiencing] different culture.
Content, Rhetorical Structure and Organisation:
Extremely short paragraph, or rather one extended sentence.
The theme ‘studying abroad is better’ is clear but only two reasons were mentioned
Language Conventions:
10
Consider[ing] the studying abroad is very important for our life [lives] because us [we] to learn more [,] and learn the language
correct[ly] it and to add new culture for us. Also, there are reasons [that are] very important, e.g. to learn from others’ skills and
experiences.
Content, Rhetorical Structure and Organisation:
The main idea, logical development, and topic sentence are all missing
Very weak rhetorical structure
Very short paragraph consisting of two run-on sentences
Language Conventions:
11
[Indentation] I have many reasons for [to] study at a specif collag, [.] first I want life [to live] with Any People [person] [who] Talk
[speaks] English good [well]. Im need to study at specif collag Becuse [I] study fast and good.
Content, Rhetorical Structure and Organisation:
There is a topic sentence, but the reasons that were meant to support it are neither convincing nor sufficient
The development of ideas is very chaotic and poor, and there is no central theme
Repetition of grammatical and other linguistic errors makes reading really difficult
Error tallies (language conventions) for texts 8 to 10 above, in order:
Text 8: Spelling none; Grammar/Vocabulary 9; Punctuation/Capitalization 2; Run-on sentences: 4; Word count: 36; Overall score: 3/6 (Acceptable).
Text 9: Spelling none; Grammar/Vocabulary 3; Punctuation/Capitalization 1; Run-on sentences: none; Word count: 29; Overall score: 2/6 (Poor).
Text 10: Spelling none; Grammar/Vocabulary 8; Punctuation/Capitalization 1; Run-on sentences: none; Word count: 41; Overall score: 1/6 (Very Poor).
Language Conventions:
12
Some student[s] go abroad to study may be because doesn’t [the major they want is not] available the study they want or for
they know a new culture or languages. Some student[s] study English or engneering because the study[ing] abroad is more useful
and interisting.
Content, Rhetorical Structure and Organisation:
Really hard to understand due to poor development and grammatical and organisational errors.
Some reasons and examples have been mentioned to support the main idea
Language Conventions:
13
I want to study abroad to improve my language and get skills from other countries. I study in KAAU because it’s the best from
[in terms of] fasilities and every thing and I want to get [a] new future and crieer that’s why I study there.
Content, Rhetorical Structure and Organisation:
Short but provides a good argument supported by reasons
Easy to follow and comprehend
Language Conventions:
Error tallies (language conventions) for texts 11 to 13 above, in order:
Text 11: Spelling 4; Grammar/Vocabulary 8; Punctuation/Capitalization 6; Run-on sentences: 1; Word count: 32; Overall score: 1/6 (Very Poor).
Text 12: Spelling 1; Grammar/Vocabulary 7; Punctuation/Capitalization none; Run-on sentences: none; Word count: 39; Overall score: 2/6 (Poor).
Text 13: Spelling 2; Grammar/Vocabulary 2; Punctuation/Capitalization 1; Run-on sentences: 1; Word count: 42; Overall score: 3/6 (Acceptable).
14
Students want to learn out them cuntry for study in a bag colleges. That’ll make more things good. Like learn[ing] a language
very well. The reasons for studying abroad is [that] the students can find a good whither [environment] with nice colleges. that [is]
what I think.
Content, Rhetorical Structure and Organisation:
Confused ideas and sentences make going through the text challenging
Some reasons why to study abroad but not clear
Language Conventions:
15
I want [to] comlet my study[ies] and know people in Amarcan How they are Live?. And I want a good job. I want stady In my
country and out [of] my country I houw all people. And stady more[.]
Content, Rhetorical Structure and Organisation:
Very weak rhetorical structure and confused sentences and ideas
Hard to understand and follow
Some reasons why to study abroad but no argument
Language Conventions:
16
-Studying abroad-
[Indentation] The study[ing] abroad it is important [as] we need to get high in Education. we want many things abroad not just
specific [incomplete]. Learn[ing] [about] culture[s] get not like this in Saudi [incomplete]. Also [learning to] speak good English […]
etc. Also [,] I get experience in my live[.] I Like [to] study in England because The mother language there [incomplete]. and not
expensive not take alot of money [incomplete]. actully I like this is want to be [the case] after university.
Content, Rhetorical Structure and Organisation:
Inconsistent and unrelated ideas accompanied by many grammatical mistakes
Chaotic sentences, poor style and cohesion, and incomplete sentences make it almost impossible to work out what the writer
wants to say
Language Conventions:
Error tallies (language conventions) for texts 14 to 16 above, in order:
Text 14: Spelling 2; Grammar/Vocabulary 4; Punctuation/Capitalization 2; Run-on sentences: none; Word count: 42; Overall score: 2/6 (Poor).
Text 15: Spelling 5; Grammar/Vocabulary 3; Punctuation/Capitalization 3; Run-on sentences: 1; Word count: 36; Overall score: 1/6 (Very Poor).
Text 16: Spelling 1; Grammar/Vocabulary 15; Punctuation/Capitalization 5; Run-on sentences: 2; Word count: 69; Overall score: 1/6 (Very Poor).
17
I have more [many] reasons that make study[ing] at a specific univ or college [incomplete]. one of them [is that] I want to
learn English language and gain Knowledge. It is not very diffecalt, but [I] hope to I can speak and write like English people.
Content, Rhetorical Structure and Organisation:
Good paragraph and well-structured sentences
The sentences look well put together due to appropriate use of transitional words
Good argument and some good reasons mentioned
Language Conventions:
18
Studying abroad in facte is [a] better way to studing English when you need to get information or experience about sumthing
or language. The study[ing] in the contry about the languag for spesefice is verey difecult becuse ther is no wibes to try
testment[ing] your language.
Content, Rhetorical Structure and Organisation:
The sentences are not clear enough and ideas seem disorganised
The second sentence is really difficult to understand
Reasons and explanations are not connected to the main idea of the paragraph
Language Conventions:
19
[Indentation] I think the reasons are important. The colleges at abroad are more famous [recognisable] and more useful. If you go to
study abroad[,] you will see the deffrinces and anther Imprtant reason it is to see other[another] culture [and] you will deal with
deffrint people.
Content, Rhetorical Structure and Organisation:
The topic sentence is not a good one i.e. very broad and general
Reasons and examples to support the main idea are good nevertheless
Language Conventions:
Error tallies (language conventions) for texts 17 to 19 above, in order:
Text 17: Spelling 2; Grammar/Vocabulary 6; Punctuation/Capitalization 1; Run-on sentences: none; Word count: 41; Overall score: 3/6 (Acceptable).
Text 18: Spelling 10; Grammar/Vocabulary 4; Punctuation/Capitalization none; Run-on sentences: 1; Word count: 45; Overall score: 2/6 (Poor).
Text 19: Spelling 4; Grammar/Vocabulary 5; Punctuation/Capitalization 2; Run-on sentences: 1; Word count: 43; Overall score: 3/6 (Acceptable).
20
Some Students are face[ing] problems at their native countries like low [levels of] learning and expensive price of him
[attending] univesties and schools. although some students are get[ting] better study [education] than [in] their contry.
Content, Rhetorical Structure and Organisation:
Confused sentences and poor rhetorical structure
Some reasons why (not) to study abroad but needs some effort to find out which ones are which
Language Conventions:
21
King Abdulaziz university was my first choice. I chosed it because it is very big and butiful one. I went to another colleges
and I saw the diffrent between my college and the other. And I will be proud to graduate from this university.
Content, Rhetorical Structure and Organisation:
Some unconvincing reasons supplied
Transition between ideas is not smooth due to a lack of connection words, but the message still gets through
Language Conventions:
22
Study[ing] abroad is good. Because get [it encourages] the student [to] learn and study in real life. The Study[ing] in a specific
college [can be] better than [in] another college. When you want to learn English [,] you should go out [of] your country to U.S.A or
[,] Great britain or any country that has [English as] a native language [where] you can learn English good [well].
Content, Rhetorical Structure and Organisation:
Hard to follow due to poor rhetorical and linguistic construct
Only one reason was mentioned to support the claim that studying abroad is better
Language Conventions:
Error tallies (language conventions) for texts 20 to 22 above, in order:
Text 20: Spelling 2; Grammar/Vocabulary 6; Punctuation/Capitalization 1; Run-on sentences: 1; Word count: 39; Overall score: 2/6 (Poor).
Text 21: Spelling 1; Grammar/Vocabulary 4; Punctuation/Capitalization 1; Run-on sentences: none; Word count: 44; Overall score: 2/6 (Poor).
Text 22: Spelling none; Grammar/Vocabulary 10; Punctuation/Capitalization 4; Run-on sentences: none; Word count: 54; Overall score: 2/6 (Poor).
23
There are several reasons that make me choose the King Abdulaziz universety as place of study for being this university [is]
close to my Home and in tearms of the teacher it maight be the main reason and some aqipments Facilities too.
Content, Rhetorical Structure and Organisation:
One paragraph consisting of three run-on sentences
Three good reasons are mentioned but poorly stated
Language Conventions:
24
studying abroad
Every one when go[ing] to study at abroad he must have reasons. first of all [,] [to] discover [a] new culture, [to have]
good education, [to] commenication [communicate] with others [and] finally [to] gets new friends at abroad.
Content, Rhetorical Structure and Organisation:
Good argument and many reasons to support the main ideas are mentioned
Repetition of linguistic mistakes however can make reading and understanding the text challenging
Language Conventions:
25
Studying Abroad
There are many reasons that make study[ing] abroad [incomplete] [run on] one of them [is] to know the other culture
and [,] to get a strong [better] language by contact[ing] the comunity that [is] around you and to develop your skills in speaking,
Writing and listening.
Content, Rhetorical Structure and Organisation:
Despite being very short, some reasons and examples are mentioned
Some effort to connect sentences but the rate of grammatical errors is high resulting in difficult reading
Error tallies (language conventions) for texts 23 and 24 above, in order:
Text 23: Spelling 3; Grammar/Vocabulary 3; Punctuation/Capitalization 2; Run-on sentences: 3; Word count: 41; Overall score: 3/6 (Acceptable).
Text 24: Spelling none; Grammar/Vocabulary 12; Punctuation/Capitalization 4; Run-on sentences: none; Word count: 30; Overall score: 3/6 (Acceptable).
Language Conventions:
26
Studing abroad
There are many reasons to go out studying. I think the main reason is, the study[ing] in abroad is better than here. And
English students have to practice their language.
Content, Rhetorical Structure and Organisation:
Two reasons mentioned but the first lacks justification and supporting examples
Very short but the argument seems to go in the right direction
Language Conventions:
27
[Indentation] We all agree with that, we all must take care about our future [.] also we must be more carefull about that. It’s [That]
mean[s] if we think [consider] to get study[ing] abroad[,] we must choise the best of cours. We [are] looking at good univ. with nice
cumpes and with good facilits in adisition [to] teacher[s] with high degrees in their subject.
Content, Rhetorical Structure and Organisation:
Two different ideas not well connected
Some good reasons mentioned why to study abroad but again not well presented
Rate of linguistic errors is exceptionally high making reading the text challenging
Language Conventions:
28
[Indentation] We have Many reasons to Study at a specific university like perfect grade and Many Courses and so on. We don’t
forget to choose the university such as Oxford.
Content, Rhetorical Structure and Organisation:
No theme, topic sentence or consistent argument, ideas are unrelated to each other
Very short sentences and diverse ideas
Error tallies (language conventions) for texts 25 to 27 above, in order:
Text 25: Spelling 1; Grammar/Vocabulary 6; Punctuation/Capitalization 2; Run-on sentences: 1; Word count: 39; Overall score: 3/6 (Acceptable).
Text 26: Spelling 1; Grammar/Vocabulary 2; Punctuation/Capitalization 2; Run-on sentences: 2; Word count: 30; Overall score: 3/6 (Acceptable).
Text 27: Spelling 4; Grammar/Vocabulary 9; Punctuation/Capitalization 3; Run-on sentences: none; Word count: 58; Overall score: 1/6 (Very Poor).
Language Conventions:
29
[Indentation] Many of Pepole around the globe communicate by [using] English language . so , I think it is important to know how to
speak in english. one of good way to learn this languag is studing abroud in U.S.A or Braitin.
Content, Rhetorical Structure and Organisation:
Although this is a different subject it can be somehow related to the original one
There is a central theme and an example to explain it
Language Conventions:
30
When I study in any college [,] ther is [are] alot of thing[s] I have to care about [consider] it like the name [reputation] of it and
the place of it [its location] [.] al that to get a good knowlg for e.g. when I want to study English it [is] much better to stud [it] in
briten becaus [it] is very good English ther.
Content, Rhetorical Structure and Organisation:
The incorrect choices of vocabulary and punctuation are serious issues, making reading the text difficult
The reasons for studying in the UK are not clear
Language Conventions:
31
I study in a specific college for to improive my language , also to descoiver their [the other culture’s] life. On the other hand [in
addition][,] there are many department[s] you can choose. In additions[,] you can see who [what] the People study. Their college[s]
Error tallies (language conventions) for texts 28 to 30 above, in order:
Text 28: Spelling none; Grammar/Vocabulary none; Punctuation/Capitalization 5; Run-on sentences: none; Word count: 29; Overall score: 1/6 (Very Poor).
Text 29: Spelling 2; Grammar/Vocabulary 4; Punctuation/Capitalization 3; Run-on sentences: none; Word count: 40; Overall score: 3/6 (Acceptable).
Text 30: Spelling 6; Grammar/Vocabulary 10; Punctuation/Capitalization 2; Run-on sentences: 1; Word count: 56; Overall score: 2/6 (Poor).
have many subject[s] and since [fields of sciences], so they can get more knoledge.
Content, Rhetorical Structure and Organisation:
The writer tries to connect his sentences using the likes of 'on the other hand', 'in addition' … etc, but on one occasion this was done incorrectly
Language Conventions:
32
Studying abroad
[Indentation] study[ing] outside for me is much better because I will study english litrture So when I get it from the source [it] is
More useful, and so on [therefore/as a result] I can improve my language.
Content, Rhetorical Structure and Organisation:
There is a topic sentence but the arrangement of ideas is rather chaotic
No clear focus or main idea
Language Conventions:
33
[Indentation] The most importante reasons is [are] to get [a] high position in the work[place] you will get and [to] have more
experance to be [a] good English speaker. And to have good education because the atmosfer [environment/surrounding] in home
[is] not like [that] abroad.
Content, Rhetorical Structure and Organisation:
Some effort to state reasons and explanations
Issues with definite/indefinite articles as well as vocabulary choice
Language Conventions:
34
[Indentation] I am realy want to study outside [in a country] such as America or in another country , Because I want to learn English
very well and there are people [who] speak English good and I will learn very fast Because all the time I will speak English with a
good English speaker.
Error tallies (language conventions) for texts 31 to 33 above, in order:
Text 31: Spelling 3; Grammar/Vocabulary 9; Punctuation/Capitalization 2; Run-on sentences: none; Word count: 50; Overall score: 3/6 (Acceptable).
Text 32: Spelling 1; Grammar/Vocabulary 3; Punctuation/Capitalization 6; Run-on sentences: 1; Word count: 31; Overall score: 1/6 (Very Poor).
Text 33: Spelling 1; Grammar/Vocabulary 9; Punctuation/Capitalization 2; Run-on sentences: none; Word count: 37; Overall score: 3/6 (Acceptable).
Content, Rhetorical Structure and Organisation:
The idea is clear but not enough reasons were given
The text is somewhat connected, but punctuation errors can be an issue, especially at the end of one idea and the
beginning of another.
Language Conventions:
35
The reason I am studying here is that I didn’t have much choise but to be here, I wish that I was studying what I study in an
English environment sorrounded by native English speakers. Finally, I can complain because I like teachers.
Content, Rhetorical Structure and Organisation:
Good paragraph in terms of flow of ideas, rhetoric, and logical organisation
Meaning has been conveyed in a suitable manner
More of a personal account than a descriptive paragraph
Language Conventions:
Error tallies (language conventions) for texts 34 and 35 above, in order:
Text 34: Spelling 1; Grammar/Vocabulary 3; Punctuation/Capitalization 3; Run-on sentences: 2; Word count: 39; Overall score: 2/6 (Poor).
Text 35: Spelling 2; Grammar/Vocabulary none; Punctuation/Capitalization none; Run-on sentences: none; Word count: 43; Overall score: 4/6 (Good).
Appendix (I) Exit Writing Test
(PEER FEEDBACK GROUP)
1
I like living in big cities. This is because lifestyle in big cities is different from lifestyle in small towns. My preference for the big
cities [is] due to the more advantages they have [offer] than small towns [do]. One of these advantages is plenty [the abundance] of
service[s] that produced in these cities, such as, transportations, telecommunications, health facilities, educational institutions, […]
etc. With regard to how to spend leisure time, I think big cities give you a lot of options in that field.
Content, Rhetorical Structure and Organisation:
Very good supporting examples and explanation provided
Sentences seem well-connected due to appropriate use of transition words
Some unnecessary repetition and incorrect vocabulary and phrase choice
Language Conventions:
2
I prefere living in small towns. First of all, they are very peaceful. Second, the people usually know each other well and they have
connections with each other. The weather [air] is better and healthier. I also like the green space although most of the year plants
do not grow because the rain is very little. In addition, most services are available including shopping malls, hospitals, schools and
others. The city has its advantages but I still prefer a small town.
Content, Rhetorical Structure and Organisation:
Good reasons included. The text, despite being short, is well connected due to proper use of transition words.
Good organisation: topic sentence – body – concluding sentence.
Language Conventions:
3
I never had the chance to try and expirance the living in a small town. However, I could imagine how it is going to be. For me as a
person, I would like to have all the services around me. That will not be avilable in a small town. Furthermore, you won’t be able to
have a good job in the small town. On the other hand, the big city got every thing you need such as sirvces, jobs, health care and
schools. As a result, I would brefere the big city to live in.
Content, Rhetorical Structure and Organisation:
Well connected sentences due to appropriate use of connecting words and phrases
The logical development is convincing enough, and the reasons and examples provided are valid
However, some overgeneralisations are spotted (e.g. you cannot get a good job in a small town). You could have used words
such as 'usually', 'in many cases', 'generally speaking' … etc to leave some space for different points of view
Language Conventions:
Error tallies (language conventions) for texts 1 and 2 above, in order:
Text 1: Spelling none; Grammar/Vocabulary 6; Punctuation/Capitalization 1; Run-on sentences: none; Word count: 79; Overall score: 5/6 (Very Good).
Text 2: Spelling 1; Grammar/Vocabulary 1; Punctuation/Capitalization none; Run-on sentences: none; Word count: 80; Overall score: 5/6 (Very Good).
4
[indentation] Small town is the best please to live in. That [is] because you obtien healthy environment, more secure [security] and
you don’t need to use transportation alot. In this easy [essay] I will discusse why is living in small town is good choise. In my opinion
[,] living in [a] small town is the good oprtonity to healthy air. That [is] because [in] the small twon usualy there [are] no factories or
crowed[s] of cars in it. In addition, the small town usualy [has] all the services is close to you. Therefore[,] you don’t have to use the
transportation alot. Moreover, the small town is more secuor comper [compared] to big twon. For example, Hull twon is more
secuor than London. In conclusion, small twon is the great please to live for many reason[s] [:] healthy environment, more secuor,
and all the servies are close to you any time [anytime] without using the transportation.
Content, Rhetorical Structure and Organisation:
Extended piece of writing that could be shortened if repeated ideas were omitted
Three valid reasons why a small town is a better place to be, though the repetition could be cut
The flow of ideas is good but there are many occasions where unnecessary repetition occurs
Language Conventions:
5
[indentation] Although we know that a big city is more [much] better than a small town [missing independent clause]. Even though
a small town is more safety [safer] than [a] big city because you will not find the foriegn people who have nothing to do. Firstly, a lot
of people prefer to live in a big city for many reasons [run-on] one of them [is that] the transportation is faster than [in] a small town
because in the big city alot of people working. So because of this [hence] faster. Even though [On the other hand] my knowledge
about [experience of] [a] small city is feeling more comfortable because [of] the freedom. Secondly, the big city is more complex for
new people. Finally, I prefer that the big city [as it] is much better than [a] small town.
Content, Rhetorical Structure and Organisation:
Some argument has been established but not to the required effect
Despite numerous occasions of using connecting words/phrases, some of them were incorrectly used
There can be a logical order of ideas, but grammatical mistakes can well hinder the meaning, in addition to unclear sentences and ideas
such as that of 'freedom' in the second-to-last sentence.
Language Conventions:
Error tallies (language conventions) for texts 3 to 5 above, in order:
Text 3: Spelling 3; Grammar/Vocabulary none; Punctuation/Capitalization none; Run-on sentences: none; Word count: 95; Overall score: 4/6 (Good).
Text 4: Spelling 13; Grammar/Vocabulary 12; Punctuation/Capitalization 4; Run-on sentences: none; Word count: 144; Overall score: 3/6 (Acceptable).
Text 5: Spelling none; Grammar/Vocabulary 14; Punctuation/Capitalization 1; Run-on sentences: 1; Word count: 118; Overall score: 3/6 (Acceptable).
6
Nowadays the living is more related to [governed by] economic[s] and politic[s]. It could be considered that life currently is more
difficult because both town and city includ[e]ing positive and negative points. This essay will discuss the advantages and
disadvantages concern[ing] town and city. With regard to the technology, it is really different between them. It might be aprove
[believed?] many facilities [exist?] in the city such as restaurants, universities and health clubs. In contrast, town and villages is [are]
more safety [safer] and more comfurtable. Overall, in my opinion I suggest [recommend] people to live in city and I think in the
future people will move on to cities.
Content, Rhetorical Structure and Organisation:
Good argument in favour of the city but somewhat confused with the comparative genre used
Some transition words were used so the text seems connected although this could be improved
Well organised ideas despite the long introduction
Language Conventions:
7
There are many reasons that makes me live in the big city. First of all, it is more comfortable as most shops are around you.
Secondly, jobs are more common there. Also, you can start your own business easily. There are many people there. Finally, schools
and hospitals are available more than [in] small towns. In short, big cities are [a] better place to be.
Content, Rhetorical Structure and Organisation:
Short text but it answers the question well; most of the ideas are concentrated around one theme (business)
Basic but correct use of connecting words, making it more of a paragraph than a list of sentences
The organisation and the format of a paragraph are both good
Language Conventions:
8
With the improvement in all over the world, it became more difficult to live alone because people need to be on touch with each
other. In addition, they like to find [a] place where all their needs [are] around them such as hospitals, shopping centres and other
important ways [facilities]. I prefer to live in the big city rather than [a] small city [one: avoid repetition] for three reasons: firstly, I
like to be as a part of [a] huge society. Next, all people’s needs [can be found] in a big city. Finally, easier [better] transportation
ways compared with [that of a] small city.
Content, Rhetorical Structure and Organisation:
Rather too general a topic sentence; something more direct and to the point would be more suitable
Incorrect vocabulary choice can obscure the intended meaning and render the text hard to follow at some points
Organisation, apart from the first two sentences, seems somehow logical and follows a pattern
Language Conventions:
Error tallies (language conventions) for texts 6 to 8 above, in order:
Text 6: Spelling 1; Grammar/Vocabulary 11; Punctuation/Capitalization none; Run-on sentences: none; Word count: 109; Overall score: 4/6 (Good).
Text 7: Spelling none; Grammar/Vocabulary 2; Punctuation/Capitalization none; Run-on sentences: none; Word count: 63; Overall score: 4/6 (Good).
Text 8: Spelling none; Grammar/Vocabulary 11; Punctuation/Capitalization none; Run-on sentences: none; Word count: 90; Overall score: 3/6 (Acceptable).
9
Living in a small town or in a big city has advantages and disadvantages. In my opinion, living in a small town is better in terms of
saving money. If you live in a small town, you can find cheap accomodation and [,] cheap shops and cheap transportation. In
contrast, living in a big city will be more expensive. Furthermore, living in a small town which usually has less population than big
cities will save time because it will be uncrowded and traveling from [a, one] plase to plase [another] will be much easier. On the
other hand, living in a big city will be more enjoyable and you can find lots of entertainment, more jobs, more shops … etc. To
conclude, I prefer small towns much more than big cities
Content, Rhetorical Structure and Organisation:
Very good ideas and examples to support your preference
The sentences are well connected thanks to appropriate use of transition words and phrases
Logical progress of ideas throughout the paragraph
Language Conventions:
10
I prefer to live in [the] city for many resons including availability of shops and services. In fact, most people like to live there
because it [is] more convenient and easier. As a student, I also need to live in the city because the university is here in Jeddah. I
know small towns can be good for some people like [those] who look for pease of mind and no congestion. In conclusion, city is still
better for me and I will prefer to stay here.
Content, Rhetorical Structure and Organisation:
Short but informative text. The ideas are clear enough and the argument seems focused enough.
Could be improved in terms of rhetoric if more transition words were used.
Language Conventions:
11
[indentation] Here is why I prefere to live in big city. First of all, there are all the services around you. Secondly, jobs are better and
well paid here. In fact, that was the reason why my father moved from his old town to the city. Third, the city unlike the town has
many people so there can be better chances for businesses and [,] shops […] etc. Finally, city has good [means of] transportation like
airports and now trains. City in most ways is the best place.
Content, Rhetorical Structure and Organisation:
Short but informative text. The ideas are clear enough and the argument seems focused enough.
Could be improved in terms of rhetoric if more transition words were used.
Error tallies (language conventions) for texts 9 and 10 above, in order:
Text 9: Spelling 4; Grammar/Vocabulary 2; Punctuation/Capitalization 1; Run-on sentences: none; Word count: 127; Overall score: 5/6 (Very Good).
Text 10: Spelling 2; Grammar/Vocabulary 3; Punctuation/Capitalization none; Run-on sentences: none; Word count: 83; Overall score: 4/6 (Good).
Language Conventions:
(TEACHER FEEDBACK ONLY: CONTROL GROUP)
1
[indentation] I think that this question [is] defcult to answer it because there is [are] many advantages and disadvantages in both
[the city and the town]. but what I prefer is to liv[e]ing in [a] big city and I will write reasons in points: 1) living in [a, the] big city that
mean[s] you will find more facility[ies], 2) living in big city that mean [it also means: AVOID UNNECESSARY REPETITION] there is [are]
more people will [to] meet them [,] so [you can] improve your self, 3) we will bulid more social life with others, 4) we will increase
[enrich] oure culture because to many people will visite the pig city[,] for example [to] learn other [another] language, 5) you can
make other [more] besniss and make more profite because there is much [are more] people, 6) finally[,] all people in the same
country our [or?] outside of country there consinterate or focuse in the big city even in (economy, social, culture or education) [NOT
CLEAR].
Content, Rhetorical Structure and Organisation:
Detailed explanations and reasoning why to choose living in a big city
Although ideas were arranged in points, I would expect more of a paragraph format with transition words
Good organisation moving from important reasons to less important ones, some are repeated though
Language Conventions:
2
[indentation] I think that living in a small town is better than living in a big city according to [because of] the first important thing
which is the social relationship between the citizen[s], that shows [is evident] with each body [everbody] in their occasions and [run-
on sentence] easy life [relaxed lifestyle]with out any complex requirements. The purity [of] air and less [low levels of] pollution in the
small town are the main factor[s] to prevent diseases and have a good public health. On the other hand [,] the big city has
diconnected relations, pollutions [high levels of pollution], hard working and many requirements. I have a good experience in
working and living in a city but now I live in a small town and nothing present the morning and green fields and tree and easy life
[relaxed, comfortable lifestyle] in th town.
Content, Rhetorical Structure and Organisation:
A good argument in favour of living in a small town, reasonably supported by reasons and examples.
Some effort to incorporate transition words and phrases but could be improved
Mainly consistent in terms of organisation but this could have been improved by omitting repeated ideas.
Type of Error (Recurrence):
Spelling: 1
Grammar/Vocabulary: 1
Punctuation/Capitalization: 3
Run-on Sentences: None
Word-count: 84
Overall Score: 4/6 (Good)
Type of Error (Recurrence):
Spelling: 7
Grammar/Vocabulary: 25
Punctuation/Capitalization: 14
Run-on Sentences: 1
Word-count: 134
Overall Score: 4/6 (Good)
Language Conventions:
3
Living in a big city versus small town
Each one has it’s [its: USE POSSESSIVE PRONOUN NOT SHORTENED PRO+AUX] own benefits. In a big city, you’ll find all the services
and supplies available. Supermarkets, airports, zoo [… etc]. In a small town you’ll find some services like groceries but still you’ll
need things from the city over and over. In [On] the other hand, living in the town can be very quite and easy, may be that’s why
some inventors prefer towns to focus on their projects. In my opinion [,] living in a town near a big city can offer you the advantages
of the two places. Quite living in the district and the neibourhood [neighborhood] and the other services ain’t [are not] so far either.
Content, Rhetorical Structure and Organisation:
Good reasons provided to support your claim. Use full versions of auxiliaries e.g. it is not it’s
Sentences are well connected due to proper use of transition words
The logical development is convincing enough.
Language Conventions:
4
[indentation] Living in a big city is always my preferred choice because there I can accompolish my ambisious [ambitions] and
dreams. I believe we discuss a contraversial subject and people thoughts and needs is differ [are different]. but in my opinion[,] will
offer higher advantages than a small town. In a big city [,] you can find various job oppurtunities and all government services [run-on
sentence] also, you can directly contact with decision makers. But what I think the most important (for me) is [that] my whole family
is living there. Considering my ambisious [ambitions], I always dreamed to be effectively enfluence [influential] in my country[‘s]
improvement and development [run-on sentence] and ofcourse it will start from a jammed [?] big city.
Content, Rhetorical Structure and Organisation:
Many good ideas and reasons but the scope is much wider than to be covered in this limited space
Despite logical arrangement of ideas, this could have more improved if connecting/transition words were also used
Language Conventions:
Type of Error (Recurrence):
Spelling: 1
Grammar/Vocabulary: 10
Punctuation/Capitalization: 2
Run-on Sentences: None
Word-count: 122
Overall Score: 4/6 (Good)
Type of Error (Recurrence):
Spelling: 1
Grammar/Vocabulary: 4
Punctuation/Capitalization: 5
Run-on Sentences: None
Word-count: 124
Overall Score: 5/6 (Very Good)
Type of Error (Recurrence):
Spelling: 1
Grammar/Vocabulary: 5
Punctuation/Capitalization: 13
Run-on Sentences: None
Word-count: 110
Overall Score: 4/6 (Good)
5
[indentation] [I prefer] Living in a big city, because [there are] more places and buildings. A big city is better than [a] small town
whitch [has] shopping [centers, facilities] and many company[ies], [run-on sentence] [the] downtown is center of city, it [which] has
many real point[s] [of interest] and a center which [it] work [is open] for 24 hourse. The future will come to a big city and a lot of
people think that reasons for many country [I DON’T UNDERSTAND]. It is life to living for future and after that the education in a big
city [is] better than in a small town. For real face it the problem [I DON’T UNDERSTAND]. the big city has a tower and [a] center
shopping [shopping center], [run-on sentence] it is important for a people whoes come from out of the city , the[y] found it [look
for] a map for [of] the city and search it some point to have a fun or work. Airport must be face [cope with] the future to reseve a lot
of people for [the] develop[ment] [of] the city. It will be that.
Content, Rhetorical Structure and Organisation:
Confused ideas and organisation. Cannot find any sense of logical development
Run-on sentences and absence of transition words mean weak rhetorical structure
Only shopping facilities and better education can be valid arguments but excessive grammatical mistakes and repetition make
reading this text challenging
Language Conventions:
6
Everything in all over the world has positive and negative aspects. Generally, living in a small city is a simple life [.] for example,
the people know each other. It is easy to move from one place to another. In addition, the goods prices are lower than [their
counterpart in the] city goods prices. Moreover, the pollution percentage [level] in the air is less than the air [that] in city. In [On] the
other hand, there are many benefits to live in a big city such as, people can find many choices for their needs. In addition, people
believe that [the] big city [is] easier than small city in transportation for example, they can find airports, metro systems … etc.
Additionally, the live [life] quality is very high in big cities combired [compared] with it in small cities. Actually, I prefer to live in a
small city near to a big city to spend my free time in it.
Content, Rhetorical Structure and Organisation:
Well-written and fluent comparative text that shows good knowledge about the subject discussed and logical development
throughout the paragraph
Very good examples provided to support ideas and well organised, well connected sentences
Language Conventions:
7
[indentation] The living in a big city [is] better than from [in a] small town because the people feel with confortabnle and safty [safe]
[run-on] and [they] find in the big city big buildings and also find pridges and find a range for street but the big city is very crowdy
[crowded] and a lot of cars. The small town [is] distinguished [because of] quite living and don’t find crowdly [no congestion]. To me
I see in the living [I prefer to live in]the small town [which] is good and I will support my idea by this short story. My friend was going
to work suddenly the cars stoped and wating a lot of frome 30 minut and felt distressed and returned home and don’t bring his work
in that day.
Content, Rhetorical Structure and Organisation:
Confused ideas and random choice of sentences.
Absence of transition words result in weak rhetorical construction.
However, some plausible reasons mentioned to support the claim. The example story is less convincing.
Type of Error (Recurrence):
Spelling: 4
Grammar/Vocabulary: 22
Punctuation/Capitalization: 2
Run-on Sentences: None
Word-count: 150
Overall Score: 2/6 (Poor)
Type of Error (Recurrence):
Spelling: 1
Grammar/Vocabulary: 10
Punctuation/Capitalization: 1
Run-on Sentences: None
Word-count: 147
Overall Score: 5/6 (Very Good)
Language Conventions:
8
[identation] The city is better than [a] small town in my opinion for many reaosns. Most importantly, it has everything you might
need. Things like hospitals, schools, shopping malls … etc. I also like entertainment in the city like going to the cafes and resturants[,]
and also going to arcade games. There are problems like traffic jams and smoke but if you life [live] in new areas there are usually
less people but [they are] still close to everything. Another important thing is transportation as there are thousands [of] taxis in here
but [one, you] rarely [find(s)]any taxis in small towns. Finally, I think more and more people will choose to live in cities.
Content, Rhetorical Structure and Organisation:
Clear argument and explanations. Well-connected sentences and ideas. Logical transition of ideas.
Language Conventions:
9
[indentation]Many people like the city and many like the town. The city has better life style and more opportnities. There are also
many hospitals and schools in the city. Towns have less service[s] but there [they] are there. The town is not crowded and the air is
clene this is why old people like it. City offer[s] entertainment which make[s] young people like it. In my opinion, the city is better for
young people and the town is better for old people.
Content, Rhetorical Structure and Organisation:
Good comparative argument. Sentences are not well connected due to the absence of transition words
The logical development is acceptable but there are many ideas in such a short text.
Language Conventions:
Type of Error (Recurrence):
Spelling: 6
Grammar/Vocabulary: 13
Punctuation/Capitalization: 1
Run-on Sentences: 1
Word-count: 114
Overall Score: 2/6 (Poor)
Type of Error (Recurrence):
Spelling: 1
Grammar/Vocabulary: 6
Punctuation/Capitalization: 13
Run-on Sentences: None
Word-count: 109
Overall Score: 5/6 (Very Good)
Type of Error (Recurrence):
Spelling: 2
Grammar/Vocabulary: 4
Punctuation/Capitalization: 1
Run-on Sentences: None
Word-count: 81
Overall Score: 4/6 (Good)
10
[Indentation] In my openion [,] city life is the most best [better] compared to countreside’s life. The serfices people need are all
there [,] for example hospitals, schols, shops and etc. However, the town is better in term of quite [peace and quiet], enviroment
and safe neighbours [neighbourhood]. I want to live in the city until I retaire then I will move to [a] big house in the town. I know
must of the people I know want to do that as well when they finish their work in the town.
Content, Rhetorical Structure and Organisation:
Some comparative genre. Sentences are not connected due to the absence of transition words
The logical development is acceptable but there are many ideas in such a short text.
Language Conventions:
11
Some people prefere the city life including me but others like the town more. I think the city is very interesting and offer[s]
many[,] many advantages for young people. Also, services are avilable every where like hospitals, gyms, malles, coffee bars,
resturants, … etc. There might be on the other side problem[s] like safety, drugs, crime. Other problems include pulotion and smoke.
Town life is healthy but boring. In conclusion, I believe I’ll live in the city because it is better in general and because I want to find a
better job.
Content, Rhetorical Structure and Organisation:
Good argument in favour of city life but not real comparison made between town and city. The transition of ideas is acceptable so is
the argument. Two transition words used but this could have been improved.
Language Conventions:
12
[indentation] Most people like city life for many resons. If we look at the town we can say it is quiet, green and has better safety
[safe] but if you need to find a good job and good services you will choose the city. I for example came from a small village to the
Jeddah because I want to make a change to my life. The main reason is that there is not any university in my village. Problem[s] with
the city are also a lot especially expensive life, crimes and pullution.
Content, Rhetorical Structure and Organisation:
Good comparative argument. Sentences are not well connected due to the absence of transition words
The logical development is acceptable but there are many ideas in such a short text.
Type of Error (Recurrence):
Spelling: 7
Grammar/Vocabulary: 4
Punctuation/Capitalization: 5
Run-on Sentences: None
Word-count: 83
Overall Score: 3/6 (Acceptable)
Type of Error (Recurrence):
Spelling: 6
Grammar/Vocabulary: 2
Punctuation/Capitalization: 1
Run-on Sentences: None
Word-count: 92
Overall Score: 4/6 (Good)
Language Conventions:
13
[Indentation] Many people life[live] in the city because cityes are [the] best place. Comparing [ed] to town [,] you can find every
thing you’re looking for nearby. Town[s] are for farmers and people who can’t pay much money in the city. Moreover, towns are
quieter and some people think the air is better because there is [are] not many cars and car jams. Towns usually don’t have many big
markets and shops. If you stay in one of the town[s] for long time you will be alone because you[r] friends will go to big cityes.
Content, Rhetorical Structure and Organisation:
Rate of local errors made it difficult to follow the intended meaning. No conclusion and no transition words. 3rd sentence needs further explanation.
Language Conventions:
14
[indentation] Some like the city and some like the towns. I actualy chose big city like jeddah and riyad because I live there. My family
and friend are there too. For many year[s] people in villeges and towns went [have left] to [the] city because they have all [what]
they need like service[s], hospitels and school[s]. [The] City have [has] big [wide] road[s] and many shops but can have many traffic
jams too. Towns are healthy but not many people like it [them] because they want good job[s]. Also good schools and [,] hospitels
and road[s].
Content, Rhetorical Structure and Organisation:
Very simple ideas, the argument is not clear enough and the conclusions is not clearly stated. Sentences 2 and 3 need further
clarification. Many spelling mistakes and wrong word-choice making it difficult to follow.
Language Conventions:
Type of Error (Recurrence):
Spelling: 2
Grammar/Vocabulary: 4
Punctuation/Capitalization: 1
Run-on Sentences: None
Word-count: 90
Overall Score: 4/6 (Good)
Type of Error (Recurrence):
Spelling: 3
Grammar/Vocabulary: 7
Punctuation/Capitalization: 2
Run-on Sentences: None
Word-count: 90
Overall Score: 3/6 (Acceptable)
Type of Error (Recurrence):
Spelling: 4
Grammar/Vocabulary: 16
Punctuation/Capitalization: 5
Run-on Sentences: None
Word-count: 86
Overall Score: 2/6 (Poor)
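Note on the error-recurrence tables: each table above simply counts, per script, how often each error category recurred, together with the word count and the overall band score. A minimal sketch of this bookkeeping is given below; the script labels and counts in the example are hypothetical placeholders rather than figures from the study, and the aggregation step is only an illustration of how category totals could be derived from such tables.

from collections import Counter

# Hypothetical per-script tallies mirroring the table layout above
# (script labels and numbers are illustrative only).
error_tallies = {
    "script_A": {"Spelling": 3, "Grammar/Vocabulary": 7, "Punctuation/Capitalization": 2, "Run-on Sentences": 0},
    "script_B": {"Spelling": 4, "Grammar/Vocabulary": 16, "Punctuation/Capitalization": 5, "Run-on Sentences": 0},
}

# Aggregate recurrence per error category across scripts.
totals = Counter()
for counts in error_tallies.values():
    totals.update(counts)

print(dict(totals))  # e.g. {'Spelling': 7, 'Grammar/Vocabulary': 23, ...}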
Appendix (J) NVivo Interview Results
NAME: DIFFICULTIES IN USING PEER FEEDBACK
Reference 1 – 1.50% Coverage
they are at around my level in English so I don’t expect them to correct all language errors
Reference 2 – 1.34% Coverage
I also might find it difficult to comment to someone’s writing if I don’t know him
Reference 3 – 0.67% Coverage
my colleague may give me the wrong answer
Reference 1 – 1.14% Coverage
in most cases the level of your peers is about your own if not less
Reference 2 – 1.89% Coverage
You can however benefit from discussion and arguing with your peers but not always from the comment themselves.
Reference 1 – 1.95% Coverage
[students] still don’t possess the linguistic and intellectual knowledge required to produce high quality texts
Reference 1 – 1.20% Coverage
much less though from those who were not so good
Reference 1 – 1.71% Coverage
I think [peers’] comments won’t be as useful as the teacher’s
Reference 2 – 1.18% Coverage
there’s a really high probability of error
NAME: IMPROVED LEARNING AND SOCIAL SKILLS
Reference 1 – 1.40% Coverage
students now can express their ideas more freely and give their opinions to each other
Reference 1 – 1.14% Coverage
You can however benefit from discussing and arguing ideas with your peers
Reference 2 – 1.67% Coverage
I really developed this skill of defending and arguing my ideas in a scientific and systematic way
Reference 1 – 2.67% Coverage
I even had the chance to discuss the feedback with my colleagues more freely and openly than it’s usually possible when the
teachers are only in charge.
Reference 1 – 1.73% Coverage
not depending on the teacher all the time and working with classmates
Reference 2 – 1.23% Coverage
developing critical thinking and making arguments
Reference 3 – 1.53% Coverage
some grammatical rules, new vocabulary … etc have developed
Reference 4 – 1.15% Coverage
I now can be an active member of the classroom
Reference 5 – 1.45% Coverage
The time was enough and we have many things to discuss.
Reference 1 – 2.04% Coverage
My writing has improved significantly especially in terms of how to arrange ideas, debate and follow logic
Reference 2 – 4.42% Coverage
I now have better communication skills, the ability to work with others in a group and the ability to provide feedback as well as to
decide on whatever changes are needed.
Reference 1 – 6.38% Coverage
how to deal with others in the classroom and how be an active part of a group. I also guess that some other skills including
defending my own ideas and responding to others’ drafts, skills I will certainly need in the future.
NAME: PEER FEEDBACK PROCEDURES
Reference 1 – 1.73% Coverage
Sure, I have learned a lot from them, both the ones in the textbook and the ones [the researcher] gave us.
Reference 2 – 1.55% Coverage
I think that I received more though because I had four or five forms back but I only filled two
Reference 3 – 1.36% Coverage
I used the form you gave us [checklist]. I check the points as I read their texts.
Reference 4 – 1.67% Coverage
I always tend to be positive and recognise their efforts, I think they received my comments very well.
Reference 5 – 2.56% Coverage
Yes, in fact it helps me navigate though my colleagues’ texts much easier. I also think their feedback regarding my paper is better
when using the checklist.
Reference 6 – 0.70% Coverage
I try to explain what I tried to do to them
Reference 1 – 6.14% Coverage
It [the checklist] draws your attention to issues that were not readily available to me when I start discussing my friend’s paper. It
makes you take care of various aspects of your writing as well as your partner’s papers, it makes your revision sort of comprehensive
and systematic…. Also it makes the question you ask your peers more reasonable and specific.
Reference 1 – 3.13% Coverage
If the comments were written by students who are better than me then yes, I will treat them equally as those of the teacher’s
Reference 2 – 4.91% Coverage
the point of having two students working together on a writing project is to improve our writing skills. I don’t look at it as something
embarrassing as much as something we all can benefit from.
Reference 1 – 3.07% Coverage
We exchanged our essays to comment in each other’s writing and we sat together in many instances to discuss the feedback
Reference 2 – 2.63% Coverage
I wrote two pieces that were commented on by my colleagues and I responded to about four writing pieces
Reference 3 – 1.20% Coverage
I used the checklist you gave and the textbook
Reference 4 – 5.80% Coverage
I used the checklist as a guideline but I didn’t follow every point it mentions. I tried to focus on my colleague’s paper and give him
useful comments that I can defend and explain later on more than on following instruction.
Reference 5 – 6.13% Coverage
R: Did you discuss the comments with your classmates?
S5: Yes, especially when I need clarification or if I have an objection to a comment. I also sometimes had to explain an idea to my
colleague or make some changes so it becomes clearer.
Reference 1 – 2.19% Coverage
I always tend to defend my ideas and how I organised them when I sit with them
Reference 2 – 5.12% Coverage
It was very useful because it makes you give specific responses to the drafts. It also draws your attention to the important issues we
need to look at when doing the feedback session
NAME: POSITIVE ATTITUDES TOWARDS PEER FEEDBACK
Reference 1 – 0.52% Coverage
I think it was a good experience
Reference 2 – 1.29% Coverage
I have a more important role in the classroom than just attending and listening
Reference 3 – 0.93% Coverage
comments I received from my colleagues were really useful
Reference 4 – 0.80% Coverage
my friends seem to be better aware of my mistakes
Reference 5 – 1.09% Coverage
if the idea is received from my colleague, it can be more effective
Reference 6 – 1.63% Coverage
students have more time per paper than a teacher so they can write longer and more detailed comments
Reference 1 – 0.94% Coverage
it was a good concept using a different way of learning
Reference 1 – 0.91% Coverage
they might be able to spend more time with the group
Reference 2 – 1.32% Coverage
they give more opportunities for students to discuss their writing problems
Reference 1 – 0.80% Coverage
a new and interesting experience
Reference 2 – 1.05% Coverage
can be very useful and helpful to students
Reference 3 – 1.88% Coverage
I benefited a lot from students whose linguistic level was better than mine
Reference 4 – 2.18% Coverage
Good students have better ideas and are well-informed about the subject being discussed
Reference 1 – 0.92% Coverage
it was a new and exciting experience
Reference 2 – 3.81% Coverage
I very much liked the idea that I can now realise how other students perceive my writing. I mean if they understand the meaning I
intended to convey.
Reference 3 – 1.74% Coverage
it’s an unconventional way of learning that I found very interesting
Reference 4 – 2.12% Coverage
the classes become more exciting to me than just listening to what the teacher says
Reference 1 – 4.08% Coverage
students can provide more feedback and more importantly you can discuss the feedback with them which is not always possible
with busy teachers.
Reference 2 – 3.85% Coverage
However, some students might have better options and alternatives and they can explain that to you thus their comments are very
valuable.
NAME: RECOMMENDATIONS FOR IMPROVEMENT
Reference 1 – 2.41% Coverage
students need to be trained more to use this technique and may be the selection of group members should be left to students
themselves instead
Reference 2 – 3.26% Coverage
Another issue that that we students tend to use Arabic in our conversations even in English classes, it would be much better to
encourage students to speak English when they work in groups.
Reference 1 – 1.07% Coverage
teachers still need to train students to use them effectively
Reference 1 – 6.17% Coverage
I don’t think that a teacher should leave it all to students to decide what to do. Such a class will surely result in many problems as
students will carry on the wrong direction and until the teacher becomes aware of it, it then becomes too late.
Reference 1 – 2.73% Coverage
if these comments were carefully written they could be as effective as teachers’ if not even more
NAME: APPROVAL OF TEACHER’S COMMENTS
Reference 1 – 0.67% Coverage
The teacher is better and knows much more
Reference 1 – 1.09% Coverage
the comments provided by the teacher are usually more acceptable
Reference 1 – 1.70% Coverage
the teacher knows better because students can make errors themselves
Reference 1 – 4.67% Coverage
Comments by teachers on the other hand should be much more reliable as the teacher is more experienced to do such tasks
efficiently. The teacher is by far the best.
Reference 2 – 1.27% Coverage
teachers are likely to give accurate comments
NAME: DISAPPROVAL OF TEACHER’S COMMENTS
Reference 1 – 1.06% Coverage
it might be difficult to give detailed feedback to every student
Reference 2 – 3.00% Coverage
S1: Yes, in most cases they are right but when I don’t agree with something I try to explain what I tried to do to them.
R: Can you do the same with your teacher?
S1: … I guess not
Reference 1 – 2.87% Coverage
Teachers provide students with comments, extensive comments sometimes, but students still don’t always understand them or how
to respond to them in the right way.
NAME: TYPE OF PEER FEEDBACK COMMENTS
Reference 1 – 2.35% Coverage
Most of the times about grammar but sometimes they write comments about my topic sentence or why didn’t I use a specific word
instead of another
Reference 2 – 1.84% Coverage
Most of the mistakes were grammatical mistakes like misspelled words, wrong subject-verb [agreement], punctuation
Reference 1 – 0.60% Coverage
Mostly regarding grammatical errors
Reference 2 – 1.97% Coverage
most students are very much concerned about their linguistic performance and therefore this will be their main worry
Reference 1 – 3.46% Coverage
Their comments on my writing are usually very good and they gave me some new ideas and they also helped me a lot shape my
writing style.
Reference 2 – 2.13% Coverage
how for example to start a paragraph and what to include different ideas in the essay
Reference 1 – 1.00% Coverage
Most of the comments were about grammar
Reference 2 – 1.46% Coverage
there were some few comments about ideas and organisation
Reference 3 – 1.74% Coverage
most of the general comments were encouraging and positive in nature
NAME: UNDESIRED RESULTS IN PEER FEEDBACK SESSIONS
Reference 1 – 1.73% Coverage
I don’t feel that I have improved my English in general and my writing ability to a satisfactory level
Reference 1 – 0.76% Coverage
they still cannot explain everything for us
Reference 1 – 2.51% Coverage
The problem is that what makes students suspect that something wrong is going on in the first place?
Reference 2 – 2.23% Coverage
A student who isn’t good will certainly give information that is always not very reliable
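The excerpts above are grouped by NVivo node (the NAME headings), and each reference line reports the percentage of its source interview covered by the coded passage. A minimal sketch of how such node/reference/coverage output could be re-tabulated is shown below; the node names and the coverage figures used are copied from references above, while the grouping code itself is only illustrative and not part of the NVivo analysis.

from collections import defaultdict

# (node name, % coverage) pairs; values shown are taken from references above,
# and the study's remaining references would be added in the same way.
references = [
    ("DIFFICULTIES IN USING PEER FEEDBACK", 1.50),
    ("DIFFICULTIES IN USING PEER FEEDBACK", 1.34),
    ("POSITIVE ATTITUDES TOWARDS PEER FEEDBACK", 0.52),
]

summary = defaultdict(lambda: {"references": 0, "coverage": 0.0})
for node, coverage in references:
    summary[node]["references"] += 1
    summary[node]["coverage"] += coverage

for node, stats in summary.items():
    print(f"{node}: {stats['references']} reference(s), {stats['coverage']:.2f}% total coverage")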
Appendix (K) The Interview Consent Form
Research Consent Form
Have you read the information sheet? Yes/No
Have you had the opportunity to ask questions and discuss the study? Yes/No
Have you received satisfactory answers to all your questions? Yes/No
Have you received enough information about the study? Yes/No
Who have you spoken to? _____________
Do you understand that you are free to withdraw from the study:
· At any time
Yes/No
· Without having to give a reason for withdrawing?
Yes/No
Do you agree to take part in this study? Yes/No
Signed ____________________ Date _____________
Name (in block letters) ________________________
(From Kent, 2000: 84)
Appendix (L) Topics of the Entry and Exit Tests
The Entry Test:
Write one paragraph explaining your reasons for choosing a specific university over another. You
might also consider choosing a university home or abroad for that matter. Your paragraph
should be about 150-word long. (20 minutes)
The Exit Test:
In about 100 – 150 words, write a short paragraph discussing which is better: living in a small
town or in a big city. You need to support your argument with proper examples, reasons and
evidence. Please do not exceed 30 minutes maximum to complete your text.
Appendix (M) Clause Complexity Analysis
Analysis of the exit test follows the systemic dimensions known as ‘taxis’ and ‘logico-semantic
system’. In simple terms, as Halliday (1994) explains, the notion of clause complexity allows us
to fully account for the functional organization of sentences.
Clause: Part of a sentence that contains a subject and a verb, and is joined to the rest of the
sentence by a conjunction.
Elaboration: The elaborating clause restates, comments on, exemplifies, or specifies in greater detail. The symbol (=) is used to signal Elaboration.
Extension: The extending clause adds something new, provides an exception, or offers an alternative. The symbol (+) is used to signal Extension.
Enhancement: The enhancing clause provides circumstantial features of time, place, cause,
condition … etc. The symbol (x) is used to signal Enhancement.
Locution*: Quoted or reported speech. The symbol (“) is used to signal Locution.
Idea*: Quoted or reported thought. The symbol (‘) is used to signal Idea.
Parataxis**: relation between two elements of equal status.
Hypotaxis***: relation between two elements of unequal status.
* Absence of ‘locution’ and ‘idea’ relations in this analysis could possibly be explained by the nature of the writing
task which is more descriptive than personal or narrative.
**Arabic Numerals are used to signal parataxis.
***Greek letters are used to signal hypotaxis. The symbol (α) is used for the main clause and from (β) onward for
dependent clause(s).
From: Halliday, M. A. K. (1994) An introduction to functional grammar, 2nd edition. London: Arnold.
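To make the notation above concrete, the sketch below records one clause complex from the exit-test data using the categories just defined: the taxis dimension (paratactic, numbered 1, 2 ...; hypotactic, lettered α, β ...) together with a logico-semantic type and its symbol. The dataclass and its printing convention are only illustrative bookkeeping, not part of Halliday's framework or of the analysis procedure; the example pair is taken from PF2 below.

from dataclasses import dataclass

# Symbols as defined in this appendix.
SYMBOLS = {"elaboration": "=", "extension": "+", "enhancement": "x",
           "locution": '"', "idea": "'"}

@dataclass
class ClauseComplex:
    primary: str     # initiating (paratactic) or main (hypotactic) clause
    secondary: str   # elaborating, extending, enhancing or projected clause
    taxis: str       # "paratactic" or "hypotactic"
    relation: str    # one of the logico-semantic types in SYMBOLS

    def notation(self) -> str:
        symbol = SYMBOLS[self.relation]
        if self.taxis == "paratactic":
            return f"{self.primary} [1]; {symbol}2: {self.secondary}"
        return f"{self.primary} [α]; {symbol}β: {self.secondary}"

# Hypotactic enhancement identified in PF2:
example = ClauseComplex(primary="Plants don't grow",
                        secondary="because the rain is very little",
                        taxis="hypotactic",
                        relation="enhancement")
print(example.notation())  # Plants don't grow [α]; xβ: because the rain is very little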
EXPERIMENT GROUP: Peer Feedback and Teacher Feedback
PF1
I like living in big cities. This is because lifestyle in big cities is different from lifestyle in small
towns. My preference for the big cities [is] due to the more advantages they have [offer] than
small towns [do]. One of these advantages is plenty [the abundance] of service[s] that produced
in these cities, which include transportations, telecommunications, health facilities, educational
institutions, […] etc. With regard to how to spend leisure time, I think big cities give you a lot of
options in that field.
EXPANSION
  Elaboration - paratactic: The services in big cities 1 / Which include transportation =2; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: n/a
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
PF2
I prefere living in small towns. First of all, they are very peaceful. Second, the people usually
know each other well and they have connections with each other. The weather [air] is better
and healthier. I also like the green space although most of the year plants do not grow because
the rain is very little. In addition, most services are available including shopping malls, hospitals,
schools and others. The city has its advantages but I still prefer a small town.
EXPANSION
  Elaboration - paratactic: Services are available 1 / Including malls, hospitals =2; hypotactic: n/a
  Extension - paratactic: People know each other 1 / And they have connections +2; hypotactic: n/a
  Enhancement - paratactic: The city has advantages 1 / But I still prefer small towns x2; hypotactic: Plants don’t grow α / Because the rain is little xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
PF3
I never had the chance to try and expirance the living in a small town. However, I could
imagine how it is going to be. For me as a person, I would like to have all the services around
me. That will not be avilable in a small town. Furthermore, you won’t be able to have a good job
in the small town. On the other hand, the big city got every thing you need such as sirvces, jobs,
health care and schools. As a result, I would brefere the big city to live in.
EXPANSION
  Elaboration - paratactic: City got everything 1 / Such as services, jobs =2; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: n/a
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
PF4
[indentation] Small town is the best please to live in. That [is] because you obtien healthy
environment, more secure [security] and you don’t need to use transportation alot. In this easy
[essay] I will discusse why is living in small town is good choise. In my opinion [,] living in [a]
small town is the good oprtonity to healthy air because [in] the small twon usualy there [are] no
factories or crowed[s] of cars in it. In addition, the small town usualy [has] all the services is
close to you. Therefore[,] you don’t have to use the transportation alot. Moreover, the small
town is more secuor comper [compared] to big twon. For example, Hull twon is more secuor
than London. In conclusion, small twon is the great please to live for many reason[s] [:] healthy
environment, more secuor, and all the servies are close to you any time [anytime] without using
the transportation.
EXPANSION
  Elaboration - paratactic: Small town is better for many reasons 1 / Healthy [sic] environment… =2; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: Small town has healthy air α / Because there are no factories or crowed [sic] of cars xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
PF5
[indentation] Although we know that a big city is more [much] better than a small town
[missing independent clause]. Even though a small town is more safety [safer] than [a] big city
because you will not find the foriegn people who have nothing to do. Firstly, a lot of people
prefer to live in a big city for many reasons [run-on] one of them [is that] the transportation is
faster than [in] a small town because in the big city alot of people working. So because of this
[hence] faster. Even though [On the other hand] my knowledge about [experience of] [a] small
city is feeling more comfortable because [of] the freedom. Secondly, the big city is more
complex for new people. Finally, I prefer that the big city [as it] is much better than [a] small
town.
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: Transportation is faster is small cities α / Because in the big city a lot of people working [sic] xβ; The small city feels more comfortable α / Because of freedom xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
PF6
Nowadays the living is more related to [governed by] economic[s] and politic[s]. It could be
considered that life currently is more difficult because both town and city includ[e]ing positive
and negative points. This essay will discuss the advantages and disadvantages concern[ing] town
and city. With regard to the technology, it is really different between them. It might be aprove
[believed?] many facilities [exist?] in the city such as restaurants, universities and health clubs.
In contrast, town and villages is [are] more safety [safer] and more comfurtable. Overall, in my
opinion I suggest [recommend] people to live in city and I think in the future people will move
on to cities.
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: Life is more difficult α / Because both town and city includ [sic] positive and negative points xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
PF7
There are many reasons that makes me live in the big city. First of all, it is more comfortable
as most shops are around you. Secondly, jobs are more common there. Also, you can start your
own business easily. There are many people there. Finally, schools and hospitals are available
more than [in] small towns. In short, big cities are [a] better place to be.
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: It is more comfortable α / As most shops are around you xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
PF8
With the improvement in all over the world, it became more difficult to live alone because
people need to be on touch with each other. In addition, they like to find [a] place where all
their needs [are] around them such as hospitals, shopping centres and other important ways
[facilities]. I prefer to live in the big city rather than [a] small city [one: avoid repitition] for three
reasons: firstly, I like to be as a part of [a] huge society. Next, all people’s needs [can be found]
in a big city. Finally, easier [better] transportation ways compared with [that of a] small city.
EXPANSION
  Elaboration - paratactic: They like to find a place where all their needs [sic] around them 1 / Which include hospitals, shopping centres =2; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: I like to live in the big city 1 / For I like to be a part of a huge society x2; hypotactic: It became more difficult … α / Because people need to … xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
PF9
Living in a small town or in a big city has advantages and disadvantages. In my opinion, living in a
small town is better in terms of saving money. If you live in a small town, you can find cheap
accomodation and [,] cheap shops and cheap transportation. In contrast, living in a big city will
be more expensive. Furthermore, living in a small town which usually has less population than
big cities will save time because it will be uncrowded and traveling from [a, one] plase to plase
[another] will be much easier. On the other hand, living in a big city will be more enjoyable and
you can find lots of entertainment, more jobs, more shops … etc. To conclude, I prefer small
towns much more than big cities
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: Living in a small town will save time α / Because it will be uncrowded xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
PF10
I prefer to live in [the] city for many resons including availability of shops and services. In fact,
most people like to live there because it [is] more convenient and easier. As a student, I also
need to live in the city because the university is here in Jeddah. I know small towns can be good
for some people like [those] who look for pease of mind and no congestion. In conclusion, city is
still better for me and I will prefer to stay here.
EXPANSION
  Elaboration - paratactic: I prefer city for many reasons 1 / Including availability of shops =2; hypotactic: n/a
  Extension - paratactic: City is still better 1 / And I will prefer to stay here +2; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: Most people like to live there α / Because it is more convenient xβ; I need to live in the city α / Because the university is here xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
PF11
[indentation] Here is why I prefere to live in big city. First of all, there are all the services around
you. Secondly, jobs are better and well paid here. In fact, that was the reason why my father
moved from his old town to the city. Third, the city unlike the town has many people so there
can be better chances for businesses and [,] shops […] etc. Finally, city has good [means of]
transportation like airports and now trains. City in most ways is the best place.
EXPANSION
  Elaboration - paratactic: City has good transportation 1 / Like airports and trains =2; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: The city has many people 1 / So there can be better chances x2; hypotactic: n/a
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
CONTROL GROUP: Teacher Feedback Only
C1
[indentation] I think that this question [is] defcult to answer it because there is [are] many
advantages and disadvantages in both [the city and the town]. but what I prefer is to liv[e]ing in
[a] big city and I will write reasons in points: 1) living in [a, the] big city that mean[s] you will find
more facility[ies], 2) living in big city that mean [it also means: AVOID UNNECESSARY
REPITITION]there is [are] more people will [to] meet them [,] so [you can] improve your self, 3)
we will bulid more social life with others, 4) we will increase [enrich] oure culture because to
many people will visite the pig city[,] for example [to] learn other [another] language, 5) you can
make other [more] besniss and make more profite because there is much [are more] people, 6)
finally[,] all people in the same country our [or?] outside of country there consinterate or focuse
in the big city even in (economy, social, culture or education) [NOT CLEAR].
EXPANSION
  Elaboration - paratactic: Many people visit big city 1 / For example to learn another language =2; hypotactic: n/a
  Extension - paratactic: What I prefer is city 1 / And I will write reasons +2; hypotactic: n/a
  Enhancement - paratactic: There are more people 1 / So you can improve yourself x2; hypotactic: The question is difficult to answer α / Because there are many advantages xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C2
[indentation] I think that living in a small town is better than living in a big city according to
[because of] the first important thing which is the social relationship between the citizen[s], that
shows [is evident] with each body [everbody] in their occasions and [run-on sentence] easy life
[relaxed lifestyle]with out any complex requirements. The purity [of] air and less [low levels of]
pollution in the small town are the main factor[s] to prevent diseases and have a good public
health. On the other hand [,] the big city has diconnected relations, pollutions [high levels of
pollution], hard working and many requirements. I have a good experience in working and living
in a city but now I live in a small town and nothing present the morning and green fields and
tree and easy life [relaxed, comfortable lifestyle] in th town.
EXPANSION
  Elaboration - paratactic: I have a good experience in big cities 1 / Now I live in a small town =2; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: Living in a small town is better α / Because of social relations xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C3
Living in a big city versus small town
Each one has it’s own benefits. In a big city, you’ll find all the services and supplies available.
Supermarkets, airports, zoo [… etc]. In a small town you’ll find some services like groceries but
still you’ll need things from the city over and over. In [On] the other hand, living in the town can
be very quite and easy, may be that’s why some inventors prefer towns to focus on their
projects. In my opinion [,] living in a town near a big city can offer you the advantages of the two
places. Quite living in the district and the neibourhood [neighborhood] and the other services
ain’t [are not] so far either.
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: You’ll find some services 1 / But you’ll need things from the city x2; hypotactic: Living in town is quite α / Maybe that’s why inventors prefer it xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C4
[indentation] Living in a big city is always my preferred choice because there I can accompolish
my ambisious [ambitions] and dreams. I believe we discuss a contraversial subject and people
thoughts and needs is differ [are different]. but in my opinion[,] will offer higher advantages
than a small town. In a big city [,] you can find various job oppurtunities and all government
services [run-on sentence] also, you can directly contact with decision makers. But what I think
the most important (for me) is [that] my whole family is living there. Considering my ambisious
[ambitions], I always dreamed to be effectively enfluence [influential] in my country[‘s]
improvement and development [run-on sentence] and ofcourse it will start from a jammed [?]
big city.
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: This is a controversial subject 1 / And people’s thoughts are different +2; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: Living in a big city is my choice α / Because I can accomplish ambitions xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C5
[indentation] [I prefer] Living in a big city, because [there are] more places and buildings. A big
city is better than [a] small town whitch [has] shopping [centers, facilities] and many
company[ies], [run-on sentence] [the] downtown is center of city, it [which] has many real
point[s] [of interest] and a center which [it] work [is open] for 24 hourse. The future will come to
a big city and a lot of people think that reasons for many country. It is life to living for future and
after that the education in a big city [is] better than in a small town. For real face it the problem
the big city has a tower and [a] center shopping [shopping center], [run-on sentence] it is
important for a people whoes come from out of the city , the[y] found it [look for] a map for [of]
the city and search it some point to have a fun or work. Airport must be face [cope with] the
future to reseve a lot of people for [the] develop[ment] [of] the city. It will be that.
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: It’s life in the future 1 / And better education +2; hypotactic: A big town is better α / Which has shopping centres +β
  Enhancement - paratactic: n/a; hypotactic: I prefer living in a big city α / Because there are more places xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C6
Everything in all over the world has positive and negative aspects. Generally, living in a small city is a simple life [.]
for example, the people know each other. It is easy to move from one place to another. In addition, the goods prices
are lower than [their counterpart in the] city goods prices. Moreover, the pollution percentage [level] in the air is less
than the air [that] in city. In [On] the other hand, there are many benefits to live in a big city such as, people can find
many choices for their needs. In addition, people believe that [the] big city [is] easier than small city in transportation
for example, they can find airports, metro systems … etc. Additionally, the live [life] quality is very high in big cities
combired [compared] with it in small cities. Actually, I prefer to live in a small city near to a big city to spend my free
time in it.
EXPANSION
  Elaboration - paratactic: There are benefits to live in a big city 1 / People can find many choices =2; The life in city is easier 1 / People can find airports … =2; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: n/a
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C7
[indentation] The living in a big city [is] better than from [in a] small town because the people feel with confortabnle
and safty [safe] [run-on] and [they] find in the big city big buildings and also find pridges and find a range for street
but the big city is very crowdy [crowded] and a lot of cars. The small town [is] distinguished [because of] quite living
and don’t find crowdly [no congestion]. To me I see in the living [I prefer to live in]the small town [which] is good and I
will support my idea by this short story. My friend was going to work suddenly the cars stoped and wating a lot of
frome 30 minut and felt distressed and returned home and don’t bring his work in that day.
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: Small town is good 1 / And I will support my idea +2; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: Living in a big city is better α / Because people feel comfortable xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C8
[identation] The city is better than [a] small town in my opinion for many reaosns. Most
importantly, it has everything you might need. Things like hospitals, schools, shopping malls …
etc. I also like entertainment in the city like going to the cafes and resturants[,] and also going to
arcade games. There are problems like traffic jams and smoke but if you life [live] in new areas
there are usually less people but [they are] still close to everything. Another important thing is
transportation as there are thousands [of] taxis in here but [one, you] rarely [find(s)]any taxis in
small towns. Finally, I think more and more people will choose to live in cities.
EXPANSION
  Elaboration - paratactic: I like entertainment in the city 1 / Like going to cafes and restaurants =2; hypotactic: n/a
  Extension - paratactic: There’re problems 1 / But there’re less people in new areas +2; There’re thousands of taxis in city 1 / But rarely any taxis in small towns +2; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: n/a
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C9
[indentation]Many people like the city and many like the town. The city has better life style and
more opportnities. There are also many hospitals and schools in the city. Towns have less
service[s] but there [they] are there. The town is not crowded and the air is clene this is why old
people like it. City offer[s] entertainment which make[s] young people like it. In my opinion, the
city is better for young people and the town is better for old people.
EXPANSION
  Elaboration - paratactic: Many people like the city 1 / And many like the town =2; hypotactic: n/a
  Extension - paratactic: Towns have less services 1 / But they are there +2; The town is not crowded 1 / And the air is clean +2; hypotactic: n/a
  Enhancement - paratactic: The town is not crowded 1 / This is why old people like it x2; City offers entertainment 1 / This is why young people like it x2; hypotactic: n/a
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C10
[Indentation] In my openion [,] city life is the most best [better] compared to countreside’s life.
The serfices people need are all there [,] for example hospitals, schols, shops and etc. However,
the town is better in term of quite [peace and quiet], enviroment and safe neighbours
[neighbourhood]. I want to live in the city until I retaire then I will move to [a] big house in the
town. I know must of the people I know want to do that as well when they finish their work in
the town.
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: n/a; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: n/a
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C11
Some people prefere the city life including me but others like the town more. I think the city is
very interesting and offer[s] many[,] many advantages for young people. Also, services are
avilable every where like hospitals, gyms, malles, coffee bars, resturants, … etc. There might be
on the other side problem[s] like safety, drugs, crime. Other problems include pulotion and
smoke. Town life is healthy but boring. In conclusion, I believe I’ll live in the city because it is
better in general and because I want to find a better job.
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: Some people prefer the city 1 / But others like the town more +2; The city is very interesting 1 / And it offers many advantages +2; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: I’ll live in the city α / Because it is better in general xβ / And because I want better job xγ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C12
[indentation] Most people like city life for many resons. If we look at the town we can say it is
quiet, green and has better safety [safe] but if you need to find a good job and good services you
will choose the city. I for example came from a small village to the Jeddah because I want to
make a change to my life. The main reason is that there is not any university in my village.
Problem[s] with the city are also a lot especially expensive life, crimes and pullution.
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: If we look at town 1 / We can say it’s quiet +2; Town is quiet and green 1 / But if you need good job choose city +2; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: I came from a small village α / Because I want to make a change xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C13
[Indentation] Many people life[live] in the city because cityes are [the] best place. Comparing
[ed] to town [,] you can find every thing you’re looking for nearby. Town[s] are for farmers and
people who can’t pay much money in the city. Moreover, towns are quieter and some people
think the air is better because there is [are] not many cars and car jams. Towns usually don’t
have many big markets and shops. If you stay in one of the town[s] for long time you will be
alone because you[r] friends will go to big cityes.
EXPANSION
  Elaboration - paratactic: n/a; hypotactic: n/a
  Extension - paratactic: Town are quieter 1 / And some people think the air is better +2; If you stay in town for a long time 1 / You’ll be alone +2; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: The air is better α / Because there are not many cars xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
C14
[indentation] Some like the city and some like the towns. I actualy chose big city like jeddah and
riyad because I live there. My family and friend are there too. For many year[s] people in villeges
and towns went [have left] to [the] city because they have all [what] they need like service[s],
hospitels and school[s]. [The] City have [has] big [wide] road[s] and many shops but can have
many traffic jams too. Towns are healthy but not many people like it [them] because they want
good job[s]. Also good schools and [,] hospitels and road[s].
EXPANSION
  Elaboration - paratactic: Some people like the city 1 / And some like the town =2; hypotactic: n/a
  Extension - paratactic: City has wide roads and shops 1 / But can have many traffic jams +2; hypotactic: n/a
  Enhancement - paratactic: n/a; hypotactic: I choose a city like jeddah α / Because I live there xβ; People left villages α / Because everything they need is in city xβ; Not many people like towns α / Because they want good jobs xβ
PROJECTION
  Locution - paratactic: n/a; hypotactic: n/a
  Idea - paratactic: n/a; hypotactic: n/a
GLOSSARY
EFL: EFL is English as a foreign language. It refers to learning a new language in a foreign
language context.
English majors: English majors are university undergraduate students who study general
English for four academic years as an area of specialisation. These students will be
awarded a BA upon the completion of the programme.
ESL: ESL refers to English as a Second Language, where English is taught or learned in the
environment where it is spoken.
L1: Refers to the native or first language of the subjects.
L2: L2 refers to the foreign or second language learned or taught.
Peer Feedback: Students’ comments on their fellow students’ work.
Teacher Written-Feedback: The most traditional and common type of feedback in
writing classes where teachers are the sole providers of feedback on students’ writing.
Formative Assessment: Feedback on writing drafts other than the final draft with the
purpose of developing and improving.
Summative Assessment: Feedback on the final version of a text with the purpose of
justifying a given score and/or for subsequent writing projects.
Objectivist Codes: This approach treats words as condensed representations of the facts.
Heuristic Codes: Coding qualitative data in a way that facilitates discovery and further investigation.
Inductive Logic: Moving from the specific to the general.
Deductive Logic: Begins with the general and moves to the specific.
Focused Written Corrective Feedback:
What a Replication Study Reveals
About Linguistic Target Mastery
Monika Ekiert, LaGuardia CC, City University of New York
Kristen di Gennaro, Pace University
The Debate
Truscott (1996). The case against grammar correction in
L2 writing classes.
Argued that corrective feedback regarding students’ grammar on writing
assignments was not only ineffective but potentially harmful.
Ferris (1999). The case for grammar correction in L2
writing classes: A response to Truscott.
Strongly objected to Truscott’s claims, stating that such claims are more
harmful to students than error correction.
The Debate
Corrective feedback (CF) remains the most contentious
issue in second language (L2) writing research.
Over 300 published papers have been produced on this
topic.
Research Perspectives
Writing researchers motivated by practical pedagogical
concerns
If WCF is not effective (Truscott, 1996, 2007), then why should
teachers dedicate so many hours providing WCF to their
students?
If WCF is effective (Ferris, 1999, 2004), what are its effects?
Which is the most effective type of WCF?
Researchers in the instructed SLA strand drawn to WCF
for its researchability (Ellis, 2010)
CF is an area where theory and practice interface
WCF can be observed, measured, and controlled
The “article” studies
Effectiveness of WCF on accuracy of article usage
(Bitchener & Knoch, 2010; Ellis et al., 2008; Sheen, 2007)
Why articles?
Unavoidable
Noted difficulty across proficiency levels
Rule-governed uses
Referential indefinite a for first mentions
Referential definite the for subsequent mentions
Teachable
Observable
Measurable
Findings from “Article” Studies
In all “article” studies, treatment groups outperformed
the control groups — evidence in favor of WCF.
Results suggest that WCF has a positive effect on
learners’ accuracy in using articles to express first
mention (a) and subsequent mention (the).
Results appear to contradict Truscott’s (2007) meta-
analysis finding that WCF has no effect, or a slightly
negative effect on learners’ accuracy.
Unresolved Problems: Linguistic Target
“Because there are occasions when the definite article is
required for referring to something for the first time … or for
referring to mass nouns, WCF was not provided on such
occasions” (Bitchener & Knoch, 2010, p. 202).
There are exceptions to the “rule” students were learning.
Ignored overuse
Further research needed
The Current Study
Aims to fill this gap, identified but underreported by previous researchers.
Accuracy, in our study, is defined in terms of how well an L2 user has
learned to use an article with regard to where it is and is not required.
Research Questions
1. What is the impact of WCF selectively focused on two
article functions on learners’ accuracy with articles in
other contexts?
2. Do these effects change depending on the type of WCF?
Method
Quasi-experimental design (intact classes)
pre-test → immediate post-test → delayed post-test
3 groups:
direct feedback group
direct feedback + metalinguistic explanation group
control group
2 types of instruments
Free production and controlled production
Design
Week 1: Pre-test
Weeks 3-5: Treatment (x 3)
Week 5: Immediate Post-test
Week 11: Delayed Post-test
Participants
63 ESL students enrolled in a college-based, academic ESL program
(low-intermediate to intermediate level)
3 intact writing classes (the same instructor)
Multiple L1s (Spanish, Chinese, Bengali, Tibetan, Nepali, Urdu, Hindi,
Greek, Creole, Korean, Polish, Arabic, Turkish, Burmese, Pashto)
Group 1: Direct error correction on articles (n=22)
Group 2: Direct error correction and metalinguistic
explanation on articles (n=23)
Group 3: Control; received no corrective feedback on
article errors (n=18)
Focus of WCF
First and subsequent mentions requiring a and the
Jane bought a ring and a necklace for her mother’s
birthday.
Her mother liked the ring, but hated the necklace.
Treatment for Group 1 DF
Direct written error correction:
– incorrect uses of “a” or “the” were corrected above each error
– “a” or “the” was inserted where it was omitted but required
Treatment for Group 2 DF + ME
Written meta-linguistic explanation
Students received the following explanation attached
to their piece of writing:
Use “a” when referring to something for the first time.
Use “the” when referring to something that has already been
mentioned.
Illustration of the rule taken from each writing task
A man and a woman went to a restaurant for dinner. The man
ordered a bottle of wine and the woman drank the wine.
Group 3 Control
Students received summary end notes on the overall
quality of their writing (Ferris, 2004, 2006)
No in-text corrections provided
No reference to article use made
Instruments
Designed to meet the following criteria:
written mode
narrative genre
connected discourse
Two types of written tests:
Picture description → free production
Missing word → controlled production
Instruments: Picture description
The accompanying narrative story was handed to the students with
instructions to read it silently.
The written stimulus (approximately 300-400 words) was then replaced with
a pictorial stimulus, and the students were asked to write the story
themselves.
Participants were given 30 minutes.
6 forms developed.
Instruments: Missing word
Each narrative (200-300 words long) was based on an adapted Aesop fable.
Items were embedded in sentences forming a coherent text.
No blanks were provided.
Participants were instructed to read the fable and insert missing words
wherever they deemed it necessary, a task resembling error correction.
Participants were given 20 minutes.
3 forms developed.
Procedures
Pre-test: picture description + missing word
Treatment:
Students received feedback on the picture description
narratives on three occasions (separated by a week)
Immediate post-test: picture description + missing word
Delayed post-test: picture description + missing word
Analysis of WCF
First and subsequent mentions of referents (treatment
focus)
first mentions requiring a (referential indefinites)
subsequent mentions requiring the (anaphoric definites)
BUT ALSO
First mention definites
Situational (Ex: Pass me the salt.)
Cataphoric (Ex: The shade on this lamp is really ugly.)
Nonreferential indefinites (Ex: John is a plumber.)
Idiomatic uses of indefinites and definites (Ex: in a few minutes; in the meantime)
Data Analysis
Omission and misuse were identified
All articles produced by students were coded by
article type (to identify article usage beyond the
treated articles)
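A minimal sketch of how such per-article coding by type might be represented; the category labels, field names, and example records below are illustrative assumptions, not the authors' actual coding scheme.

# Illustrative coding of article uses by type (assumed labels, not the study's own scheme).
from dataclasses import dataclass

ARTICLE_TYPES = {
    "REF_INDEF_FIRST":  "referential indefinite 'a' for first mention (treated)",
    "ANAPH_DEF_SUBSEQ": "anaphoric definite 'the' for subsequent mention (treated)",
    "DEF_SITUATIONAL":  "first-mention definite, situational (untreated)",
    "DEF_CATAPHORIC":   "first-mention definite, cataphoric (untreated)",
    "NONREF_INDEF":     "nonreferential indefinite (untreated)",
    "IDIOMATIC":        "idiomatic use of 'a'/'the' (untreated)",
}

@dataclass
class ArticleUse:
    """One coded article context from a student text."""
    phrase: str        # the phrase containing the (supplied or missing) article
    article_type: str  # one of the ARTICLE_TYPES keys
    required: bool     # an article is obligatory in this context
    supplied: bool     # the student actually produced an article here
    correct: bool      # the supplied article matches the required one

# Invented example records, covering omission, misuse, and overuse.
coded = [
    ArticleUse("bought a ring", "REF_INDEF_FIRST", True, True, True),
    ArticleUse("liked ___ ring", "ANAPH_DEF_SUBSEQ", True, False, False),      # omission
    ArticleUse("Pass me a salt", "DEF_SITUATIONAL", True, True, False),        # misuse
    ArticleUse("made a progress", "NONREF_INDEF", False, True, False),         # overuse before a mass noun
]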
Data Analysis
Accuracy
Calculated by means of obligatory occasion analysis (the total
number of correctly supplied articles divided by the total
number of obligatory occasions and expressed as proportions
of 1).
Overuse
Calculated by means of overuse occasion analysis (the total
number of overused articles divided by the total number of
obligatory occasions and expressed as proportions of 1).
Scores on both accuracy and overuse analyzed
with a series of mixed ANOVAs and post-hoc
tests.
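A minimal sketch of the two proportion measures and one possible way to run the mixed (group x time) ANOVA described above; the column names, the toy data, and the use of the pingouin package are assumptions for illustration, not the authors' actual analysis pipeline.

# Obligatory occasion analysis and overuse occasion analysis (toy numbers).
import numpy as np
import pandas as pd

def accuracy(correct_supplied: int, obligatory_occasions: int) -> float:
    """Correctly supplied articles / obligatory occasions (proportion of 1)."""
    return correct_supplied / obligatory_occasions if obligatory_occasions else 0.0

def overuse(overused_articles: int, obligatory_occasions: int) -> float:
    """Overused articles / obligatory occasions (proportion of 1)."""
    return overused_articles / obligatory_occasions if obligatory_occasions else 0.0

print(accuracy(18, 24))  # e.g., 18 of 24 obligatory occasions filled correctly -> 0.75
print(overuse(3, 24))    # e.g., 3 overused articles against 24 obligatory occasions -> 0.125

# Invented per-participant accuracy scores in long format (one row per test time),
# then a mixed ANOVA (between: group, within: time) via pingouin; the authors do
# not report which software they used, so this is only one plausible option.
rng = np.random.default_rng(0)
rows = []
for group in ["DF", "DF+ME", "Control"]:
    for s in range(4):                                   # 4 invented participants per group
        base = rng.uniform(0.5, 0.7)
        for time, shift in [("pre", 0.0), ("post", 0.10), ("delayed", 0.05)]:
            rows.append({"subject": f"{group}-{s}", "group": group, "time": time,
                         "accuracy": min(1.0, base + shift + rng.normal(0, 0.05))})
scores = pd.DataFrame(rows)

import pingouin as pg
aov = pg.mixed_anova(data=scores, dv="accuracy", within="time",
                     subject="subject", between="group")
print(aov)  # F-tests for group, time, and the group x time interaction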
Overall impact of WCF on all articles
For all articles, there was a significant change over time
averaged across all groups.
Also, the effect of time varied among the groups
significantly. In other words, different groups developed
differentially over time.
At the immediate post-test, DF+ME and Control differed
significantly from each other.
Overall impact of WCF on all articles
[Figure: Scores on all articles by group and time; significant group x time interaction]
Impact on ‘treated’ vs. ‘untreated’ articles
For ‘treated’ articles, there was a significant change over
time averaged across all groups.
For ‘untreated’ articles, the effect of time varied among
the groups significantly. In other words, the three groups
developed differentially over time.
At the immediate post-test, DF+ME differed significantly
from the Control and DF groups on ‘untreated’ articles.
Impact on ‘treated’ vs. ‘untreated’ articles
[Figure: Scores on “treated” articles; significant effect of time]
[Figure: Scores on “untreated” articles; significant group x time interaction]
Results on accuracy for each group
[Figure: Direct Feedback group over time]
[Figure: Direct Feedback + Metalinguistic Explanation group over time]
[Figure: Control group over time]
Results on article overuse
[Figure: Overuse scores by group and time]
Summary of Results
The control group outperformed or matched the two experimental groups for
accuracy on all articles, including both first- and subsequent-mention uses
and other article uses.
WCF focusing on only two functions of the article system
inadvertently impacted the remaining functions of the system.
The impact appears to be negative in that, while improving on
the “treated” features, the L2 learners experienced loss of
accuracy on the “untreated” target features.
The provision of partial metalinguistic information may lead to
overuse of a given structure.
Discussion
In the instructed SLA research, target feature selection
deserves an open and honest discussion.
Target feature meanings in relation to their learnability are rarely
considered; “researchability” is not helpful here.
Select feature uses, often driven by simple rules, are targeted by WCF and
FonF studies, limiting the findings’ generalizability.
Discussion
On a positive note …
Having students engage in writing tasks in which certain
grammatical structures arise naturally and frequently may
be both necessary and sufficient to improve L2 learners’
performance with those structures.
Discussion
Is Truscott right?
How beneficial is WCF if it leads to greater accuracy in some areas, but
greater inaccuracy in other areas?
Writing instructors may need to adjust their expectations
regarding students’ improvement in grammatical accuracy,
including forms that have been corrected and taught; they
may need to be alert to potential overgeneralizations.
Q & A
Thank you!