
Journal News | SSCI Journal Assessing Writing, Volumes 55-58, 2023

2024-09-03

ASSESSING WRITING

Volumes 55-58, 2023

ASSESSING WRITING (SSCI Quartile 1; 2022 Impact Factor: 3.9; ranked 13/194) published 70 articles across Volumes 55-58 in 2023. The research articles cover assessment of second language writing ability, lexical diversity measurement, the Chinese proficiency test (HSK), scoring rubrics, writing strategies, and related topics. You are welcome to share and forward! (Coverage of 2023 is now complete.)

Previous posts:

Journal News | SSCI Journal Assessing Writing, Volumes 52-54, 2022

Journal News | SSCI Journal Assessing Writing, Volume 51, 2022

Table of Contents


Volume 55

■Teaching second-grade students to write science expository text: Does a holistic or analytic rubric provide more meaningful results? by Angenette C. Imbler, Sarah K. Clark, Terrell A. Young, Erika Feinauer, Article 100676.

■A multi-measure approach for lexical diversity in writing assessments: Considerations in measurement and timing, by Kelly Woods, Brett Hashimoto, Earl K. Brown, Article 100688.

■A multidimensional approach to assessing the effects of task complexity on L2 students’ argumentative writing, by Ting Sophia Xu, Lawrence Jun Zhang, Janet S. Gaffney, Article 100690.

■Comparing summative and dynamic assessments of L2 written argumentative discourse: Microgenetic validity evidence, by Ali Hadidi, Article 100691.

■Assessing the writing quality of English research articles based on absolute and relative measures of syntactic complexity, by Yuxi Li, Ruiying Yang, Article 100692.

■Tasks and feedback: An exploration of students’ opportunity to develop adaptive expertise for analytic text-based writing, by Lindsay Clare Matsumura, Elaine Lin Wang, Richard Correnti, Diane Litman, Article 100689.

■Visual thinking and argumentative writing: A social-cognitive pairing for student writing development, by Ross C. Anderson, Erin A. Chaparro, Keith Smolkowski, Rachel Cameron, Article 100694.

■The impacts of self-efficacy on undergraduate students’ perceived task value and task performance of L1 Chinese integrated writing: A mixed-method research, by Yuan Yao, Xinhua Zhu, Siyu Zhu, Yue Jiang, Article 100687.

■Examining the reliability of an international Chinese proficiency standardized writing assessment: Implications for assessment policy makers, by Jinyan Huang, Danni Zhu, Duquan Xie, Tiantian Shu, Article 100693.

■Exploring the development of student feedback literacy in the second language writing classroom, by Tiefu Zhang, Zhicheng Mao, Article 100697.


Volume 56

■Experienced but detached from reality: Theorizing and operationalizing the relationship between experience and rater effects, by Iasonas Lamprianou, Dina Tsagari, Nansia Kyriakou, Article 100713.

■An investigation into L2 writing teacher beliefs and their possible sources, by Mehmet Karaca, Hacer Hande Uysal, Article 100710.

■Your writing could have been better: Examining the effects of upward and downward counterfactual communication on the motivational aspects of L2 writing, by Nourollah Zarrinabadi, Vahid Mahmoudi-Gahrouei, Alireza Mohammadzadeh Mohammadabadi, Article 100714.

■Chinese EFL Teachers’ Writing Assessment Feedback Literacy: A Scale Development and Validation Study, by Yongliang Wang, Ali Derakhshan, Ziwen Pan, Farhad Ghiasvand, Article 100726.

■Feedback on writing through the lens of activity theory: An exploration of changes to peer-to-peer interactions, by Nicholas Carr, Article 100720.

■The relationship between peer feedback features and revision sources mediated by feedback acceptance: The effect on undergraduate students’ writing performance, by Qi Lu, Yuan Yao, Xinhua Zhu, Article 100725.

■Individual differences in L2 writing feedback-seeking behaviors: The predictive roles of various motivational constructs, by Jian Xu, Yabing Wang, Article 100698.

■Developing and evaluating a set of process and product-oriented classroom assessment rubrics for assessing digital multimodal collaborative writing in L2 classes, by Anisa Cheung, Article 100723.

■Human scoring versus automated scoring for English learners in a statewide evidence-based writing assessment, by Yen Vo, Heather Rickels, Catherine Welch, Stephen Dunbar, Article 100719.

■The predictive powers of fine-grained syntactic complexity indices for letter writing proficiency and their relationship to pragmatic appropriateness, by Yuan ke Li, Shiwan Lin, Yarou Liu, Xiaofei Lu, Article 100707.

■Linguistic complexity as the predictor of EFL independent and integrated writing quality, by Yujie Zhang, Jinghui Ouyang, Article 100727.

■Predicting Chinese EFL Learners’ Human‐rated Writing Quality in Argumentative Writing Through Multidimensional Computational Indices of Lexical Complexity, by Yuxin Peng, Jie Sun, Jianqiang Quan, Yunqi Wang, Chunyang Lv, Haomin Zhang, Article 100722.

■The design and cognitive validity verification of reading-to-write tasks in L2 Chinese writing assessment, by Rujun Pan, Xiaofei Lu, Article 100699.

■Developing teacher feedback literacy through self-study: Exploring written commentary in a critical language writing curriculum, by Emma R. Britton, Article 100709.

■Pedagogical values of translingual practices in improving student feedback literacy in academic writing, by Yachao Sun, Ge Lan, Li Zhang, Article 100715.

■Constructing and validating a self-assessment scale for Chinese college English-major students’ feedback knowledge repertoire in EFL academic writing: Item response theory and factor analysis approaches, by Jinyan Huang, Tiantian Shu, Yaxin Dong, Danni Zhu, Article 100716.

■Exploring multilingual students’ feedback literacy in an asynchronous online writing course, by Qianqian Zhang-Wu, Article 100718.

■Towards fostering Saudi EFL learners' collaborative engagement and feedback literacy in writing, by Murad Abdu Saeed, Mohammed Abdullah Alharbi, Article 100721.

■Genre pedagogy: A writing pedagogy to help L2 writing instructors enact their classroom writing assessment literacy and feedback literacy, by Ahmet Serdar Acar, Article 100717.


Volume 57

■Composition Organization and Development Analysis (CODA) Scale: Equipping high school students to evaluate argumentative essays, by Natalia A. Bondar, Susan X Day, Lee Mountain, Laura B. Turchi, Laveria F. Hutchison, Article 100724.

■Diagnosing Chinese college-level English as a Foreign Language (EFL) learners’ integrated writing capability: A Log-linear Cognitive Diagnostic Modeling (LCDM) study, by Kwangmin Lee, Article 100730.

■Assessing self-regulated writing strategies, self-efficacy, task complexity, and performance in English academic writing, by Mark Feng Teng, Ying Zhan, Article 100728.

■Shifting perceptions of socially just writing assessment: Labor-based contract grading and multilingual writing instruction, by Mikenna Leigh Sims, Article 100731.

■Resiliency and vulnerability in early grades writing performance during the COVID-19 pandemic, by Deborah K. Reed, Jing Ma, Hope K. Gerde, Article 100741.

■Exploring new insights into the role of cohesive devices in written academic genres, by Mahmoud Abdi Tabari, Mark D. Johnson, Article 100749.

■A move analysis of Chinese L2 student essays from the sociocognitive perspective: Genres, languages, and writing quality, by Tzu-Shan Chang, Article 100750.

■Assessing source use: Summary vs. reading-to-write argumentative essay, by Qin Xie, Article 100755.

■Peer-feedback of an occluded genre in the Spanish language classroom: A case study, by Ana Castaño Arques, Carmen López Ferrero, Article 100756.

■Use of lexical features in high-stakes tests: Evidence from the perspectives of complexity, accuracy and fluency, by Leyi Qian, Article 100758.

■Predicting EFL expository writing quality with measures of lexical richness, by Yang Yang, Ngee Thai Yap, Afida Mohamad Ali, Article 100762.

■Comparing computer-based and paper-based rating modes in an English writing test, by Yuhua Liu, Jianda Liu, Article 100771.

■The development of teacher feedback literacy in situ: EFL writing teachers’ endeavor to human-computer-AWE integral feedback innovation, by Peisha Wu, Shulin Yu, Yanqi Luo, Article 100739.

■The mediating role of curriculum configuration on teacher’s L2 writing assessment literacy and practices in embedded French writing, by Zhibin Shan, Hua Yang, Hao Xu, Article 100742.

■The development and validation of a scale on L2 writing teacher feedback literacy, by Icy Lee, Mehmet Karaca, Serhat Inan, Article 100743.

■Developing EFL teachers’ feedback literacy for research and publication purposes through intra- and inter-disciplinary collaborations: A multiple-case study, by Yaqiong Cui, Hui Jin, Yuan Gao, Article 100751.

■Feedback seeking by first-year Chinese international students: Understanding practices and challenges, by Jiming Zhou, Chris Deneen, Joanna Tai, Phillip Dawson, Article 100757.

■Are self-compassionate writers more feedback literate? Exploring undergraduates’ perceptions of feedback constructiveness, by Carlton J. Fong, Diane L. Schallert, Zachary H. Williamson, Shengjie Lin, Kyle M. Williams, Young Won Kim, Article 100761.

■Developing feedback literacy through dialogue-supported performances of multi-draft writing in a postgraduate class, by Peng Wu, Chunlin Lei, Article 100759.

■Classroom writing assessment and feedback practices: A new materialist encounter, by Kioumars Razavipour, Article 100760.

■Investigating the dimensions and determinants of children’s narrative writing in Korean, by Sarah Sok, Hye Won Shin, Article 100740.

■A non-Western adaptation of the Situated Academic Writing Self-Efficacy Scale (SAWSES), by Ceymi Doenyas, Zeynep Tunay Gül, Bülent Alcı, Article 100763.

■What skills are being assessed? Evaluating L2 Chinese essays written by hand and on a computer keyboard, by Jianling Liao, Article 100765.

■Assessing writing in fourth grade: Rhetorical specification effects on text quality, by Ilka Tabea Fladung, Sophie Gruhn, Veronika Österbauer, Jörg Jost, Article 100764.

■Chinese character matters!: An examination of linguistic accuracy in writing performances on the HSK test, by Xun Yan, Jiani Lin, Article 100767.

■Beyond literacy and competency – The effects of raters’ perceived uncertainty on assessment of writing, by Mari Honko, Reeta Neittaanmäki, Scott Jarvis, Ari Huhta, Article 100768.

■Using ChatGPT for second language writing: Pitfalls and potentials, by Jessie S. Barrot, Article 100745.

■Collaborating with ChatGPT in argumentative writing classrooms, by Yanfang Su, Yun Lin, Chun Lai, Article 100752.

■Specifications grading to promote student engagement, motivation and learning: Possibilities and cautions, by Brian C. Graves, Article 100754.

■Connecting form with function: Model texts for bilingual learners’ narrative writing, by Yingmin Wang, Manfei Xu, Yishi Jiang, Tan Jin, Article 100753.

■Using Peerceptiv to support AI-based online writing assessment across the disciplines, by Albert W. Li, Article 100746.


Volume 58

■Automated analysis of cohesive features in L2 writing: Examining effects of task complexity and task repetition, by Mahmoud Abdi Tabari, Mark D. Johnson, Mahsa Farahanynia, Article 100783.

■The effects of task complexity and language aptitude on EFL learners’ writing performance, by Chun-yan Liu, Li-ting Sun, Yan He, Nian-zhe Wu, Article 100791.

■Profiling support in literacy development: Use of natural language processing to identify learning needs in higher education, by Patricio A. Pino Castillo, Christian Soto, Rodrigo A. Asún, Fernando Gutiérrez, Article 100787.

■Student engagement with peer feedback in L2 writing: Insights from reflective journaling and revising practices, by Zhe (Victor) Zhang, Ken Hyland, Article 100784.

■Developments in learners’ affective engagement with written peer feedback: The affordances of in situ translanguaging, by Hooman Saeli, Payam Rahmati, Article 100788.

■Growth mindset and emotions in tandem: Their effects on L2 writing performance based on writers’ proficiency levels, by Choo Mui Cheong, Yuan Yao, Jiahuan Zhang, Article 100785.

■Understanding EFL students’ feedback literacy development in academic writing: A longitudinal case study, by Fuhui Zhang, Hui-Tzu Min, Ping He, Sisi Chen, Shan Ren, Article 100770.

■Feedback literacy in writing research and teaching: Advancing L2 WCF research agendas, by Jill A. Boggs, Rosa M. Manchón, Article 100786.

■Assessing Korean writing ability through a scenario-based assessment approach, by Soo Hyoung Joo, Yuna Seong, Joowon Suh, Ji-Young Jung, James E. Purpura, Article 100766.

■Insights from lexical and syntactic analyses of a French for academic purposes assessment, by Randy Appel, Angel Arias, Beverly Baker, Guillaume Loignon, Article 100789.


Abstracts

Teaching second-grade students to write science expository text: Does a holistic or analytic rubric provide more meaningful results?

Angenette C. Imbler, Brigham Young University

Sarah K. Clark, Brigham Young University

Terrell A. Young, Brigham Young University

Erika Feinauer, Brigham Young University

Abstract In this quasi-experimental study, the expository writing of students before and after they received integrated science and literacy instruction was compared using two different rubrics. Measures included a holistic rubric and an analytic rubric. Participants were 2nd grade students (N = 71) attending a Title I elementary school. First, a Wilcoxon signed-ranks test was used to examine student writing performance on both rubrics. All rubric elements showed statistically significant improvement except for three elements (topic introduction, concluding statement, and spelling). Next, a paired-samples t-test comparing the total scores from both rubrics showed statistically significant improvement from pre- to post-instruction. Finally, the ranks of scores for each rubric were examined to see how the scores varied based on the rubric used. The holistic rubric had fewer positive and negative ranks than the analytic rubric, while the holistic rubric had more tied ranks than the analytic rubric. Thus, the holistic rubric provided only an overall impression of student writing while the analytic rubric allowed the scorer to specify strengths and weaknesses. To support young writers, teachers should consider their purposes for scoring writing and use the rubric that will fulfill that purpose most appropriately.


Key words  Integrated instruction; Expository writing; Rubrics; Writing assessment; Analytic rubric; Holistic rubric
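For readers who want to try the two comparisons reported above, the sketch below runs a Wilcoxon signed-ranks test and a paired-samples t-test in Python with SciPy. The score arrays are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch (not the authors' code): pre/post rubric-score comparisons
# with the two tests named in the abstract, on hypothetical scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.integers(1, 5, size=71).astype(float)            # hypothetical pre-instruction scores (1-4)
post = np.clip(pre + rng.normal(0.5, 1.0, size=71), 1, 4)  # hypothetical post-instruction scores

# Wilcoxon signed-ranks test for a single ordinal rubric element
w_stat, w_p = stats.wilcoxon(pre, post)
print(f"Wilcoxon: W = {w_stat:.1f}, p = {w_p:.4f}")

# Paired-samples t-test for total rubric scores
t_stat, t_p = stats.ttest_rel(post, pre)
print(f"Paired t: t = {t_stat:.2f}, p = {t_p:.4f}")
```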


A multi-measure approach for lexical diversity in writing assessments: Considerations in measurement and timing

Kelly Woods, Brigham Young University

Brett Hashimoto, Brigham Young University

Earl K. Brown, Brigham Young University

Abstract Several researchers (e.g., Gebril & Plakans, 2016; Crossley & McNamara, 2012) have noted a relationship between lexical diversity (LD) and writing proficiency. Previous research has indicated that typical LD measures can account for as much as 44% of variability in writing proficiency (Crossley et al., 2011). However, it remains unclear which of the many existing LD measures (or combinations of measures) best predict writing proficiency (e.g., McCarthy & Jarvis, 2010). The present study addresses these issues using scored writing assessment samples from 911 ESL students. LD was assessed with a battery of ten measures through the Python lexical-diversity package (Kyle, 2020). A PCA was conducted to determine how these LD measures varied in deriving LD scores. Writing proficiency scores were then predicted by using linear mixed-effects regressions that included either a single LD measure or multiple measures. The results indicated that LD is a meaningful predictor in timed writing proficiency assessments, multiple LD measures have more predictive power than individual measures, timing has a slight effect on LD, and some LD measures are highly correlated while others are not. Implications for automated writing proficiency assessment and rubric design as well as LD measurement research are explained.


Key words  L2 writing proficiency assessment; Lexical diversity measurement; Timed writing; English for academic purposes; English as a foreign language; Second language writing
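The abstract cites the Python lexical-diversity package (Kyle, 2020). The sketch below shows how a multi-measure LD profile of one essay might be computed with it; function names follow that package's documentation and should be verified against the installed version, and the input text is a placeholder.

```python
# Hedged sketch of a multi-measure lexical-diversity profile using the
# lexical-diversity package (Kyle, 2020) cited in the abstract.
from lexical_diversity import lex_div as ld

text = "The full text of one timed writing sample goes here ..."
tokens = ld.flemmatize(text)  # tokenize and lemmatize

profile = {
    "ttr": ld.ttr(tokens),                        # simple type-token ratio
    "maas": ld.maas_ttr(tokens),                  # length-corrected TTR
    "mattr": ld.mattr(tokens, window_length=50),  # moving-average TTR
    "hdd": ld.hdd(tokens),                        # hypergeometric distribution D
    "mtld": ld.mtld(tokens),                      # measure of textual lexical diversity
}
print(profile)
```

Profiles like this, computed per essay, could then feed the PCA and the linear mixed-effects regressions the study describes (e.g., via scikit-learn and statsmodels).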



A multidimensional approach to assessing the effects of task complexity on L2 students’ argumentative writing

Ting Sophia Xu, Faculty of Education & Social Work, University of Auckland

Lawrence Jun Zhang, Faculty of Education & Social Work, University of Auckland

Janet S. Gaffney, Faculty of Education & Social Work, University of Auckland

Abstract The issue of how manipulating task-design factors may impact second language writing needs further exploration. In this study, task complexity was manipulated by changing the number of argument elements and reasoning demands within two argumentative writing tasks. The study first triangulated three sources of data (i.e., dual-task methodology, expert judgements, and a post-task questionnaire) to offer separate evidence for validating the putative task-complexity manipulations. Results obtained by the three measures consistently supported the efficacy of the task-complexity manipulations; that is, the complex task version, as intended, was more cognitively demanding than the simple version. Next, 65 Chinese L2 learners were recruited to examine the effects of the verified task complexity on their writing performance. Learners’ writing was multidimensionally assessed in terms of syntactic complexity, lexical complexity, accuracy, fluency, and functional adequacy. The results showed no significant differences in syntactic complexity, accuracy, or fluency between the simple and complex tasks, although the mean values of accuracy showed an increasing tendency. Increasing task complexity led to a decrease in lexical complexity and functional adequacy, which supports the Limited Attentional Capacity Model’s claim that learners with a limited, single-resource attentional capacity prioritize their attention when limits are reached. The findings underline the importance of validating the task complexity of writing tasks for teaching and assessing writing, and they provide guidance for teachers and test designers on grading and sequencing tasks. Theoretical and methodological implications are also discussed.


Key words Task complexity; CAF; Functional adequacy; Dual-task methodology; L2 writing assessment


Comparing summative and dynamic assessments of L2 written argumentative discourse: Microgenetic validity evidence

Ali Hadidi, Department of Languages, Literatures, and Linguistics, York University

Abstract Drawing upon multiple data sources, this case study compares the summative assessment (SA) and dynamic assessment (DA) of L2 writing in order to examine the validity of DA. There is a dearth of research on the validity of DA of L2 writing. Most DA studies to date have examined its effectiveness in assessing and promoting learning based on a loosely defined construct, which DA aims to assess and develop, and without discussing DA’s validity in relation to the microgenesis that is expected to transpire within DA. This study examines what an L2 learner knows of the constructs and can do with mediation to produce argumentative texts. The two constructs that were assessed and promoted were an adaptation of the Toulmin (1958/2003) model of argument and the knowledge-transforming cognitive composing process (Bereiter & Scardamalia, 1987) that guided the production of the texts. After a period of instruction in argumentation and its cognitive processes, SA suggested that the learner’s development, as judged by his scores, was almost flat. DA was therefore performed to verify whether there was indeed no true development and to simultaneously assess whether, with mediation, the learner could produce the missing argumentative discourse features. A series of dialogic and contingent DA interactions were enacted to help the learner identify and produce the missing features, and both goals were successfully accomplished. A microgenetic analysis, together with rating and textual analysis, was employed to examine the quality and validity of the mediation required to generate the discourse features. An argument-based approach to validity supports the claim that DA complemented SA to make the assessment practice fairer by tapping into emergent learning; the argument approach provides a robust basis for the interpretation and use of the findings of DA.


Key words Validity; Microgenetic; Dynamic assessment; Toulmin; Cognitive processes


Assessing the writing quality of English research articles based on absolute and relative measures of syntactic complexity

Yuxi Li, Xi’an Jiaotong University, School of Foreign Studies

Ruiying Yang, Xi’an Jiaotong University, School of Foreign Studies

Abstract Prediction of writing quality based on various linguistic features has greatly supported writing assessment in academic contexts. The research article (RA) is an important academic genre widely taught and assessed in EAP (English for Academic Purposes) writing courses, but there is little research exploring the assessment of RA writing quality on the basis of linguistic features, which may have the potential to assist teachers’ evaluation of students’ RA drafts. The present study aims to explore whether features of syntactic complexity can effectively predict RA writing quality. To represent syntactic complexity as a multidimensional construct, we employed traditional absolute measures and the newly proposed relative measures related to the use of verb argument constructions (VACs). Robust predictors of writing quality were identified at both the whole-RA level and the part-genre level. It was found that VAC-based measures are more useful for indexing the writing quality of RAs at the whole-text level, while absolute measures have stronger predictive power at the part-genre level. Our research also shows that the relationships between writing quality and syntactic complexity are subject to part-genre variation. Implications for language teaching and assessment in the EAP context are discussed.


Key words Syntactic complexity; Writing assessment; Research article; Academic writing; Linguistic features; Part-genre


Tasks and feedback: An exploration of students’ opportunity to develop adaptive expertise for analytic text-based writing

Lindsay Clare Matsumura, University of Pittsburgh, Learning Research and Development Center

Elaine Lin Wang, RAND Corporation

Richard Correnti, University of Pittsburgh, Learning Research and Development Center

Diane Litman, University of Pittsburgh, Learning Research and Development Center

Abstract In this study, we apply a cognitive theoretical lens to investigate students’ opportunity to develop their analytic text-based writing skills (N = 35 fifth and sixth grade classrooms). Specifically, we examine the thinking demands of classroom text-based writing tasks and teachers’ written feedback on associated student work. Four text-based writing tasks with drafts of associated student work were collected from teachers across a school year. Results of qualitative analyses showed that about half of the classroom text-based writing tasks considered by teachers to be challenging guided students to express analytic thinking about what they read (n = 73). A minority of student work received written feedback focused on students’ use of evidence, expression of thinking, and text comprehension, or received feedback that provided guidance for strategies students could take to meet genre goals. Most teachers provided content-related, instructive, and/or localized feedback on at least one piece of student work. Only a small number of teachers, however, consistently provided content-related, instructive, or localized feedback on their students’ essays. Overall, results suggest that students have few opportunities to practice analytic text-based writing and receive feedback that would be expected to advance their conceptual understanding and adaptive expertise for writing in this genre.


Key words Feedback; Instruction; Tasks; Text-based writing


Visual thinking and argumentative writing: A social-cognitive pairing for student writing development

Ross C. Anderson, Oregon Research Institute, Creative Engagement Lab, LLC

Erin A. Chaparro, University of Oregon’s College of Education

Keith Smolkowski, Oregon Research Institute

Rachel Cameron, University of Oregon’s College of Education

Abstract Though argumentative writing is a vital skill across diverse content areas and domains, most U.S. students perform below grade level in writing, and teachers are often unprepared to address this shortfall because their training approaches writing as a subspecialty of reading rather than its own unique discipline. Writing instruction and assessment need more approaches and broader perspectives to foster students’ motivation and engagement. To that end, the research team developed an innovative formative writing assessment exercise and scoring rubric focusing on analytic skills and the personal meaning-making process of argument writing rather than the technical skills of grammar, punctuation, and spelling. The team integrated a visual literacy and arts component into the writing protocol, as an alternative to a text-based prompt, to enhance students’ engagement in writing. The scoring rubric was designed to be generally applicable to a variety of different prompts, providing anchors alongside detailed criteria for each aspect of argumentative writing included. The team also surveyed students’ perceptions of different factors including self-efficacy for argumentation, self-efficacy for close observation, critical thinking, intrinsic enjoyment of writing, openness to different perspectives, and sense of belonging in the class. The results emphasize the importance of students’ self-efficacy in argumentative writing and provide initial evidence that the proposed approach has promise.


Key words Argumentative writing; Writing assessment; Student agency; Social-cognitive theory; Visual literacy


The impacts of self-efficacy on undergraduate students’ perceived task value and task performance of L1 Chinese integrated writing: A mixed-method research

Yuan Yao, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University

Xinhua Zhu, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University

Siyu Zhu, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University

Yue Jiang, School of Foreign Studies, Lingnan Normal University

Abstract While integrated writing (IW) has received extensive research attention, students’ self-efficacy beliefs in IW learning remain underexplored, particularly in first-language (L1) IW instruction. With a sample of 239 first-year undergraduate students at a Chinese university, this study investigated students’ L1 Chinese IW self-efficacy beliefs, as well as their impacts on perceived task value and IW performance. Exploratory factor analyses identified five sub-dimensions of IW self-efficacy: ideation, conventions, source use, negative emotion control, and concentration. Notably, source use was a unique sub-dimension of IW self-efficacy. Negative emotion control and concentration were separated from the self-regulation construct in Bruning et al. (2013). Latent profile analysis categorized students into three groups based on their levels of IW self-efficacy: moderate-, moderate-high-, and high-efficacious students. Students’ IW self-efficacy levels had a positive association with their perceived IW task value; however, the relationship between self-efficacy and IW performance was not significant. Nine representative students, three from each group, were invited for follow-up semi-structured interviews, and their responses provided complementary information for the quantitative results. Pedagogical suggestions on L1 IW instruction are provided based on the findings.


Key words Integrated writing; Self-efficacy; Task value; Chinese-as-a-first-language


Examining the reliability of an international Chinese proficiency standardized writing assessment: Implications for assessment policy makers

Jinyan Huang, Jiangsu University

Danni Zhu, Jiangsu University

Duquan Xie, Shanghai University of International Business and Economics

Tiantian Shu, Jiangsu University

Abstract Using generalizability (G-) theory and rater think-aloud protocols (TAPs) as research methods, this study examined the effects of person, task, rater, and the interactions among these facets on the variability and reliability of the HSK-6 (i.e., an international Chinese proficiency standardized assessment) writing scores assigned by the national HSK writing raters as well as their scoring decision making processes. Sixty-four HSK-6 writing samples written by 32 CFL (Chinese as a foreign language) learners from 17 L1 (first language) backgrounds were scored holistically by ten experienced HSK writing raters using the authentic HSK-6 scoring rubric. They were then invited to produce a written retrospective TAP of their scoring decision making processes immediately after they had completed scoring each HSK-6 writing sample, which resulted in 64 protocols per rater. A total of 640 protocols were included in the qualitative data analysis. The G-theory results indicated that the current single-task and two-rater holistic scoring scheme would be unable to yield acceptable generalizability and dependability coefficients. The rater TAP results also revealed considerable rater variations in their scoring decision making processes. Important implications for the HSK-6 writing assessment policy makers in China are discussed.


Key words HSK (Hanyu Shuiping Kaoshi)-6 writing assessment; Chinese-as-a-foreign-language (CFL) learners; Score reliability; Generalizability theory; Think-aloud protocols (TAPs)
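As background for the G-theory result above: in a persons x tasks x raters design, estimated variance components are combined into generalizability (relative) and dependability (absolute) coefficients. The sketch below shows that computation on made-up variance components; it is illustrative only, not the study's analysis.

```python
# Illustrative sketch of G-theory coefficients for a p x t x r design.
# The variance-component values below are invented for demonstration.

def g_coefficients(var, n_t, n_r):
    """var: dict of variance components; n_t tasks and n_r raters per person."""
    rel_error = var["pt"] / n_t + var["pr"] / n_r + var["ptr_e"] / (n_t * n_r)
    abs_error = rel_error + var["t"] / n_t + var["r"] / n_r + var["tr"] / (n_t * n_r)
    g = var["p"] / (var["p"] + rel_error)    # generalizability coefficient (relative)
    phi = var["p"] / (var["p"] + abs_error)  # dependability coefficient (absolute)
    return g, phi

components = {"p": 1.20, "t": 0.10, "r": 0.15, "pt": 0.30,
              "pr": 0.25, "tr": 0.05, "ptr_e": 0.60}  # hypothetical values

# The abstract's finding: one task and two raters yield low coefficients.
print(g_coefficients(components, n_t=1, n_r=2))
print(g_coefficients(components, n_t=2, n_r=4))  # more tasks/raters raise both
```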


Exploring the development of student feedback literacy in the second language writing classroom

Tiefu Zhang, School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu

Zhicheng Mao, Faculty of Education, The Chinese University of Hong Kong

Abstract Despite the growing recognition of the important role of student feedback literacy in exploiting feedback opportunities, little is known about the trajectory of student feedback literacy in L2 writing classrooms. To fill this gap, the current study investigated the development of student feedback literacy in an authentic L2 writing classroom where pedagogical approaches and feedback practices were structured towards enabling feedback opportunities. Data were gathered from multiple sources over a 16-week semester, including pre- and post-study student questionnaires, interviews with students and their writing teacher, students’ original and revised drafts, and classroom documents. The results revealed that through the teacher’s systematic approach that integrated preparatory activities (peer feedback training and assessment criteria elaborating), multi-source feedback practices (peer and teacher feedback), and post-feedback reinforcement (reflective activities), students experienced sustained opportunities and perceived an improvement in their feedback literacy over the semester. Specifically, the student participants reported enhanced capacities to elicit feedback, make judgments, and take actions, as well as strengthened dispositions to appreciate feedback and manage affect. The present study advances our knowledge of the nascent concept of feedback literacy and informs L2 pedagogy by presenting a developmental picture of L2 student writers’ feedback literacy.


Key words Feedback literacy; Written feedback; Assessment literacy; Classroom writing assessment; Second language writing


Experienced but detached from reality: Theorizing and operationalizing the relationship between experience and rater effects

Iasonas Lamprianou, Department of Social & Political Sciences, University of Cyprus

Dina Tsagari, Department of Primary and Secondary Teacher Education, Faculty of Education and International Studies, OsloMet - Oslo Metropolitan University

Nansia Kyriakou, Department of Education, Frederick University

Abstract It is often argued that, to achieve a high quality of rating in writing assessment, it is important to nurture a comprehensive measurement ecosystem involving experienced raters and appropriate rating scales. But what is ‘experience’, and how do different kinds of experience affect the severity and the reliability of rating? We collected data from a high-stakes English writing examination to investigate how 18 raters with temporally and qualitatively different experiences interpret and use a rating scale. We analyzed the data using mixed methods, including summative content analysis, Rasch models, and graph theory, showing that qualitatively different experiences affect severity in different ways. Also, irrespective of past experience, raters who disengaged from their Community of Practice (CoP) were more likely to yield unreliable ratings. These results have important methodological implications for how researchers theorize and operationalize ‘rating experience’. Our findings highlight the importance of active and uninterrupted engagement in raters’ CoP, with implications for both researchers and policy makers.


Key words Rating experience; Community of practice; Rater effect; Reliability; Severity



An investigation into L2 writing teacher beliefs and their possible sources

Mehmet Karaca, Independent Researcher, Research in Teacher Education and Material Development Group

Hacer Hande Uysal, Department of Foreign Languages Education, College of Education, Hacettepe University

Abstract An extensive body of research in language education has recently focused heavily on teacher beliefs. Yet, L2 writing teacher beliefs and their possible sources remain uncharted territory. Considering this critical gap, this study investigates writing teacher beliefs regarding the nature of L2 writing, teaching L2 writing, and assessing L2 writing, along with their possible sources. To do so, adopting a survey research design, the researchers recruited 321 participants and explored their beliefs about writing using the Teachers’ English Writing Beliefs Inventory (TEWBI) developed by Karaca and Uysal (2021). In addition, with the help of an open question in the questionnaire, the sources of teachers’ writing beliefs were also explored. Descriptive statistics were employed to identify the strongest and the weakest writing beliefs for each sub-scale of the TEWBI. Then, the analysis of the open question revealed five possible sources of writing beliefs: past and current teaching experiences, writing experiences in L1 and/or L2, past learning experiences, materials and approaches, and professional development activities. In light of these findings, strong and weak belief statements are discussed, and research and pedagogical implications are provided.


Key words L2 writing; L2 writing teacher beliefs; Sources of writing beliefs


Your writing could have been better: Examining the effects of upward and downward counterfactual communication on the motivational aspects of L2 writing

Nourollah Zarrinabadi, Vahid Mahmoudi-Gahrouei, Alireza Mohammadzadeh Mohammadabadi

University of Isfahan

Abstract Teachers’ comments on how students’ performance might have turned out had they performed in a specific way can have several implications for their motivation and engagement. This study examined the effects of upward and downward counterfactual comments on the motivational aspects of L2 writing. To this end, 189 English as a foreign language (EFL) learners were randomly assigned to three conditions (upward counterfactual, downward counterfactual, and control) and received counterfactual communication about a piece of writing. They were asked to respond to self-report scales on motivation, anxiety, growth mindsets, intended effort, willingness to write, and perceptions of the rater. The results indicated that upward counterfactual communication positively influenced L2 writers’ motivation, anxiety, growth mindsets, intended effort, willingness to write, and perceptions of the rater, while downward counterfactual communication produced negative effects on these motivational variables. The implications of the study for research and practice are presented.


Key words Upward counterfactuals; Downward counterfactuals; Motivation; Anxiety; Mindsets


Chinese EFL Teachers’ Writing Assessment Feedback Literacy: A Scale Development and Validation Study

Yongliang Wang, School of Foreign Languages and Cultures, Nanjing Normal University

Ali Derakhshan, Department of English Language and Literature, Faculty of Humanities and Social Sciences, Golestan University

Ziwen Pan, School of Foreign Languages, Henan University

Farhad Ghiasvand, Department of English Language and Literature, Allameh Tabataba’i University

Abstract Teacher feedback plays a crucial role in various aspects of second/foreign language education. It can signpost teaching effectiveness and foster students’ learning. However, the role of feedback in assessing language skills, and the literacy required to deliver feedback during assessment, has remained a desideratum. As an important language skill, writing needs teacher literacy and effective feedback during instruction and assessment. Nonetheless, the literature lacks a valid scale to measure the writing assessment feedback literacy of EFL teachers. Moreover, the underlying components of this construct have remained unclear. To address these gaps, this study aimed to develop and validate a questionnaire on EFL teachers’ writing assessment feedback literacy in China. In so doing, 517 Chinese EFL teachers completed a newly designed scale. The results of factor analysis indicated four components and 32 items in the questionnaire. The extracted components were “assessment feedback competence”, “assessment feedback practices”, “knowledge of writing assessment feedback”, and “knowledge of useful techniques”. Moreover, the discriminant validity, composite reliability, and goodness of fit of the scale were statistically confirmed. Finally, the study presents some implications for EFL teachers, assessors, and teacher educators, who can increase their knowledge of writing assessment feedback literacy and its components.


Key words Chinese EFL teachers; Assessment feedback; Feedback literacy; Writing assessment feedback literacy


Feedback on writing through the lens of activity theory: An exploration of changes to peer-to-peer interactions

Nicholas Carr, School of Education, Deakin University

Abstract Increased interest in collaborative writing has driven the emergence of a body of work investigating the collaborative processing of written corrective feedback (WCF). However, the extant literature has yet to investigate how participant interactions change over multiple occasions of collaboratively processing WCF. This case study addresses this gap, employing Activity Theory to investigate how the factors shaping the interactions between two English language learners evolved as they collaboratively processed WCF on jointly produced texts on four occasions. Data were collected from video recordings of participants’ interactions as they collaboratively processed WCF, individual retrospective interviews, and observation of individual writing and speaking tasks. The data show that the activity’s division of labour both shaped the interactions and itself underwent significant change, primarily because one participant’s language learning beliefs were revised as they adapted to the activity. The pedagogical implications of these findings are discussed.


Key words Written corrective feedback; Activity theory; Collaboration


The relationship between peer feedback features and revision sources mediated by feedback acceptance: The effect on undergraduate students’ writing performance

Qi Lu, College of Education, Zhejiang University

Yuan Yao, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University

Xinhua Zhu, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University

Abstract While research has extensively investigated the impact of peer feedback on students’ writing, the mechanism of peer feedback in shaping the sources of revision and performance outcome remains unclear. This study examined how peer feedback features (cognitive and affective) and feedback acceptance could impact revision sources (peer- and self-initiated), which in turn influence writing performance. Data were collected from 114 Hong Kong undergraduate students who enrolled on an academic writing course and engaged in peer feedback activities. Path analysis results show both cognitive and affective feedback features were positively associated with students’ peer-initiated revision, and affective feedback features were negatively associated with self-initiated revision. In this process, feedback acceptance positively mediated the effects of cognitive feedback features on peer-initiated revision and negatively mediated the effects of cognitive feedback features on self-initiated revision. Moreover, both peer- and self-initiated revision positively predicted writing performance. The findings suggest feedback should be provided gently to lessen potential negative impact on students’ writing revision. Students are encouraged to critically adopt peer feedback and make revisions at their own discretion.


Key words Peer feedback feature; Feedback acceptance; Peer-initiated writing revision; Self-initiated writing revision; Writing performance
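A path model of the kind reported here (feedback features to feedback acceptance to revision sources to writing performance) can be specified in lavaan-style syntax, for example with the Python semopy package. The sketch below is a hypothetical specification with made-up variable and file names, not the authors' model code.

```python
# Hedged sketch of a mediated path model in semopy's lavaan-style syntax.
import pandas as pd
from semopy import Model

desc = """
accept ~ cog_feat + aff_feat
peer_rev ~ cog_feat + aff_feat + accept
self_rev ~ cog_feat + aff_feat + accept
writing ~ peer_rev + self_rev
"""

df = pd.read_csv("peer_feedback_scores.csv")  # hypothetical data file
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```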


Individual differences in L2 writing feedback-seeking behaviors: The predictive roles of various motivational constructs

Jian Xu, School of Business English, Sichuan International Studies University

Yabing Wang, Center for Linguistics and Applied Linguistics / School of English Education, Guangdong University of Foreign Studies

Abstract This mixed-method study quantitatively examined the effects of growth mindsets, ideal and ought-to L2 writing selves, and academic buoyancy on feedback-seeking behaviors (FSB) through a questionnaire survey with undergraduate students in an L2 writing context. In addition, a semi-structured interview was conducted concurrently with four participants to explore their feedback-seeking experiences concerning English writing. The findings showed that growth mindsets, the ideal L2 writing self, and academic buoyancy positively predicted FSB. The interviewed L2 learners used feedback monitoring extensively but used feedback inquiry depending on the teacher’s feedback type. Consistent with the quantitative results, L2 learners were externally motivated to seek feedback by goals such as passing examinations, building rapport with the course instructor, and becoming qualified English majors; they could survive writing setbacks by manipulating and using multiple writing resources, and they believed that effort could make a difference compared to writing intelligence. More importantly, the qualitative findings uniquely revealed that the teacher’s indirect feedback, the teacher’s assessment practices, and peer pressure also contributed to FSB. The implications for L2 writing instruction are discussed.


Key words Growth mindsets; Ideal and ought-to L2 writing selves; Academic buoyancy; Feedback seeking behaviors; English writing


Developing and evaluating a set of process and product-oriented classroom assessment rubrics for assessing digital multimodal collaborative writing in L2 classes

Anisa Cheung, Center for Language Education, Hong Kong University of Science and Technology

Abstract Despite the growing interest in researching digital multimodal composing (DMC) in recent years, there have been few attempts to examine how assessments of DMC can best be devised to maximize students’ learning opportunities. To narrow this gap, this study proposed a set of product- and process-oriented classroom assessment rubrics that function as self- and peer-assessment tools for students when collaborating with each other to create DMC in online EAP contexts. During a four-week intervention with a veteran EAP educator, the rubrics were tried out in her EAP classes as students worked in pairs to complete a DMC task. Their effectiveness was then evaluated based on the quality of student writing as well as the students’ discussion, which was measured in terms of equality and mutuality. The product-oriented assessment rubrics were found to result in marked improvement in layout, navigation, and rhetoric, whilst the process-oriented classroom assessment rubrics also enhanced both the equality and the mutuality of the collaborative process, as the dyads were inclined to establish a collaborative relationship during the task. These findings underscore the importance of using assessment rubrics as a formative assessment activity to help students harness the genre of DMC from different perspectives.


Key words Digital multimodal composing; Assessment rubric; Collaborative writing; Product-oriented assessment; Process-oriented assessment


Human scoring versus automated scoring for English learners in a statewide evidence-based writing assessment

Yen Vo, Heather Rickels, Catherine Welch, Stephen Dunbar,

Iowa Testing Programs, University of Iowa

Abstract This study examined the validity evidence of automated scores across English learners (ELs) and non-EL test takers in a statewide summative writing assessment. Writing performance for the two groups in online and paper-and-pencil formats was analyzed using operational data. Online essays were scored by an automated scoring engine and handwritten essays were scored by humans. Differential item functioning (DIF) analyses and propensity score matching (PSM) methods were conducted to examine any systematic difference between the two groups. DIF results indicated that ELs scored higher than expected on paper in Grades 10 and 11, as compared to non-ELs. In addition, DIF analyses comparing EL performance online versus on paper demonstrated DIF favoring paper in upper grades. After controlling for student characteristics, differences still remained and the effect sizes were only slightly reduced. Follow-up analyses on the matched samples revealed similar levels of DIF as compared to the unmatched samples. These findings have important implications for interpreting writing scores for ELs. Recommendations are also provided for modifying scoring and training protocols for human raters and automated scoring systems, to better understand the relationship between human- and machine-scored essays.


Key words Writing assessment; Human scoring; Automated scoring; DIF; Propensity scores; ELs
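One common way to operationalize DIF analyses like those above is the logistic-regression DIF screen: regress an item response on a matching (ability) variable, then test whether adding a group indicator improves fit. The sketch below illustrates that approach with statsmodels; the column names and data file are hypothetical, and the study's exact procedure may differ.

```python
# Hedged sketch of a logistic-regression screen for uniform DIF.
# Hypothetical columns: 'item' (0/1 dichotomized score point),
# 'matched' (matching variable, e.g., total score), 'group' (1 = EL, 0 = non-EL).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("writing_items.csv")  # hypothetical data file

base = smf.logit("item ~ matched", data=df).fit(disp=0)
dif = smf.logit("item ~ matched + group", data=df).fit(disp=0)

# Uniform DIF: a significant group effect after conditioning on ability.
print(f"group effect = {dif.params['group']:.3f}, p = {dif.pvalues['group']:.4f}")

# Model comparison via a likelihood-ratio test (1 df).
lr = 2 * (dif.llf - base.llf)
print(f"LR chi-square = {lr:.2f}")
```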


The predictive powers of fine-grained syntactic complexity indices for letter writing proficiency and their relationship to pragmatic appropriateness

Yuan ke Li, School of Foreign Studies, South China Normal University; Key Research Center of Guangdong Province for Language, Cognition and Assessment

Shiwan Lin, School of Foreign Studies, South China Normal University

Yarou Liu, School of Foreign Studies, South China Normal University

Xiaofei Lu, Department of Applied Linguistics, The Pennsylvania State University

Abstract Drawing on 300 request letters and 300 self-recommendation letters composed by Chinese adolescent English as a foreign language (EFL) learners in the National Matriculation English Test, this study investigated the predictive powers of 10 fine-grained syntactic complexity (SC) indices related to dependent clause (DC) and complex noun phrase (CNP) usage for rated proficiency levels of these two types of letters. We further examined the connection between significant SC predictors for proficiency ratings and differential degrees of politeness and formality among these letters rated at different proficiency levels. The results revealed that in both types of letters, the clausal and noun phrase features associated with the fine-grained DC- and CNP-related indices predictive of the rated proficiency levels of these letters reflect the varied degrees of politeness and formality with which learners of different English writing proficiency used different syntactic patterns to perform requests and self-recommendations in letter writing. Our findings have important implications for understanding the functional values of SC for facilitating appropriate and effective communication in EFL learners’ letter writing.


Key words Adolescent EFL learners; L2 writing proficiency; Pragmatic appropriateness; Syntactic complexity

Linguistic complexity as the predictor of EFL independent and integrated writing quality

Yujie Zhang, Faculty of Education and Social Work, University of Auckland

Jinghui Ouyang, School of Foreign Languages, Tongji University

Abstract This study investigated the predictive power of syntactic, lexical, and phraseological complexity indices over human-assessed scores of independent non-prompt and integrated reading-to-write tasks of different genres. Their similarities and differences were then compared and discussed. To address the research questions, corpora of writing samples collected from intermediate EFL learners were built, rated, and analysed accordingly. The regression models indicated that 1) the predictability of syntactic complexity remained stable across task types and genres within the 20–30% range; 2) fine-grained syntactic indices, especially phrasal complexity and syntactic sophistication indices, played a stronger role in predicting scores; 3) the predictive power of lexical indices exceeded that of the other two types of indices and was stronger in independent writing tasks; 4) the predictive power of phraseological indices was noticeably stronger in integrated writing tasks. Implications based on the findings are discussed for language teaching, learning, and assessment.


Key words Integrated writing; Independent writing; Writing quality; Linguistic complexity; Phraseological complexity


Predicting Chinese EFL Learners’ Human‐rated Writing Quality in Argumentative Writing Through Multidimensional Computational Indices of Lexical Complexity

Yuxin Peng, Department of English, School of Foreign Languages, East China Normal University

Jie Sun, Department of English, School of Foreign Languages, East China Normal University; Department of English, Sun Yat-Sen University

Jianqiang Quan, Department of English, School of Foreign Languages, East China Normal University; The Foreign Language Teaching and Research Center, School of Foreign Languages, East China Normal University

Yunqi Wang, Department of English, School of Foreign Languages, East China Normal University

Chunyang Lv, Department of English, School of Foreign Languages, East China Normal University

Haomin Zhang, Department of English, School of Foreign Languages, East China Normal University; The Foreign Language Teaching and Research Center, School of Foreign Languages, East China Normal University

Abstract The current study examined the factor structure and construct validity of lexical complexity and also unpacked its role in predicting perceived writing quality based on one data set. This study sampled 220 argumentative writings scored by expert raters from the International Corpus Network of Asian Learners of English (ICNALE) and used Coh-Metrix to compute the indices that tap into multiple constructs of lexical complexity. The results of Confirmatory Factor Analysis (CFA) confirmed the hypothesized factor structure and construct validity of lexical sophistication and lexical diversity. The results of Structural Equation Modeling (SEM) demonstrated that lexical sophistication and lexical diversity measured by computational indices significantly predicted human assessment of L2 writing quality at two language proficiency levels. On the other hand, human judgment on L2 writing quality was found more concerned with more perceivable text features such as linguistic errors and text length than the fine-grained features that tap into lexical complexity. Interpretations as to the small albeit significant variance explained by lexical complexity are provided in the discussion section. This study provides empirical evidence for understanding human scoring processes and implications for fine-tuning AES models to enhance their interpretability.


Key words Lexical complexity; Lexical sophistication; Lexical diversity; Computational indices; Perceived L2 writing quality; Automated Essay Scoring (AES)
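The CFA/SEM pipeline described above (latent lexical sophistication and diversity measured by computational indices, predicting rated quality) can also be expressed in lavaan-style syntax with semopy. In the sketch below, the indicator names are hypothetical stand-ins for Coh-Metrix indices; this is not the authors' specification, and the fit-statistics helper should be checked against the installed semopy version.

```python
# Hedged sketch of a measurement-plus-structural model in semopy.
# "=~" defines latent factors; "~" defines regression paths.
import pandas as pd
from semopy import Model, calc_stats

desc = """
lex_sophistication =~ word_frequency + concreteness + age_of_acquisition
lex_diversity =~ mtld + vocd + ttr
rated_quality ~ lex_sophistication + lex_diversity
"""

df = pd.read_csv("icnale_cohmetrix_indices.csv")  # hypothetical data file
model = Model(desc)
model.fit(df)
print(model.inspect())    # factor loadings and structural paths
print(calc_stats(model))  # fit indices (CFI, RMSEA, etc.)
```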


The design and cognitive validity verification of reading-to-write tasks in L2 Chinese writing assessment

Rujun Pan, Institute of Educational Policy and Evaluation of International Students, Beijing Language and Culture University

Xiaofei Lu, Department of Applied Linguistics, The Pennsylvania State University

Abstract Reading-to-write (RTW) tasks have been commonly employed in second language (L2) English academic writing pedagogy, and many studies have investigated the validity and reliability of RTW tasks in L2 English writing assessment. However, few studies have examined the cognitive validity of RTW tasks, and the design and validation of such tasks in L2 Chinese academic writing assessment remain underexplored. This study develops a Chinese RTW task following a set of design criteria and procedures and evaluates its cognitive validity as an instrument of L2 Chinese academic writing assessment. The RTW task was administered to 15 undergraduate and 15 postgraduate L2 Chinese learners in an eye-tracking laboratory. Analyses of the task features and the eye-tracking and stimulated recall interview data suggested that the RTW task largely aligned with the characteristics of authentic tasks in real L2 Chinese academic writing contexts and elicited a representative range of cognitive processes in existing models of RTW cognitive processes. Many of these processes manifested in different ways between the two groups of participants at different L2 Chinese proficiency levels. Our findings have useful implications for understanding the cognitive validity of the RTW task in L2 Chinese writing assessment.


Key words Reading-to-write; Chinese as second language; Cognitive processes; Cognitive validity


Developing teacher feedback literacy through self-study: Exploring written commentary in a critical language writing curriculum

Emma R. Britton, Language Resource Center, Cornell University

Abstract Responding to student writing is complex, as many choices teachers make about the content, delivery, and purpose of feedback often remain beneath the surface of awareness. Incongruencies between teachers’ feedback philosophies and practices are therefore common. Here, I analyze my own feedback practices, making a case for self-study as a method to promote teacher feedback literacy. Drawing upon classroom data from a university developmental English writing context in the Northeastern US, I investigate my feedback practices alongside the implementation of a critical language awareness (CLA) curriculum. Deductively analyzing 301 comments provided to 27 students, I consider the commentary’s alignment with CLA principles. From these data, I selected five students’ writing samples for further analysis, considering their writing development in interaction with CLA commentary. Results indicate some incongruencies between my philosophies and practices, as more than half of the commentary showed weaker alignment with CLA principles, and it overwhelmingly addressed essay content (rather than organization or form). Commentary appeared to support the expansion of students’ academic writing practices, especially in description, figurative language, argumentation, codemeshing, and source integration. I conclude by reflecting on challenges associated with theoretically informed feedback and discuss the study’s significance in expanding Lee’s (2021) framework for writing teacher feedback literacy.


Key words Feedback literacy; Critical language awareness; Writing development; Self-study; Mixed methods; Critical literacy


Pedagogical values of translingual practices in improving student feedback literacy in academic writing

Yachao Sun, Duke Kunshan University

Ge Lan, City University of Hong Kong

Li Zhang, Yushan Middle School

Abstract This study examines an undergraduate student’s language practices and ideologies in her feedback process of academic writing and discusses the pedagogical values of translingual practices in improving student feedback literacy. A case study was conducted to explore how a student (Jane) in a transnational English as a medium of instruction (EMI) context used translingual practices to seek and provide feedback, and how she perceived her translingual practices for and in the feedback process. The findings show that translingual practices helped improve Jane’s feedback literacy by (1) increasing her awareness of the values of feedback as improvement and as an active and reciprocal process, (2) empowering her to make critical judgments about various feedback, (3) developing her positive identity as a multilingual writer to manage emotions around challenging feedback, and (4) helping her take action based on the feedback received. This study enriches student feedback literacy research by arguing for the pedagogical values of translingual practices in enhancing student feedback literacy. Future research is needed to investigate how translingual practices in different contexts, with varying student writers, can be implemented for other written genres and purposes to improve multilingual students’ feedback literacy.


Key words Student feedback literacy; Translingual practices; Academic writing; Language ideology; Rhetorical situation


Constructing and validating a self-assessment scale for Chinese college English-major students’ feedback knowledge repertoire in EFL academic writing: Item response theory and factor analysis approaches

Jinyan Huang, Tiantian Shu, Yaxin Dong, Danni Zhu

Jiangsu University

Abstract Using item response theory (IRT) and exploratory and confirmatory factor analysis (EFA and CFA) approaches, this study constructed and validated a scale for Chinese college English-major students to self-assess their feedback knowledge repertoire in English-as-a-foreign-language (EFL) academic writing. It included two sub-studies: Study 1 – scale construction and Study 2 – scale validation. Study 1 was conducted within the IRT framework and included the following six major steps: a) the preparation of item specifications following Yu and Liu’s (2021) conceptual framework; b) the preparation of the scale item pool; c) the field testing of scale items with Chinese college English-major students; d) the EFA analysis (to explore the factorial structure); e) the calculation of item function values; and f) the selection of items for the final scale. Study 2 was conducted within the IRT, EFA, and CFA frameworks and included the following five major steps: a) the expert evaluation of the item appropriateness for construct interpretations of the final scale items (validity); b) the administration of the final scale to Chinese college English-major students; c) the EFA and CFA analyses (validity); d) the calculation of alpha coefficients (reliability); and e) the calculation of item, scale, and subscale information function values (reliability). The final 40-item scale was validated to have sound psychometric qualities for measuring Chinese English-major students’ feedback knowledge repertoire in EFL academic writing. Implications for EFL and English L1 researchers, and for Chinese EFL teachers and their graduate students, are discussed.


Key words EFL academic writing; Feedback knowledge repertoire; A self-assessment scale; Psychometric properties; Item response theory; Factor analysis


Exploring multilingual students’ feedback literacy in an asynchronous online writing course

Qianqian Zhang-Wu, Northeastern University

Abstract Contributing to the scarce empirical examination of multilingual student writers’ feedback literacy development in ESL contexts, this exploratory qualitative study drew upon five multilingual international students’ feedback interactions, their developing drafts, and end-of-unit reflections to empirically examine and extend Yu et al.’s (2022) five-dimension feedback literacy model. Focusing on multilingual students’ experiences appreciating feedback, making judgements, managing affect, taking action, and acknowledging different feedback sources in an asynchronous online first-year undergraduate writing course during COVID-19, this study explored challenges and opportunities in participants’ feedback literacy development throughout a literacy narrative unit. Findings of the study shed light on fostering and investigating multilingual writers’ feedback literacy development in online instructional spaces and point out directions for future research.


Key words Feedback literacy; Multilingual international students; Second language writing; Peer review; Translingual writing; Writing assessment


Towards fostering Saudi EFL learners' collaborative engagement and feedback literacy in writing

Murad Abdu Saeed, English Department, Unaizah College of Sciences and Arts, Qassim University

Mohammed Abdullah Alharbi, English Department, College of Education, Majmaah University

Abstract  Learners may find it challenging to engage effectively with teacher feedback, owing to a lack of peer support, especially in solitary engagement activities, and to inadequate feedback literacy. Therefore, the current study attempted to overcome these challenges by engaging 15 dyads of Saudi EFL learners in technology-mediated peer dialogue around teacher feedback at the review stage of collaborative writing of argumentative essays over one semester. The study analyzed dyads’ screencast records of immediate peer dialogue in Blackboard Collaborate Ultra, argumentative essays in Google Docs, and follow-up interviews. The collaborative approach prompted learners to engage cognitively, socio-affectively, and behaviorally with the teacher’s Google Docs-based feedback on their writing. Although the dyads’ collaborative engagement varied according to the interaction of feedback manner and nature of errors and the interaction of cognitive processing and socio-affective relations, the learners’ reflections on the activities indicate that collaborative technology-mediated engagement sustained opportunities for learners to appreciate feedback, make evaluative judgments, and manage their negative affect. The study offers useful pedagogical implications for EFL writing instructors in promoting learners’ engagement and feedback literacy, with educational technology as a productive facilitator.

Key words Teacher feedback; Collaborative engagement; Feedback literacy


Genre pedagogy: A writing pedagogy to help L2 writing instructors enact their classroom writing assessment literacy and feedback literacy

Ahmet Serdar Acar, Community College of Qatar

Abstract  As part of a larger case study, this single exploratory case study aims to explore the potential of genre-based pedagogy (GBP) to allow L2 writing instructors to enact their writing assessment literacy and feedback literacy. The findings demonstrate that GBP enabled the participating writing instructor of a genre-based EAP writing course to carry out effective classroom writing assessment practices and thus enact their writing assessment literacy and feedback literacy. (The gender-neutral pronouns ‘they/them/their’ are used to refer to the participating instructor in order to protect the instructor’s identity.) GBP allowed effective classroom writing assessment practices such as diagnostic assessment and learner involvement in assessment. More specifically, genre exploration tasks led to diagnostic assessment and helped the instructor coordinate effective classroom discussions to elicit evidence of the students’ knowledge of the target genre they would study. Second, students’ production of texts in target genres not only allowed the instructor to collect evidence of the students’ specific genre knowledge, but it also afforded learner involvement through self-reflection. The instructor could also efficiently interpret this evidence and provide formative feedback through pre-established genre-specific assessment criteria.


Key words Genre pedagogy; Classroom-based assessment of writing; Assessment literacy; Feedback literacy


Composition Organization and Development Analysis (CODA) Scale: Equipping high school students to evaluate argumentative essays

Natalia A. Bondar, Department of English, Clear Lake High School

Susan X Day, Department of Psychological, Health, and Learning Sciences, University of Houston

Lee Mountain, Department of Curriculum and Instruction, University of Houston

Laura B. Turchi, Department of English, Arizona State University

Laveria F. Hutchison, Department of Curriculum and Instruction, University of Houston

Abstract  Experimental and quasi-experimental studies of student writing have presented evidence for the positive effect of feedback and criteria-referenced assessment. Furthermore, scholars have demonstrated the value of analytic judgments in providing actionable feedback. This study brought together the elements of feedback, criteria-referenced assessment, and analytic judgments in an original instrument, the CODA Scale, designed to support high school students in the process of writing arguments. We sought to ascertain (1) whether the Composition Organization and Development Analysis (CODA) Scale is a valid and reliable formative-assessment instrument for argumentative writing and (2) whether it is an appropriate instrument for high school writers. We enlisted students and teachers to apply the scale to two anonymous essays, with two overall results. First, the scale’s good internal consistency and two interpretable factors in alignment with its design pointed to its validity and reliability. Second, the findings of no statistically significant difference in students’ and teachers’ scores for Essay 2, along with a statistically significant difference in the scores of less than one-third of the items applied to Essay 1, indicated that the scale is a suitable instrument for high school writers.

Key words Scale development; Argumentative writing; Formative assessment; Student-driven assessment; High school English; Writing instruction


Diagnosing Chinese college-level English as a Foreign Language (EFL) learners’ integrated writing capability: A Log-linear Cognitive Diagnostic Modeling (LCDM) study

Kwangmin Lee, Foreign Language and English as a Second Language Education, Teaching and Learning, The University of Iowa

Abstract While a large body of research provides reliability and validity evidence for L2 integrated writing tasks, relatively little research has examined integrated writing tasks as a means of providing diagnostic insights for teachers and learners. The current study aims to fill this lacuna by applying a log-linear cognitive diagnostic model (LCDM) to reading-to-write integrated writing data collected from 315 Chinese college-level English as a Foreign Language (EFL) examinees. For this study, the integrated writing task was conceptualized as consisting of language use, source use, and content, with each of these unobservable attributes measured by surrogate indicators. Results showed that all pairs of postulated attributes were positively correlated. However, the association between language use and content (r = 0.36) was not as strong as that between language use and source use (r = 0.74) or between source use and content (r = 0.90). Item parameters also indicated that language use is more important than the other attributes for obtaining a passing score on writing features. Lastly, the test-taker classification showed that it is impossible to master source use without mastering the other attributes, demonstrating the dependence of source use on them. Implications for teaching are discussed.

Key words  Integrated writing task; Log-linear cognitive diagnostic modeling; Assessment, teaching and learning; Feedback


Assessing self-regulated writing strategies, self-efficacy, task complexity, and performance in English academic writing

Mark Feng Teng, Beijing Normal University

Ying Zhan, The Education University of Hong Kong

Abstract  The present study focused on assessing how task complexity and learner variables (English proficiency level, self-regulated writing strategies, and writing self-efficacy beliefs) influence English academic writing for students in a foreign language context. The participants were 270 students from a medium-sized university in China. All participants completed measures of self-regulated writing strategies and self-efficacy as well as an academic writing test. The research questions explored the extent to which task complexity and English proficiency level influenced writing performance, and how learners’ self-efficacy and self-regulated writing strategies mediated the role of task complexity in academic writing performance. Structural equation modelling results showed that task complexity and English proficiency level influenced learners’ writing performance, and that self-efficacy beliefs and the use of self-regulated writing strategies mediated the role of task complexity in academic writing performance. Implications related to the assessment of task complexity, self-regulated writing strategies, self-efficacy, and academic writing are discussed.
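For readers curious about the mechanics of such a mediation test, here is a minimal sketch of the product-of-coefficients logic using ordinary least squares; the variable names and simulated data are illustrative assumptions, not the study’s actual model or measures.

```python
# Minimal mediation sketch: does self-efficacy carry part of the effect of
# task complexity on writing scores? All data below are simulated stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 270
task_complexity = rng.integers(0, 2, n).astype(float)  # 0 = simple, 1 = complex
self_efficacy = 0.5 * task_complexity + rng.normal(0, 1, n)            # mediator
writing_score = 0.3 * task_complexity + 0.6 * self_efficacy + rng.normal(0, 1, n)

# Path a: predictor -> mediator
path_a = sm.OLS(self_efficacy, sm.add_constant(task_complexity)).fit()
# Paths b and c': mediator and predictor -> outcome
X = sm.add_constant(np.column_stack([task_complexity, self_efficacy]))
path_bc = sm.OLS(writing_score, X).fit()

indirect = path_a.params[1] * path_bc.params[2]  # a * b, the mediated effect
print(f"indirect effect = {indirect:.3f}, direct effect = {path_bc.params[1]:.3f}")
```

A full structural equation model estimates these paths simultaneously with latent variables, but the decomposition into direct and indirect effects follows the same logic.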


Key words Self-regulated writing strategies; Task complexity; Self-efficacy; Academic writing


Shifting perceptions of socially just writing assessment: Labor-based contract grading and multilingual writing instruction

Mikenna Leigh Sims, Department of English, California State University

Abstract  Labor-based contract grading has been identified as a more equitable and socially just method of assessing student writing. However, scholarship on labor-based contract grading has yet to investigate the use of this assessment method in sheltered multilingual contexts, nor has it considered multilingual writing instructors’ perceptions of this assessment practice. This study presents the cases of two instructors to learn more about how labor-based grading contracts are being used in sheltered multilingual first-year writing (FYW) courses, as well as how these instructors perceive labor-based contract grading. Data analysis reveals that both writing instructors use labor-based contract grading to create opportunity structures for their multilingual students; however, their experiences using labor-based grading contracts varied depending on the extent to which they designed their contracts to align with their pedagogical values. Further, both instructors acknowledge the complexity of social justice in sheltered multilingual contexts but hold different perspectives on how best to practice socially just writing assessment in their sheltered multilingual FYW courses.


Key words  Labor-based contract grading; Writing assessment; Multilingual writing; First-year writing


Resiliency and vulnerability in early grades writing performance during the COVID-19 pandemic

Deborah K. Reed, University of Tennessee

Jing Ma, University of Iowa

Hope K. Gerde, Texas A&M University

Abstract To explore potential pandemic-related learning gaps in expressive writing skills, predominantly Hispanic (≈50%) and White (≈30%) primary-grade students responded to grade-specific writing prompts in the fall semesters before and after school closures. Responses were evaluated with an analytic rubric consisting of five traits (focus, organization, development, grammar, mechanics), each scored on a 1–4 scale. Data first were analyzed descriptively and, after propensity score weighting, with ordinal response models (for analytic scores) and generalized linear mixed effects models (for composite scores). Compared to first graders in 2019 (n = 310), those in 2020 (n = 203) scored significantly lower overall as well as on all rubric criteria and were more likely to write unintelligible responses. Second graders in 2020 (n = 194) performed significantly lower than those in 2019 (n = 328) in some traits but not all, and there was a widening gap between students who did and did not score proficiently. A three-level longitudinal model analyzing the sample of students moving from first to second grade in fall 2020 (n = 90) revealed significant improvements, but these students still performed significantly lower than second graders in the previous year. Implications for student resiliency and instructional planning are discussed.
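Since ordinal response models are less familiar than ordinary regression, here is a minimal sketch of a cumulative-logit model for 1–4 rubric scores using statsmodels; the cohort coding and simulated scores are illustrative assumptions, not the study’s data.

```python
# Cumulative-logit (proportional odds) model for 1-4 rubric scores,
# comparing a pre-closure and a post-closure cohort. Simulated data only.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
cohort_2020 = rng.integers(0, 2, n)                  # 0 = fall 2019, 1 = fall 2020
latent = -0.5 * cohort_2020 + rng.logistic(size=n)   # lower latent quality in 2020
score = np.digitize(latent, bins=[-1.0, 0.0, 1.0]) + 1        # rubric scores 1..4
score = pd.Series(pd.Categorical(score, categories=[1, 2, 3, 4], ordered=True))

mod = OrderedModel(score, pd.DataFrame({"cohort_2020": cohort_2020}), distr="logit")
res = mod.fit(method="bfgs", disp=False)
print(res.params)  # a negative cohort coefficient = lower odds of higher scores
```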


Key words  Written expression; First grade; Second grade; Analytic rubric; School closures


Exploring new insights into the role of cohesive devices in written academic genres

Mahmoud Abdi Tabari, Department of English, University of Nevada

Mark D. Johnson, Department of English, East Carolina University

Abstract This study examined the use of cohesive features in 270 narrative and argumentative essays produced by 45 second language (L2) students over a semester-long writing course. Multiple regression analyses were conducted to determine the ability of the computational indices of cohesion (TAACO) variables to predict human ratings of essay quality, recognize any differences in the use of cohesive devices between narrative and argumentative genres, and ascertain which of the cohesive devices varied for each of the genres over time. The results indicated clear differences in how cohesion was signaled between the two genres. Narrative texts relied on the use of connective devices to signal cohesion, whereas argumentative texts relied on the use of global-level repetition. With regard to development, the results were less conclusive but do suggest expansion in the participants’ use of cohesive devices. These results provide important implications for L2 writing pedagogy and assessment.
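To make concrete what indices like TAACO’s measure, here is a toy illustration of two common cohesion measures, connective density and adjacent-sentence lexical overlap; the connective list and tokenization are deliberately simplified assumptions, not TAACO’s actual implementation.

```python
# Two toy cohesion indices: connective density and adjacent-sentence overlap.
import re

CONNECTIVES = {"however", "therefore", "because", "moreover", "then", "so", "but"}

def sentences(text):
    return [s for s in re.split(r"[.!?]+\s*", text.lower()) if s]

def tokens(sentence):
    return re.findall(r"[a-z']+", sentence)

def connective_density(text):
    toks = [t for s in sentences(text) for t in tokens(s)]
    return sum(t in CONNECTIVES for t in toks) / len(toks)

def adjacent_overlap(text):
    # Mean Jaccard overlap between each pair of adjacent sentences
    sents = [set(tokens(s)) for s in sentences(text)]
    pairs = list(zip(sents, sents[1:]))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

essay = "The hero left home. However, the journey was hard. The journey changed the hero."
print(connective_density(essay), adjacent_overlap(essay))
```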

Key words Cohesion; Writing quality; Automated analysis; Writing development; Genre


A move analysis of Chinese L2 student essays from the sociocognitive perspective: Genres, languages, and writing quality

Tzu-Shan Chang, Department of English, Tamkang University

Abstract  Academic writing can be laborious for Chinese L2 undergraduates, and assessing their work is challenging and complex. While the linguistic features of Chinese undergraduate writing have been re-examined and found to differ in recent studies, experimental research on move analysis through the lens of contrastive rhetoric offers insights into Chinese students’ approaches to fulfilling the rhetorical requirements of writing. From the sociocognitive perspective, this study investigated Chinese students’ essays across genres and languages, focusing on moves/steps in introductions and their relation to essay quality. A total of 264 essays by L2 novice writers were statistically analyzed. Results reveal that most participants’ writing gradually adopted elements of English academic writing, but essays were structured hybridly across genres and languages. Only 25.7% of the essays contained the three primary moves necessary for introductions, and these received the highest scores. Irrespective of language or genre, 80.3% of the essays stated their purpose in the introduction, but 19.7% did not. More than one-third of the participants began narrative essays with extensive descriptions of topic background, a feature of Chinese writing; such an opening strategy left the essays less academically valued, especially when the thesis and gap were not identified. The research findings hold implications for the teaching and assessment of academic writing.


Key words Move; Contrastive rhetoric; Sociocognitive; Academic writing in English; Chinese undergraduate; Genre; Writing quality


Assessing source use: Summary vs. reading-to-write argumentative essay

Qin Xie, The Education University of Hong Kong

Abstract What is involved in source use and how to assess it have been key concerns of research on L2 integrated writing assessment. However, raters’ ability to reliably assess the construct remains scarcely investigated, as do the relations among different types of integrated writing tasks. To partially address this gap, the present study had a sizeable sample (N = 204) of undergraduates from three Hong Kong universities write a summary and an integrated reading-to-write argumentative essay under test-like conditions. Then, focusing on the criteria of source use, it analysed raters’ application of analytical rubrics in assessing the writing outputs. Rater variability and scale structures were examined through Multi-Facet Rasch Measurement and compared across the two writing tasks. Both similarities and differences were found. In the summary task, the criteria for source use were applied similarly to the criteria for language use and discourse features. In the essay task, however, the application of the source use criteria was much less consistent. Diagnostic statistics indicate that fewer levels on the scale would be advisable. For both tasks, the criterion of source language use was found neither to fit the overall model nor to align with the criteria for source ideas or language use, indicating that this criterion may represent a trait different from the others. The statistical relations between source use and the other subconstructs of integrated writing tasks are also reported herein. Implications are discussed in the interest of refining the assessment of the source use construct in the future.
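For reference, the many-facet Rasch model underlying such analyses is conventionally written in the rating-scale form below (after Linacre); this is the standard textbook formulation rather than the study’s exact specification:

```latex
\log\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \delta_i - \alpha_j - \tau_k
```

where θ_n is test-taker n’s ability, δ_i the difficulty of criterion i, α_j the severity of rater j, and τ_k the threshold of scale step k. Inconsistent application of a criterion surfaces as misfit in the corresponding δ and τ estimates.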

Key words Source use; Integrated reading-to-write; Multi-Facet Rasch Measurement; Raters


Peer-feedback of an occluded genre in the Spanish language classroom: A case study

Ana Castaño Arques, Faculty of Medieval and Modern Languages, University of Oxford

Carmen López Ferrero, Department of Translation and Language Sciences, Universitat Pompeu Fabra

Abstract Learning how to write occluded genres is an elusive task (Swales, 1996) – even more so in the case of students writing in a second or additional language. To achieve discourse competence in the use of one of these genres, in this case the ‘statement of purpose’ typical of post-graduate programme admission forms, it is first necessary to fully understand its features at both the macrotextual and microlinguistic levels (Gillaerts, 2003; Bhatia, 2004). This qualitative study focuses on the writing of learners of Spanish as an additional language to analyse whether feedback provided by peers impacts the quality of the statements of purpose they write. Through a dual discourse analysis of their written work and in-class interactions during peer-feedback sessions, our study finds that, when properly trained and using tailored assessment tools, students can use peer-assessment profitably to improve the quality of their statements of purpose, as well as to acquire appropriate metalanguage to guide others. Our results thus reconfirm the beneficial effects of helping students to achieve feedback literacy.

Key words Peer-feedback; Occluded genres; Language learning; Writing; Spanish; Feedback literacy


Use of lexical features in high-stakes tests: Evidence from the perspectives of complexity, accuracy and fluency

Leyi Qian, School of Foreign Studies, Hefei University of Technology

Abstract  The use of lexical features in academic writing settings has sparked much interest in second language studies. However, prior research has examined the productive lexicon primarily through complexity-oriented measures in low-stakes testing. Using a CAF-based (complexity, accuracy and fluency) analysis approach, this study explores lexical features in high-stakes tests, drawing on a large corpus of TOEFL iBT independent writing samples. To this end, pools of 727 and 275 sampled essays for the Medium and High groups, respectively, and a total of 16 specific lexical items were selected to measure how learners’ writing performance varied across proficiency levels. The results indicated that although learners in general showed significant differences between group means, the correlations between the presence of lexical devices and writing quality were negligible: only two indices (hyponymy, MTLD) correlated significantly with human-rated writing quality. Implications for second language writing are also discussed.
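Of the two indices that did correlate with quality, MTLD is the more algorithmic, so a compact sketch may help; the tokenization below is simplified, and the implementation follows the published description (McCarthy & Jarvis, 2010) rather than any particular software package.

```python
# MTLD: mean length of sequential token runs that keep type-token ratio > 0.72.
def _mtld_pass(tokens, threshold=0.72):
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count <= threshold:   # a full factor is complete
            factors += 1.0
            types, count = set(), 0
    if count > 0:                             # credit the partial factor at the end
        ttr = len(types) / count
        factors += (1.0 - ttr) / (1.0 - threshold)
    return len(tokens) / factors if factors else 0.0

def mtld(tokens):
    # Average of forward and backward passes, as in the original proposal
    return (_mtld_pass(tokens) + _mtld_pass(tokens[::-1])) / 2

words = "the quick brown fox jumps over the lazy dog and the fox runs".split()
print(round(mtld(words), 2))
```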


Key words Complexity; Accuracy; Fluency; Lexical features; High-stakes tests


Predicting EFL expository writing quality with measures of lexical richness

Yang Yang, Ngee Thai Yap, Afida Mohamad Ali

Department of English Language, Faculty of Modern Languages and Communication, Universiti Putra Malaysia

Abstract This paper investigates the relationship between lexical richness and EFL expository writing quality and examines the power of lexical richness indices to predict EFL expository writing quality. Two hundred and seventy expository writing samples were drawn from the Spoken and Written English Corpus of Chinese Learners Version 2.0. The lexical richness of the writing samples was analyzed with the Lexical Complexity Analyzer, and the values of the 26 indices were calculated and used as the independent variables to predict EFL expository writing quality. In addition, the writing samples were rated by three experienced raters, and the average score across the three raters was used as the dependent variable. The results of correlation analysis show that all four measures of lexical richness, i.e., lexical density, sophistication, variation, and fluency, are significantly correlated with EFL expository writing quality, but the strength of the correlation is low to medium. The results of regression analysis show that two indices of lexical richness, i.e., Number of Words and Noun Variation, explain 38.5% of the variance in the average score of EFL expository writing (r = 0.620, p < .001). A 10-fold cross-validation indicated that the model fits the data and generalizes to unseen data.
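The analysis pattern reported here, a linear regression checked with 10-fold cross-validation, is easy to reproduce; in the sketch below the two feature names echo the study’s predictors, but the data are simulated stand-ins.

```python
# Predict rated quality from two lexical indices; estimate out-of-sample R^2
# with 10-fold cross-validation. Simulated data only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(1)
n = 270
number_of_words = rng.normal(250, 50, n)
noun_variation = rng.normal(0.5, 0.1, n)
rating = 0.01 * number_of_words + 5.0 * noun_variation + rng.normal(0, 1, n)

X = np.column_stack([number_of_words, noun_variation])
cv = KFold(n_splits=10, shuffle=True, random_state=1)
r2 = cross_val_score(LinearRegression(), X, rating, cv=cv, scoring="r2")
print(f"mean 10-fold R^2 = {r2.mean():.3f}")
```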


Key words Lexical richness; EFL writing; Expository writing; Writing quality


Comparing computer-based and paper-based rating modes in an English writing test

Yuhua Liu, Foreign Languages College, Jiangxi Normal University, Center for Linguistics and Applied Linguistics, Guangdong University of Foreign Studies

Jianda Liu, Center for Linguistics and Applied Linguistics, Guangdong University of Foreign Studies

Abstract The study utilized a mixed methods approach to compare raters’ scoring of writing performance across three modes: paper-based, on-screen marking of scanned images, and online word-processed versions. Six experienced raters evaluated the performances of 39 test-takers in each mode. The many-facet Rasch model was employed to analyze scoring differences among the rating modes, and semi-structured interviews were used to collect raters’ perceptions of performance under the three modes. The findings indicated that difficulty level ranked, in ascending order, as follows: on-screen marking of scanned images, paper-based text, and online word-processed text. Bias analysis revealed interactions between rater and mode, as well as between criterion and mode. Verbal reports from the raters highlighted four construct-irrelevant factors that could potentially influence scoring under the three modes: convenience for essay overview, word recognition, potential underestimation of word count, and raters’ preference for handwritten essays. Based on the results, recommendations were provided for rater training and essay scoring across different modes.


Key words  EFL Writing; Presentation mode; Rasch analysis; Bias


The development of teacher feedback literacy in situ: EFL writing teachers’ endeavor to human-computer-AWE integral feedback innovation

Peisha Wu, College of Liberal Arts, Shantou University, Faculty of Education, University of Macau

Shulin Yu, Faculty of Education, University of Macau

Yanqi Luo, College of Liberal Arts, Shantou University, Faculty of Humanities, The Hong Kong Polytechnic University

Abstract While recent years have witnessed increasing theoretical and empirical elaboration on the construct of teacher feedback literacy in higher education and second language education, little research has investigated the development of teacher feedback literacy, especially when teachers collaborate in an attempt to improve feedback strategies with technology. To fill this gap, the present study examined two L2 writing teachers who took the initiative to create, update, and implement a human-computer-automated writing evaluation (AWE) integral feedback platform, and how this feedback innovation process affected their feedback literacy development. The analysis of multiple sources of data, including semi-structured interviews, stimulated recalls, class observations, and artifacts, revealed that the two teachers approached the innovation by orchestrating mediating tools, interacting dialogically with social agents, reflecting critically, and crossing boundaries. Through this process, teacher feedback literacy developed at varying rates across different aspects. Specifically, positive changes occurred in the teachers’ feedback thinking as well as their feedback giving and sharing practices. However, their feedback literacy as enacted in classroom practice did not show equally salient gains. Possible reasons are discussed regarding the scope of the feedback innovation and contextual constraints, and implications are offered. The study underscores L2 writing teacher feedback literacy as a developmental phenomenon molded by situated social practice.


Key words Teacher feedback literacy; Second language writing; Teacher feedback; Computer-mediated feedback; Writing assessment


The mediating role of curriculum configuration on teacher’s L2 writing assessment literacy and practices in embedded French writing

Zhibin Shan, Faculty of French and Francophone Studies, Beijing Foreign Studies University, Laboratory STIH (Sens, Texte, Informatique, Histoire), Faculty of Arts and Humanities, Sorbonne University

Hua Yang, School of English for Specific Purposes, Beijing Foreign Studies University

Hao Xu, National Research Centre for Foreign Language Education, Beijing Foreign Studies University

Abstract This qualitative study explores the influence of curriculum configuration, as an instructional context, on teachers’ L2 writing assessment literacy and practices. Specifically, the study examined how a group of university French language teachers in China drew on their assessment literacy to assess students’ embedded writing situated in an integrated language course for beginning French learners. Findings show that a curriculum configuration that prioritised students’ acquisition of language knowledge over writing development seemed to lead the teachers to adjust their writing assessment so as to reconcile students’ acquisition of language knowledge with their development of writing skills. Whilst the teachers’ assessment literacy did not seem to be affected, their assessment practices showed a sequenced “split” between assessment of language issues and assessment of writing issues, which were separated and then ordered according to their perceived importance. Teachers’ beliefs about learners’ general learning needs, rather than their needs in learning L2 writing, seemed to determine how teachers navigated the sequence of the split assessments to order their priorities.


Key words Writing assessment literacy; Writing assessment practices; Embedded writing; Instructional context; French language teacher

The development and validation of a scale on L2 writing teacher feedback literacy

Icy Lee, National Institute of Education, Nanyang Technological University

Mehmet Karaca, Independent Researcher, Research in Teacher Education and Material Development (R-TEAM) Group

Serhat Inan, Division of Curriculum and Instruction, Hacettepe University, Research in Teacher Education and Material Development (R-TEAM) Group

Abstract Feedback literate teachers play a central role in promoting students’ writing performance. In L2 writing, however, there is a paucity of research on teacher feedback literacy, and instruments that investigate L2 writing teacher feedback literacy are virtually non-existent. Heeding the call for research on scale development to measure teacher feedback literacy, this two-phase study develops and validates a feedback literacy scale (FLS) for teachers to illuminate this budding concept in L2 writing. The factor structure of the 34-item FLS was determined through exploratory factor analysis (EFA) with the participation of 223 writing teachers. The results revealed a three-factor solution, namely Perceived Knowledge, Values, and Perceived Skills. A confirmatory factor analysis (CFA) was then employed to verify the structure of the scale and its three sub-scales, based on a sample of another 208 writing teachers. The model was found to fit the data well (e.g., RMSEA = 0.052, 90% CI [0.045, 0.059]), indicating that the FLS yields psychometrically reliable and valid results and is a robust scale for measuring the self-reported feedback literacy of L2 writing teachers. In light of these findings, the factor structure and sub-scales of the FLS are discussed. Practical implications for teachers, teacher trainers and teacher education programs, as well as implications for feedback literacy research, are provided.
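For readers new to the EFA step in such scale validation, here is a minimal sketch assuming the open-source factor_analyzer package; the simulated item responses stand in for the 223 teachers’ actual ratings.

```python
# Exploratory factor analysis of a 34-item scale with a 3-factor solution.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(7)
n_teachers, n_items, n_factors = 223, 34, 3
latent = rng.normal(size=(n_teachers, n_factors))   # three latent factors
# Sparse loading pattern: each factor drives a subset of items
loadings = rng.uniform(0.4, 0.8, (n_factors, n_items))
loadings *= rng.random((n_factors, n_items)) < 0.4
items = pd.DataFrame(latent @ loadings + rng.normal(0, 1, (n_teachers, n_items)))

fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
fa.fit(items)
print(fa.loadings_.round(2))        # which items load on which factor
print(fa.get_factor_variance()[2])  # cumulative variance explained
```

The CFA step would then fix this loading pattern and test its fit (e.g., RMSEA) on the second sample.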


Key words Feedback; Feedback literacy; L2 writing; Writing teachers; Scale development


Developing EFL teachers’ feedback literacy for research and publication purposes through intra- and inter-disciplinary collaborations: A multiple-case study

Yaqiong Cui, Hui Jin, Yuan Gao

Department of Foreign Languages, The University of Chinese Academy of Sciences

Abstract The growing pressure to publish in prestigious English-medium journals and scholars’ increasing need to participate in international research communities have given rise to initiatives such as English for Academic Purposes (EAP) courses and English for Research and Publication Purposes (ERPP) feedback services. Providing ERPP feedback on students’ research papers makes it necessary for EFL teachers, who often give this feedback, to develop ERPP feedback literacy. Given the paucity of research on this topic, the purpose of this study is to explore how EFL teachers can develop their ERPP feedback literacy through feedback provision practices. Drawing upon teacher-provided written feedback collected from four EFL teachers over five years (2017–2022), semi-structured interviews, and teachers’ verbalizations while giving feedback together with stimulated recalls, this multiple-case study shows that through intra- and inter-disciplinary collaborations when giving ERPP feedback, EFL teachers perceived an overall improvement in their ERPP feedback literacy. In particular, they moved beyond targeting surface-level linguistic errors and developed awareness of disciplinary knowledge construction in ERPP writing. We argue that intra- and inter-disciplinary collaborations can help develop EFL teachers’ ERPP feedback literacy and hence strengthen their EAP teacher identity.

Key words EFL teachers; ERPP; Academic writing centers; Feedback literacy development


Feedback seeking by first-year Chinese international students: Understanding practices and challenges

Jiming Zhou, College of Foreign Languages and Literature, Fudan University

Chris Deneen, Education Futures, University of South Australia

Joanna Tai, Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University

Phillip Dawson, Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University

Abstract Feedback seeking is an emerging focus in higher education and writing assessment research. Feedback may be sought directly from others or through observing cues in learning contexts. Existing research suggests feedback seeking may be essential to student feedback literacy, but few studies explore this issue in relation to specific high-need populations. This study examined Chinese international students’ practices and perceptions of feedback seeking in writing assessments during their first year at an Australian university. Thirty-seven participants from three faculties participated in 14 individual interviews and seven focus groups. Data were analysed using thematic analysis protocols. Encountered challenges were grouped into epistemological, ontological, and practical dimensions. Findings indicate that students drew upon various sources for inquiry and monitoring at the stages of planning and preparation, revising before submission, and reacting after receiving results. Implications include the importance of creating low-threshold opportunities for facilitating learners’ feedback seeking. Findings also suggest that educators’ enhanced understanding of students’ educational identity and epistemological beliefs may improve students’ feedback-seeking practices.


Key words Feedback seeking; Writing assessment; Chinese international students; Student feedback literacy


Are self-compassionate writers more feedback literate? Exploring undergraduates’ perceptions of feedback constructiveness

Carlton J. Fong, Texas State University

Diane L. Schallert, The University of Texas at Austin

Zachary H. Williamson, The University of Texas at Austin

Shengjie Lin, Yale University

Kyle M. Williams, Travis Early College High School

Young Won Kim, University of Washington

Abstract Upon receiving constructive feedback, students may experience unpleasant emotions from critical comments about their writing or the realization that their work is unfinished. Few studies have focused on how learners are able to manage such emotions, one aspect of feedback literacy. Regulating these emotions may involve practicing self-kindness and avoiding self-judgment, two subcomponents of self-compassion. Self-compassionate individuals may move past any feelings of failure and direct their attention to what needs improvement. The question addressed was whether undergraduates’ level of self-compassion would affect their perceptions of the constructiveness of researcher-created feedback statements. At a U.S. southwest university, students (N = 508) rated the constructiveness of 56 statements that had been created to represent different levels of constructiveness in feedback to a fictitious writing assignment. Results indicated that students’ self-kindness positively predicted feedback constructiveness, whereas self-judgment was a negative predictor. Additionally, students higher in self-compassion (high in self-kindness in one analysis and those low in self-judgment in a second) rated the least constructive statements as more constructive than did students low in self-compassion. We end with implications for feedback literacy and writing assessment research and for application of self-compassion in the context of feedback on writing.

Key words Feedback literacy; Self-compassion; College writing; Constructive feedback; Self-kindness; Self-judgment


Developing feedback literacy through dialogue-supported performances of multi-draft writing in a postgraduate class

Peng Wu, Chunlin Lei

Shanghai University of International Business and Economics

Abstract  Although a wealth of studies has explored how to develop student feedback literacy, the development of postgraduates’ feedback literacy remains under-explored. To bridge this gap, this study investigates the characteristics of feedback literacy development among postgraduates in an academic writing class. Based on the theoretical framework proposed by Yu and Liu (2021), the study designed two iterations and drew data from a survey, students’ multi-draft writing, peer feedback, students’ reflective journals, and discussion scenarios in dialogues. It was found that, by the end of iteration two, students had developed a dynamic view of feedback, enriched their strategies for resolving cognitive conflicts, and become emotionally resilient to feedback. The study also showed that dialogue helped develop student feedback literacy and resolve cognitive conflicts by clarifying misconceptions, formulating revision actions, and co-constructing new ideas about writing and assessment. This study has pedagogical implications for designing feedback processes that facilitate student feedback literacy in academic writing classes.

Key words Feedback literacy; Cognitive conflicts; Dialogues; Peer feedback; Postgraduates


Classroom writing assessment and feedback practices: A new materialist encounter

Kioumars Razavipour, Department of English Language and Literature, Shahid Chamran University of Ahvaz

Abstract The mainstream approach to teacher assessment literacy seems to be founded on a (post)positivist paradigm leading to an autonomous model of literacy comprised of generic knowledge and skills. This paradigm obscures the non-cognitive, embodied, and affective dimensions of assessment practices. In this conceptual inquiry, I use the New Materialist philosophy to make sense of writing assessment literacy and feedback practices. In New Materialisms, the materiality of everything is emphasized, ontology is flat, reality is becoming, agency is relational, knowledge is entangled practice, and language is a resource in communicative assemblage. Using the noted conceptual tools, I try to provide a materialized conceptualization of writing assessment and feedback practice arguing that from a New Materialist perspective, feedback practices are an assemblage of rhetoric, IELTS, institution, materiality, art, cross-lingual resources, social relations, affect, and embodiment.


Key words  Writing assessment literacy; New materialisms; Assemblage; Feedback practices; Flat ontology


Investigating the dimensions and determinants of children’s narrative writing in Korean

Sarah Sok, Program in Global Languages and Communication, University of California

Hye Won Shin, Department of English Language Education, Korea University

Abstract This study investigated the dimensionality and determinants of Korean-speaking sixth-grade students’ narrative writing. The sample included 113 Korean children in one elementary school, with a mean age of 12.48 years. The Story Composition component of the Test of Written Language (Hammill & Larsen, 2009) was employed to evaluate writing quality, and the Writing Assessment Measure rubric (Dunsmuir et al., 2015) was used to assess seven component skills of narrative writing: handwriting, spelling, punctuation, sentence structure and grammar, vocabulary, organization and overall structure, and ideas. Results of a confirmatory factor analysis supported both a one-factor and a three-factor model of narrative writing among Korean students. Further, hierarchical linear modeling analysis revealed that better handwriting and sentence structure and grammar skills significantly predicted higher writing quality. Gender also uniquely contributed to students’ narrative writing quality, with females performing better than males. Overall, the effects on narrative writing quality were largest for gender, followed by handwriting, and then by sentence structure and grammar.
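The hierarchical model here can be approximated with a two-level mixed model; the sketch below uses statsmodels’ MixedLM with simulated stand-ins for the study’s measures, and the classroom grouping variable is an illustrative assumption.

```python
# Two-level model: writing quality predicted by handwriting, grammar, and
# gender, with a random intercept per classroom. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 113
df = pd.DataFrame({
    "classroom": rng.integers(0, 5, n),
    "handwriting": rng.normal(0, 1, n),
    "grammar": rng.normal(0, 1, n),
    "female": rng.integers(0, 2, n),
})
df["quality"] = (0.3 * df.handwriting + 0.3 * df.grammar
                 + 0.4 * df.female + rng.normal(0, 1, n))

result = smf.mixedlm("quality ~ handwriting + grammar + female", df,
                     groups=df["classroom"]).fit()
print(result.summary())  # fixed effects mirror the predictors of writing quality
```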


Key words  Korean narrative writing; Dimensionality; Writing quality; Handwriting; Grammar; Gender


A non-Western adaptation of the Situated Academic Writing Self-Efficacy Scale (SAWSES)

Ceymi Doenyas, Yıldız Technical University, Department of Education, Psychological Counseling and Guidance

Zeynep Tunay Gül, Yıldız Technical University, Department of Education, Educational Sciences, Curriculum and Instruction

Bülent Alcı, Yıldız Technical University, Department of Education, Educational Sciences, Curriculum and Instruction

Abstract A key parameter for academic and professional success is academic writing skill, which involves not only following rules and codes but also engaging cognitive skills to satisfy complex task demands. Academic writing is challenging for most students. However, this critical skill received little research attention until recently, as students are inherently expected to possess it. The present study carries this research focus on academic writing to a non-Western population in which strong academic competition and high stakes make the skill relevant to psychological well-being. The Situated Academic Writing Self-Efficacy Scale (Mitchell et al., 2021) was adapted into Turkish, and its validity and reliability were tested in a sample of Turkish undergraduate students (n = 300, 65% female). CFA results confirmed the scale’s original structure consisting of 3 dimensions (Writing Essentials, Relational-Reflective Writing, and Creative Identity) and 16 items. A Cronbach’s alpha of .895 revealed high internal consistency. The Turkish version of the scale, which is also its first non-Western adaptation, emerged as a reliable tool to measure situated (contextual) academic writing self-efficacy. This scale can help researchers, practitioners, teachers, and students alike determine students’ perceived self-efficacy in academic writing, and how this perception can be strengthened, based on their total and sub-dimension scores. The scale thereby holds the potential to enhance this quality in low-scoring individuals, with possible benefits for their academic and professional success and psychological well-being, and to guide research into academic writing self-efficacy and motivation in native and foreign languages and in today’s digital environment.
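The internal-consistency statistic reported here is straightforward to compute directly from an items matrix; the data below are simulated stand-ins for the 16-item scale.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(5)
true_efficacy = rng.normal(size=(300, 1))
responses = true_efficacy + rng.normal(0, 0.8, size=(300, 16))  # 16 scale items
print(round(cronbach_alpha(responses), 3))
```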


Key words Academic writing; Scale adaptation; Self-efficacy; University students; Writing efficacy


What skills are being assessed? Evaluating L2 Chinese essays written by hand and on a computer keyboard

Jianling Liao, School of International Letters & Cultures, Arizona State University

Abstract As writing on computers has become increasingly common in L2 assessment and learning activities, it is crucial to understand the mediation effects induced by the computer on writing performance and to compare them with those of handwriting. This is especially important for L2 Chinese learning, given that handwriting characters has been claimed to play an essential role in the development of Chinese literacy. The current study extends the scope of writing modality investigation by examining the linguistic, metadiscourse, and organizational properties of handwritten and typed essays by L2 Chinese learners. Furthermore, predictors of holistic ratings of writing quality were identified in the two modes to understand whether the focal points of raters’ evaluations may differ between the two mediums. The results yielded moderate to strong evidence about how the two modalities allow for distinct affordances, interact differently with the L2 (i.e., Chinese), and consequently affect writing performance in various dimensions.


Key words Handwriting; Typed writing; Keyboarding; L2 writing; L2 Chinese; Writing modality


Assessing writing in fourth grade: Rhetorical specification effects on text quality

Ilka Tabea Fladung, University of Cologne, Department of German Language and Literature II (IDSL II)

Sophie Gruhn, University of Cologne, Department of German Language and Literature II (IDSL II)

Veronika Österbauer, Federal Institute for Quality Assurance of the Austrian School System (IQS), Salzburg (Institution Subordinate to the Austrian Federal Ministry of Education, Science and Research)

Jörg Jost, University of Cologne, Department of German Language and Literature II (IDSL II)

Abstract In writing instruction, specifying writing assignments in terms of purpose, audience, and medium is considered good practice. Earlier studies that found positive effects of such rhetorical specification were usually conducted with older participants. The benefits of rhetorical specification for novice writers are not yet clear, especially in the context of assessing writing. Thus, this study examined the effects of rhetorical specification on the text quality of descriptions in an assessment prompt for fourth graders. Austrian fourth graders were assessed with the same paper-and-pencil L1 writing prompt but were randomly assigned within classrooms to one of three conditions: high-level rhetorical specification (n = 78), medium-level rhetorical specification (n = 44), or no rhetorical specification (n = 44). The texts written by participants were rated holistically and analytically. The analysis revealed no differences between texts written under the three conditions except for a single analytic indicator of text quality: texts written in response to medium-level rhetorical specification scored higher on the criterion Adaptation to the audience than texts written under the other two conditions. The pros and cons of (high-level) rhetorical specification and good assessment practice with novice writers are discussed.


Key words Writing prompts; Rhetorical specification; Task specification; Profilierung; L1 German; Assessment; Primary Education


Chinese character matters!: An examination of linguistic accuracy in writing performances on the HSK test

Xun Yan, Department of Linguistics, University of Illinois at Urbana-Champaign Beckman Institute for Advanced Science and Technology

Jiani Lin, Department of East Asian Languages and Cultures, University of Illinois Urbana-Champaign

Abstract The orthographic and morphological system of Mandarin Chinese requires considerable time and multiple developmental stages for learners to acquire. This source of difficulty may present unique challenges and opportunities for writing assessment in Chinese as a Second Language (CSL). This study employed a corpus-based approach to examine the accuracy features of 10,750 essays written by test-takers from 17 first language (L1) backgrounds on the HSK test. Based on both orthographic types and economic-geopolitical factors, we classified test-taker L1s into three groups. We first factor-analyzed a comprehensive array of error types to identify the underlying dimensions of Chinese writing accuracy. Then, dimension scores were included in regression models to predict HSK writing scores for the different L1 groups. The results revealed five dimensions related to syntactic, morphological, and lexical errors. Among them, the dimensions covering character- and word-level errors were stronger predictors of HSK scores, although the discrimination power was stronger for test-takers from L1s that are orthographically dissimilar and economic-geopolitically distant from Mandarin Chinese. These findings suggest that Chinese morphology (i.e., the acquisition of characters and how characters form words) constitutes a unique source of difficulty for L2 learners. We argue that morphological elements should be an important subconstruct in Chinese writing assessments.

Key words Linguistic accuracy; HSK; Languages other than English; Less commonly taught languages


Beyond literacy and competency – The effects of raters’ perceived uncertainty on assessment of writing

Mari Honko, Centre for Applied Language Studies, University of Jyväskylä

Reeta Neittaanmäki, Centre for Applied Language Studies, University of Jyväskylä

Scott Jarvis, Northern Arizona University

Ari Huhta, Centre for Applied Language Studies, University of Jyväskylä

Abstract This study investigated how common raters’ experiences of uncertainty in high-stakes testing are before, during, and after the rating of writing performances, what these feelings of uncertainty are, and what reasons might underlie such feelings. We also examined if uncertainty was related to raters’ rating experience or to the quality of their ratings. The data were gathered from the writing raters (n = 23) in the Finnish National Certificates of Proficiency, a standardized Finnish high-stakes language examination. The data comprise 12,118 ratings as well as raters’ survey responses and notes during rating sessions. The responses were analyzed by using thematic content analysis and the ratings by descriptive statistics and Many-Facets Rasch analyses. The results show that uncertainty is variable and individual, and that even highly experienced raters can feel unsure about (some of) their ratings. However, uncertainty was not related to rating quality (consistency or severity/leniency). Nor did uncertainty diminish with growing experience. Uncertainty during actual ratings was typically associated with the characteristics of the rated performances but also with other, more general and rater-related or situational factors. Other reasons external to the rating session were also identified for uncertainty, such as those related to the raters themselves. An analysis of the double-rated performances shows that although similar performance-related reasons seemed to cause uncertainty for different raters, their uncertainty was largely associated with different test-takers’ performances. While uncertainty can be seen as a natural part of holistic ratings in high-stakes tests, the study shows that even if uncertainty is not associated with the quality of ratings, we should constantly seek ways to address uncertainty in language testing, for example by developing rating scales and rater training. This may make raters’ work easier and less burdensome.


Key words Rater behavior; Uncertainty; Confidence; Rating quality; Language assessment


Using ChatGPT for second language writing: Pitfalls and potentials

Jessie S. Barrot, College of Education, Arts and Sciences, National University

Abstract Recent advances in artificial intelligence have given rise to the use of chatbots as a viable tool for language learning. One such tool is ChatGPT, which engages users in natural and human-like interactive experiences. While ChatGPT has the potential to be an effective tutor and source of language input, some academics have expressed concerns about its impact on writing pedagogy and academic integrity. Thus, this tech review aims to explore the potential benefits and challenges of using ChatGPT for second language (L2) writing. This review concludes with some recommendations for L2 writing classroom practices.

Key words ChatGPT; Chatbots; Immersive technology; L2 writing; Computer-assisted language learning


Collaborating with ChatGPT in argumentative writing classrooms

Yanfang Su, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University

Yun Lin, College of Chinese Language and Culture, Jinan University

Chun Lai, Faculty of Education, The University of Hong Kong

Abstract Composing high-quality argumentative writing requires consideration of the dialogical, structural, and linguistic aspects of argumentation. However, finding an ideal peer student to practice argumentation skills can be challenging, and providing feedback on lower-level language concerns and complex issues related to content and organization can be time-consuming for teachers. In this article, we suggest the integration of ChatGPT, a chatbot released by OpenAI in 2022, into argumentative writing classrooms as a promising solution. The article explores the possibilities of ChatGPT in assisting students with tasks such as outline preparation, content revision, proofreading, and post-writing reflection. We also acknowledge the limitations of ChatGPT and offer suggestions for future considerations in teaching and research.
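As a concrete starting point, the kind of feedback request the authors describe can be scripted against the openai package’s 2023-era ChatCompletion interface (later versions of the library changed this API); the prompt wording here is an illustrative assumption, not the authors’ recommended prompt.

```python
# Ask ChatGPT for formative feedback on an argumentative essay draft.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

essay = "Schools should ban homework because students need rest..."
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a writing tutor. Comment on the claim, evidence, "
                    "and organization of this argumentative essay draft. "
                    "Do not rewrite it."},
        {"role": "user", "content": essay},
    ],
)
print(response["choices"][0]["message"]["content"])
```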


Key words Argumentative writing; ChatGPT; Chatbots


Specifications grading to promote student engagement, motivation and learning: Possibilities and cautions

Brian C. Graves, University of North Carolina Asheville

Abstract This article explores the affordances, constraints, and future possibilities of specifications (or “specs”) grading, in conversation with related research into its classroom use across several disciplines, albeit mostly outside of writing instruction. Although specs grading may promote the same (and problematic) normative standards of language, labor, and performance associated with traditional approaches, elements of this system may also be leveraged in creative and flexible ways that invite students into critical and collaborative engagement with assessment itself as an integral part of writing, and of learning, even and especially in the context of AI writing tools like OpenAI’s ChatGPT.

Key words Assessment; Specifications grading; Contract grading; AI writing; Feedback; Motivation


Connecting form with function: Model texts for bilingual learners’ narrative writing

Yingmin Wang, School of Foreign Studies, South China Normal University

Manfei Xu, School of Foreign Studies, South China Normal University

Yishi Jiang, School of Foreign Languages, Sun Yat-sen University

Tan Jin, School of Foreign Studies, South China Normal University

Abstract The linguistic influence of the first language (L1), difficulty finding appropriate words or expressions in the target language, and a lack of genre awareness are major challenges for second language (L2) writers. For many English as a Foreign Language (EFL) students in secondary educational settings, narrative writing is an important genre in the English education curriculum. In this paper, we introduce an Expert Narrative Texts Corpus that provides EFL learners whose L1 is Chinese with model texts to address these challenges in their narrative writing. Three key features of the tool are identified, namely, cross-linguistic comparisons for bilingual writers, model texts by expert writers, and learning to search. Taking these three key features into consideration, this paper also explores the pedagogical values of the corpus, through which bilingual writers can draw upon their L1 language resources, explore appropriate expressions within authentic example sentences, and search for both lexical items and rhetorical patterns.
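The “learning to search” feature amounts to concordancing; a toy key-word-in-context (KWIC) search is sketched below, with a two-sentence corpus standing in for the Expert Narrative Texts Corpus.

```python
# Toy KWIC search: show each hit of a query word with surrounding context.
import re

def kwic(corpus_sentences, query, window=4):
    hits = []
    for sent in corpus_sentences:
        words = re.findall(r"\w+", sent.lower())
        for i, w in enumerate(words):
            if w == query:
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                hits.append(f"{left:>30} [{w}] {right}")
    return hits

corpus = ["Her heart pounded as the door creaked open.",
          "The door slammed shut behind him, and silence fell."]
print("\n".join(kwic(corpus, "door")))
```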


Key words Bilingual writers; Model texts; Expert writing; Corpus search


Using Peerceptiv to support AI-based online writing assessment across the disciplines

Albert W. Li, University of California

Abstract Peerceptiv is a peer assessment tool developed by learning sciences researchers to help students demonstrate disciplinary knowledge through writing feedback practices. This review of Peerceptiv describes its key features while comparing it with other writing feedback tools and suggesting possibilities and limitations of using it to support AI-based online writing assessment across the disciplines. Future considerations regarding the use of Peerceptiv in assessing, teaching, and researching online writing are discussed.


Key words Peerceptiv; Feedback; Assessment tool; Online writing assessment; AI-based writing; Writing across the disciplines


Automated analysis of cohesive features in L2 writing: Examining effects of task complexity and task repetition

Mahmoud Abdi Tabari, Department of English, University of Nevada

Mark D. Johnson, Department of English, East Carolina University

Mahsa Farahanynia, Department of English Language and Literature, Allameh Tabatabai University

Abstract Informed by task-based approaches to language teaching, recent L2 writing research has sought to determine the effect of task complexity features on the complexity, accuracy, and fluency of written L2 production (Johnson, 2017). However, two areas of task-informed research have received scant attention: a) the effect of task complexity features on L2 writers’ use of cohesive devices and b) the effect of task repetition as a form of implicit planning. Furthermore, interaction effects of task complexity and task repetition on different types of cohesive devices in L2 writing have not been explored. To bridge these gaps, this study examines the effects of resource-directing task complexity features (Robinson, 2005), task repetition (Lambert et al., 2017), and their interaction on L2 writers’ use of cohesive devices. Ninety-six participants composed two argumentative essays in counterbalanced order, one in response to a simple task and one in response to a complex task, and then completed a task difficulty questionnaire. After an interval of one week, the participants repeated each task. Essays were then analyzed using the Tool for Automatic Analysis of Cohesion, or TAACO (Crossley et al., 2018), for indices found to be predictors of human ratings of essay organization (Abdi Tabari & Johnson, 2023). A factorial repeated-measures MANOVA revealed limited effects of task repetition on the participants’ use of cohesive devices. Rather, task complexity features had a more robust effect on their use of textual and local cohesive devices.


Key words Automated analysis; Cohesive devices; L2 writing; Task-related variables; TBLT
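
For readers unfamiliar with the indices TAACO reports, the sketch below computes one representative cohesion measure, average content-word overlap between adjacent sentences, in plain Python. It is a simplified illustration under assumed tokenization rules, not TAACO's implementation.

# Minimal sketch of one cohesion index of the kind TAACO reports:
# average word overlap between adjacent sentences, ignoring a small
# (assumed) stopword list. Not TAACO's actual algorithm.
import re

STOPWORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "in", "is", "are"}

def sentence_words(sentence):
    return {w for w in re.findall(r"[a-z']+", sentence.lower())
            if w not in STOPWORDS}

def adjacent_overlap(text):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    pairs = list(zip(sentences, sentences[1:]))
    if not pairs:
        return 0.0
    scores = []
    for s1, s2 in pairs:
        w1, w2 = sentence_words(s1), sentence_words(s2)
        denom = min(len(w1), len(w2)) or 1
        scores.append(len(w1 & w2) / denom)
    return sum(scores) / len(scores)

print(adjacent_overlap("The task was complex. Task complexity affected cohesion."))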


The effects of task complexity and language aptitude on EFL learners’ writing performance

Chun-yan Liu, Foreign Languages College, Jiangxi Normal University

Li-ting Sun, Jiayuguan No.1 Middle School

Yan He, Foreign Languages College, Jiangxi Normal University, Office of Academic Studies, Xiamen Shuangshi Middle School

Nian-zhe Wu, Education College, Jiangxi Normal University

Abstract The present study investigated the effects of task complexity and language aptitude on upper-intermediate EFL learners’ argumentative writing performance in terms of syntactic complexity, lexical complexity, accuracy, and fluency. The findings demonstrated that increasing task complexity, manipulated through the number of elements and the degree of reasoning along the resource-directing dimension, enhanced syntactic complexity and lexical diversity, and that correlations between language aptitude (mainly number learning and spelling clues) and writing performance were low. Moreover, task complexity and language aptitude (and its components) predicted writing performance in terms of intraclausal syntactic complexity, lexical diversity, and fluency. These findings lend partial support to the Cognition Hypothesis and the Aptitude Complexes Hypothesis in L2 writing. Theoretical, methodological, and pedagogical implications for task design and implementation, as well as for task-based assessment in language education programs, are discussed.


Key words  Task complexity; Language aptitude; Writing performance
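
The predictor analysis reported in the abstract above can be pictured with a small regression sketch. The code below uses statsmodels with randomly generated placeholder data; the variable names and effect sizes are assumptions for illustration only, not the study's data or model.

# Minimal sketch: regressing a writing-performance measure (here, a
# fluency score) on task complexity and an aptitude subscore.
# All data are random placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
task_complex = rng.integers(0, 2, n)   # 0 = simple task, 1 = complex task
aptitude = rng.normal(size=n)          # e.g., an assumed number-learning subscore
fluency = 10 + 2 * task_complex + 1.5 * aptitude + rng.normal(size=n)

X = sm.add_constant(np.column_stack([task_complex, aptitude]))
model = sm.OLS(fluency, X).fit()
print(model.params)      # which predictors carry weight
print(model.rsquared)    # proportion of variance explained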


Profiling support in literacy development: Use of natural language processing to identify learning needs in higher education

Patricio A. Pino Castillo, Universidad de Concepción, Facultad de Humanidades y Arte, Departamento de Español, Vicerrectoría Académica de Investigación y Postgrado, Universidad Santo Tomás

Christian Soto, Universidad de Concepción, Facultad de Humanidades y Arte, Departamento de Español

Rodrigo A. Asún, Facultad de Ciencias Sociales, Universidad de Chile

Fernando Gutiérrez, Universidad de Concepción, Facultad de Humanidades y Arte, Departamento de Español

Abstract Reading and writing are core activities in higher education, by means of which students learn to participate in specialized discourses. Although there is consensus on the conceptualization of reading comprehension, its measurement, and its development, the same is not true for written expression. Writing complexity has been found to improve with schooling, but there are ample differences between literacy practices at school and at university that require extra attention when diagnosing students’ compositions. The present study set out to test a natural language processing tool for building domain profiles of writing complexity among first-year students at a private university. The processing of texts yielded 49 indices which, after exploratory factor analysis and theoretical discussion, gave rise to 4 dimensions of complexity explaining 52.3% of the variance: lexical richness, syntactic complexity, informative text structure, and specialized language use. Significant differences were found between more and less skilled writers in the aggregated scores, lexical richness, and syntactic complexity. Interestingly, more and less skilled writers did not differ significantly in the more overarching aspects of writing. We discuss how this technology can help identify students’ needs in the more surface-level aspects of writing complexity, which have been shown to improve through different instructional strategies.


Key words Writing complexity; Writing assessment; Natural language processing; Cognitive models; Automated assessment
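
The factor-analytic step described in the abstract above can be sketched as follows with scikit-learn. The random matrix stands in for the study's 49 NLP indices; the factor count of four mirrors the reported solution, but everything else is a placeholder assumption.

# Minimal sketch of the dimension-reduction step: exploratory factor
# analysis over a matrix of text-complexity indices. Placeholder data;
# the real study used 49 NLP indices computed from student texts.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 49))            # 200 texts x 49 indices (placeholder)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each index

fa = FactorAnalysis(n_components=4, rotation="varimax")
scores = fa.fit_transform(X)              # per-text scores on 4 dimensions

# Loadings show which indices define each dimension (e.g., lexical richness).
loadings = fa.components_.T               # shape: (49 indices, 4 factors)
top = np.argsort(-np.abs(loadings[:, 0]))[:5]
print("indices loading highest on factor 1:", top)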


Student engagement with peer feedback in L2 writing: Insights from reflective journaling and revising practices

Zhe (Victor) Zhang, City University of Macau

Ken Hyland, University of East Anglia

Abstract The torrent of research into peer feedback in academic writing in recent years has largely overlooked the student revision process: how individual students engage with this feedback to revise their texts and why they make certain changes. In other words, little is known about the cognitive dimension of student engagement with peer feedback during revision. Drawing on multiple student drafts, peer feedback on these drafts, reflective journals, and interviews with students, this study examines how two L2 students engaged with peer feedback to conduct revisions. We found that the two participants differed considerably in their revision processes and identified two patterns of engagement: deep engagement, characterized by self-regulated revising practices, and surface engagement, characterized by other-regulated revision operations. We were interested not only in students’ revision operations in their drafts but also in the reflective practices recorded in the journals their teacher had assigned. The study suggests that effective student engagement with peer feedback largely depends on how students use cognitive and metacognitive strategies in the revision process. We recommend that teachers provide instructional scaffolding to facilitate students’ cognitive engagement with peer feedback on L2 writing.


Key words Student engagement; Peer feedback; Student revisions; L2 writing


Developments in learners’ affective engagement with written peer feedback: The affordances of in situ translanguaging

Hooman Saeli, The University of Tennessee

Payam Rahmati, Oklahoma State University

Abstract The literature on peer feedback has yet to investigate the effects of translanguaging on learners’ affective engagement with feedback. This case study therefore 1) explored the affordances of translanguaging between two English learners in Iran, and 2) investigated developments in these learners’ affective engagement with peer feedback mediated by translanguaging. The analysis of eight samples of student writing with peer feedback, stimulated recalls, and reflection journals showed that translanguaging mediated the learners’ affective engagement with peer feedback. Using their first language, Persian, helped improve their affective engagement with peer feedback on content-related issues. As a sociocultural theory-motivated analysis showed, the learners’ affective engagement with peer feedback was nonlinear, dynamic, and evolving over the course of six weeks. Indeed, the participants deemed Persian an effective tool for providing meta-feedback on content-related and, at times, grammar issues, leading to more positive affective engagement with peer feedback. We suggest that teachers consider allowing students to use their first language during peer feedback to enhance this affective engagement.


Key words Written feedback; Peer feedback; Affective engagement with feedback; Sociocultural theory


Growth mindset and emotions in tandem: Their effects on L2 writing performance based on writers’ proficiency levels

Choo Mui Cheong, Faculty of Education, The University of Hong Kong

Yuan Yao, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, College of Foreign Languages, Wulingshan K-12 Educational Research Center, Huaihua University

Jiahuan Zhang, Faculty of Education, The University of Hong Kong

Abstract A growth mindset (GM), defined as an individual’s perception that their intellectual ability is malleable, has been the subject of extensive research attention, as it can facilitate learning in many contexts. GM has been found to have more pronounced positive effects on students with lower-level writing proficiency. Emotions have also been found to play a significant role in second language (L2) writing. We conducted an innovative investigation of the relationships between GM, emotions related to writing (enjoyment and anxiety), and writing performance. The results of our study involving 589 Chinese 12th-graders and L2 writing tasks showed that GM was positively associated with enjoyment and negatively associated with anxiety. When assessing students grouped according to their writing performance (high, middle, and low), we found an indirect positive path from GM to writing performance via anxiety in the middle-level group and via enjoyment in the low-level group. The findings suggest that GM can promote enjoyment and mitigate anxiety, thereby facilitating L2 writing performance. The pedagogical implications are that teachers should encourage students to develop a GM and foster their social–emotional learning.


Key words EFL writing; Growth mindset; Writing enjoyment; Writing anxiety; Writing proficiency


Understanding EFL students’ feedback literacy development in academic writing: A longitudinal case study

Fuhui Zhang, School of Foreign Languages, Northeast Normal University

Hui-Tzu Min, Department of Foreign Languages and Literature, National Cheng Kung University

Ping He, Jinpu New Area Tonghe Middle School

Sisi Chen, School of Foreign Languages, Northeast Normal University

Shan Ren, School of Foreign Languages, Northeast Normal University

Abstract Although many studies have elaborated the nature and importance of feedback literacy, longitudinal research on the development of EFL students’ feedback literacy is scarce. Informed by an evidence-based framework of feedback literacy improvement in the context of academic writing (Yu & Liu, 2021), this study explored how three first-year students of English as a Foreign Language (EFL) developed feedback literacy in understanding, evaluating, reflecting on, and using peer feedback, and traced their emotional responses during a semester-long online writing program. By analyzing the students’ voiced peer feedback, written drafts, immediate reflections after peer feedback, monthly reflection journals, and end-of-term stimulated recalls and interviews, we found positive changes in their cognitive, behavioral, and emotional capacities in response to peer feedback, as well as growing knowledge of assessment criteria through self-reflection after peer feedback and the incorporation of peer feedback and reflections into revision. However, the students did not develop at the same pace, owing to personal factors. Variations in the students’ reflections and proactive actions were related to the development of feedback literacy, as evidenced in their revision improvement. Implications for the pedagogical facilitation of peer feedback literacy and the use of peer feedback in revision in academic writing are discussed.


Key words Feedback literacy via peer feedback; Self-reflections after peer feedback; Emotional response to peer feedback; EFL writing; Academic writing


Feedback literacy in writing research and teaching: Advancing L2 WCF research agendas

Jill A. Boggs, Keir Hardie Bldg College of Arts & Humanities, Swansea University

Rosa M. Manchón, University of Murcia

Abstract Research on corrective feedback (CF) has developed from its original focus on identifying which type of CF is most effective for developing L2 learners’ grammatical accuracy to focusing on how learners use CF. Underpinning this is the assumption that learners know what to do with CF when they receive it. The concept of “feedback literacy” challenges this assumption. Carless and Boud (2018) define feedback literacy as “the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies” (p. 1316). Our intention in this paper is to reflect on how theoretical and empirical work on feedback literacy can advance L2 written corrective feedback (WCF) research agendas. Central to our proposal is the under-researched role of experience, particularly L2 writers’ educational backgrounds and their experience with L1 and L2 writing. We further argue that how learners were taught L1 writing, and how the L1 educational culture and society value writing, can affect how learners approach L2 writing tasks and the accompanying feedback. Implications of this inclusive view of the learner for future research and pedagogy are discussed.


Key words Feedback literacy; Written corrective feedback; L2 writing; Second language acquisition; Individual differences; Writing experience

Assessing Korean writing ability through a scenario-based assessment approach

Soo Hyoung Joo, Teachers College, Columbia University

Yuna Seong, Teachers College, Columbia University

Joowon Suh, Columbia University

Ji-Young Jung, Columbia University

James E. Purpura, Teachers College, Columbia University

Abstract Scenario-based assessment (SBA), which advocates measuring a broadened construct of writing ability, has recently been used to assess writing. While SBA for writing has been explored in ESL contexts (e.g., Banerjee, 2019; Purpura, 2021), it has rarely been used or examined in languages other than English, especially those typologically distant from English, such as Korean. This study explores the feasibility of designing a scenario-based Korean writing assessment (K-SBA). A pilot study was conducted with 51 participants from a Korean as a foreign language program at a US university. Through a goal-oriented scenario of a study abroad program in South Korea, examinees were presented with a collaborative problem-solving task in which they were expected to learn about two potential class trip destinations and write a summary of the pros and cons of each destination based on what they learned. The results indicated that the K-SBA was psychometrically sound, providing adequate evidence of reliability. It elicited construct-relevant performances reflecting features unique to the Korean language, such as sociolinguistic competence through the use of honorifics and rhetorical control through a range of cohesive devices. Additionally, variations in examinee performance were observed across the different course levels.


Key words Scenario-based assessment; Learning-oriented assessment; Situated language proficiency; Korean language assessment
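
As one example of the kind of reliability evidence mentioned in the abstract above, the sketch below computes Cronbach's alpha over a small placeholder matrix of item or rating scores. The abstract does not specify the study's psychometric procedures, so this is a generic illustration, not the K-SBA analysis.

# Minimal sketch of one common reliability check for a writing assessment:
# Cronbach's alpha over examinee-by-item scores. Placeholder data only.
import numpy as np

def cronbach_alpha(scores):
    # scores: (n_examinees, n_items)
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_vars / total_var)

ratings = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]])
print(round(cronbach_alpha(ratings), 2))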


Insights from lexical and syntactic analyses of a French for academic purposes assessment

Randy Appel, Waseda University, Global Education Center

Angel Arias, Carleton University, School of Linguistics and Language Studies

Beverly Baker, University of Ottawa, Official Languages and Bilingualism Institute

Guillaume Loignon, Université du Québec à Montréal, Education and pedagogy

Abstract With the objective of improving the assessment of writing in language instruction, we examine the lexical and syntactic features of two corpora of high- and low-scoring French texts from the Test du Certificat de Compétence en Langue Seconde (Second Language Certification Test; TCCLS) at the University of Ottawa (uOttawa). We first situate the test in its local context, demonstrating how our research objectives arise from specific needs to improve student outcomes. We then describe the creation of two corpora of high- and low-performing test takers, followed by lexical bundle (LB) analyses (Phase 1) and further linguistic complexity analyses with a French-language tool (Phase 2). Results indicate that high-level writers used more LBs and borrowed more text from the prompt than low-level writers did. In addition, specific elements of linguistic complexity were identified, suggesting that high-level writers produced texts that were lexically richer and more syntactically advanced. We discuss the importance of these findings for improving writing instruction, as well as the challenges of adapting tools and approaches traditionally associated with English to French.

Key words L2 French; Academic writing; Lexical bundles; Automated analyses; Lexical complexity
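
A lexical bundle count of the kind used in Phase 1 of the study above can be sketched in a few lines of Python. The frequency and dispersion thresholds, tokenization, and toy French sentences below are illustrative assumptions, not the study's procedure.

# Minimal sketch: count recurring 4-word sequences (lexical bundles)
# across a corpus, keeping only bundles that meet assumed frequency
# and dispersion thresholds.
import re
from collections import Counter

def lexical_bundles(texts, n=4, min_freq=2, min_texts=2):
    freq, dispersion = Counter(), Counter()
    for text in texts:
        tokens = re.findall(r"[a-zà-ÿ']+", text.lower())
        grams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        freq.update(grams)             # total frequency across the corpus
        dispersion.update(set(grams))  # number of texts each bundle appears in
    return {g: c for g, c in freq.items()
            if c >= min_freq and dispersion[g] >= min_texts}

corpus = ["Il est important de noter que le texte est clair.",
          "Il est important de souligner que le plan est solide."]
print(lexical_bundles(corpus))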


About the Journal

Assessing Writing is a refereed international journal providing a forum for ideas, research, and practice on the assessment of written language. Assessing Writing publishes articles, book reviews, conference reports, and academic exchanges concerning writing assessments of all kinds, including traditional ('direct' and standardised forms of) testing of writing, alternative performance assessments (such as portfolios), workplace sampling, and classroom assessment. The journal focuses on all stages of the writing assessment process, including needs evaluation, assessment creation, implementation, validation, and test development; it aims to value all perspectives on writing assessment as process, product, and politics (test takers and raters; test developers and agencies; educational administrations; and political motivations). The journal is interested in review essays of key issues in the theory and practice of writing assessment.




Official website:

https://www.journals.elsevier.com/assessing-writing

Source: the ASSESSING WRITING official website



