With the advancement of artificial intelligence and natural language processing technologies, automated writing evaluation (AWE) systems have gradually been integrated into English as a Second Language (ESL) writing instruction, showing potential to enhance feedback efficiency and support writing revision. However, disputes remain over their instructional effectiveness and the boundaries of their application. This study analyzes the pedagogical value and limitations of AWE in ESL writing instruction through a review of empirical studies, systematic reviews, and meta-analyses published over the past 20 years. The findings show that AWE has a relatively stable positive effect on improving surface-level linguistic features such as grammar, spelling, and punctuation; it can support multiple rounds of revision and, to a certain degree, improve learners' motivation and autonomy in writing. However, its validity and reliability in assessing higher-order writing abilities, such as content development, argumentative logic, and appropriateness, remain uncertain. This study argues that AWE should be viewed as a supplementary tool for formative assessment, functioning most effectively within instructional models grounded in human–AI collaboration. These findings may inform the rational integration of AWE tools into ESL writing classrooms and offer directions for future research in this field.