AUTOMATED ESSAY SCORING VERSUS HUMAN SCORING: A RELIABILITY CHECK

Dublin Core

Title

AUTOMATED ESSAY SCORING VERSUS HUMAN SCORING: A RELIABILITY CHECK

Author

DOĞAN, Ali

Abstract

New instruments are continually being added to the assessment toolkit of ELT. The question of whether writing assessment in ELT can be carried out with E-Rater® was first addressed in 1996, and this technology, now commonly referred to as "Automated Essay Scoring" (AES), particularly in America and Europe, has since developed steadily into an established part of ELT assessment. The purpose of this study, conducted at The School of Foreign Languages at Zirve University, is to find out whether AES can supersede the writing assessment system currently used there. The participants were a group of 50 students at level C, the equivalent of B1 on the CEFR scale. The quantitative study began with the assessment of essays written by these C-level students by three human raters and by E-Rater®. The study found that the writing assessment currently used at The School of Foreign Languages at Zirve University requires more effort and more time and is more expensive. AES, which proved more practicable, was therefore suggested for use at The School of Foreign Languages at Zirve University.
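
The abstract describes comparing scores from three human raters against E-Rater® scores. The paper does not state which agreement statistic was used; in AES research, human-machine agreement on ordinal essay scores is often reported with quadratic weighted kappa (QWK). The Python sketch below is an illustrative example of that statistic only, not the study's method: the function, the 1-6 score scale, and the score vectors are all hypothetical.

import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Agreement between two raters on an ordinal score scale.

    Returns 1.0 for perfect agreement and 0.0 for chance-level agreement.
    """
    a = np.asarray(rater_a) - min_score  # shift scores to 0-based indices
    b = np.asarray(rater_b) - min_score
    n = max_score - min_score + 1        # number of possible score values

    # Observed joint distribution of the two raters' scores.
    observed = np.zeros((n, n))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()

    # Expected joint distribution under independence
    # (outer product of the two marginal distributions).
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))

    # Quadratic disagreement weights: larger score gaps cost more.
    idx = np.arange(n)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical essay scores on a 1-6 scale (illustrative only,
# not data from the study).
human_scores = [4, 3, 5, 2, 4, 3, 5, 4]
e_rater_scores = [4, 3, 4, 2, 5, 3, 5, 4]
print(f"QWK = {quadratic_weighted_kappa(human_scores, e_rater_scores, 1, 6):.3f}")

In a human-machine reliability check of this kind, QWK would typically be computed between each human rater pair and between each human rater and the automated scorer; comparable values suggest the automated scorer agrees with humans about as well as humans agree with each other.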

Type

Conference or Workshop Item
Peer reviewed

Date

2014

Extent

3375