AUTOMATED ESSAY SCORING VERSUS HUMAN SCORING: A RELIABILITY CHECK

DOĞAN, Ali (2014) AUTOMATED ESSAY SCORING VERSUS HUMAN SCORING: A RELIABILITY CHECK. In: Foreign Language Teaching and Applied Linguistics, May, Sarajevo.

Full text not available from this repository.
Official URL: http://fltal.ibu.edu.ba/

Abstract

New instruments are continually being added to the repertoire of assessment tools in English Language Teaching (ELT). The question of whether writing assessment in ELT can be carried out with E-Rater® was first addressed in 1996, and systems of this kind, now commonly known as Automated Essay Scoring (AES) systems, particularly in America and Europe, have since become an established part of ELT assessment through steady development. The purpose of this study is to find out whether AES can replace the writing assessment procedure currently used at the School of Foreign Languages at Zirve University, where the study was conducted. The participants were a group of 50 students at level C, the equivalent of CEFR B1. In the quantitative phase of the study, essays written by these C-level students were scored by three human raters and by E-Rater®. The study found that the writing assessment procedure currently in use at the School of Foreign Languages costs more energy and time and is more expensive than AES. AES, which proved more practicable, was therefore recommended for use at the School of Foreign Languages at Zirve University.
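
The "reliability check" of the title rests on comparing the three human raters' scores with E-Rater®'s scores for the same essays. A minimal Python sketch of how such human-machine agreement is commonly quantified follows; the essay scores, the 1-6 scale, and the rater-averaging step are illustrative assumptions, not data from the study.

    # A minimal sketch of the kind of reliability check the abstract describes:
    # comparing three human raters' scores with an automated system's scores.
    # All scores below are hypothetical placeholders, not the study's data.

    from statistics import mean

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length score lists."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # Hypothetical scores on a 1-6 scale for five essays.
    human_raters = [
        [4, 3, 5, 2, 4],  # rater 1
        [4, 4, 5, 3, 4],  # rater 2
        [3, 3, 4, 2, 5],  # rater 3
    ]
    machine = [4, 3, 5, 3, 4]  # automated scores for the same five essays

    # Average the three human raters to get one human score per essay.
    human_avg = [mean(scores) for scores in zip(*human_raters)]

    # Human-machine agreement: correlation plus exact/adjacent agreement,
    # two measures commonly reported in automated-scoring studies.
    r = pearson_r(human_avg, machine)
    exact = sum(round(h) == m for h, m in zip(human_avg, machine)) / len(machine)
    adjacent = sum(abs(h - m) <= 1 for h, m in zip(human_avg, machine)) / len(machine)

    print(f"Pearson r (human avg vs. machine): {r:.2f}")
    print(f"Exact agreement: {exact:.0%}, adjacent agreement: {adjacent:.0%}")

A full reliability analysis would typically also report agreement among the human raters themselves, so the human-machine figures can be judged against that human baseline.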

Item Type: Conference or Workshop Item (Paper)
Subjects: P Language and Literature > PE English
Divisions: J-FLTAL
Depositing User: Mrs. Emina Mekic
Date Deposited: 22 Nov 2016 11:50
Last Modified: 22 Nov 2016 11:50
URI: http://eprints.ibu.edu.ba/id/eprint/3375
