Extraction of Evaluative Elements for Cross-prompt Automated Essay Scoring

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic

Abstract

Automated Essay Scoring (AES) systems attempt to evaluate student-written essays automatically with machine learning models. Existing AES studies are mostly designed prompt-specifically with supervised learning, which limits their applicability in real-life scenarios. We extract evaluative elements from a source set of essays as axes of a vector space, applying dimensionality reduction by Principal Component Analysis (PCA). We then transfer these axes to a different target set of essays for score prediction. A simplified cross-prompt binary clustering task, dividing essays into high- and low-scored groups, shows an acceptable level of accuracy.
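The pipeline sketched in the abstract (PCA axes from source essays, transferred to target essays, then a binary high/low split) could look roughly as follows. This is a minimal illustration with synthetic feature vectors, not the authors' implementation; the number of components, the feature representation, and the median-split clustering are all assumptions.

```python
import numpy as np

# Synthetic essay feature vectors (rows = essays, cols = features),
# standing in for real bag-of-words or embedding representations.
rng = np.random.default_rng(0)
source = rng.normal(size=(20, 50))   # source-prompt essays
target = rng.normal(size=(10, 50))   # target-prompt essays

# 1. Centre the source essays and extract principal axes via SVD (PCA).
mean = source.mean(axis=0)
_, _, vt = np.linalg.svd(source - mean, full_matrices=False)
axes = vt[:5]  # top-5 "evaluative elements" (dimensionality is an assumption)

# 2. Cross-prompt transfer: project target essays onto source-derived axes.
projected = (target - mean) @ axes.T

# 3. Simplified binary clustering: split on the first principal component.
labels = (projected[:, 0] > np.median(projected[:, 0])).astype(int)
```

In a real setting, `labels` would be compared against binarized human scores to measure cross-prompt accuracy.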
Original language: English
Title of host publication: Forum on Information Technology 2022, FIT 2022
Pages: 267-270
Publication status: Published - 2022
Externally published: Yes
Event: Forum on Information Technology 2022, FIT 2022 - Yokohama, Japan
Duration: 13 Sept 2022 - 15 Sept 2022

Conference

Conference: Forum on Information Technology 2022, FIT 2022
Abbreviated title: FIT 2022
Country/Territory: Japan
City: Yokohama
Period: 13/09/22 - 15/09/22
