The Effects of Explanations in Automated Essay Scoring Systems on Student Trust and Motivation

Research output: Contribution to journal › Article › Academic › peer-review

21 Citations (Scopus)
353 Downloads (Pure)

Abstract

Ethical considerations, including transparency, play an important role when using artificial intelligence (AI) in education. Explainable AI has been proposed as a way to provide more insight into the inner workings of AI algorithms. However, carefully designed user studies on how to design explanations for AI in education remain limited. The current study aimed to identify the effect of explanations of an automated essay scoring system on students’ trust and motivation. The explanations were designed through a needs-elicitation study with students, combined with guidelines and frameworks from explainable AI. Two types of explanations were tested: full-text global explanations and an accuracy statement. The results showed that neither explanation had an effect on student trust or motivation compared to providing no explanation. Interestingly, the grade provided by the system, and especially the difference between the student’s self-estimated grade and the system grade, had a large influence. Hence, it is important to account for the effect of the system’s outcome (here: the grade) when considering the effect of explanations of AI in education.
Original language: English
Pages (from-to): 37-53
Number of pages: 17
Journal: Journal of Learning Analytics
Volume: 10
Issue number: 1
DOIs
Publication status: Published - 12 Mar 2023

Keywords

  • XAI
  • automated essay-scoring
  • trust
  • motivation
  • human-computer interaction
