Abstract
Ethical considerations, including transparency, play an important role when using artificial intelligence (AI) in education. Explainable AI has been proposed as a way to provide more insight into the inner workings of AI algorithms. However, carefully designed user studies on how to design explanations for AI in education remain limited. The current study aimed to identify the effect of explanations of an automated essay scoring system on students’ trust and motivation. The explanations were designed using a needs-elicitation study with students, in combination with guidelines and frameworks from explainable AI. Two types of explanations were tested: full-text global explanations and an accuracy statement. The results showed that neither type of explanation had an effect on student trust or motivation compared to no explanations. Interestingly, the grade provided by the system, and especially the difference between the student’s self-estimated grade and the system grade, had a large influence. Hence, it is important to consider the effects of the outcome of the system (here, the grade) when studying the effect of explanations of AI in education.
| Original language | English |
|---|---|
| Pages (from-to) | 37-53 |
| Number of pages | 17 |
| Journal | Journal of Learning Analytics |
| Volume | 10 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 12 Mar 2023 |
Keywords
- XAI
- automated essay-scoring
- trust
- motivation
- human-computer interaction