Boundary conditions of generalizing predictive models for academic performance: within cohort versus within course

Sonja Kleter (Corresponding author), Uwe Matzat, Rianne Conijn

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Much learning analytics research has focused on factors that influence the generalizability of predictive models for academic performance. The degree to which a model generalizes across courses may depend on aspects such as the similarity of the course setup, the course material, the student cohort, or the teacher. Which of these contextual factors affect generalizability, and to what extent, is as yet unclear. The current study explicitly compares the within-course and within-cohort generalizability of predictive models. The study considered 66 behavioral indicators commonly used in the literature. Indicators concerning the frequency and duration of online study time, access to study material, time management, assignments and quizzes, and weekly measures were extracted from the university's learning management system. Numerical and binary predictive models were generated via recursive feature selection. Model generalizability was evaluated in terms of both model stability and model performance. The results showed that model stability was better for numerical models generalized within course than for models generalized within cohort or across both course and cohort. Nevertheless, model stability was low for the binary models and only moderate for the numerical models under all conditions. Concerning model performance, the increase in estimation error after generalization depended on the initial model performance for models generalized within course and within cohort. Contrary to previous research, we found no difference in performance between generalization within cohort and within course. We suspect that the performance reduction after any form of model generalization depends on the initial performance.
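The abstract mentions generating predictive models via recursive feature selection over LMS behavioral indicators. As a minimal sketch of that general technique (not the paper's actual pipeline), backward recursive feature elimination repeatedly drops the weakest feature until a target number remains. Here the scoring rule is a toy one, the absolute Pearson correlation of each indicator with the outcome, and all feature names and data are hypothetical:

```python
# Minimal sketch of backward recursive feature elimination (RFE).
# Assumptions: a toy scoring rule (absolute Pearson correlation with the
# outcome) stands in for model-based feature importance; the LMS
# indicators and grades below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / ((vx * vy) ** 0.5) if vx and vy else 0.0

def rfe(features, target, keep):
    """Drop the lowest-scoring feature each round until `keep` remain."""
    selected = dict(features)
    while len(selected) > keep:
        scores = {name: abs(pearson(vals, target))
                  for name, vals in selected.items()}
        weakest = min(scores, key=scores.get)
        del selected[weakest]
    return sorted(selected)

# Hypothetical indicators: only online study time tracks the grades here.
features = {
    "online_minutes": [10, 30, 50, 70, 90],
    "quiz_attempts":  [5, 4, 5, 4, 5],
    "logins":         [1, 2, 1, 2, 1],
}
grades = [4.0, 5.5, 6.5, 8.0, 9.0]
print(rfe(features, grades, keep=1))  # → ['online_minutes']
```

In practice, feature importance would come from the fitted predictive model itself (as in scikit-learn's `RFE`), and the selection would be cross-validated per course or cohort; the correlation scorer here only keeps the sketch self-contained.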

Original language: English
Article number: 10634816
Pages (from-to): 2183-2194
Number of pages: 12
Journal: IEEE Transactions on Learning Technologies
Volume: 17
DOIs
Publication status: Published - 30 Oct 2024

Keywords

  • Academic performance
  • Context modeling
  • Data models
  • Learning analytics (LA)
  • Learning management system (LMS)
  • Model generalizability
  • Numerical models
  • Numerical stability
  • Performance prediction
  • Predictive models
  • Stability criteria

