A comparison of two analytical evaluation methods for educational computer games for young children

    Research output: Contribution to journal › Article › Academic › peer-review


    Abstract

    In this paper we describe a comparison of two analytical evaluation methods for educational computer games for young children. The methods compared are the Structured Expert Evaluation Method (SEEM) and the Combined Heuristic Evaluation (HE), which combines Nielsen’s usability heuristics with the fun-related concepts of Malone and Lepper, covering both usability and fun in children’s computer games. To verify SEEM’s relative quality, a study was set up in which adult evaluators predicted problems in computer games. Outcomes are compared on thoroughness (whether the analytical method finds all problems), validity (whether the problems it uncovers are likely to be real) and appropriateness (whether the method is applied correctly). The results show that both the thoroughness and the validity of SEEM are higher than those of the Combined HE. The appropriateness scores indicate that SEEM gives evaluators more guidance when predicting problems than the Combined HE does.
    Original language: English
    Pages (from-to): 129-140
    Journal: Cognition, Technology & Work
    Volume: 10
    Issue number: 2
    DOIs
    Publication status: Published - 2008
