Crowdsourcing empathetic intelligence: the case of the annotation of EMMA database for emotion and mood recognition

C. Katsimerou, J. Albeda, A. Huldtgren, I.E.J. Heynderickx, J.A. Redi

Research output: Contribution to journal › Article › Academic › peer-review

7 Citations (Scopus)

Abstract

Unobtrusive recognition of the user's mood is an essential capability for affect-adaptive systems. Mood is a subtle, long-term affective state, often misrecognized even by humans. The challenge of training a machine to recognize it from, for example, a video of the user is significant, and begins with the lack of ground truth for supervised learning. Existing affective databases consist mainly of short videos, annotated in terms of expressed emotions rather than mood. In the very few cases where perceived mood annotations exist, their reliability is questionable due to the subjectivity of mood estimation and the small number of coders involved. In this work, we introduce a new database for mood recognition from video. Our database contains 180 long, acted videos, depicting typical daily scenarios and subtle facial and bodily expressions. The videos cover three visual modalities (face, body, Kinect data), and are annotated in terms of emotions (via G-trace) and mood (via the Self-Assessment Manikin and the AffectButton). To annotate the database exhaustively, we exploit crowdsourcing to reach an extensive number of nonexpert coders. We validate the reliability of our crowdsourced annotations by (1) adopting a number of criteria to filter out unreliable coders, and (2) comparing the annotations of a subset of our videos with those collected in a controlled lab setting.
Original language: English
Article number: 51
Pages (from-to): 1-27
Number of pages: 27
Journal: ACM Transactions on Intelligent Systems and Technology
Volume: 7
Issue number: 4
DOIs
Publication status: Published - 2016
