Predicting mood from punctual emotion annotations on videos

C. Katsimerou, I.E.J. Heynderickx, J.A. Redi

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

A smart environment designed to adapt to a user's affective state should be able to unobtrusively decipher that user's underlying mood. Great effort has been devoted to automatic punctual emotion recognition from visual input, yet little has been done to recognize longer-lasting affective states such as mood. Assuming the effectiveness of emotion recognition algorithms, we propose a model for estimating mood from a known sequence of punctual emotions. To validate our model experimentally, we rely on the human annotations of two well-established databases: VAM and HUMAINE. We perform two analyses: the first serves as a proof of concept and tests whether punctual emotions cluster around the mood in the emotion space. The results indicate that emotion annotations that are continuous in time and value facilitate mood estimation, as opposed to discrete emotion annotations scattered randomly within the video timespan. The second analysis explores factors that account for mood recognition from emotions, by examining how individual human coders perceive the underlying mood of a person. A moving average function with exponential discount of past emotions achieves mood prediction accuracy above 60 percent, which is higher than chance level and higher than mutual human agreement.
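To make the mood estimation strategy concrete, the sketch below implements a moving average with exponential discount of past emotions over a sequence of continuous (valence, arousal) annotations. This is a minimal illustration under assumed conventions: the function name, the discount parameter alpha and its default value are hypothetical and are not taken from the paper.

```python
import numpy as np

def mood_from_emotions(emotions, alpha=0.9):
    """Estimate mood as an exponentially discounted moving average of
    punctual emotion annotations.

    emotions : sequence of (valence, arousal) pairs, ordered from oldest
               to most recent (shape (T, 2)).
    alpha    : hypothetical discount factor in (0, 1); smaller values
               forget older emotions faster.
    """
    emotions = np.asarray(emotions, dtype=float)
    T = emotions.shape[0]
    # Weight for the annotation at step t is alpha**(T-1-t): the most
    # recent emotion gets weight 1, older ones are discounted.
    weights = alpha ** np.arange(T - 1, -1, -1)
    return (weights[:, None] * emotions).sum(axis=0) / weights.sum()

# Example: a sequence drifting toward positive valence and low arousal
sequence = [(-0.2, 0.4), (0.1, 0.2), (0.3, 0.1), (0.5, 0.0)]
print(mood_from_emotions(sequence))  # mood estimate in valence-arousal space
```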

Original language: English
Pages (from-to): 179-192
Number of pages: 14
Journal: IEEE Transactions on Affective Computing
Volume: 6
Issue number: 2
DOIs
Publication status: Published - 1 Apr 2015

Keywords

  • affective computing
  • automatic mood recognition
  • emotion recognition
  • pervasive technology
