Abstract
A smart environment designed to adapt to a user's affective state should be able to unobtrusively decipher that user's underlying mood. Great effort has been devoted to automatic punctual emotion recognition from visual input, but little has been done to recognize longer-lasting affective states such as mood. Assuming the availability of effective emotion recognition algorithms, we propose a model for estimating mood from a known sequence of punctual emotions. To validate our model experimentally, we rely on the human annotations of two well-established databases, VAM and HUMAINE. We perform two analyses: the first serves as a proof of concept and tests whether punctual emotions cluster around the mood in the emotion space. The results indicate that emotion annotations that are continuous in time and value facilitate mood estimation, as opposed to discrete emotion annotations scattered randomly within the video timespan. The second analysis explores the factors that account for mood recognition from emotions by examining how individual human coders perceive the underlying mood of a person. A moving average function that exponentially discounts past emotions achieves mood prediction accuracy above 60 percent, exceeding both chance level and mutual human agreement.
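The abstract does not spell out the exact formulation, but a minimal sketch of such an exponentially discounted moving average over a sequence of emotion points might look like the following. The two-dimensional valence-arousal representation, the `discount` value, and the sign-of-valence mood label are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np

def estimate_mood(emotions, discount=0.9):
    """Estimate mood as an exponentially discounted moving average of
    punctual emotions.

    `emotions` is a time-ordered sequence of (valence, arousal) points,
    oldest first; `discount` controls how quickly past emotions are
    forgotten (both the 2-D representation and the value 0.9 are
    assumptions made for this sketch).
    """
    emotions = np.asarray(emotions, dtype=float)
    n = len(emotions)
    # Weight w_t = discount^(n-1-t): the most recent emotion gets weight 1,
    # older emotions are exponentially discounted.
    weights = discount ** np.arange(n - 1, -1, -1)
    return (weights[:, None] * emotions).sum(axis=0) / weights.sum()

# Toy usage: a mostly negative, low-arousal emotion history suggests a
# negative mood (here read off the sign of the averaged valence).
history = [(-0.4, -0.2), (-0.3, -0.1), (-0.5, -0.3), (0.1, 0.0)]
valence, arousal = estimate_mood(history)
print("estimated mood point:", valence, arousal)
print("mood label:", "positive" if valence > 0 else "negative")
```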
| Original language | English |
|---|---|
| Pages (from-to) | 179-192 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Affective Computing |
| Volume | 6 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 1 Apr 2015 |
Keywords
- affective computing
- automatic mood recognition
- emotion recognition
- pervasive technology