Synthesizing and reconstructing missing sensory modalities in behavioral context recognition

Research output: Contribution to journal › Journal article › Academic › peer review

16 Citations (Scopus)
110 Downloads (Pure)

Abstract

Detection of human activities, along with the associated context, is of key importance for various application areas, including assisted living and well-being. To predict a user's context in daily-life situations, a system must learn from multimodal data that are often imbalanced and noisy, with missing values. In real-life conditions, a model is also likely to encounter missing sensors (e.g., a user not wearing a smartwatch), and it fails to infer the context if any of the modalities used for training is missing. In this paper, we propose a method based on an adversarial autoencoder for handling missing sensory features and synthesizing realistic samples. We empirically demonstrate the capability of our method, in comparison with classical approaches for filling in missing values, on a large-scale activity recognition dataset collected in-the-wild. We develop a fully-connected classification network by extending the encoder and systematically evaluate its multi-label classification performance when several modalities are missing. Furthermore, we show class-conditional generation of artificial data and analyze it visually and quantitatively on a context classification task, demonstrating the strong generative power of adversarial autoencoders.
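The core idea lends itself to a compact illustration. Below is a minimal sketch, not the paper's released code, of an adversarial autoencoder that imputes masked sensor features: the encoder and decoder reconstruct the full feature vector from a masked input, while a discriminator regularizes the latent codes toward a Gaussian prior. All network sizes, dimensions, and the masking scheme are illustrative assumptions.

```python
# Minimal adversarial-autoencoder sketch for missing-modality imputation
# (illustrative only; dimensions and architecture are assumptions).
import torch
import torch.nn as nn

INPUT_DIM, LATENT_DIM = 225, 64  # hypothetical feature / latent sizes

encoder = nn.Sequential(nn.Linear(INPUT_DIM, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, INPUT_DIM))
discriminator = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

recon_loss = nn.MSELoss()
adv_loss = nn.BCEWithLogitsLoss()
opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def train_step(x, mask):
    """x: (batch, INPUT_DIM) features; mask: 1 where observed, 0 where missing."""
    x_in = x * mask                      # zero out the "missing" modalities
    z = encoder(x_in)
    x_hat = decoder(z)

    # 1) Reconstruction: recover the full signal, including masked features.
    loss_rec = recon_loss(x_hat, x)

    # 2) Generator objective: make encoded codes look like prior samples.
    loss_gen = adv_loss(discriminator(z), torch.ones(x.size(0), 1))
    opt_ae.zero_grad()
    (loss_rec + loss_gen).backward()
    opt_ae.step()

    # 3) Discriminator: prior samples are "real", encoded codes are "fake".
    z_prior = torch.randn(x.size(0), LATENT_DIM)
    loss_d = adv_loss(discriminator(z_prior), torch.ones(x.size(0), 1)) \
           + adv_loss(discriminator(encoder(x_in).detach()), torch.zeros(x.size(0), 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    return loss_rec.item(), loss_d.item()

# Toy usage: a random batch with one simulated missing modality block.
x = torch.randn(32, INPUT_DIM)
mask = torch.ones_like(x)
mask[:, :50] = 0                         # pretend the first modality is absent
print(train_step(x, mask))
```

At inference time, the trained decoder's output at the masked positions serves as the imputed feature values, and the encoder can be extended with a fully-connected classification head, as the abstract describes.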

Original language: English
Article number: 2967
Number of pages: 20
Journal: Sensors
Volume: 18
Issue number: 9
DOIs
Status: Published - 6 Sep 2018

