Unsupervised understanding of location and illumination changes in egocentric videos

A. Betancourt, N. Diaz-Rodriguez, E.I. Barakova, L. Marcenaro, G.W.M. Rauterberg, C.S. Regazzoni

    Research output: Contribution to journal › Article › Academic › peer-review


    Abstract

    Wearable cameras stand out as one of the most promising devices for the coming years, and as a consequence, the demand for computer algorithms that automatically understand the videos recorded with them is increasing quickly. Automatic understanding of these videos is not an easy task, and their mobile nature implies significant challenges, such as changing light conditions and the unrestricted locations recorded. This paper proposes an unsupervised strategy based on global features and manifold learning to endow wearable cameras with contextual information about the light conditions and the locations captured. Results show that non-linear manifold methods can capture contextual patterns from global features without requiring large computational resources. As an application case, the proposed strategy is used as a switching mechanism to improve hand detection in egocentric videos.
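
    The sketch below illustrates the kind of pipeline the abstract describes: a cheap global descriptor per frame, a non-linear manifold embedding, and an unsupervised clustering of the embedding into discrete illumination/location contexts whose labels could drive a switching mechanism. The specific choices here (HSV color histograms, Isomap, k-means, the video path) are assumptions for illustration, not the paper's exact method.

    ```python
    # Minimal sketch: global per-frame features -> non-linear manifold
    # embedding -> unsupervised context labels. HSV histograms, Isomap,
    # and KMeans are illustrative assumptions; the paper's actual
    # features and manifold method may differ.
    import numpy as np
    import cv2
    from sklearn.manifold import Isomap
    from sklearn.cluster import KMeans

    def global_feature(frame_bgr, bins=16):
        """Global HSV color histogram as a cheap per-frame descriptor."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                            [0, 180, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def context_labels(frames, n_components=2, n_contexts=4):
        """Embed global features with a non-linear manifold method and
        cluster the low-dimensional embedding into discrete contexts."""
        X = np.stack([global_feature(f) for f in frames])
        Z = Isomap(n_components=n_components).fit_transform(X)
        return KMeans(n_clusters=n_contexts, n_init=10).fit_predict(Z)

    if __name__ == "__main__":
        # Hypothetical usage: sample frames from an egocentric video.
        cap = cv2.VideoCapture("egocentric_video.mp4")  # hypothetical path
        frames = []
        ok, frame = cap.read()
        while ok:
            frames.append(frame)
            ok, frame = cap.read()
        cap.release()
        labels = context_labels(frames[::30])  # roughly 1 fps sampling
        print(labels)  # context id per sampled frame
    ```

    In a switching setup, each context id would select a detector (e.g., a hand-detection model) tuned to that illumination or location, rather than using a single model for all conditions.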
    Original language: English
    Pages (from-to): 414-429
    Number of pages: 18
    Journal: Pervasive and Mobile Computing
    Volume: 40
    DOIs
    Publication status: Published - Sept 2017

    Keywords

    • Wearable camera
    • Machine learning
    • First person vision
    • Egocentric videos
    • Unsupervised learning

