Who is where? Matching people in video to wearable acceleration during crowded mingling events

Laura Cabrera-Quiros, Hayley Hung

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

10 Citations (Scopus)


We address the challenging problem of associating acceleration data from a wearable sensor with the corresponding spatio-temporal region of a person in video during crowded mingling scenarios. This is an important first step for multisensor behavior analysis using these two modalities. Clearly, as the number of people in a scene increases, there is also a need to robustly and automatically associate a region of the video with each person's device. We propose a hierarchical association approach which exploits the spatial context of the scene and significantly outperforms state-of-the-art approaches. Moreover, we present experiments on matching from 3 to more than 130 acceleration and video streams which, to our knowledge, is significantly larger than prior works, where only up to 5 device streams are associated.
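To illustrate the kind of association problem the abstract describes (the paper's own hierarchical method and features are not reproduced here), the following is a minimal sketch: each wearable acceleration stream is matched one-to-one to a video motion track by maximising total pairwise Pearson correlation. The function name, the use of raw correlation as the affinity, and the brute-force search over permutations are all illustrative assumptions; a practical system would use stronger features and the Hungarian algorithm for larger numbers of streams.

```python
import itertools

import numpy as np


def associate_streams(accel, video_motion):
    """Match wearable acceleration streams to video motion tracks.

    accel: (n, T) array of per-device acceleration magnitudes.
    video_motion: (n, T) array of per-track motion magnitudes
                  estimated from video (e.g. optical-flow energy).
    Returns a tuple ``perm`` where ``perm[i]`` is the track index
    assigned to device i.

    Illustrative sketch only: brute-force over permutations is
    exponential in n; real systems would use the Hungarian algorithm.
    """
    n = accel.shape[0]
    # Pairwise Pearson correlation between each device and each track.
    corr = np.array([[np.corrcoef(a, v)[0, 1] for v in video_motion]
                     for a in accel])
    # One-to-one assignment maximising the total correlation.
    return max(itertools.permutations(range(n)),
               key=lambda p: sum(corr[i, p[i]] for i in range(n)))
```

For example, if the video tracks are noisy, shuffled copies of the device signals, the recovered permutation undoes the shuffle.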

Original language: English
Title of host publication: MM 2016 - Proceedings of the 2016 ACM Multimedia Conference
Publisher: Association for Computing Machinery, Inc
Number of pages: 5
ISBN (Electronic): 9781450336031
Publication status: Published - 2016
Event: 24th ACM Multimedia Conference, MM 2016 - Amsterdam, Netherlands
Duration: 15 Oct 2016 - 19 Oct 2016


Conference: 24th ACM Multimedia Conference, MM 2016
Country/Territory: Netherlands


Keywords:
  • Association
  • Computer vision
  • Mingling
  • Wearable sensor


