Shared-bed person segmentation based on motion estimation

Xuyuan Jin, Adrienne Heinrich, Caifeng Shan, Gerard De Haan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Video-based sleep analysis is a topic with important applications, and bed sharing occurs frequently in the context of sleep. One difficulty in the shared-bed situation is assigning movements to the correct person, because the two persons' movements can occur in close proximity and may even overlap. In this paper we propose an approach to person segmentation in the shared-bed situation based on motion estimation. In our approach, adjacent blocks are clustered according to the consistency of their motion vectors, specifically their length and angle. The resulting clusters are then assigned to a person according to temporal correlation, and the region occupied by each person is updated every frame based on the cluster assignments. The proposed approach handles segmentation when the two persons are close to each other or even overlapping, and achieves a segmentation accuracy above 82% on the data set we acquired.
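The pipeline described in the abstract can be sketched in two steps: cluster adjacent motion-vector blocks whose vectors agree in length and angle, then assign each cluster to the person whose previous-frame region it overlaps most. The sketch below is an illustrative reconstruction under assumed simplifications, not the paper's exact method: the tolerances, the 4-connected flood-fill clustering, and the overlap-count assignment rule are all assumptions.

```python
import math

def cluster_motion_blocks(vectors, mag_tol=2.0, ang_tol=0.5):
    """Group adjacent blocks with consistent motion vectors.

    vectors: dict mapping (row, col) block coordinates to (dx, dy)
    motion vectors. A block joins a neighbour's cluster when their
    vector magnitudes and angles differ by less than the tolerances
    (flood fill over 4-connected neighbours). The tolerance values
    are illustrative assumptions, not taken from the paper.
    Returns a list of clusters, each a set of block coordinates.
    """
    def similar(v1, v2):
        m1, m2 = math.hypot(*v1), math.hypot(*v2)
        a1 = math.atan2(v1[1], v1[0])
        a2 = math.atan2(v2[1], v2[0])
        dang = abs(a1 - a2)
        dang = min(dang, 2 * math.pi - dang)  # wrap-around angle difference
        return abs(m1 - m2) <= mag_tol and dang <= ang_tol

    unvisited = set(vectors)
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:
            r, c = frontier.pop()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in unvisited and similar(vectors[(r, c)], vectors[nb]):
                    unvisited.remove(nb)
                    cluster.add(nb)
                    frontier.append(nb)
        clusters.append(cluster)
    return clusters

def assign_clusters(clusters, prev_regions):
    """Assign each cluster to a person by temporal correlation,
    here approximated as overlap with that person's previous-frame
    region; the winning person's region is extended by the cluster.
    Clusters overlapping no previous region are left unassigned.
    """
    updated = {p: set(r) for p, r in prev_regions.items()}
    for cluster in clusters:
        overlaps = {p: len(cluster & set(r)) for p, r in prev_regions.items()}
        best = max(overlaps, key=overlaps.get)
        if overlaps[best] > 0:
            updated[best] |= cluster
    return updated
```

For example, two groups of blocks moving in opposite directions form two clusters, and each cluster is attached to the person whose previous region it touches; the updated regions then serve as `prev_regions` for the next frame.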

Original language: English
Title of host publication: 2012 IEEE International Conference on Image Processing, ICIP 2012 - Proceedings
Place of Publication: Piscataway
Publisher: Institute of Electrical and Electronics Engineers
Pages: 137-140
Number of pages: 4
ISBN (Print): 9781467325332
DOIs
Publication status: Published - 1 Dec 2012
Event: 19th IEEE International Conference on Image Processing (ICIP 2012) - Lake Buena Vista, FL, United States
Duration: 30 Sep 2012 - 3 Oct 2012
Conference number: 19

Conference

Conference: 19th IEEE International Conference on Image Processing (ICIP 2012)
Abbreviated title: ICIP 2012
Country/Territory: United States
City: Lake Buena Vista, FL
Period: 30/09/12 - 3/10/12

Keywords

  • Motion Estimation
  • Video-based Segmentation
