Abstract
Wearable cameras allow users to record their daily activities from a user-centered (First Person Vision) perspective. Due to their favourable location, they frequently capture the user's hands and may thus represent a promising user-machine interaction tool for different applications. Existing First Person Vision methods treat the hands as a background/foreground segmentation problem, which ignores two important issues: (i) each pixel is classified sequentially, creating a long processing queue, and (ii) hands are not a single “skin-like” moving element but a pair of interacting entities (left and right hand). This paper proposes a GPU-accelerated implementation of a left/right hand-segmentation algorithm. The GPU implementation exploits the parallel nature of the pixel-by-pixel classification strategy. The left-right identification is carried out by a competitive likelihood test based on the position and the angle of the segmented pixels.
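The abstract does not spell out the GPU strategy, but the core idea it names is that each pixel is classified independently, so the per-pixel test maps naturally onto one GPU thread per pixel. The sketch below is only an illustration of that parallelisation: the colour rule inside the kernel is a placeholder assumption, not the authors' classifier, and the buffer sizes and launch configuration are arbitrary.

```cuda
// Minimal sketch of per-pixel hand classification on the GPU.
// The skin test is a stand-in rule, NOT the paper's trained classifier.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// One thread per pixel: each thread reads its RGB triple and writes a
// binary hand/background label, with no dependence on other pixels.
__global__ void classifyPixels(const unsigned char* rgb, unsigned char* mask,
                               int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = (y * width + x) * 3;
    unsigned char r = rgb[i], g = rgb[i + 1], b = rgb[i + 2];

    // Placeholder skin-colour rule (assumption for illustration only).
    bool skin = (r > 95 && g > 40 && b > 20 && r > g && r > b);
    mask[y * width + x] = skin ? 255 : 0;
}

int main()
{
    const int W = 640, H = 480;                       // example frame size
    std::vector<unsigned char> hostRgb(W * H * 3, 0); // dummy input frame
    std::vector<unsigned char> hostMask(W * H, 0);

    unsigned char *devRgb = nullptr, *devMask = nullptr;
    cudaMalloc(&devRgb, hostRgb.size());
    cudaMalloc(&devMask, hostMask.size());
    cudaMemcpy(devRgb, hostRgb.data(), hostRgb.size(), cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    classifyPixels<<<grid, block>>>(devRgb, devMask, W, H);
    cudaDeviceSynchronize();

    cudaMemcpy(hostMask.data(), devMask, hostMask.size(), cudaMemcpyDeviceToHost);
    printf("classified a %dx%d frame\n", W, H);

    cudaFree(devRgb);
    cudaFree(devMask);
    return 0;
}
```

The left/right identification described in the abstract (a competitive likelihood test on the position and angle of the segmented pixels) would run afterwards on the resulting mask and is not shown here.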
Original language | English |
---|---|
Title of host publication | Computer Vision - ECCV 2016 Workshops, Proceedings, 8-10/15-16 October 2016, Amsterdam, The Netherlands |
Editors | G. Hua, H. Jégou |
Place of Publication | Dordrecht |
Publisher | Springer |
Pages | 504-517 |
Number of pages | 14 |
ISBN (Electronic) | 978-3-319-46604-0 |
ISBN (Print) | 978-3-319-46603-3 |
DOIs | |
Publication status | Published - 2016 |
Event | 14th European Conference on Computer Vision (ECCV 2016), Amsterdam, Netherlands. Duration: 8 Oct 2016 → 16 Oct 2016. Conference number: 14 |
Publication series
Name | Lecture Notes in Computer Science |
---|---|
Volume | 9913 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 14th European Conference on Computer Vision (ECCV 2016) |
---|---|
Abbreviated title | ECCV 2016 |
Country/Territory | Netherlands |
City | Amsterdam |
Period | 8/10/16 → 16/10/16 |
Keywords
- Egovision
- GPU
- Hand-detection
- Hand-segmentation
- Wearable cameras