Hand detection is one of the most explored areas in Egocentric Vision Video Analysis for wearable devices. Current methods focus on pixel-by-pixel hand segmentation, under the implicit assumption that hands are present in almost all activities. However, this assumption is false in many applications for wearable cameras. Ignoring this fact can degrade the overall performance of the device, since hand measurements are usually the starting point for higher-level inference, and can lead to inefficient use of computational resources and battery power. In this paper we propose a two-level sequential classifier, in which the first level, a hand-detector, decides on the possible presence of hands from a global perspective, and the second level, a hand-segmentator, delineates the hand regions at pixel level in the frames flagged by the first block. The performance of the sequential classifier is stated in probabilistic notation as a combination of both classifiers, allowing new hand-detectors to be tested independently of the type of segmentation and the dataset used in the training stage. Experimental results show a considerable improvement in the detection of true negatives without compromising the performance on true positives.
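The cascade described in the abstract could be sketched as follows. This is an illustrative outline only, not the authors' implementation: the frame-level detector and the pixel-level segmenter are stand-in placeholders (a toy linear scorer and a simple skin-color rule), chosen solely to show how the second, expensive stage is skipped when the first stage reports no hands.

```python
import numpy as np

def detect_hands(frame_features, threshold=0.5):
    """First level (hand-detector): global, frame-level hand-presence decision.
    Placeholder linear scorer; in the paper this would be a trained classifier."""
    w = np.ones_like(frame_features, dtype=float) / frame_features.size  # illustrative weights
    return float(np.dot(w, frame_features)) >= threshold

def segment_hands(frame):
    """Second level (hand-segmentator): pixel-level hand mask.
    Placeholder skin-like color rule over an RGB frame (H, W, 3)."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def sequential_classifier(frame, frame_features):
    """Run the costly segmenter only when the detector fires, saving
    computation (and battery) on frames without hands."""
    if not detect_hands(frame_features):
        # Detector says "no hands": return an empty mask, skip segmentation.
        return np.zeros(frame.shape[:2], dtype=bool)
    return segment_hands(frame)
```

Usage: calling `sequential_classifier` on a frame whose global features score below the threshold returns an all-false mask without ever invoking the segmenter, which is precisely where the improvement in true negatives comes from.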
|Title of host publication||2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 23-28 June 2014, Columbus, OH|
|Publisher||Institute of Electrical and Electronics Engineers|
|Publication status||Published - 2014|
|Event||2014 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2014 - Columbus, United States|
Duration: 23 Jun 2014 → 28 Jun 2014
|Conference||2014 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2014|
|Period||23/06/14 → 28/06/14|