An in-situ trainable gesture classifier

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review



Gesture recognition, i.e., the recognition of pre-defined gestures by arm or hand movements, enables a natural extension of the way we currently interact with devices (Horsley, 2016). Commercially available gesture recognition systems are usually pre-trained: the developers specify a set of gestures, and the user is provided with an algorithm that can recognize just those gestures. To improve the user experience, it is often desirable to allow users to define their own gestures. In that case, the user needs to train the recognition system herself with a set of example gestures. Crucially, this scenario requires learning gestures from just a few training examples in order to avoid overburdening the user. We present a new in-situ trainable gesture classifier based on a hierarchical probabilistic modeling approach. Casting both learning and recognition as probabilistic inference tasks yields a principled way to design and evaluate algorithm candidates. Moreover, the Bayesian approach facilitates learning of prior knowledge about gestures, which reduces the number of examples needed to train new gestures.
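The abstract does not specify the model, but the core idea — a shared prior over gesture classes, with learning and recognition both cast as inference — can be illustrated with a minimal sketch. Assuming (purely for illustration) that each gesture is summarized as a feature vector, that each class mean is drawn from a shared Gaussian prior, and that observation noise is isotropic, conjugacy gives a closed-form per-class posterior from just a few examples, and recognition picks the class with the highest posterior-predictive likelihood:

```python
import numpy as np

# Illustrative hierarchical-Bayes sketch, NOT the paper's actual model.
# Assumptions: class means ~ N(mu0, tau2 * I) (shared prior), observations
# have isotropic noise sigma2; both hyperparameters are hand-set here.

def posterior(examples, mu0, tau2, sigma2):
    """Posterior over one class mean, given a few example feature vectors."""
    X = np.asarray(examples, dtype=float)
    n = X.shape[0]
    var = 1.0 / (1.0 / tau2 + n / sigma2)                     # posterior variance
    mean = var * (mu0 / tau2 + n * X.mean(axis=0) / sigma2)   # shrinks toward mu0
    return mean, var

def log_predictive(x, mean, var, sigma2):
    """Log posterior-predictive density of a new observation x."""
    v = var + sigma2
    d = x - mean
    return -0.5 * np.sum(d * d) / v - 0.5 * x.size * np.log(2 * np.pi * v)

def classify(x, fitted, sigma2):
    """Return the gesture label with the highest predictive likelihood."""
    return max(fitted, key=lambda g: log_predictive(x, *fitted[g], sigma2))

# Training from two examples per (hypothetical) gesture class:
mu0, tau2, sigma2 = np.zeros(3), 4.0, 0.25
train = {"swipe":  [[1.0, 0.0, 0.0], [1.1, 0.1, 0.0]],
         "circle": [[0.0, 1.0, 1.0], [0.1, 0.9, 1.1]]}
fitted = {g: posterior(ex, mu0, tau2, sigma2) for g, ex in train.items()}
classify(np.array([0.95, 0.05, 0.0]), fitted, sigma2)  # → "swipe"
```

The shared prior is what makes the few-shot setting workable: with only two examples per class, the posterior mean is pulled toward the prior, which regularizes the estimate rather than overfitting the handful of samples.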
Original language: English
Title of host publication: Benelearn 2017: Proceedings of the Twenty-Sixth Benelux Conference on Machine Learning, Technische Universiteit Eindhoven, 9-10 June 2017
Editors: W. Duivesteijn, M. Pechenizkiy, G.H.L. Fletcher
Publication status: Published - 10 Jun 2017
Event: Annual machine learning conference of the Benelux (Benelearn 2017) - Eindhoven, Netherlands
Duration: 9 Jun 2017 - 10 Jun 2017


Conference: Annual machine learning conference of the Benelux (Benelearn 2017)
Abbreviated title: Benelearn 2017


