Learning robot vision for assisted living

Wenjie Yan, Nils Meins, Elena Torta, Cornelius Weber, David van der Pol, Raymond H. Cuijpers, Stefan Wermter

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

Abstract

This chapter presents an overview of a typical Ambient Assisted Living (AAL) scenario in which a robot navigates to a person to convey information. Indoor robot navigation is a challenging task due to the complexity of real-home environments and the need for online learning abilities to adjust to dynamic conditions. A comparison between systems with different sensor typologies shows that vision-based systems promise good performance and a wide scope of usage at reasonable cost. Moreover, vision-based systems can perform different tasks simultaneously by applying different algorithms to the input data stream, thus enhancing the flexibility of the system. The authors introduce the state of the art of several computer vision methods for realizing indoor robotic navigation to a person and human-robot interaction. A case study has been conducted in which a robot, which is part of an AAL system, navigates to a person and interacts with her. The authors evaluate this test case and give an outlook on the potential of learning robot vision in ambient homes.

Original language: English
Title of host publication: Image Processing
Subtitle of host publication: Concepts, Methodologies, Tools, and Applications
Publisher: IGI Global
Chapter: 62
Pages: 1232-1255
Number of pages: 24
Volume: 3-3
ISBN (Electronic): 9781466639959
ISBN (Print): 1466639946, 9781466639942
DOIs
Publication status: Published - 31 May 2013
