Deep Adaptive Multi-Intention Inverse Reinforcement Learning

Ariyan Bighashdel (Corresponding author), Panagiotis Meletis, Pavol Jancura, Gijs Dubbelman

Research output: Contribution to journal › Journal article › Academic


Abstract

This paper presents a deep Inverse Reinforcement Learning (IRL) framework that can learn an a priori unknown number of nonlinear reward functions from unlabeled expert demonstrations. For this purpose, we employ tools from Dirichlet processes and propose an adaptive approach that simultaneously accounts for both the complexity and the unknown number of reward functions. Using the conditional maximum entropy principle, we model the experts' multi-intention behavior as a mixture of latent intention distributions and derive two algorithms that estimate the parameters of the deep reward network, along with the number of experts' intentions, from unlabeled demonstrations. The proposed algorithms are evaluated on three benchmarks, two of which have been specifically extended in this study for multi-intention IRL, and compared with well-known baselines. Through several experiments, we demonstrate the advantages of our algorithms over existing approaches and the benefits of inferring the number of experts' intentions online, rather than fixing it beforehand.
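The abstract's key idea is that a Dirichlet-process prior lets the number of intentions grow with the data instead of being fixed in advance. The following is a minimal illustrative sketch of that mechanism using the Chinese Restaurant Process representation of a Dirichlet process; it is not the authors' algorithm, and the function name `crp_partition` and the concentration parameter `alpha` are assumptions for illustration only.

```python
import random


def crp_partition(n_demos, alpha, rng):
    """Sample a partition of demonstrations into intentions via the
    Chinese Restaurant Process (a Dirichlet-process prior).

    The number of intentions is not fixed beforehand: each demonstration
    joins an existing intention with probability proportional to that
    intention's current size, or opens a new one with probability
    proportional to alpha.
    """
    counts = []       # counts[k] = demonstrations assigned to intention k
    assignments = []  # assignments[i] = intention index of demonstration i
    for _ in range(n_demos):
        # Existing intentions weighted by size; one extra slot for a new one.
        weights = counts + [alpha]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(counts):
            counts.append(1)   # open a new intention
        else:
            counts[k] += 1     # join an existing intention
        assignments.append(k)
    return assignments, counts


assignments, counts = crp_partition(20, alpha=1.0, rng=random.Random(0))
print(len(counts))  # inferred number of intentions for this sample
```

In a full multi-intention IRL loop, each such cluster of demonstrations would be associated with its own reward function (here, the authors use a deep reward network), and cluster assignments and reward parameters would be updated jointly.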
Original language: English
Article number: 2107.06692
Number of pages: 20
Journal: arXiv
Volume: 2021
Status: Published - 14 Jul. 2021

Bibliographic note

Accepted for presentation at ECML/PKDD 2021

