Abstract
This paper describes a novel multi-objective reinforcement learning algorithm. The proposed algorithm first learns a model of the multi-objective sequential decision-making problem, after which this learned model is used by a multi-objective dynamic programming method to compute Pareto optimal policies. The advantage of this model-based multi-objective reinforcement learning method is that, once an accurate model has been estimated from the agent's experiences in some environment, the dynamic programming method will compute all Pareto optimal policies. It is therefore important that the agent explores the environment intelligently, using a good exploration strategy. In this paper we supply the agent with two different exploration strategies and compare their effectiveness at estimating accurate models within a reasonable amount of time. The experimental results show that our method with the best exploration strategy is able to quickly learn all Pareto optimal policies for the Deep Sea Treasure problem.
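The core idea in the abstract — once a model is known, a set-based dynamic programming method can back up *sets* of value vectors and prune them to Pareto optimal ones — can be sketched as follows. This is an illustrative sketch only, not the paper's implementation; the toy transition structure, the discount factor, and all function names are assumptions, loosely modeled on the treasure-versus-time trade-off of the Deep Sea Treasure benchmark.

```python
# Sketch of a multi-objective Bellman backup over sets of value vectors.
# All names and the tiny example problem are illustrative assumptions.

GAMMA = 0.95  # assumed discount factor


def pareto_front(vectors):
    """Keep only vectors not dominated by another vector in the set."""
    front = []
    for v in vectors:
        dominated = any(
            all(o >= x for o, x in zip(other, v)) and other != v
            for other in vectors
        )
        if not dominated:
            front.append(v)
    return front


def backup(value_sets, transitions, state):
    """One set-based backup: collect reward + discounted successor value
    vectors over all actions, then prune to the Pareto front."""
    candidates = []
    for _action, (next_state, reward) in transitions[state].items():
        if next_state is None:  # terminal transition
            candidates.append(reward)
        else:
            for v in value_sets[next_state]:
                candidates.append(
                    tuple(r + GAMMA * x for r, x in zip(reward, v))
                )
    return pareto_front(candidates)


# Tiny Deep-Sea-Treasure-like trade-off (treasure value, time penalty):
# grab a small treasure immediately, or dive one step deeper for a big one.
transitions = {
    "start": {"grab": (None, (1.0, -1.0)), "dive": ("deep", (0.0, -1.0))},
    "deep": {"grab": (None, (10.0, -1.0))},
}
value_sets = {"deep": backup({}, transitions, "deep")}
front = backup(value_sets, transitions, "start")
print(sorted(front))
```

Neither policy dominates the other here — grabbing immediately is faster, diving yields more treasure — so both value vectors survive the pruning step, which is exactly the behavior a multi-objective dynamic programming method exploits to recover all Pareto optimal policies.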
| Original language | English |
|---|---|
| Title of host publication | 2014 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), 9-12 December 2014, Orlando, Florida |
| Place of Publication | Piscataway |
| Publisher | Institute of Electrical and Electronics Engineers |
| Pages | 1-6 |
| ISBN (Electronic) | 978-1-4799-4552-8 |
| ISBN (Print) | 9781479945535 |
| Publication status | Published - 14 Jan 2014 |
| Externally published | Yes |
| Event | 2014 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2014) - Orlando, United States Duration: 9 Dec 2014 → 12 Dec 2014 |
Conference
| Conference | 2014 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2014) |
|---|---|
| Abbreviated title | ADPRL 2014 |
| Country/Territory | United States |
| City | Orlando |
| Period | 9/12/14 → 12/12/14 |