Special issue on multi-objective reinforcement learning

Madalina Drugan, Marco Wiering, Peter Vamplew, Madhu Chetty

Research output: Contribution to journal › Editorial › Academic › peer-review

7 Citations (Scopus)


Many real-life problems involve multiple objectives. In network routing, for example, the criteria may include energy consumption, latency, and channel capacity, which are in essence conflicting objectives. Because the objectives conflict, there usually does not exist a single optimal solution; instead, it is desirable to obtain a set of trade-off solutions between the objectives. Over the last decade this problem has also gained the attention of many researchers in the field of reinforcement learning (RL). RL addresses sequential decision problems in initially (possibly) unknown stochastic environments; the goal is to maximize the agent's reward in an environment that is not always completely observable. The purpose of this special issue is to obtain a broader picture of the algorithmic techniques at the confluence of multi-objective optimization and reinforcement learning. The growing interest in multi-objective reinforcement learning (MORL) was reflected in the quantity and quality of submissions received for this special issue. After a rigorous review process, seven papers were accepted for publication; they reflect the diversity of research being carried out within this emerging field. The accepted papers consider many different aspects of algorithmic design and evaluation, and this editorial places them in a unified framework.
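The notion of a "set of trade-off solutions" above is the Pareto front: the solutions not dominated by any other solution in every objective. As a minimal illustration (a generic Pareto-dominance filter, not a method from any of the accepted papers; the routing reward vectors below are hypothetical, with energy and latency negated so that all objectives are maximized):

```python
def dominates(a, b):
    """True if reward vector a Pareto-dominates b (all objectives maximized):
    a is at least as good as b everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of reward vectors."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions)]

# Hypothetical routing policies scored as (-energy, -latency):
candidates = [(-3, -5), (-4, -2), (-5, -1), (-4, -6)]
front = pareto_front(candidates)
# (-4, -6) is dominated by (-3, -5); the other three are mutual trade-offs.
```

No single element of `front` is best in both objectives, which is exactly why MORL methods return (or approximate) such a set rather than a single policy.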

Original language: English
Pages (from-to): 1-2
Number of pages: 2
Publication status: Published - 8 Nov 2017
