Scalarized multi-objective reinforcement learning: novel design techniques

K. Van Moffaert, M.M. Drugan, A. Nowe

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

71 Citations (Scopus)


In multi-objective problems, it is key to find compromising solutions that balance the different objectives. The linear scalarization function is often used to translate the multi-objective nature of a problem into a standard, single-objective problem. However, it is known that such a linear combination can only find solutions in convex regions of the Pareto front, making the method unsuitable when the shape of the front is not known beforehand, as is often the case. We propose a non-linear scalarization function, the Chebyshev scalarization function, as a basis for action selection strategies in multi-objective reinforcement learning. The Chebyshev scalarization method overcomes the flaws of the linear scalarization function in that it can (i) discover Pareto optimal solutions regardless of the shape of the front, i.e. convex as well as non-convex, (ii) obtain a better spread amongst the set of Pareto optimal solutions, and (iii) is less dependent on the actual weights used.
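The contrast the abstract draws can be sketched in code. A common formulation of Chebyshev scalarization (the paper's exact parameterization is not reproduced here) measures the weighted distance to a utopian reference point `z*` and selects the action that minimizes it, whereas linear scalarization maximizes a weighted sum. The function and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

def linear_scalarize(q_vec, weights):
    """Weighted sum of the per-objective Q-values (maximized)."""
    return np.sum(weights * q_vec, axis=-1)

def chebyshev_scalarize(q_vec, weights, utopian):
    """Weighted Chebyshev distance to a utopian reference point (minimized)."""
    return np.max(weights * np.abs(q_vec - utopian), axis=-1)

# Illustrative setup: 3 actions, 2 objectives, with a non-convex Pareto front.
# Rows are per-action multi-objective Q-estimates; (0.4, 0.4) is the
# compromise point lying in the non-convex part of the front.
Q = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.4, 0.4]])
weights = np.array([0.5, 0.5])
utopian = np.array([1.1, 1.1])  # slightly above the best value per objective

linear_choice = int(np.argmax(linear_scalarize(Q, weights)))
cheby_choice = int(np.argmin(chebyshev_scalarize(Q, weights, utopian)))

# The linear combination picks an extreme (convex-hull) point, while the
# Chebyshev criterion can select the compromise solution.
print(linear_choice, cheby_choice)
```

With these numbers the linear criterion scores the extremes at 0.5 each and the compromise at 0.4, so it never selects action 2; the Chebyshev criterion gives the compromise the smallest distance (0.35 vs. 0.55) and picks it, illustrating point (i) of the abstract.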
Original language: English
Title of host publication: 2013 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL): 16-19 April 2013, Singapore
Place of publication: Piscataway
Publisher: Institute of Electrical and Electronics Engineers
ISBN (Print): 978-1-4673-5925-2
Publication status: Published - 2013
Event: 2013 IEEE Symposium on Adaptive Dynamic Programming And Reinforcement Learning (ADPRL 2013) - Singapore, Singapore
Duration: 16 Apr 2013 - 19 Apr 2013


Abbreviated title: ADPRL 2013
