Abstract
Reinforcement learning (RL) has achieved significant results in research and applications, but it often requires large amounts of training data. This paper proposes two data-efficient off-policy RL methods that use parametrized Q-learning. In these methods, the Q-function is chosen to be linear in the parameters and quadratic in selected basis functions in the state and control deviations from a base policy. A cost penalizing the $\ell_1$-norm of the Bellman errors is minimized. The two methods, Linear Matrix Inequality Q-Learning (LMI-QL) and its iterative variant (LMI-QLi), solve the resulting episodic optimization problem through convex optimization: LMI-QL relies on a convex relaxation that yields a semidefinite programming (SDP) problem with linear matrix inequalities (LMIs), while LMI-QLi solves a sequence of SDP problems. Both methods combine convex optimization with direct Q-function learning, significantly improving learning speed. A numerical case study demonstrates their advantages over existing parametrized Q-learning methods.
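For concreteness, a minimal sketch of the parametrization and objective described above, using assumed notation (the basis map $\phi$, base policy $\pi_0$, stage cost $c$, discount factor $\gamma$, and parameter matrix $\Theta$ are illustrative symbols, not necessarily those used in the paper): with $z_k = \bigl[\phi(x_k)^\top,\ (u_k - \pi_0(x_k))^\top\bigr]^\top$, a Q-function of the form $Q_\Theta(x_k, u_k) = z_k^\top \Theta z_k$ is quadratic in $z_k$ but linear in the entries of $\Theta$, and the episodic fit penalizes the $\ell_1$-norm of the Bellman errors over the collected data,

$$
\min_{\Theta} \; \sum_{k} \Bigl|\, c(x_k, u_k) + \gamma \min_{u'} Q_\Theta(x_{k+1}, u') - Q_\Theta(x_k, u_k) \,\Bigr|,
$$

a problem that, per the abstract, is handled either through a single convex relaxation into an SDP with LMI constraints (LMI-QL) or through sequential SDP iterations (LMI-QLi).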
Original language | English |
---|---|
Title | 2024 63rd IEEE Conference on Decision and Control (CDC) |
Publisher | Institute of Electrical and Electronics Engineers |
Status | Accepted/In press - 24 Jul 2024 |
Event | 63rd IEEE Annual Conference on Decision and Control, CDC 2024 - Milan, Italy. Duration: 16 Dec 2024 → 19 Dec 2024 |
Conference
Conference | 63rd IEEE Annual Conference on Decision and Control, CDC 2024 |
---|---|
Country/Territory | Italy |
City | Milan |
Period | 16/12/24 → 19/12/24 |