Abstract
Recent Deep Reinforcement Learning (DRL) techniques have advanced solutions to Vehicle Routing Problems (VRPs). However, many of these methods focus exclusively on optimizing distance-oriented objectives (i.e., minimizing route length), often overlooking drivers' implicit route preferences. These preferences, which are crucial in practice, are challenging to model using traditional DRL approaches. To address this gap, we propose a preference-based DRL method, characterized by its reward design and optimization objective, that is specialized to learn historical route preferences. Our experiments demonstrate that the method aligns generated solutions more closely with human preferences. Moreover, it exhibits strong generalization across a variety of instances, offering a robust solution for diverse VRP scenarios.
| Original language | English |
|---|---|
| Title | Proceedings of the 34th International Joint Conference on Artificial Intelligence, IJCAI 2025 |
| Editors | James Kwok |
| Publisher | International Joint Conferences on Artificial Intelligence (IJCAI) |
| Pages | 8591-8599 |
| Number of pages | 9 |
| ISBN (electronic) | 9781956792065 |
| DOIs | |
| Publication status | Published - 2025 |
| Event | 34th International Joint Conference on Artificial Intelligence, IJCAI 2025 - Montreal, Canada. Duration: 16 Aug 2025 → 22 Aug 2025 |
Conference
| Conference | 34th International Joint Conference on Artificial Intelligence, IJCAI 2025 |
|---|---|
| Country/Territory | Canada |
| City | Montreal |
| Period | 16/08/25 → 22/08/25 |
Bibliographical note
Publisher Copyright: © 2025 International Joint Conferences on Artificial Intelligence. All rights reserved.
Fingerprint
Dive into the research topics of 'Preference-based Deep Reinforcement Learning for Historical Route Estimation'. Together they form a unique fingerprint.