Policies for the dynamic traveling maintainer problem with alerts

Research output: Contribution to journal › Article › Academic › peer review



Downtime of industrial assets such as wind turbines and medical imaging devices comes at a high cost. To avoid such downtime costs, companies seek to initiate maintenance just before failure. Unfortunately, this is challenging for two reasons: on the one hand, asset failures are notoriously difficult to predict, even in the presence of real-time monitoring devices that signal early degradation; on the other hand, the resources available to serve a network of geographically dispersed assets are typically limited. In this paper, we propose a novel model, referred to as the dynamic traveling maintainer problem with alerts, that incorporates these two challenges, and we provide three solution approaches for dispatching the limited resources: (i) greedy heuristic approaches that rank assets on urgency, proximity, and economic risk; (ii) a novel traveling maintainer heuristic approach that optimizes short-term costs; and (iii) a deep reinforcement learning (DRL) approach that optimizes long-term costs. Each approach has different requirements concerning the available alert information. Experiments with small asset networks show that all methods can approximate the optimal policy when given access to complete condition information. For larger networks, the proposed methods yield competitive policies, with DRL consistently achieving the lowest costs.
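Approach (i) can be illustrated with a minimal sketch of a greedy dispatch rule that scores alerted assets on urgency, proximity, and economic risk. The score definition, weights, and class fields below are illustrative assumptions, not the paper's actual formulation:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    urgency: float        # e.g. degradation-alert severity, normalized to [0, 1]
    distance: float       # travel distance from the maintainer's current location
    downtime_cost: float  # cost rate incurred while the asset is down (risk proxy)

def greedy_dispatch(assets, w_urgency=1.0, w_proximity=1.0, w_risk=1.0):
    """Rank alerted assets by a weighted score and return the one to visit next."""
    def score(a):
        proximity = 1.0 / (1.0 + a.distance)  # closer assets score higher
        return (w_urgency * a.urgency
                + w_proximity * proximity
                + w_risk * a.downtime_cost)
    return max(assets, key=score)

# Example: two geographically dispersed assets with active alerts
fleet = [
    Asset("turbine-A", urgency=0.9, distance=10.0, downtime_cost=0.2),
    Asset("scanner-B", urgency=0.3, distance=2.0, downtime_cost=0.8),
]
print(greedy_dispatch(fleet).name)  # → scanner-B
```

With these (assumed) weights, the nearby, costly-to-idle scanner outranks the more urgent but distant turbine, which is exactly the kind of trade-off such ranking heuristics encode.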
Original language: English
Pages (from-to): 1141-1152
Number of pages: 12
Journal: European Journal of Operational Research
Issue number: 3
Publication status: Published - 16 Mar 2023


  • Decision process
  • Deep reinforcement learning
  • Degradation process
  • Maintenance
  • Traveling maintainer problem

