Abstract
This paper studies the multi-item stochastic capacitated lot-sizing problem with stationary demand, with the objective of minimising set-up, holding, and backorder costs. This is a common problem in industry, concerning both inventory management and production planning. We study the applicability of Proximal Policy Optimisation (PPO), a Deep Reinforcement Learning (DRL) algorithm, to this problem. The problem is modelled as a Markov Decision Process (MDP), which can be solved to optimality in small problem instances using Dynamic Programming. In these settings, we show that the performance of PPO approaches the optimal solution. For larger problem instances with an increasing number of products, solving to optimality is intractable, and we demonstrate that the PPO solution outperforms the benchmark solution. Several adjustments to the standard PPO algorithm are implemented to make it more scalable to larger problem instances. We show that the computation time of the algorithm grows linearly with the problem size, and present a method for explaining the outcomes of the algorithm. Finally, we suggest future research directions that could improve the scalability and explainability of the PPO algorithm.
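To make the setting concrete, the sketch below shows one way the problem could be encoded as an MDP environment of the kind a PPO agent is trained on: the state is the inventory position per item, the action is a capacity-constrained production vector, and the negative of the per-period set-up, holding, and backorder cost serves as the reward. This is not the authors' implementation; the class and parameter names (`LotSizingEnv`, `demand_rate`, the cost coefficients) and the Poisson demand choice are illustrative assumptions.

```python
# Minimal sketch of the lot-sizing MDP; all parameters are illustrative
# assumptions, not values from the paper.
import numpy as np

class LotSizingEnv:
    def __init__(self, n_items=2, capacity=10, demand_rate=2.0,
                 setup_cost=5.0, holding_cost=1.0, backorder_cost=3.0, seed=0):
        self.n_items = n_items
        self.capacity = capacity          # shared production capacity per period
        self.demand_rate = demand_rate    # stationary (Poisson) demand per item
        self.setup_cost = setup_cost
        self.holding_cost = holding_cost
        self.backorder_cost = backorder_cost
        self.rng = np.random.default_rng(seed)
        self.inventory = np.zeros(n_items)  # negative entries represent backorders

    def reset(self):
        self.inventory = np.zeros(self.n_items)
        return self.inventory.copy()

    def step(self, production):
        """production: per-item quantities; their total must respect capacity."""
        production = np.asarray(production, dtype=float)
        assert production.sum() <= self.capacity, "capacity violated"
        demand = self.rng.poisson(self.demand_rate, self.n_items)
        self.inventory += production - demand
        cost = (self.setup_cost * np.count_nonzero(production)       # set-up
                + self.holding_cost * np.maximum(self.inventory, 0).sum()
                + self.backorder_cost * np.maximum(-self.inventory, 0).sum())
        # PPO maximises expected reward, so the period cost enters negated.
        return self.inventory.copy(), -cost, False, {}
```

In a small instance like this, the state and action spaces can be enumerated and the MDP solved to optimality by Dynamic Programming; as `n_items` grows, that enumeration becomes intractable, which is the regime where the paper applies PPO.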
| Original language | English |
|---|---|
| Article number | 6 |
| Pages (from-to) | 1955-1978 |
| Number of pages | 24 |
| Journal | International Journal of Production Research |
| Volume | 61 |
| Issue number | 6 |
| Early online date | 7 Apr 2022 |
| DOIs | |
| Publication status | Published - 2023 |
Keywords
- Capacitated lot sizing problem
- deep reinforcement learning
- multi-item
- proximal policy optimisation
- stochastic demand