Harnessing traditional controllers for fast-track training of deep reinforcement learning control strategies

Dataset

Description

In recent years, autonomous ships have become a focal point for research, with particular emphasis on improving ship autonomy. Machine learning controllers, especially those based on reinforcement learning, have seen significant progress, but the substantial computational demands and intricate reward structures required for their training remain a critical obstacle. This paper introduces a novel approach, “Harnessing Traditional Controllers for Fast-Track Training of Deep Reinforcement Learning Control Strategies,” aimed at bridging conventional maritime control methods with cutting-edge deep reinforcement learning (DRL) techniques for vessels. The approach exploits the synergy between stable traditional controllers and adaptive DRL methods, which are well suited to handling complex tasks. To tackle the time-intensive nature of DRL training, we propose using existing traditional controllers to expedite training: behavior is cloned from these controllers and used to guide DRL exploration. We rigorously assess the effectiveness of this approach across various ship maneuvering scenarios, including different trajectories and external disturbances such as wind. The results demonstrate accelerated DRL training while maintaining stringent safety standards. This approach has the potential to bridge the gap between traditional maritime practice and contemporary DRL advances, facilitating the seamless integration of autonomous systems into naval operations, with promising implications for enhanced vessel efficiency, cost-effectiveness, and overall safety.
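To illustrate what cloning behavior from a traditional controller can look like in practice, the sketch below pretrains a small policy network on state–action pairs generated by a simple PD heading controller before any DRL fine-tuning. This is only a minimal, hypothetical example: the controller gains, state variables, network architecture, and training settings are illustrative assumptions and are not taken from the dataset or the associated paper.

```python
# Illustrative sketch: behavior cloning from a traditional (PD) heading
# controller to warm-start a policy network prior to DRL training.
# All numerical values and model choices are placeholder assumptions.
import numpy as np
import torch
import torch.nn as nn

def pd_controller(heading_error, yaw_rate, kp=1.2, kd=0.8):
    """Traditional PD rudder command (the 'expert' whose behavior is cloned)."""
    return np.clip(kp * heading_error - kd * yaw_rate, -1.0, 1.0)

def collect_demonstrations(n_samples=10_000, seed=0):
    """Sample vessel states and record the traditional controller's actions."""
    rng = np.random.default_rng(seed)
    heading_error = rng.uniform(-np.pi, np.pi, n_samples)
    yaw_rate = rng.uniform(-0.2, 0.2, n_samples)
    states = np.stack([heading_error, yaw_rate], axis=1).astype(np.float32)
    actions = pd_controller(heading_error, yaw_rate).astype(np.float32)
    return torch.from_numpy(states), torch.from_numpy(actions).unsqueeze(1)

# Small actor network; after cloning, it can initialize the DRL agent's actor.
policy = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Tanh(),
)

states, expert_actions = collect_demonstrations()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Supervised regression of the policy onto the expert's actions.
for epoch in range(200):
    predicted = policy(states)
    loss = nn.functional.mse_loss(predicted, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The cloned policy now imitates the traditional controller and could be used
# to guide or initialize exploration in a DRL algorithm (e.g., PPO or DDPG).
```

In a setup like this, the cloned policy gives the DRL agent a safe, sensible starting point, so exploration begins near the traditional controller's behavior rather than from a random policy; the exact way the cloned policy guides exploration in the paper may differ from this sketch.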
Date made available: 18 Jun 2024
Publisher: Taylor and Francis Ltd.