TY - JOUR
T1 - Comfortable and Energy-Efficient Speed Control of Autonomous Vehicles on Rough Pavements using Deep Reinforcement Learning
AU - Du, Y.
AU - Chen, Jing
AU - Zhao, Cong
AU - Liu, Chenglong
AU - Liao, Feixiong
AU - Chan, C.Y.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - Rough pavements cause ride discomfort and energy inefficiency for road vehicles. Existing methods to address these problems are time-consuming and not adaptive to changing driving conditions on rough pavements. With the development of sensor and communication technologies, crowdsourced road and dynamic traffic information becomes available for enhancing driving performance, particularly for addressing the discomfort and inefficiency issues by controlling driving speeds. This study proposes a speed control framework for rough pavements, envisioning the operation of autonomous vehicles based on crowdsourced data. We introduce the concept of ‘maximum comfortable speed’ to represent the vertical ride comfort of oncoming roads. A deep reinforcement learning (DRL) algorithm is designed to learn comfortable and energy-efficient speed control strategies. The DRL-based speed control model is trained using real-world rough pavement data from Shanghai, China. The experimental results show that vertical ride comfort, energy efficiency, and computational efficiency increase by 8.22%, 24.37%, and 94.38%, respectively, compared to an optimization-based speed control model. The results indicate that the proposed framework is effective for real-time speed control of autonomous vehicles on rough pavements.
AB - Rough pavements cause ride discomfort and energy inefficiency for road vehicles. Existing methods to address these problems are time-consuming and not adaptive to changing driving conditions on rough pavements. With the development of sensor and communication technologies, crowdsourced road and dynamic traffic information becomes available for enhancing driving performance, particularly for addressing the discomfort and inefficiency issues by controlling driving speeds. This study proposes a speed control framework for rough pavements, envisioning the operation of autonomous vehicles based on crowdsourced data. We introduce the concept of ‘maximum comfortable speed’ to represent the vertical ride comfort of oncoming roads. A deep reinforcement learning (DRL) algorithm is designed to learn comfortable and energy-efficient speed control strategies. The DRL-based speed control model is trained using real-world rough pavement data from Shanghai, China. The experimental results show that vertical ride comfort, energy efficiency, and computational efficiency increase by 8.22%, 24.37%, and 94.38%, respectively, compared to an optimization-based speed control model. The results indicate that the proposed framework is effective for real-time speed control of autonomous vehicles on rough pavements.
KW - Autonomous vehicle
KW - Deep reinforcement learning
KW - Energy efficiency
KW - Ride comfort
KW - Speed control
UR - https://www.scopus.com/pages/publications/85120668453
U2 - 10.1016/j.trc.2021.103489
DO - 10.1016/j.trc.2021.103489
M3 - Article
SN - 0968-090X
VL - 134
JO - Transportation Research Part C: Emerging Technologies
JF - Transportation Research Part C: Emerging Technologies
M1 - 103489
ER -