Abstract
In recent years, autonomous driving has become a hot topic with the continuous development of deep learning, computer vision and sensor technology. Robustness and quality-of-control issues arise when an image-based control system is targeted at edge devices with limited energy, memory and computing resources. End-to-end CNN-based approaches can improve robustness; however, they increase runtime and result in a low frame rate. On the other hand, traditional hardware-efficient approaches lack situation-awareness. Different environmental factors (e.g. road layouts, types of lane markers and weather) have a great impact on lane detection accuracy, and thus influence the quality of control. As a result, traditional approaches cannot ensure robustness in the real world.
In this work, we propose a hardware- and situation-aware sensing method for a lane-keeping assist system that uses a traditional lane detection algorithm, making the system both hardware-efficient and robust. We define situations based on different features and identify them using lightweight CNN-based situation classifiers. Depending on the current situation, we dynamically configure the system knobs based on a hardware- and situation-aware characterization. To show the effectiveness of our approach, we use a hardware-in-the-loop framework on the NVIDIA AGX Xavier platform, with Webots as the simulation environment on the server.
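The situation-to-knob mapping described above could be sketched as follows. This is an illustrative sketch only: the situation labels, features and knob values are hypothetical, and the lightweight CNN classifier is replaced by a simple rule-based stand-in.

```python
# Illustrative sketch of situation-aware knob configuration.
# All situation names, features and knob values below are hypothetical,
# not taken from the thesis.

# Offline hardware- and situation-aware characterization table:
# situation -> image-pipeline knob settings found best for that situation.
KNOB_TABLE = {
    "highway_clear":  {"resolution": (320, 180), "roi": 0.4, "filter": "light"},
    "urban_dashed":   {"resolution": (640, 360), "roi": 0.6, "filter": "medium"},
    "rain_low_light": {"resolution": (960, 540), "roi": 0.8, "filter": "heavy"},
}

def classify_situation(features):
    """Stand-in for the lightweight CNN situation classifier:
    maps simple scene features to a situation label."""
    if features["rain"] or features["brightness"] < 0.3:
        return "rain_low_light"
    if features["lane_marker"] == "dashed" and features["road"] == "urban":
        return "urban_dashed"
    return "highway_clear"

def configure_pipeline(features):
    """Identify the current situation and select its knob settings."""
    situation = classify_situation(features)
    return situation, KNOB_TABLE[situation]

situation, knobs = configure_pipeline(
    {"rain": False, "brightness": 0.8, "lane_marker": "solid", "road": "highway"})
print(situation, knobs["resolution"])  # highway_clear (320, 180)
```

The lookup table stands in for the offline characterization: each situation is paired once, ahead of time, with the knob configuration that balances lane-detection accuracy against runtime on the target hardware, so the runtime cost at each frame is only the classifier plus a dictionary lookup.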
Our results show that robustness is greatly improved compared to traditional approaches. Moreover, the quality of control is 32% better, owing to the approximated image signal processing pipeline and the predefined invocation scheme.
|Date of Award||16 Dec 2020|
|Supervisor||Dip Goswami (Supervisor 1), Sajid Mohamed (Supervisor 2) & Sayandip De (Supervisor 2)|