Abstract
This work concentrates on vision processing for Advanced Driver Assistance Systems (ADAS) and intelligent-vehicle applications. We propose a color extension to the disparity-based Stixel World method, so that the road can be robustly distinguished from obstacles even in the presence of erroneous disparity measurements. Our extension learns color appearance models for road and obstacle classes in an online and self-supervised fashion. The algorithm is tightly integrated within the core of the optimization process of the original Stixel World, allowing for strong fusion of the disparity and color signals. We perform an extensive evaluation, including different self-supervised learning strategies and different color models. Our newly recorded, publicly available data set is intentionally focused on challenging traffic scenes with many low-texture regions, which cause numerous disparity artifacts. In this evaluation, we increase the F-score of the drivable distance from 0.86 to 0.97, compared to a tuned version of the state-of-the-art baseline method. This clearly shows that our color extension increases the robustness of the Stixel World by reducing the number of falsely detected obstacles without deteriorating the detection of true obstacles.
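To make the idea of online, self-supervised color modeling more concrete, the following minimal sketch illustrates one plausible realization: color histograms for the road and obstacle classes are updated from labels derived from the previous frame's disparity-based segmentation, and a per-pixel color likelihood term is produced that a Stixel-style optimizer could add to its disparity data term. This is not the authors' implementation; the histogram color model, the forgetting factor, and all function and parameter names below are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation): online
# self-supervised color models for road vs. obstacle classes.

import numpy as np

N_BINS = 16   # histogram bins per color channel (assumed)
FORGET = 0.9  # exponential forgetting factor for online updates (assumed)
EPS = 1e-6    # avoids log(0) and empty-histogram division

# Running (un-normalized) color histograms for the two classes.
road_hist = np.full((N_BINS, N_BINS, N_BINS), EPS)
obst_hist = np.full((N_BINS, N_BINS, N_BINS), EPS)


def _bin_indices(image_rgb):
    """Map an HxWx3 uint8 image to per-pixel histogram bin indices."""
    return image_rgb.astype(np.int32) * N_BINS // 256


def update_color_models(image_rgb, road_mask):
    """Self-supervised online update: pixels labeled road/obstacle by the
    disparity-based segmentation of a previous frame train the histograms."""
    global road_hist, obst_hist
    idx = _bin_indices(image_rgb)
    r, g, b = idx[..., 0], idx[..., 1], idx[..., 2]
    road_hist *= FORGET
    obst_hist *= FORGET
    np.add.at(road_hist, (r[road_mask], g[road_mask], b[road_mask]), 1.0)
    np.add.at(obst_hist, (r[~road_mask], g[~road_mask], b[~road_mask]), 1.0)


def color_log_likelihood_ratio(image_rgb):
    """Per-pixel log-likelihood ratio log(p_road / p_obstacle): positive
    values indicate the color is more consistent with road. A weighted
    version of this term could be fused with the disparity data term in
    the column-wise Stixel optimization."""
    idx = _bin_indices(image_rgb)
    r, g, b = idx[..., 0], idx[..., 1], idx[..., 2]
    p_road = road_hist / road_hist.sum()
    p_obst = obst_hist / obst_hist.sum()
    return np.log(p_road[r, g, b] + EPS) - np.log(p_obst[r, g, b] + EPS)
```

In this sketch, the fusion happens outside the functions shown here: the paper integrates the color term inside the Stixel World optimization itself, whereas the ratio above would simply be one candidate data term for such an energy.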
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), 8-10 October 2014, Qingdao, China |
| Place of publication | Piscataway |
| Publisher | Institute of Electrical and Electronics Engineers |
| Pages | 1400-1407 |
| ISBN (Print) | 978-1-4799-6077-4 |
| DOIs | |
| Publication status | Published - 2014 |