Abstract
Recently, vision-based Advanced Driver Assistance Systems have gained broad interest. In this work, we investigate free-space detection, for which we propose to employ a Fully Convolutional Network (FCN). We show that this FCN can be trained in a self-supervised manner and achieve results similar to training on manually annotated data, thereby reducing the need for large manually annotated training sets. To this end, our self-supervised training relies on a stereo-vision disparity system to automatically generate (weak) training labels for the color-based FCN. Additionally, our self-supervised training facilitates online training of the FCN instead of offline. Consequently, given that the applied FCN is relatively small, the free-space analysis becomes highly adaptive to any traffic scene that the vehicle encounters. We have validated our algorithm using publicly available data and on a new challenging benchmark dataset that is released with this paper. Experiments show that online training boosts performance by 5% compared to offline training, both for Fmax and average precision (AP).
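The core self-supervision idea, generating weak free-space labels from stereo disparity rather than from human annotation, can be illustrated with a toy sketch. This is not the authors' actual pipeline (which relies on a full stereo disparity system); it only assumes that the ground surface shows up as a roughly linear disparity-versus-image-row relation, fits that relation robustly, and marks pixels close to the fitted model as free space. The function name and tolerance are illustrative.

```python
import numpy as np

def weak_freespace_labels(disparity, tol=1.0):
    """Toy weak-labeling from a dense disparity map.

    Assumption (illustrative): ground-plane pixels follow a linear model
    d(v) = a*v + b, where v is the image row. We estimate the ground
    disparity per row with the median (robust to obstacle pixels), fit a
    line to it, and label pixels within `tol` of the model as free space.
    Returns a boolean mask of the same shape as `disparity`.
    """
    h, _ = disparity.shape
    v = np.arange(h, dtype=float)
    row_med = np.median(disparity, axis=1)      # robust per-row ground estimate
    a, b = np.polyfit(v, row_med, 1)            # fit ground model d(v) = a*v + b
    expected = (a * v + b)[:, None]             # predicted ground disparity per row
    return np.abs(disparity - expected) < tol   # True = weakly labeled free space
```

Such a mask would then serve as the (noisy) target when training the color-based FCN online, so label quality, not pixel-accurate annotation, is the limiting factor.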
| Original language | English |
|---|---|
| Title of host publication | IS&T International Symposium on Electronic Imaging 2017, 29 January - 2 February 2017, Burlingame, California |
| Subtitle of host publication | Autonomous Vehicles and Machines |
| Place of Publication | San Francisco |
| Publisher | Society for Imaging Science and Technology (IS&T) |
| Pages | 54-61 |
| Number of pages | 8 |
| DOIs | |
| Publication status | Published - 29 Jan 2017 |