On boosting semantic street scene segmentation with weak supervision

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer review


Training convolutional networks for semantic segmentation requires per-pixel ground-truth labels, which are very time-consuming and hence costly to obtain. Therefore, in this work, we research and develop a hierarchical deep network architecture and a corresponding loss for semantic segmentation that can be trained from weak supervision, such as bounding boxes or image-level labels, as well as from strong per-pixel supervision. We demonstrate that the hierarchical structure and simultaneous training on strong (per-pixel) and weak (bounding-box) labels, even from separate datasets, consistently improve performance over per-pixel-only training. Moreover, we explore the more challenging case of adding weak image-level labels. We collect street-scene images and weak labels from the immense Open Images dataset to generate the OpenScapes dataset, and we use this novel dataset to increase segmentation performance on two established per-pixel-labeled datasets, Cityscapes and Vistas. We report performance gains up to +13.2% mIoU on crucial street-scene classes, and an inference speed of 20 fps on a Titan V GPU for Cityscapes at 512 × 1024 resolution. Our network and the OpenScapes dataset are shared with the research community.
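The abstract's core idea, jointly training on strong per-pixel labels and weak image-level labels, can be sketched as a combined loss: dense cross-entropy where pixel annotations exist, plus a term that encourages each image-level class to be predicted strongly somewhere in the image. This is only an illustrative NumPy sketch under common weak-supervision assumptions (max-pooling over pixel probabilities, a weighting factor `w_weak`), not the paper's actual hierarchical architecture or loss.

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the class axis
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def strong_loss(logits, pixel_labels):
    # standard per-pixel cross-entropy against dense ground truth
    # logits: (H, W, C), pixel_labels: (H, W) integer class ids
    probs = softmax(logits)
    h, w = pixel_labels.shape
    p = probs[np.arange(h)[:, None], np.arange(w)[None, :], pixel_labels]
    return -np.log(p + 1e-12).mean()

def weak_loss(logits, image_labels):
    # image-level supervision: each class known to be present should be
    # predicted with high probability at some pixel (max over pixels)
    probs = softmax(logits)
    per_class_max = probs.reshape(-1, probs.shape[-1]).max(axis=0)
    return -np.log(per_class_max[image_labels] + 1e-12).mean()

def mixed_loss(logits, pixel_labels=None, image_labels=None, w_weak=0.5):
    # combine whichever supervision is available for this sample;
    # w_weak is an assumed weighting hyperparameter, not from the paper
    loss = 0.0
    if pixel_labels is not None:
        loss += strong_loss(logits, pixel_labels)
    if image_labels is not None:
        loss += w_weak * weak_loss(logits, image_labels)
    return loss
```

Mixing batches from a per-pixel-labeled dataset (e.g. Cityscapes) and a weakly labeled one (e.g. OpenScapes) then amounts to calling `mixed_loss` with whichever label type each sample carries.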

Original language: English
Title: 2019 IEEE Intelligent Vehicles Symposium, IV 2019
Place of production: Piscataway
Publisher: Institute of Electrical and Electronics Engineers
Number of pages: 6
Electronic ISBN: 978-1-7281-0560-4
Status: Published - June 2019
Event: 2019 IEEE Intelligent Vehicles Symposium (IV) - Paris, France
Duration: 9 June 2019 - 12 June 2019


Conference: 2019 IEEE Intelligent Vehicles Symposium (IV)

