Semantic foreground inpainting from weak supervision

Chenyang Lu (Corresponding author), Gijs Dubbelman

Research output: Contribution to journal › Journal article › Academic › peer review

Abstract

Semantic scene understanding is an essential task for self-driving vehicles and mobile robots. In our work, we aim to estimate a semantic segmentation map, in which the foreground objects are removed and semantically inpainted with background classes, from a single RGB image. This semantic foreground inpainting task is performed by a single-stage convolutional neural network (CNN) that contains our novel max-pooling as inpainting (MPI) module, which is trained with weak supervision, i.e., it does not require manual background annotations for the foreground regions to be inpainted. Our approach is inherently more efficient than the previous two-stage state-of-the-art method, and outperforms it by a margin of 3% IoU for the inpainted foreground regions on Cityscapes. The performance margin increases to 6% IoU, when tested on the unseen KITTI dataset. The code and the manually annotated datasets for testing are shared with the research community at https://github.com/Chenyang-Lu/semantic-foreground-inpainting.
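To make the idea of "max-pooling as inpainting" more concrete, the sketch below shows one way a max-pooling-based hole-filling step over CNN feature maps could be realized in PyTorch. It is an illustrative assumption, not the paper's exact MPI module: the function name, kernel size, and iteration count are invented for this example, and the released code at the repository above is the authoritative reference.

```python
import torch
import torch.nn.functional as F


def max_pool_inpaint(features, fg_mask, kernel_size=9, iterations=3):
    """Illustrative max-pooling-as-inpainting step (hypothetical, not the authors' MPI module).

    features: (N, C, H, W) feature map from a CNN encoder.
    fg_mask:  (N, 1, H, W) binary mask, 1 where foreground objects are located.
    Foreground activations are suppressed, then the masked region is refilled by
    repeatedly max-pooling the surrounding background activations into it.
    """
    bg_mask = 1.0 - fg_mask
    # Zero out foreground activations so they cannot propagate into the result.
    inpainted = features * bg_mask
    for _ in range(iterations):
        # Spread the strongest nearby activations into the masked region.
        pooled = F.max_pool2d(inpainted, kernel_size, stride=1,
                              padding=kernel_size // 2)
        # Keep original background features; only the foreground holes are filled.
        inpainted = inpainted * bg_mask + pooled * fg_mask
    return inpainted


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 64)        # toy feature map
    mask = torch.zeros(1, 1, 32, 64)
    mask[:, :, 10:20, 20:40] = 1.0        # pretend a foreground object occupies this region
    y = max_pool_inpaint(x, mask)
    print(y.shape)                        # torch.Size([1, 64, 32, 64])
```

Because the operation works on intermediate feature maps rather than pixels, it can be dropped into a single-stage segmentation network and trained end-to-end, which is consistent with the efficiency argument made in the abstract.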
Original language: English
Article number: 8963753
Pages (from-to): 1334-1341
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 5
Issue number: 2
DOIs
Status: Published - Apr 2020
