A Resource Efficient Fusion Network for Object Detection in Bird's-Eye View using Camera and Raw Radar Data

Kavin Chandrasekaran, Sorin Grigorescu, Gijs Dubbelman, Pavol Jancura

Research output: Working paper › Preprint › Academic


Abstract

Cameras are widely used to perceive the environment around a vehicle, while affordable radar sensors are popular in autonomous driving systems because, unlike cameras, they withstand adverse weather conditions. However, radar point clouds are sparse and have low azimuth and elevation resolution, and they lack the semantic and structural information of the scene, which generally results in lower radar detection performance. In this work, we directly use the raw range-Doppler (RD) spectrum of the radar data, thereby avoiding radar signal processing. We process camera images independently within the proposed comprehensive image processing pipeline. Specifically, we first transform the camera images to the Bird's-Eye View (BEV) polar domain and extract the corresponding features with our camera encoder-decoder architecture. The resulting feature maps are fused with the Range-Azimuth (RA) features recovered from the RD spectrum input by the radar decoder to perform object detection. We evaluate our fusion strategy against existing methods not only in terms of accuracy but also on computational complexity metrics, on the RADIal dataset.
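To make the two-branch fusion idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch: a camera encoder-decoder over images already warped to the BEV polar grid, a radar decoder over the raw RD spectrum, and a concatenation-based fusion head. All module names, channel counts, grid sizes, and in particular the assumption that both branches emit features on a shared range-azimuth grid are illustrative choices for this sketch, not the paper's implementation.

import torch
import torch.nn as nn

class CameraBranch(nn.Module):
    """Hypothetical encoder-decoder over camera images that have already
    been warped to the BEV polar (range x azimuth) grid."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, bev_polar_img):
        return self.decoder(self.encoder(bev_polar_img))

class RadarBranch(nn.Module):
    """Hypothetical decoder mapping the raw range-Doppler (RD) spectrum to
    range-azimuth (RA) features on the same polar grid as the camera branch."""
    def __init__(self, in_ch=2, feat_ch=64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, rd_spectrum):
        return self.decoder(rd_spectrum)

class FusionDetector(nn.Module):
    """Concatenates the two feature maps along channels and predicts a
    per-cell objectness map; a full detection head would also regress
    box parameters."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.camera = CameraBranch(feat_ch=feat_ch)
        self.radar = RadarBranch(feat_ch=feat_ch)
        self.head = nn.Conv2d(2 * feat_ch, 1, kernel_size=1)

    def forward(self, bev_polar_img, rd_spectrum):
        cam_feat = self.camera(bev_polar_img)   # (B, C, R, A)
        rad_feat = self.radar(rd_spectrum)      # (B, C, R, A), grids assumed aligned
        fused = torch.cat([cam_feat, rad_feat], dim=1)
        return self.head(fused)                 # (B, 1, R, A) objectness logits

# Toy usage on a 128-range x 128-azimuth polar grid.
model = FusionDetector()
bev_img = torch.randn(1, 3, 128, 128)  # camera image warped to BEV polar
rd = torch.randn(1, 2, 128, 128)       # raw RD spectrum (e.g., real/imag channels)
print(model(bev_img, rd).shape)        # torch.Size([1, 1, 128, 128])

Concatenation followed by a 1x1 convolution is only one plausible fusion choice; it keeps the sketch cheap, in the spirit of the paper's resource-efficiency focus, but the authors' actual fusion strategy may differ.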
Original language: English
Publisher: arXiv.org
Number of pages: 8
Volume: 2411.13311
DOIs
Publication status: Published - 20 Nov 2024

Bibliographical note

IEEE Intelligent Transportation Systems Conference (ITSC) 2024
