TY - UNPB
T1 - A Resource Efficient Fusion Network for Object Detection in Bird's-Eye View using Camera and Raw Radar Data
AU - Chandrasekaran, Kavin
AU - Grigorescu, Sorin
AU - Dubbelman, Gijs
AU - Jancura, Pavol
N1 - IEEE International Conference on Intelligent Transportation Systems (ITSC) 2024
PY - 2024/11/20
Y1 - 2024/11/20
AB - Cameras can be used to perceive the environment around the vehicle, while affordable radar sensors are popular in autonomous driving systems because, unlike cameras, they can withstand adverse weather conditions. However, radar point clouds are sparse, with low azimuth and elevation resolution, and lack the semantic and structural information of the scene, resulting in generally lower radar detection performance. In this work, we directly use the raw range-Doppler (RD) spectrum of the radar data, thus avoiding radar signal processing. We process camera images independently within the proposed comprehensive image processing pipeline. Specifically, we first transform the camera images to the Bird's-Eye View (BEV) Polar domain and extract the corresponding features with our camera encoder-decoder architecture. The resulting feature maps are fused with the Range-Azimuth (RA) features recovered from the RD spectrum input by the radar decoder to perform object detection. We evaluate our fusion strategy against other existing methods, not only in terms of accuracy but also on computational complexity metrics, on the RADIal dataset.
KW - cs.CV
KW - cs.AI
U2 - 10.48550/arXiv.2411.13311
DO - 10.48550/arXiv.2411.13311
M3 - Preprint
VL - 2411.13311
BT - A Resource Efficient Fusion Network for Object Detection in Bird's-Eye View using Camera and Raw Radar Data
PB - arXiv.org
ER -