A Stereo Perception Framework for Autonomous Vehicles

Narsimlu Kemsaram, Anweshan Das, Gijs Dubbelman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

4 Citations (Scopus)


Stereo cameras are crucial sensors for self-driving vehicles, as they are low-cost and can be used to estimate depth. They can serve multiple purposes, such as object detection, depth estimation, and semantic segmentation. In this paper, we propose a stereo vision-based perception framework for autonomous vehicles. It runs three deep neural networks simultaneously to perform free-space detection, lane boundary detection, and object detection on image frames captured by the stereo camera. The depth of the detected objects from the vehicle is estimated from the disparity image computed from the two stereo image frames. The proposed stereo perception framework runs at 7.4 Hz on the Nvidia Drive PX 2 hardware platform, which further allows its use in multi-sensor fusion for localization, mapping, and path planning in autonomous vehicle applications.
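The depth estimation step described above relies on the standard stereo triangulation relation Z = f·B/d, where f is the focal length in pixels, B the stereo baseline, and d the disparity. A minimal sketch, with illustrative focal length and baseline values that are assumptions and not taken from the paper:

```python
def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.3):
    """Metric depth Z = f * B / d from a pixel disparity.

    focal_px and baseline_m are illustrative placeholder values,
    not the calibration of the paper's stereo camera.
    """
    if disparity_px <= 0:
        # Zero or negative disparity carries no valid depth.
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A detected object with 25 px disparity lies at 1000 * 0.3 / 25 = 12 m.
print(depth_from_disparity(25.0))
```

In practice the disparity image is computed by a stereo matcher over the rectified image pair, and the per-object depth is read out at the detected bounding-box location.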
Original language: English
Title of host publication: 2020 IEEE 91st Vehicular Technology Conference, VTC Spring 2020 - Proceedings
Number of pages: 6
ISBN (Electronic): 978-1-7281-5207-3
Publication status: Published - 30 Jun 2020
Event: 91st IEEE Vehicular Technology Conference (VTC2020-Spring) - Antwerp, Belgium
Duration: 25 May 2020 - 28 May 2020


Conference: 91st IEEE Vehicular Technology Conference (VTC2020-Spring)
Abbreviated title: VTC2020-Spring


Keywords:
  • advanced driver assistance system
  • autonomous vehicle
  • deep neural network
  • depth estimation
  • free space detection
  • lane detection
  • object detection
  • stereo camera
  • stereo perception
  • stereo vision


