Advanced Automotive Sensing

Course

Study guide URL

https://tue.osiris-student.nl/onderwijscatalogus/extern/cursus?cursuscode=2PDASDAAS&collegejaar=2025&taal=en

Description

Learning objectives
The course objective is to provide trainees with an introduction to the most important sensors that current and future vehicles use to perceive their environment. At the end of the course, the trainees should be able to:
  • Describe the fundamental working principles of RADAR, (stereo) Vision, and LiDAR.
  • Explain the advantages and disadvantages of these sensors in an automotive context.
  • Provide geometrical models for camera sensors.
  • Provide mathematical models for representing sensing uncertainty with a vector-valued normal distribution.
  • By hand, analytically calculate the Jacobian, i.e. the matrix of all first-order partial derivatives of a vector-valued function, to linearize a sensor model.
  • Describe and explain technical aspects of vision system components, e.g. lenses, imaging chips, and stereo baselines.
  • List and explain basic image operations and their usage in an automotive context, e.g. linear filters, thresholding, morphological operations, and the Hough transform.
  • Explain the working of a stereo camera and depth estimation algorithms.
  • Implement a basic lane detection algorithm in Matlab.
  • Implement a basic stereo obstacle detection algorithm in Matlab.
  • Explain the difference between generative versus discriminative models and between supervised and unsupervised models for pattern recognition.
  • Conceptually explain the steps in the Expectation Maximization (EM) algorithm for Gaussian mixture models.
  • Implement the EM algorithm in the context of data clustering in Matlab.
  • Explain the key concepts of Support Vector Machines (SVMs) and apply them in the context of classification in Matlab.
  • Explain the concepts behind the Histogram of Oriented Gradients (HOG) feature for object detection.
  • Describe the different layers in Convolutional Neural Networks and explain their usage.
  • List and explain the advantages and disadvantages of Convolutional Neural Network architectures.
  • List and explain the steps in training deep neural networks, including big-data augmentation techniques.
  • List and explain current research challenges related to CNNs for automotive applications.
  • Design and train CNNs using Google TensorFlow.
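As an illustration of the camera-model and linearization objectives above, the sketch below projects a 3-D point with a pinhole model, computes the analytic Jacobian, and propagates a Gaussian position uncertainty to the image plane to first order. The numbers (focal length, point, covariance) are made up for illustration, and plain Python is used here although the course assignments are in Matlab:

```python
def project(f, X, Y, Z):
    """Pinhole projection: (u, v) = (f*X/Z, f*Y/Z)."""
    return (f * X / Z, f * Y / Z)

def jacobian(f, X, Y, Z):
    """Analytic Jacobian of (u, v) with respect to (X, Y, Z)."""
    return [[f / Z, 0.0, -f * X / Z**2],
            [0.0, f / Z, -f * Y / Z**2]]

def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Hypothetical numbers: focal length 1000 px, point at (1, 2, 10) m,
# isotropic 3-D position uncertainty with variance 0.01 m^2.
f, X, Y, Z = 1000.0, 1.0, 2.0, 10.0
u, v = project(f, X, Y, Z)
J = jacobian(f, X, Y, Z)
Sigma_xyz = [[0.01, 0, 0], [0, 0.01, 0], [0, 0, 0.01]]
# First-order (linearized) propagation of 3-D uncertainty to the image:
Sigma_uv = matmul(matmul(J, Sigma_xyz), transpose(J))
```

Note how the depth coordinate Z dominates the Jacobian: the further away a point, the smaller its image-plane uncertainty for the same 3-D uncertainty.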

Prior Knowledge
This post-masters course is developed for (international) trainees with a background in either classical Electrical Engineering, classical Mechanical Engineering, or a related field of study that did not cover computer vision, pattern recognition, and deep learning (which at TU/e are taught at both the BSc and MSc level).
  • Knowledge of and experience with multivariate probability theory
  • Knowledge of and experience with multivariate (matrix) calculus
  • Experience with programming in C/C++ and/or Python and Matlab.
  • Experience with the Linux OS
  • Good English writing and speaking skills.
 

Objectives

Sensing the environment around the vehicle is one of the most important aspects of Advanced Driver Assistance Systems (ADAS), as well as of future autonomous driving. In this module, we introduce the most important sensor modalities, i.e. RADAR, (stereo) Vision, and LiDAR, in the context of automotive applications. Vision is treated in depth and the trainees are taught: (1) the use of vision sensors in automotive applications, (2) the physical principles underlying vision sensors, (3) the mathematical models describing these physical principles, (4) how to derive first-order probabilistic models describing the accuracy of vision sensors, and (5) how to process visual sensor data in automotive applications using software algorithms. Specifically, the trainees will develop a lane detection algorithm and a stereo depth estimation algorithm.
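The Hough transform that underlies many lane detection pipelines can be sketched in a few lines. The toy example below (plain Python; the course assignments are in Matlab) assumes edge points have already been extracted, votes each point into a (theta, rho) accumulator, and reads off the dominant line from the accumulator peak:

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, rho_max=100.0):
    """Vote each edge point into a (theta, rho) accumulator.

    Each point (x, y) votes, for every discretized angle theta, for the
    line x*cos(theta) + y*sin(theta) = rho passing through it.
    """
    acc = {}
    for (x, y) in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + rho_max) / rho_res))  # shift to index
            acc[(t, r)] = acc.get((t, r), 0) + 1
    return acc

# Synthetic edge points on the line y = x (rho = 0, theta = 3*pi/4):
points = [(i, i) for i in range(20)]
acc = hough_lines(points)
(t_best, r_best), votes = max(acc.items(), key=lambda kv: kv[1])
theta_best = math.pi * t_best / 180
rho_best = r_best * 1.0 - 100.0
```

A real lane detector would first extract edge points with a gradient filter and threshold, and would restrict the accumulator to plausible lane-marking orientations.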
Sensing the environment around the vehicle requires giving real-world significance to sensory signals. We call this “semantic interpretation”. For example, the vehicle must be able to decide whether a pixel in an image belongs to a tree, a pedestrian, or to a lane marking. This semantic interpretation is done by pattern recognition software, of which the principles are taught in this module. We start with (1) the basic probabilistic models, (2) data clustering and classification using Expectation Maximization for Gaussian Mixture Models and Support Vector Machines, and we end with (3) Deep Learning. Furthermore, visual feature extraction is detailed, specifically, the Histogram of Oriented Gradients (HOG), as well as the principles of feature computations using Convolutional Neural Networks.
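The Expectation Maximization steps for a Gaussian mixture can be sketched for a one-dimensional, two-component case. This is a minimal plain-Python illustration (the course uses Matlab) on synthetic data with true means 0 and 5; the initialization at the data extremes is an assumption for this sketch, not necessarily the scheme taught in the course:

```python
import math
import random

def em_gmm_1d(data, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture model."""
    mu = [min(data), max(data)]   # crude initialization (assumption)
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances.
        for k in range(2):
            Nk = sum(r[k] for r in resp)
            pi[k] = Nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / Nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / Nk
    return pi, mu, var

random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.0) for _ in range(200)])
pi, mu, var = em_gmm_1d(data)
```

The recovered means should land close to the true cluster centers 0 and 5, with mixing weights near 0.5 each.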
The self-study part of this module focuses on deep learning for vision-based automotive applications, specifically on convolutional neural networks that interpret visual data recorded by in-vehicle cameras. The course is largely based on well-known online lectures from Stanford University and MIT. At the end of the module, the trainees each give an individual presentation of a state-of-the-art research paper related to automotive sensing.
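The core computation of a single convolutional layer can be illustrated by hand: a "valid" cross-correlation followed by a ReLU non-linearity, which is the operation frameworks such as TensorFlow perform at scale. In this toy Python sketch, the step-edge image and vertical-edge kernel are made up for illustration:

```python
def conv2d_relu(img, kernel):
    """One CNN-style layer: 'valid' cross-correlation, then ReLU."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            s = sum(img[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(0.0, s))  # ReLU clips negative responses
        out.append(row)
    return out

# 5x5 image with a vertical step edge between columns 1 and 2:
img = [[0, 0, 1, 1, 1] for _ in range(5)]
# 3x3 vertical-edge kernel (responds to left-dark / right-bright edges):
kernel = [[-1, 0, 1] for _ in range(3)]
feat = conv2d_relu(img, kernel)
```

The 3x3 feature map responds strongly where the kernel straddles the edge and is zero in the flat region, which is exactly the kind of learned edge/texture response that early CNN layers produce.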
The teaching activities of this module are combined with a software design project on the topic of automotive sensing. Past projects included: (1) developing an active lane keeping system by integrating an existing lane detector with a lane keeping controller, and (2) developing an after-market ADAS device for commercial vehicles using the NVidia DriveWorks platform. In this project, the trainees will put the obtained knowledge into practice, as well as gain experience with project management.
 

Assessment method

Teacher evaluation
Course period: 1/09/18 – 31/08/26
Course format: Course