Environmental perception plays a critical role in self-driving vehicles and robotics. To navigate an environment safely and take appropriate actions, a system must be able to sense its surroundings and correctly interpret the sensory data. When vision-based data is used for this purpose, i.e., when the goal is to understand an image or video, we speak of computer vision. Computer vision is a broad discipline comprising a variety of tasks, such as object detection, image generation, and facial recognition. Over the last decade, performance on computer vision tasks has been greatly accelerated by the development of deep learning-based methods. For vision-based data, deep learning typically relies on Convolutional Neural Networks (CNNs), which, to a certain extent, mimic the visual-cognitive processes of the human brain. In recent years, many CNN architectures have been introduced that continue to push the state of the art.

This course focuses on deep learning in the context of self-driving vehicles and robotics. During the course, students will work on a project in which they attempt to solve a real-world computer vision task using deep learning. This means they will go through the entire process of researching, designing, implementing, and evaluating their CNN. There will be weekly meetings during the course, but no classical lectures.
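To give a feel for the operation that gives CNNs their name, the following is a minimal sketch of a 2D convolution in plain Python (technically a cross-correlation, as implemented in most deep learning libraries). The image and kernel values are made up for illustration; in a real CNN, the kernel weights are learned from data rather than hand-chosen.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution of a 2D list `image` with a 2D list `kernel`.
    Slides the kernel over every position where it fully fits and sums the
    element-wise products, producing one output value per position."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# Hypothetical example: a vertical-edge-detecting kernel applied to a tiny
# image whose right half is bright. The response peaks at the edge.
image = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
]
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(conv2d(image, kernel))  # → [[0, 3, 3, 0], [0, 3, 3, 0]]
```

A CNN stacks many such learned filters, interleaved with nonlinearities and pooling, so that early layers respond to edges and textures while deeper layers respond to increasingly abstract patterns.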