Sensing the environment around a self-driving vehicle or robot requires giving real-world significance to sensory signals; we call this semantic data interpretation. For example, the vehicle must be able to decide whether a pixel in an image belongs to a tree or to a pedestrian. The state of the art for this task is the paradigm called deep learning. For vision-based data, deep learning relies on Convolutional Neural Networks (CNNs), which mimic the visual-cognitive processes of the human brain to a certain extent. The capabilities of deep learning for semantic visual data interpretation are unprecedented, and all top-ranked methods on scientific benchmarks are based on this paradigm. This course focuses on deep learning for automotive and robot applications, specifically on convolutional neural networks that interpret visual data recorded by in-vehicle cameras. The course is mainly based on self-study, using renowned online lectures from Stanford University and MIT. Furthermore, during the course, students work on an individual project involving convolutional neural networks: they design, implement, and evaluate a sensing system for a real prototype self-driving vehicle or robot.
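As a flavor of what the course covers, the sketch below (not part of the course materials; the image, kernel, and helper function are illustrative assumptions) shows the core operation of a CNN: sliding a small learned filter over an image to produce a feature map. Here a hand-chosen vertical-edge kernel responds strongly where a dark region meets a bright one, the kind of low-level cue a trained network builds upon to distinguish, say, a tree trunk from a pedestrian.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` (valid padding, stride 1) and return
    the feature map, as in the first layer of a CNN (illustrative only)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise product of the kernel with the image patch, summed.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark left half (0), bright right half (1).
image = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
], dtype=float)

# Vertical-edge kernel: large response at a dark-to-bright boundary.
kernel = np.array([
    [-1.0, 0.0, 1.0],
    [-1.0, 0.0, 1.0],
    [-1.0, 0.0, 1.0],
])

feature_map = conv2d(image, kernel)
print(feature_map)  # strongest responses at the vertical edge in the middle
```

In a real CNN the kernel values are not hand-designed but learned from data, and many such filters are stacked in layers; frameworks covered in the linked lectures implement this operation efficiently on GPUs.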