URL study guide
https://tue.osiris-student.nl/onderwijscatalogus/extern/cursus?cursuscode=4SC000&collegejaar=2025&taal=en

Description
Optimal control deals with engineering problems in which an objective function is to be minimized (or maximized) by sequentially choosing a set of actions that determine the behavior of a system. Examples of such problems include mixing two fluids in the least amount of time, maximizing the fuel efficiency of a hybrid vehicle, flying an unmanned air vehicle from point A to point B while minimizing reference tracking errors, and minimizing the lap time of a racing car. Other, somewhat more surprising, examples are how to maximize the probability of winning at blackjack and how to obtain minimum-variance estimates of the pose of a robot based on noisy measurements.

This course follows the formalism of dynamic programming, an intuitive and broad framework for modeling and solving optimal control problems. The material is introduced in a bottom-up fashion: the main ideas are first presented for discrete optimization problems, then for stage decision problems, and finally for continuous-time control problems. For each class of problems, the course addresses how to cope with uncertainty and how to circumvent the difficulties that arise when computing optimal solutions. Several applications in computer science and in mechanical, electrical, and automotive engineering are highlighted, as well as connections to other disciplines, such as model predictive control, game theory, optimization, and frequency-domain analysis. The course also addresses how to solve optimal control problems when a model of the system is not available or is not accurate, and optimal control inputs or decisions must be computed from data.
The course comprises fifteen lectures. The following topics will be covered:
- Introduction and the dynamic programming algorithm
- Stochastic dynamic programming
- Shortest path problems in graphs
- Bayes filter and partially observable Markov decision processes
- State-feedback controller design for linear systems - LQR
- Optimal estimation and output feedback - Kalman filter and LQG
- Discretization
- Discrete-time Pontryagin’s maximum principle
- Approximate dynamic programming
- Q-Learning, Deep Q-Learning
- Policy iteration, Simulation-Based Policy Iteration, LSTD
- Actor-Critic methods, Policy Gradient
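The dynamic programming algorithm that anchors the course can be illustrated on a toy problem. The sketch below is not course material: it runs value iteration on a small hypothetical deterministic MDP (4 states, 2 actions, invented transition and cost tables), applying the Bellman recursion V(s) <- min_a [c(s, a) + V(f(s, a))] until it converges.

```python
import numpy as np

# Hypothetical 4-state, 2-action deterministic MDP, for illustration only.
# next_state[s, a] and cost[s, a] define the dynamics; state 3 is an
# absorbing goal state with zero cost.
next_state = np.array([[1, 2],
                       [3, 2],
                       [3, 1],
                       [3, 3]])
cost = np.array([[4.0, 1.0],
                 [6.0, 1.0],
                 [2.0, 7.0],
                 [0.0, 0.0]])

V = np.zeros(4)  # value function, initialized to zero
for _ in range(50):
    # Bellman backup: V(s) <- min over actions of stage cost + cost-to-go
    V = np.min(cost + V[next_state], axis=1)

# Greedy policy extracted from the converged value function
policy = np.argmin(cost + V[next_state], axis=1)
```

For this toy instance the recursion converges after a few sweeps; the same backup, with an expectation over disturbances added, is the template for the stochastic dynamic programming covered in the course.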
Objectives
After completing this course the student should be able to:
- Model engineering problems of interest in the framework of optimal control.
- Solve optimal control problems using tools such as Pontryagin's maximum principle, the dynamic programming algorithm, and linear quadratic control.
- Design optimal state observers, such as the Kalman filter and the Bayes filter, for optimal control problems with partial state information.
- Choose efficient control strategies when the solution to the optimal control problem is hard to obtain/compute.
- Solve optimal control problems based on data, when the model is not accurate or not available.
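The last objective, computing optimal decisions from data alone, can be sketched with tabular Q-learning. This is an illustrative example, not course material: it reuses the same hypothetical 4-state MDP, but pretends the model is unknown and learns only from sampled transitions (s, a, c, s').

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-state, 2-action MDP (invented for illustration); the
# learner never reads these tables directly, only sampled transitions.
next_state = np.array([[1, 2], [3, 2], [3, 1], [3, 3]])
cost = np.array([[4.0, 1.0], [6.0, 1.0], [2.0, 7.0], [0.0, 0.0]])

Q = np.zeros((4, 2))  # state-action value estimates
alpha = 0.1           # learning rate
for _ in range(5000):
    s = rng.integers(4)  # exploring starts: sample a random state
    a = rng.integers(2)  # random exploratory action
    s_next, c = next_state[s, a], cost[s, a]
    # Q-learning update toward the one-step Bellman target
    Q[s, a] += alpha * (c + Q[s_next].min() - Q[s, a])

greedy = Q.argmin(axis=1)  # policy learned purely from data
```

Because the updates use only observed transitions, the same recipe applies when the system is a black box, which is the setting of the Q-learning, policy iteration, and actor-critic lectures listed above.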