Deep Reinforcement Learning
Lecture: Tuesday, 15:30 - 17:00, 1/375 (A10.375) (Dr. J. Vitay)
Exercise: Tuesday, 17:15 - 18:45, 1/B202 (A11.202) (Dr. J. Vitay)
Suggested prerequisites: Mathematics I to IV, Neurocomputing, basic knowledge of Python.
Exam: written examination (90 minutes), 5 ECTS.
Contact: julien dot vitay at informatik dot tu-chemnitz dot de.
Language: English. The exam can of course also be taken in German.
The course dives into the field of deep reinforcement learning. It starts with the basics of reinforcement learning (Sutton and Barto, 2017) before covering modern model-free architectures (DQN, DDPG, PPO) that use deep neural networks for function approximation. More "exotic" forms of RL are then presented (successor representations, hierarchical RL, inverse RL, etc.).
The algorithms presented in the lectures will be studied in more detail during the exercises, through implementations in Python.
The preliminary plan of the course is:
- Reinforcement Learning (MDP, dynamic programming, Monte-Carlo methods, temporal difference)
- Value-based deep RL (DQN)
- Policy gradient methods (A3C, DDPG, TRPO, PPO)
- Model-based RL (Dyna-Q, AlphaGo, I2A)
- Successor representations
- Hierarchical RL
- Inverse RL
- Multi-agent RL
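To give a taste of what the exercises look like, here is a minimal sketch of tabular Q-learning (a temporal-difference method from the first part of the plan) on a hypothetical toy chain MDP. This is an illustrative example, not actual course material; the environment, states, and hyperparameters are invented for the sketch.

```python
import random

# Hypothetical toy environment (not from the course): a 5-state chain.
# Actions: 0 = move left, 1 = move right. Reaching the rightmost state
# yields reward +1 and ends the episode.
N_STATES = 5
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Deterministic chain dynamics: move left or right, reward at the end."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

# Q-table: one (left, right) value pair per state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
random.seed(0)

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward, done = step(state, action)
        # Temporal-difference (Q-learning) update.
        target = reward + GAMMA * max(Q[next_state]) * (not done)
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

# The greedy policy should now move right in every non-terminal state.
policy = [0 if q[0] > q[1] else 1 for q in Q[:-1]]
print(policy)
```

After training, the greedy policy chooses "right" in every non-terminal state, and the Q-values approximate the discounted returns (e.g. roughly 1.0 for moving right in the state next to the goal). The exercises implement this kind of algorithm, and later its deep-network counterparts, in Python.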
- How do I register for the course?
You can register on OPAL: https://bildungsportal.sachsen.de/opal/auth/RepositoryEntry/21637267457.
- How do I register for the exam?
Registration on SBService happens in December. Only registered students can participate in the exam.
- I cannot attend the exercises. Can I still take the exam?
Yes. The exercises are there to help you understand the concepts from the lectures and gain practical experience with neural networks, but they are not required for the exam.
- Do I have to memorize all these equations?
No, but you do have to understand them, which is basically the same thing.