Exam WS 2019-20
There will be oral retake exams for those who have already failed the ML exam once (grade 5.0). If you already have one attempt in ML, you cannot take Neurocomputing instead.
Oral exams: write an email to juliendotvitayatinformatikdottu-chemnitzdotde to get an appointment before 24.01.2020. Mention your preferred period for the oral exam (during the exam period, or later in March).
The Machine Learning (573050) course will not be offered anymore, starting from the winter semester 2019-2020.
It is replaced by the Neurocomputing (573180) course: https://www.tu-chemnitz.de/informatik/KI/edu/neurocomputing.
All students having Machine Learning in their study program (Studienordnung) can take Neurocomputing as a replacement, as confirmed by the respective study commissions. Erasmus students can simply modify their learning agreement accordingly.
The contents of Machine Learning and Neurocomputing overlap strongly. Only reinforcement learning has been moved to a new course, Deep Reinforcement Learning (573140) https://www.tu-chemnitz.de/informatik/KI/edu/deeprl, available for a selection of Master programs (Informatik, Neurorobotik, Data Science).
There is an open topic on large-scale statistical face modeling, in co-supervision with Martin Grewe from the Zuse Institut Berlin (ZIB): pdf. Contact Julien Vitay if you are interested.
- Supervised learning
- Linear algorithms (regression, classification, softmax, maximum likelihood)
- Learning Theory (cross-validation, VC dimension, feature space)
- Neural Networks (MLP, regularization)
- Support vector machines (maximum margin classifier, kernel trick)
- Deep Learning (CNN, GAN)
- Recurrent neural networks (LSTM, GRU)
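To give a flavour of the "Linear algorithms" chapter, here is a minimal sketch of ordinary least-squares regression in NumPy. The synthetic data and all variable names are illustrative, not taken from the course material:

```python
import numpy as np

# Synthetic 1D regression problem: noisy samples of y = 3*x + 0.5.
rng = np.random.default_rng(42)
X = rng.uniform(-1.0, 1.0, size=(100, 1))           # inputs
y = 3.0 * X[:, 0] + 0.5 + rng.normal(0.0, 0.1, 100) # noisy linear targets

# Append a bias column and solve the least-squares problem min ||Xb w - y||^2.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

print(w)  # slope and intercept, close to [3.0, 0.5]
```

The same fitted weights could also be obtained by gradient descent on the squared error, which is the formulation the neural-network chapters build on.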
- Reinforcement Learning
- Formal definition of the RL problem (Markov Decision Processes)
- Dynamic Programming, Monte Carlo Methods
- Temporal Difference Learning (TD, Q-learning), Eligibility traces
- Deep Reinforcement learning (DQN, A3C, DDPG)
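As a taste of the temporal-difference material, the following is a minimal sketch of tabular Q-learning on a toy 5-state chain MDP. The environment (states 0..4, actions left/right, reward +1 for reaching the last state) is hypothetical and only serves to illustrate the update rule:

```python
import numpy as np

n_states, n_actions = 5, 2          # toy chain MDP, actions: 0=left, 1=right
Q = np.zeros((n_states, n_actions)) # tabular Q-values
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    """Deterministic transition: reward +1 when reaching the terminal state 4."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r, s_next == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # TD(0) update toward the greedy bootstrap target
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) * (not done) - Q[s, a])
        s = s_next

print(np.argmax(Q[:4], axis=1))  # greedy policy: "right" in every non-terminal state
```

After training, the greedy policy moves right in every non-terminal state, which is the optimal behaviour on this chain.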
Previous Slides
Chapter 01 - Introduction (pdf)
Chapter 02 - Linear learning machines (pdf)
Chapter 03 - Learning theory (pdf)
Chapter 04 - Neural networks (pdf)
Chapter 05 - Support-vector machines (pdf)
Chapter 06 - Deep learning (pdf)
Chapter 07 - Recurrent neural networks (pdf)
Chapter 08 - Reinforcement Learning (pdf)
Chapter 09 - Deep Reinforcement Learning (pdf)
Bonus - Introduction to the game of Go (pdf)
To use Jupyter notebooks in the B202, follow these guidelines (pdf).
Exercise 01 - Introduction to Python and NumPy. (questions, solution)
Exercise 02 - Linear classification. (questions, solution)
Exercise 03 - Cross-validation. (questions, solution)
Exercise 04 - Multi-layer perceptron. (questions, solution)
Exercise 05 - Multi-layer perceptron on the MNIST dataset. (questions, solution)
Exercise 06 - Support-vector machines. (questions, solution)
Exercise 07 - Convolutional neural networks. (questions, solution)
Exercise 08 - Transfer learning. (questions, solution)
Exercise 09 - Reinforcement learning (skipped). (questions, solution)
Exercise 10 - Q-learning and Gridworld. (questions, solution)