
Professur Künstliche Intelligenz

Winter semester 2017/18 (WS2017)

Lecture: Wednesday, 9.15 - 10.45, 1/316 (Prof. F. Hamker)
Exercise: Tuesday, 17.15 - 18.45, 1/B202 (Dr. J. Baladron)
Exercise: Wednesday, 17.15 - 18.45, 1/B202 (Dr. J. Baladron)
Exercise: Monday, 17.15 - 18.45, 1/B202 (F. Escudero)

The lectures start on Wednesday, 18 October 2017. The exercises start on Monday, 9 October 2017. During the first two weeks of exercises, an introduction to the MATLAB programming environment will be given.

Neurokognition I

Contents

The course introduces the modelling of neurocognitive processes in the brain. Neurocognition is a research field located at the interface between psychology, neuroscience, computer science, and physics. It serves the understanding of the brain on the one hand and the development of intelligent adaptive systems on the other. Neurokognition I mainly presents realistic neural models, network properties, and learning. In the exercises, the algorithms from the lecture are deepened by implementing them in MATLAB. Knowledge of MATLAB is not a prerequisite for participation; it can be acquired before the course begins if needed. For this purpose, a script for self-study and two practical exercise sessions are offered in the first weeks of the lecture.

Objectives: This module teaches theoretical concepts and provides practical experience in the development of neurocognitive models.

Requirements

Recommended prerequisites: basic knowledge of Mathematics I to IV

Examination: oral examination, 5 credit points

Literature

Dayan, P. & Abbott, L., Theoretical Neuroscience, MIT Press, 2001.


Syllabus

Introduction

The introduction motivates the goals of the course and basic concepts of models. It further explains why computational models are useful to understand the brain and why cognitive computational models can lead to a new approach in modeling truly intelligent agents.

The styles of computation used by biological systems are fundamentally different from those used by conventional computers: biological neural networks process information using energy-efficient, asynchronous, event-driven methods. They learn from their interactions with the environment and can flexibly produce complex behaviors. These biological abilities yield a potentially attractive alternative to conventional computing strategies.

Neurokognition I provides an introduction to computational modelling at the neural level. Starting from the basic properties of electrical circuits, different formal neuron models are introduced. The second focus is devoted to learning and plasticity, where different learning rules are introduced and discussed. Finally, different network mechanisms are introduced, laying the ground for complex large-scale models of the brain.


MATLAB Introduction

As the programming language MATLAB will be used throughout the exercises, it will be introduced in the first two exercise sessions. For further organizational details, see the link above.

Part I Model neurons

Part I describes the basic computational elements in the brain, the neurons. A neuron is already an extremely complicated device. Thus, models at different abstraction levels are introduced.

1.1 The neuron as an electrical circuit

Biophysical models of single cells are constructed using electrical circuits composed of resistors, capacitors, and voltage and current sources. This lecture introduces these basic elements.

1.2 Integrate-and-fire models
Appendix: Numerical Integration

Integrate-and-fire models describe the membrane potential with an ordinary differential equation (ODE), but do not explicitly model the generation of an action potential. Instead, they emit a spike whenever the membrane potential reaches a particular threshold.
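The numerical integration appendix applies directly here: with the forward Euler method, a leaky integrate-and-fire neuron takes only a few lines. The exercises use MATLAB; the following is an equivalent Python sketch with assumed toy parameters (voltages in mV, time in ms):

```python
import numpy as np

def lif(I, T=100.0, dt=0.1, tau=10.0, v_rest=-65.0, v_th=-50.0,
        v_reset=-65.0, R=10.0):
    """Leaky integrate-and-fire neuron, integrated with forward Euler.

    tau * dv/dt = -(v - v_rest) + R * I; emit a spike and reset when v >= v_th.
    Returns the membrane trace and the list of spike times (ms).
    """
    n = int(T / dt)
    v = np.full(n, v_rest)
    spikes = []
    for t in range(1, n):
        dv = (-(v[t - 1] - v_rest) + R * I) / tau
        v[t] = v[t - 1] + dt * dv
        if v[t] >= v_th:          # threshold crossing replaces the action potential
            spikes.append(t * dt)
            v[t] = v_reset
    return v, spikes

v, spikes = lif(I=2.0)   # suprathreshold input: regular firing
```

Halving `dt` should change the spike times only slightly; this is a quick check that the Euler step is small enough.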

Exercise I.1: Integrate-and-fire model (2 units),
Files: iandf.m, iandf2.m, makeie.m, normal.m,
Solution, slides, Euler's method
Solution as spreadsheet (ods)
Solution as spreadsheet (Excel)
Numerical solution of a simple ODE and the interaction of h and tau
Stepwise solution of the first integrate-and-fire neuron

1.3 Membrane equation

The membrane potential controls a vast number of nonlinear gates and can vary rapidly over large distances. The equilibrium potential is the result of electrostatic forces and diffusion by thermal energy and can be described by the Nernst and Goldman-Hodgkin-Katz equations.
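For instance, the equilibrium potential of a single ion species can be computed directly from the concentrations with the Nernst equation. A small Python sketch (the potassium concentrations below are typical textbook values, used here only as an example):

```python
import math

def nernst(z, c_out, c_in, T=310.0):
    """Nernst equilibrium potential in millivolts.

    z: ion valence; c_out, c_in: concentrations (same unit on both sides);
    T: absolute temperature in kelvin (310 K = body temperature).
    """
    R = 8.314      # gas constant, J/(mol K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Example K+ concentrations: 4 mM outside, 140 mM inside -> about -95 mV
e_k = nernst(z=1, c_out=4.0, c_in=140.0)
```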

1.4 Hodgkin and Huxley model

The Hodgkin-Huxley (H-H) theory of the action potential, formulated 60 years ago, ranks among the most significant conceptual breakthroughs in neuroscience (see Häusser, 2000). We here provide a full explanation of this model and its properties.
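For readers who want to experiment beyond the simulator used in the exercise, the following Python sketch integrates the four H-H equations with forward Euler. It uses the standard textbook parameters in the modern sign convention (rest near -65 mV), not the original 1952 units:

```python
import numpy as np

def hh(I=10.0, T=50.0, dt=0.01):
    """Hodgkin-Huxley model; I in uA/cm^2, conductances in mS/cm^2, time in ms."""
    g_na, g_k, g_l = 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387
    c_m = 1.0
    v, m, h, n_gate = -65.0, 0.05, 0.6, 0.32   # resting state
    steps = int(T / dt)
    trace = np.empty(steps)
    for i in range(steps):
        # voltage-dependent opening/closing rates of the channel gates
        # (a_m and a_n have removable singularities at v = -40 and v = -55,
        #  which generic floating-point trajectories do not hit exactly)
        a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
        # ionic currents
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n_gate**4 * (v - e_k)
        i_l = g_l * (v - e_l)
        # forward Euler updates
        v += dt * (I - i_na - i_k - i_l) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n_gate += dt * (a_n * (1.0 - n_gate) - b_n * n_gate)
        trace[i] = v
    return trace

trace = hh()   # I = 10 uA/cm^2 drives repetitive action potentials
```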

Exercise I.2: Hodgkin and Huxley model (3 units), Files: hhsim_v3.1.zip
Solution

1.5 Beyond Hodgkin and Huxley

The Hodgkin-Huxley model captures important principles of neural action potentials. However, since its formulation several important discoveries have been made. This lecture presents a few important ones, including more detailed models.

1.6 Synapses

The synapse is remarkably complex and involves many simultaneous processes. This lecture explains how to model different (AMPA, NMDA, GABA) synapses and include them in the integrate-and-fire model.
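A common simplification is a conductance-based synapse with exponential decay: each presynaptic spike increments a conductance, which then decays and drives a current towards the synaptic reversal potential. Below is a Python sketch of such an (AMPA-like, excitatory) synapse attached to a leaky integrate-and-fire neuron; all parameters are assumed toy values, with the synaptic conductance expressed in units of the leak conductance:

```python
import numpy as np

def lif_with_syn(pre_spikes, T=100.0, dt=0.1, tau_m=10.0, tau_syn=5.0,
                 v_rest=-65.0, v_th=-50.0, e_syn=0.0, w=0.5):
    """LIF neuron driven by a conductance-based synapse.

    pre_spikes: presynaptic spike times in ms.
    Returns the membrane trace and the postsynaptic spike times.
    """
    n = int(T / dt)
    v, g = v_rest, 0.0
    trace = np.empty(n)
    spikes = []
    spike_bins = {round(t / dt) for t in pre_spikes}
    for i in range(n):
        if i in spike_bins:
            g += w                       # each presynaptic spike opens channels
        i_syn = g * (e_syn - v)          # drives v towards E_syn = 0 mV
        v += dt * (-(v - v_rest) + i_syn) / tau_m
        g += dt * (-g / tau_syn)         # exponential conductance decay
        if v >= v_th:
            spikes.append(i * dt)
            v = v_rest
        trace[i] = v
    return trace, spikes

trace, out_spikes = lif_with_syn([10.0, 12.0, 14.0])   # a short input burst
```

An NMDA synapse would additionally gate `i_syn` by a voltage-dependent magnesium-block factor, and a GABA synapse would use a reversal potential below rest.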

Exercise I.3: Synapses (1 unit), Files: Initial file
Solution

1.7 Hybrid spiking models

What is now a good model for large-scale computations using thousands or millions of interconnected neurons? Whereas integrate-and-fire neurons do not capture sufficiently well the firing patterns of real neurons, Hodgkin and Huxley type neurons are computationally very demanding. Izhikevich (2003) developed a model with only a few parameters that can capture sufficiently well firing patterns of real neurons. Based on this model and others, Brette and Gerstner (2005) developed an adaptive exponential integrate-and-fire model and compared it to a Hodgkin and Huxley type model. The Izhikevich and the adaptive exponential integrate-and-fire models are good candidates for large-scale models with realistic neural dynamics and firing patterns. See also the review of Izhikevich (2010).
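Because the Izhikevich model is only two coupled equations plus a reset, it is easy to try out. A Python sketch with the published regular-spiking parameters (a, b, c, d) = (0.02, 0.2, -65, 8); the step size and input current are assumed example values:

```python
import numpy as np

def izhikevich(I=10.0, T=200.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich (2003) hybrid spiking model.

    v' = 0.04 v^2 + 5 v + 140 - u + I
    u' = a (b v - u); on a spike (v >= 30 mV): v <- c, u <- u + d.
    Returns the membrane trace and the spike times (ms).
    """
    n = int(T / dt)
    v, u = -65.0, b * -65.0
    trace = np.empty(n)
    spikes = []
    for i in range(n):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            trace[i] = 30.0            # clip the spike peak for plotting
            v, u = c, u + d            # reset: the "hybrid" part of the model
            spikes.append(i * dt)
        else:
            trace[i] = v
    return trace, spikes

trace, spikes = izhikevich()   # I = 10 gives tonic (regular) spiking
```

Changing only the four parameters reproduces other firing patterns, e.g. bursting or fast spiking, which is what makes the model attractive for large-scale simulations.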

Additional Reading: Izhikevich, E.M. (2004) Which Model to Use for Cortical Spiking Neurons? IEEE Transactions on Neural Networks, 15:1063-1070.

1.8 The rate code

As explained in previous lectures, the basic transfer from one neuron to another set of neurons relies on action potentials (spikes). However, for several phenomena it is sufficient to model just the spike rate of neurons rather than individual spikes. This lecture introduces basic rate-code models.
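In their simplest form, rate models replace the spiking mechanism by a differential equation for the firing rate itself. A minimal Python sketch with a rectified-linear gain function (an assumed choice; sigmoidal gains are equally common):

```python
def rate_neuron(I, T=100.0, dt=0.1, tau=10.0):
    """Single rate-coded unit: tau * dr/dt = -r + [I]_+ (rectified input).

    The rate relaxes to the steady state r* = max(I, 0) with time constant tau.
    """
    n = int(T / dt)
    r = 0.0
    for _ in range(n):
        r += dt * (-r + max(I, 0.0)) / tau
    return r

r = rate_neuron(5.0)   # converges to a rate of about 5
```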

1.9 Phase Plane Models and Stability Analysis

As outlined above, single neurons but also coupled neurons can be well described as a dynamical system. Thus, all concepts of dynamical systems can be applied to neural models. Here I give a brief introduction to concepts of stability analysis and explain terms such as bifurcation, limit cycles, and phase-plane methods.
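These concepts can be explored numerically. The Python sketch below locates the resting state of the FitzHugh-Nagumo equations (a classic two-dimensional neuron model) by Newton's method and classifies its stability from the eigenvalues of the Jacobian; the parameter values are conventional textbook choices:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Central finite-difference Jacobian of a vector field f at point x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J

def newton(f, x0, steps=20):
    """Newton iteration for a fixed point f(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - np.linalg.solve(jacobian(f, x), np.asarray(f(x), dtype=float))
    return x

# FitzHugh-Nagumo vector field with conventional parameters
a, b, tau, I = 0.7, 0.8, 12.5, 0.0
def fhn(x):
    v, w = x
    return [v - v**3 / 3 - w + I, (v + a - b * w) / tau]

x_star = newton(fhn, [-1.0, -0.5])              # resting state near (-1.2, -0.62)
eig = np.linalg.eigvals(jacobian(fhn, x_star))
stable = bool(np.all(eig.real < 0))             # negative real parts => stable
```

For sufficiently large input I the real parts of the eigenvalues cross zero (a Hopf bifurcation) and a limit cycle appears, which corresponds to repetitive firing.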

Exercise I.4: Hybrid spiking models (1 unit), Files: izhikevich.m, izhikevich.jpg
Solution

1.10 Poisson neurons

Although it is generally agreed that neurons signal information through sequences of action potentials, the neural code by which information is transferred through the cortex remains elusive. In the cortex, the timing of successive action potentials is highly irregular. This lecture explains under which conditions such firing patterns can be modeled by a Poisson process.
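A homogeneous Poisson spike train is easy to generate: in each small time bin of width dt, a spike occurs independently with probability rate * dt. A Python sketch (the rate and duration are assumed example values):

```python
import numpy as np

def poisson_spikes(rate, T, dt=0.001, rng=None):
    """Homogeneous Poisson spike train.

    rate in Hz, T and dt in seconds; returns the spike times in seconds.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n = int(T / dt)
    return np.nonzero(rng.random(n) < rate * dt)[0] * dt

spikes = poisson_spikes(rate=20.0, T=10.0)
isi = np.diff(spikes)
cv = isi.std() / isi.mean()   # coefficient of variation, near 1 for a Poisson process
```

The CV near 1 is the signature of the irregular interspike intervals discussed above; a perfectly regular train would have CV = 0.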

Additional Reading: Shadlen MN, Newsome WT. (1994) Noise, neural codes and cortical organization. Curr Opin Neurobiol. 4:569-79.


Part II Learning

2.1 Neural principles

Learning is a fundamental property of the brain (see Markram, Gerstner, Sjöström, 2011). This lecture introduces the most important concepts of synaptic plasticity.

2.2 Rate-based learning rules

Early concepts of modeling learning have been described in the form of rate-based learning rules. This chapter describes the most important ones, such as Hebbian, Oja, and BCM learning.
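As a concrete example, Oja's rule adds a decay term to plain Hebbian learning so that the weights stay bounded; the weight vector then converges to the first principal component of the inputs. A Python sketch on toy correlated data (learning rate, data, and seeds are assumed example values):

```python
import numpy as np

def oja(X, eta=0.01, epochs=100, rng=None):
    """Oja's rule: dw = eta * y * (x - y * w), with output y = w . x.

    The decay term -eta * y^2 * w keeps |w| near 1, so w converges to the
    first principal component of the input data X (rows are samples).
    """
    rng = rng if rng is not None else np.random.default_rng(1)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)
    return w

# Correlated 2-D toy data: variance concentrated along the diagonal
rng = np.random.default_rng(2)
s = rng.normal(size=(500, 1))
X = np.hstack([s, s]) + 0.1 * rng.normal(size=(500, 2))
w = oja(X)   # converges to roughly (0.71, 0.71) up to sign, the leading PC
```

Plain Hebbian learning (dropping the decay term) would let |w| grow without bound on the same data, which motivates the normalization built into Oja's rule.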

2.3 Homeostatic Plasticity

Hebbian plasticity is probably insufficient to explain activity-dependent development in neural networks, because it tends to destabilize the activity of neural circuits. We here discuss how circuits can maintain stable activity states by mechanisms of homeostatic plasticity.

Learning A (1 unit), Files: SingleNeuronExercise.m

Exercise II.1: Hebbian Learning with a single neuron, Files: Ex_2_1_Learning_A.zip
Solution

Exercise II.2: Learning with multiple neurons, Files: Ex_2_2_Learning_B.zip
Solution

2.4 Spike-time dependent learning rules

Hebbian learning in rate-coded networks does not capture effects of neural plasticity that depend on the exact timing of pre- and postsynaptic action potentials. We introduce spike-time dependent plasticity rules and discuss how well they account for recent data. In particular, it has been observed that simple spike-time dependent plasticity rules do not account well for learning in the presence of multiple pre- and postsynaptic action potentials and for firing-rate dependent effects of plasticity. Finally, the lecture provides an overview of a novel learning rule that accounts for much of these data while keeping the rule simple (see Clopath, Büsing, Vasilaki, and Gerstner, 2010 as well as Clopath and Gerstner, 2010 for details).
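The basic pair-based STDP window underlying these discussions can be written in a few lines. A Python sketch (the amplitudes and time constants are assumed illustrative values):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (ms).

    Pre-before-post (delta_t > 0) potentiates with amplitude a_plus;
    post-before-pre (delta_t < 0) depresses with amplitude a_minus.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))
```

Summing this window over all spike pairs is exactly the simplification criticized above: it ignores interactions between multiple spikes and the dependence on firing rate and postsynaptic voltage that the Clopath et al. rule adds.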

Exercise II.3: Learning with voltage-based STDP (1 unit), Files: Exercise_2_3.zip
Solution

2.5 Synaptic Tagging

Changes in synaptic efficacies need to be long-lasting in order to serve as a substrate for memory. Eligibility traces can help to solve the distal reward learning problem (Izhikevich, 2007). Experimentally, synaptic plasticity exhibits phases covering the induction of long-term potentiation and depression (LTP/LTD) during the early phase of synaptic plasticity, the setting of synaptic tags, a trigger process for protein synthesis, and a slow transition leading to synaptic consolidation during the late phase of synaptic plasticity. We here discuss a mathematical model of Clopath et al. (PLoS Computational Biology, 2008) that describes these different phases of synaptic plasticity.

Exercise II.4: Learning - Synaptic Tagging, Files: Exercise_2_4.zip


Part III Networks

3.1 Lateral and feedback circuits

Important properties in the brain emerge from the interactions between neurons. This lecture provides an overview of properties that appear in local, lateral, and feedback circuits, in particular Dynamic Neural Fields and Balanced Excitation and Inhibition Networks.

Additional Reading:
Grossberg, S (1988) Nonlinear Neural Networks: Principles, Mechanisms, and Architectures. Neural Networks, 1:17-61. (only part 1-15)


3.2 Coordinate Transformations

Space can be represented in different coordinate frames, such as eye-centered for early visual processing and head-centered for acoustic stimuli. It appears that the brain relies on different coordinate systems to represent the space in our external environment. Thus, this lecture explains how stimuli can be transformed from one coordinate system to another.
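A common modelling approach (used in MATLAB in Exercise III.1 below) is to encode the two input variables with radial basis functions and read the transformed coordinate out linearly. The following Python sketch illustrates the idea on a 1-D toy problem in which the head-centered position is simply retinal position plus eye position; all parameters and ranges are assumed example values:

```python
import numpy as np

# Toy data: head-centered position = retinal position + eye position (degrees)
rng = np.random.default_rng(0)
x_ret = rng.uniform(-20, 20, size=400)
x_eye = rng.uniform(-20, 20, size=400)
x_head = x_ret + x_eye                       # target coordinate transformation

# Gaussian basis functions on a 9 x 9 grid over (retinal, eye) space
centers = np.array([(r, e) for r in np.linspace(-20, 20, 9)
                           for e in np.linspace(-20, 20, 9)])
sigma = 5.0
def features(ret, eye):
    d2 = (ret[:, None] - centers[:, 0])**2 + (eye[:, None] - centers[:, 1])**2
    return np.exp(-d2 / (2 * sigma**2))

# Linear readout fitted by least squares
Phi = features(x_ret, x_eye)
w, *_ = np.linalg.lstsq(Phi, x_head, rcond=None)

# Query: retinal position 5, eye position -3 -> head-centered near 2
pred = features(np.array([5.0]), np.array([-3.0])) @ w
```

The hidden units combine retinal and eye signals multiplicatively through their Gaussian tuning, which is the basis-function account of the gain fields observed in parietal cortex.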

Exercise III.1: Radial Basis Function Networks (1 unit), Files: Exercise_rbf.zip
Solution

Exercise III.2: ANNarchy - a neural network architect (1 unit), Files: Exercise_ANN.tar

Suggested Reading:
Cohen & Andersen (2002). Nature Rev Neurosci 3:553-562.
Pouget, A., and Snyder, L. (2000) Computational approaches to sensorimotor transformations. Nature Neuroscience. 3:1192-1198.
Deneve S, Latham PE, Pouget A. (2001) Efficient computation and cue integration with noisy population codes. Nat Neurosci., 4:826-31.
Pouget A, Deneve S, Duhamel JR. (2002) A computational perspective on the neural basis of multisensory spatial representations. Nat Rev Neurosci., 3:741-7.
Salinas E, Abbott LF (1996) A model of multiplicative neural responses in parietal cortex. Proc Natl Acad Sci U S A, 93:11956-61.


Part IV Mechanisms

4.1 Cortical magnification

In the cortex, the visual space is overrepresented at the fovea, which means that much more cortical area is devoted to processing information around the center of visual space. A cortical magnification function makes it possible to model the relation between visual and cortical space, providing a method to account for this overrepresentation in models of visual perception.
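A frequently used form of the magnification function is the inverse-linear rule M(E) = M0 / (1 + E/E2); integrating it over eccentricity gives the cortical distance of a visual location from the foveal representation. A Python sketch with parameter values in the range reported for human V1 (assumed here for illustration):

```python
import math

def cortical_distance(E, M0=17.3, E2=0.75):
    """Cortical distance (mm) from the foveal representation to eccentricity E.

    Uses the inverse-linear magnification M(E) = M0 / (1 + E/E2) in mm/deg,
    whose integral from 0 to E is M0 * E2 * ln(1 + E/E2).
    """
    return M0 * E2 * math.log(1 + E / E2)

# The central 2 degrees occupy far more cortex than the 2 degrees from 10 to 12
central = cortical_distance(2) - cortical_distance(0)
peripheral = cortical_distance(12) - cortical_distance(10)
```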

Suggested Reading:
Rovamo J, Virsu V (1983) Isotropy of cortical magnification and topography of striate cortex. Vision Res 24: 283-286.


Part V Neural Simulators

5.1 Overview of neural simulators

This lecture provides an overview of neural simulators.
