
Professur Künstliche Intelligenz

Neurokognition I

WS2019

Exam dates

The exams for Neurokognition I and II will take place on the following dates:
19 February, 2 March, and 16 March 2020,
each day from 10:00 to 12:00 and from 13:00 to 16:00.

The oral exams for Neurokognition I on 16 March 2020 and 17 March 2020 will take place as scheduled.

Please register directly with Susan Köhler, susan.koehler@informatik.tu-chemnitz.de.

Contents

The course introduces the modelling of neurocognitive processes in the brain. Neurocognition is a research field located at the interface between psychology, neuroscience, computer science, and physics. It serves to understand the brain on the one hand and to develop intelligent adaptive systems on the other. Neurokognition I mainly presents realistic neuron models, network properties, and learning. In the exercises, the algorithms from the lecture are deepened through an implementation in MATLAB. Knowledge of MATLAB is not a prerequisite for participation; if needed, it can be acquired before the course starts. For this purpose, a script for self-study and two practical exercises are offered in the first weeks of the lecture.

Objectives: This module conveys theoretical concepts and practical experience in the development of neurocognitive models.

General conditions

Recommended prerequisites: basic knowledge of Mathematics I to IV

Examination: oral exam, 5 credit points

Literature

Dayan, P. & Abbott, L., Theoretical Neuroscience, MIT Press, 2001.

Exercises

Registration before the first exercise: https://bildungsportal.sachsen.de/opal/auth/RepositoryEntry/18086756352


Syllabus

Introduction

The introduction motivates the goals of the course and basic modelling concepts. It further explains why computational models are useful for understanding the brain and why cognitive computational models can lead to a new approach to modelling truly intelligent agents.

The styles of computation used by biological systems are fundamentally different from those used by conventional computers: biological neural networks process information using energy-efficient, asynchronous, event-driven methods. They learn from their interactions with the environment and can flexibly produce complex behaviours. These biological abilities offer a potentially attractive alternative to conventional computing strategies.

Neurokognition I provides an introduction to computational modelling at the neural level. Starting with basic properties of electrical circuits, different formal neuron models are introduced. The second focus is devoted to learning and plasticity, where different learning rules are introduced and discussed. Finally, different network mechanisms are introduced, laying the groundwork for complex large-scale models of the brain.


MATLAB Introduction

Since MATLAB will be used in all exercises, it is introduced in the first exercise session. For an in-depth introduction, see this script.

Part I Model neurons

Part I describes the basic computational elements of the brain: the neurons. A single neuron is already an extremely complicated device; thus, models at different levels of abstraction are introduced.

1.1 The neuron as an electrical circuit

Biophysical models of single cells are constructed using electrical circuits composed of resistors, capacitors, and voltage and current sources. This lecture introduces these basic elements.

Preparatory Exercise: Euler's method
Numerical solution of a simple ODE and the interplay between the step size h and the time constant tau (see the sketch below)
Stepwise solution of a first integrate-and-fire neuron
Presentation in English and German
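
As a preparation, the following MATLAB sketch illustrates Euler's method for the leaky membrane equation tau dV/dt = -V + R*I; the parameter values are illustrative assumptions, not those of the exercise sheet.

% Euler's method for the leaky membrane equation tau*dV/dt = -V + R*I
% Illustrative parameters only; the exercise may use different values.
tau = 10;        % membrane time constant (ms)
R   = 1;         % membrane resistance
I   = 2;         % constant input current
h   = 0.1;       % step size (ms); larger h reduces accuracy, h > 2*tau makes Euler unstable
T   = 100;       % total simulated time (ms)
t   = 0:h:T;
V   = zeros(size(t));
V(1) = 0;        % initial condition
for k = 1:numel(t)-1
    dVdt   = (-V(k) + R*I) / tau;   % right-hand side of the ODE
    V(k+1) = V(k) + h * dVdt;       % explicit Euler update
end
plot(t, V); xlabel('t (ms)'); ylabel('V'); title('Euler integration');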

1.2 Integrate-and-fire models
Appendix: Numerical Integration

Integrate-and-fire models describe the membrane potential with an ODE but do not explicitly model the generation of an action potential. Instead, they emit an action potential whenever the membrane potential reaches a particular threshold.
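
A minimal sketch of a leaky integrate-and-fire neuron with threshold and reset; all parameter values are assumptions for illustration, the actual task is defined in the exercise files.

% Leaky integrate-and-fire neuron: integrate the membrane ODE and
% reset the potential whenever it crosses the firing threshold.
tau     = 10;    % membrane time constant (ms)
R       = 1;     % membrane resistance
I       = 16;    % constant input current
V_rest  = -70;   % resting potential (mV)
V_th    = -55;   % firing threshold (mV)
V_reset = -70;   % reset potential (mV)
h = 0.1; T = 200; t = 0:h:T;
V = V_rest * ones(size(t));
spike_times = [];
for k = 1:numel(t)-1
    dVdt   = (-(V(k) - V_rest) + R*I) / tau;
    V(k+1) = V(k) + h * dVdt;
    if V(k+1) >= V_th                      % threshold crossing: emit a spike
        spike_times(end+1) = t(k+1);       % record spike time
        V(k+1) = V_reset;                  % and reset the membrane potential
    end
end
plot(t, V); xlabel('t (ms)'); ylabel('V (mV)');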

Exercise 1: Integrate-and-fire model (2 units), Files: NKI_01.zip
1.3 Membrane equation

The membrane potential controls a vast number of nonlinear gates and can vary rapidly over large distances. The equilibrium potential is the result of electrostatic forces and diffusion by thermal energy and can be described by the Nernst and Goldman-Hodgkin-Katz equations.
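
As a numerical illustration, the Nernst potential for potassium can be computed as follows; the ion concentrations are typical textbook values and serve only as an assumption.

% Nernst equation: E = (R*T)/(z*F) * ln([ion]_out / [ion]_in)
R = 8.314;       % gas constant (J/(mol*K))
T = 310;         % temperature (K), roughly body temperature
F = 96485;       % Faraday constant (C/mol)
z = 1;           % valence of K+
K_out = 5;       % extracellular K+ concentration (mM), textbook value
K_in  = 140;     % intracellular K+ concentration (mM), textbook value
E_K = (R*T)/(z*F) * log(K_out / K_in) * 1000;   % Nernst potential in mV
fprintf('Nernst potential for K+: %.1f mV\n', E_K);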

1.4 Hodgkin and Huxley model

The Hodgkin-Huxley (H-H) theory of the action potential, formulated 60 years ago, ranks among the most significant conceptual breakthroughs in neuroscience (see Häusser, 2000). We here provide a full explanation of this model and its properties.
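
For orientation, the sketch below integrates the classical Hodgkin-Huxley equations with the standard squid-axon parameters in the modern voltage convention (resting potential near -65 mV), using a simple Euler scheme. It is only an assumed illustration; the exercise works with the provided hhsim simulator.

% Hodgkin-Huxley model of the squid giant axon (standard parameters),
% integrated with the explicit Euler method.
C = 1; gNa = 120; gK = 36; gL = 0.3;        % capacitance and maximal conductances
ENa = 50; EK = -77; EL = -54.4;             % reversal potentials (mV)
h_dt = 0.01; T = 50; t = 0:h_dt:T;
I = 10 * (t > 5 & t < 45);                  % current pulse (uA/cm^2)
% rate functions alpha/beta for the gating variables n, m, h
an = @(V) 0.01*(V+55)./(1-exp(-(V+55)/10)); bn = @(V) 0.125*exp(-(V+65)/80);
am = @(V) 0.1*(V+40)./(1-exp(-(V+40)/10));  bm = @(V) 4*exp(-(V+65)/18);
ah = @(V) 0.07*exp(-(V+65)/20);             bh = @(V) 1./(1+exp(-(V+35)/10));
V = -65;                                    % start at rest, gates at steady state
n = an(V)/(an(V)+bn(V)); m = am(V)/(am(V)+bm(V)); hh = ah(V)/(ah(V)+bh(V));
Vtrace = zeros(size(t));
for k = 1:numel(t)
    Vtrace(k) = V;
    INa = gNa * m^3 * hh * (V - ENa);       % sodium current
    IK  = gK  * n^4 * (V - EK);             % potassium current
    IL  = gL  * (V - EL);                   % leak current
    V  = V  + h_dt * (I(k) - INa - IK - IL) / C;
    n  = n  + h_dt * (an(V)*(1-n)  - bn(V)*n);
    m  = m  + h_dt * (am(V)*(1-m)  - bm(V)*m);
    hh = hh + h_dt * (ah(V)*(1-hh) - bh(V)*hh);
end
plot(t, Vtrace); xlabel('t (ms)'); ylabel('V (mV)');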

Exercise 2: Hodgkin and Huxley model (3 units), Files: hhsim_36.zip
1.5 Beyond Hodgkin and Huxley

The Hodgkin-Huxley model captures important principles of neural action potentials. However, several important discoveries have been made since then. This lecture presents a few important ones, including more detailed models.

1.6 Synapses

The synapse is remarkably complex and involves many simultaneous processes. This lecture explains how to model different (AMPA, NMDA, GABA) synapses and include them in the integrate-and-fire model.
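
A minimal sketch of a conductance-based, AMPA-like synapse added to a leaky integrate-and-fire neuron; the parameter values and the random presynaptic spike train are assumptions for illustration only.

% Conductance-based synapse in a leaky integrate-and-fire neuron:
% each presynaptic spike increments the synaptic conductance g,
% which then decays exponentially with time constant tau_syn.
tau_m = 10; tau_syn = 5;       % membrane / synaptic time constants (ms)
E_syn = 0;                     % excitatory (AMPA-like) reversal potential (mV)
V_rest = -70; V_th = -55; V_reset = -70;
dg = 0.5;                      % conductance jump per presynaptic spike (in units of the leak conductance)
h = 0.1; T = 500; t = 0:h:T;
pre_spikes = rand(size(t)) < 0.02;   % random presynaptic spike train
V = V_rest; g = 0; Vtrace = zeros(size(t));
for k = 1:numel(t)
    Vtrace(k) = V;
    g = g + dg * pre_spikes(k);                      % conductance jump on spike arrival
    dVdt = (-(V - V_rest) + g*(E_syn - V)) / tau_m;  % leak plus synaptic current
    V = V + h * dVdt;
    g = g - h * g / tau_syn;                         % exponential decay of the conductance
    if V >= V_th, V = V_reset; end                   % spike and reset
end
plot(t, Vtrace); xlabel('t (ms)'); ylabel('V (mV)');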

Exercise 3: Synapses (1 unit), Files: Initial file
1.7 Hybrid spiking models

What, then, is a good model for large-scale computations using thousands or millions of interconnected neurons? Whereas integrate-and-fire neurons do not capture the firing patterns of real neurons sufficiently well, Hodgkin-Huxley-type neurons are computationally very demanding. Izhikevich (2003) developed a model with only a few parameters that captures the firing patterns of real neurons sufficiently well. Based on this model and others, Brette and Gerstner (2005) developed an adaptive exponential integrate-and-fire model and compared it to a Hodgkin-Huxley-type model. The Izhikevich and the adaptive exponential integrate-and-fire models are good candidates for large-scale models with realistic neural dynamics and firing patterns. See also the review of Izhikevich (2010).
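
For orientation, a compact sketch of the Izhikevich (2003) model with the published regular-spiking parameter set; the step current and its onset time are assumptions for illustration.

% Izhikevich (2003) hybrid spiking model, regular-spiking parameters:
%   v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u)
%   if v >= 30 mV: v <- c, u <- u + d
a = 0.02; b = 0.2; c = -65; d = 8;   % regular-spiking (RS) cell
h = 0.5; T = 500; t = 0:h:T;
I = 10 * (t > 50);                   % step current switched on at t = 50 ms
v = -65; u = b * v;
vtrace = zeros(size(t));
for k = 1:numel(t)
    vtrace(k) = v;
    v = v + h * (0.04*v^2 + 5*v + 140 - u + I(k));
    u = u + h * a * (b*v - u);
    if v >= 30                       % spike: reset v and increment u
        vtrace(k) = 30;              % record the spike peak
        v = c; u = u + d;
    end
end
plot(t, vtrace); xlabel('t (ms)'); ylabel('v (mV)');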

Additional Reading: Izhikevich, E.M. (2004) Which Model to Use for Cortical Spiking Neurons? IEEE Transactions on Neural Networks, 15:1063-1070.
Exercise 4: Hybrid Spiking Models (1 unit), Files: Initial file, Spiking Patterns
1.8 The rate code

As explained in previous lectures, the basic transfer of information from one neuron to another set of neurons relies on action potentials (spikes). However, for several phenomena it is sufficient to model just the spike rate of neurons rather than the individual spikes. This lecture introduces basic rate-code models.
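
A minimal sketch of a rate-coded neuron with a saturating transfer function; the weights, inputs, and time constant are assumptions for illustration.

% Rate-coded neuron: tau * dr/dt = -r + f(w' * x), with a simple
% positive, saturating transfer function f.
tau = 20;                         % time constant of the rate dynamics (ms)
w = [0.5; 1.0; -0.3];             % synaptic weights (illustrative)
x = [1.0; 0.8; 0.5];              % constant presynaptic firing rates
f = @(u) max(0, tanh(u));         % transfer function (rectified, saturating)
h = 1; T = 200; t = 0:h:T;
r = zeros(size(t));               % postsynaptic firing rate
for k = 1:numel(t)-1
    u = w' * x;                   % net input to the neuron
    r(k+1) = r(k) + h * (-r(k) + f(u)) / tau;
end
plot(t, r); xlabel('t (ms)'); ylabel('firing rate r');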

1.9 Phase Plane Models and Stability Analysis

As outlined above, single neurons, but also coupled neurons, can be well described as a dynamical system. Thus, all concepts of dynamical systems can be applied to neural models. Here we give a brief introduction to concepts of stability analysis and explain terms such as bifurcation, limit cycles, and phase-plane methods.
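
As a small illustration of phase-plane analysis, the sketch below uses the FitzHugh-Nagumo model (a standard two-dimensional neuron reduction, chosen here only as an assumed example): it plots the two nullclines and checks the stability of the fixed point via the eigenvalues of the Jacobian.

% Phase-plane analysis of the FitzHugh-Nagumo model:
%   dv/dt = v - v^3/3 - w + I,   dw/dt = eps*(v + a - b*w)
a = 0.7; b = 0.8; eps_ = 0.08; I = 0.5;
v = linspace(-2.5, 2.5, 400);
v_null = v - v.^3/3 + I;          % v-nullcline: dv/dt = 0
w_null = (v + a) / b;             % w-nullcline: dw/dt = 0
plot(v, v_null, v, w_null); xlabel('v'); ylabel('w'); legend('dv/dt = 0', 'dw/dt = 0');
% fixed point: intersection of the two nullclines
g = @(v) v - v.^3/3 + I - (v + a)/b;
v_fp = fzero(g, 0); w_fp = (v_fp + a)/b;
% stability: eigenvalues of the Jacobian at the fixed point
J = [1 - v_fp^2, -1; eps_, -eps_*b];
lambda = eig(J);
fprintf('fixed point (%.3f, %.3f), eigenvalues %.3f%+.3fi and %.3f%+.3fi\n', ...
        v_fp, w_fp, real(lambda(1)), imag(lambda(1)), real(lambda(2)), imag(lambda(2)));

With the chosen input I the fixed point is an unstable spiral, i.e. the system settles on a limit cycle (repetitive spiking); with I = 0 the fixed point is stable.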

1.10 Poisson neurons

Although it is generally agreed that neurons signal information through sequences of action potentials, the neural code by which information is transferred through the cortex remains elusive. In the cortex, the timing of successive action potentials is highly irregular. This lecture explains under which conditions such firing patterns can be modelled by a Poisson process.
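
A minimal sketch of generating a homogeneous Poisson spike train by thresholding uniform random numbers in small time bins; the rate and bin size are assumptions for illustration.

% Homogeneous Poisson spike train: in each small bin of width dt the
% probability of a spike is approximately rate*dt (valid for rate*dt << 1).
rate = 20;                 % firing rate (Hz)
dt   = 0.001;              % bin width (s)
T    = 10;                 % duration (s)
nbins = round(T/dt);
spikes = rand(1, nbins) < rate*dt;     % logical spike train
spike_times = find(spikes) * dt;
isi = diff(spike_times);               % inter-spike intervals
fprintf('empirical rate: %.1f Hz, CV of ISIs: %.2f\n', ...
        sum(spikes)/T, std(isi)/mean(isi));   % CV close to 1 for a Poisson process
hist(isi, 30); xlabel('ISI (s)'); ylabel('count');   % roughly exponential ISI distribution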

Additional Reading: Shadlen MN, Newsome WT. (1994) Noise, neural codes and cortical organization. Curr Opin Neurobiol. 4:569-79.


Part II Learning

2.1 Neural principles

Learning is a fundamental property of the brain (see Markram, Gerstner, Sjöström, 2011). This lecture introduces the most important concepts of synaptic plasticity.

2.2 Rate-based learning rules

Early concepts of modelling learning were described in the form of rate-based learning rules. This lecture describes the most important ones, such as Hebbian, Oja, and BCM learning.
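
As an illustration, a minimal sketch of Oja's rule for a single linear neuron driven by correlated two-dimensional inputs; the covariance matrix and learning rate are assumptions for illustration.

% Oja's learning rule for a single linear neuron y = w' * x:
%   dw = eta * y * (x - y * w)
% The weight vector converges to the first principal component of the input.
N = 2000;
C = [3 1; 1 1];                          % input covariance (illustrative)
X = randn(N, 2) * chol(C);               % zero-mean correlated input patterns
eta = 0.01;                              % learning rate
w = randn(2, 1); w = w / norm(w);        % random initial weight vector
for k = 1:N
    x = X(k, :)';                        % present one input pattern
    y = w' * x;                          % linear output
    w = w + eta * y * (x - y * w);       % Oja's rule: Hebbian term plus decay
end
[V, D] = eig(C);                         % compare with the principal component
[~, idx] = max(diag(D));
fprintf('learned w: [%.2f %.2f], first PC: [%.2f %.2f] (equal up to sign)\n', w, V(:, idx));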

2.3 Homeostatic Plasticity

Hebbian plasticity is probably insufficient to explain activity-dependent development in neural networks, because it tends to destabilize the activity of neural circuits. We here discuss how circuits can maintain stable activity states by mechanisms of homeostatic plasticity.
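
A minimal sketch of synaptic scaling, one form of homeostatic plasticity, in which all incoming weights are scaled multiplicatively so that the postsynaptic rate approaches a target rate; all values are assumptions for illustration.

% Synaptic scaling: multiplicatively adjust all incoming weights so that
% the postsynaptic firing rate approaches a target rate r_target.
r_target = 5;                     % target firing rate (Hz)
eta = 0.01;                       % scaling rate (slow compared to the activity)
w = rand(10, 1);                  % incoming weights
x = 2 * rand(10, 1);              % presynaptic rates (Hz), held fixed here
f = @(u) max(0, u);               % rectified-linear transfer function
steps = 500; r = zeros(steps, 1);
for k = 1:steps
    r(k) = f(w' * x);                       % postsynaptic firing rate
    w = w * (1 + eta * (r_target - r(k)));  % scale all weights up or down together
end
plot(r); xlabel('step'); ylabel('rate (Hz)');   % converges toward r_target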

Exercise 5: Hebbian Learning with a single neuron (1 unit), Files: NKI_05.zip

Exercise 6: Learning with multiple neurons (1 unit), Files: NKI_06.zip
2.4 Spike-time dependent learning rules

Hebbian learning in rate-coded networks does not capture effects of neural plasticity that depend on the exact timing of pre- and postsynaptic action potentials. We introduce spike-time dependent plasticity (STDP) rules and discuss how well they account for recent data. In particular, it has been observed that simple spike-time dependent plasticity rules do not account well for learning in the presence of multiple pre- and postsynaptic action potentials or for firing-rate dependent effects of plasticity. Finally, the lecture provides an overview of a novel learning rule that accounts for much of the data while keeping the rule simple (see Clopath, Büsing, Vasilaki, and Gerstner, 2010 as well as Clopath and Gerstner, 2010 for details).
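
For orientation, a minimal sketch of the standard pair-based STDP learning window (the simple rule, not the voltage-based rule used in the exercise); amplitudes and time constants are assumptions for illustration.

% Pair-based STDP: the weight change depends on the time difference
% dt = t_post - t_pre through an exponential learning window:
%   dt > 0 (pre before post): potentiation  A_plus  * exp(-dt/tau_plus)
%   dt < 0 (post before pre): depression   -A_minus * exp( dt/tau_minus)
A_plus = 0.01; A_minus = 0.012;       % amplitudes (slight bias toward depression)
tau_plus = 20; tau_minus = 20;        % time constants of the window (ms)
dt = -100:1:100;                      % t_post - t_pre (ms)
dw = zeros(size(dt));
dw(dt > 0)  =  A_plus  * exp(-dt(dt > 0)  / tau_plus);
dw(dt <= 0) = -A_minus * exp( dt(dt <= 0) / tau_minus);
plot(dt, dw); xlabel('t_{post} - t_{pre} (ms)'); ylabel('\Deltaw');
title('STDP learning window');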

 

Exercise 7: Learning with voltage-based STDP (1 unit), Files: NKI_07.zip
2.5 Synaptic Tagging

Changes in synaptic efficacies need to be long-lasting in order to serve as a substrate for memory. Eligibility traces can help to solve the distal reward learning problem (Izhikevich, 2007). Experimentally, synaptic plasticity exhibits phases covering the induction of long-term potentiation and depression (LTP/LTD) during the early phase of synaptic plasticity, the setting of synaptic tags, a trigger process for protein synthesis, and a slow transition leading to synaptic consolidation during the late phase of synaptic plasticity. We here discuss a mathematical model of Clopath et al. (PLoS Computational Biology, 2008) that describes these different phases of synaptic plasticity.
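
As a small illustration of the eligibility-trace idea only (in the spirit of Izhikevich, 2007, not the Clopath et al. tagging model itself; all parameters and the random event train are assumptions): STDP events set a decaying trace at the synapse, and the trace is converted into a weight change only when a delayed reward signal arrives.

% Eligibility traces for distal reward: STDP events do not change the weight
% directly but add to a decaying trace c; the weight changes only while a
% (delayed) reward signal d is present: dw/dt = c * d.
tau_c = 1000;                      % decay time constant of the trace (ms)
dt = 1; T = 5000; t = 0:dt:T;
c = 0; w = 0.5;
stdp_events  = (rand(size(t)) < 0.01) .* (0.02*randn(size(t)) + 0.01);  % random STDP updates
reward_times = [2000 3500];        % rewards delivered well after the causing events
ctrace = zeros(size(t)); wtrace = zeros(size(t));
for k = 1:numel(t)
    c = c + stdp_events(k);                   % STDP event adds to the eligibility trace
    c = c - dt * c / tau_c;                   % trace decays slowly
    d = any(abs(t(k) - reward_times) < dt/2); % brief reward (dopamine) pulse
    w = w + dt * c * d;                       % weight changes only during reward
    ctrace(k) = c; wtrace(k) = w;
end
subplot(2,1,1); plot(t, ctrace); ylabel('eligibility c');
subplot(2,1,2); plot(t, wtrace); ylabel('weight w'); xlabel('t (ms)');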

Exercise 8: Synaptic Tagging (1 unit), Files: NKI_08.zip

2.6 Learning: Node Perturbation

 


Part III Networks

3.1 Lateral and feedback circuits

Important properties of the brain emerge from the interactions between neurons. This lecture provides an overview of properties that appear in local, lateral, and feedback circuits, in particular Dynamic Neural Fields and networks with balanced excitation and inhibition.
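
A small sketch of a one-dimensional dynamic neural field with lateral interactions (local excitation and surround inhibition via a difference-of-Gaussians kernel); kernel shape, resting level, and input are assumptions for illustration.

% 1D dynamic neural field: tau * du/dt = -u + h0 + S(x) + W * f(u)
% with a difference-of-Gaussians (local excitation, surround inhibition) kernel.
N = 100; x = 1:N;
sig_e = 4; sig_i = 12;                        % excitatory / inhibitory kernel widths
[X1, X2] = meshgrid(x, x);
d = min(abs(X1 - X2), N - abs(X1 - X2));      % circular distances between positions
W = 1.2*exp(-d.^2/(2*sig_e^2)) - 0.8*exp(-d.^2/(2*sig_i^2));
h0 = -2;                                      % resting level of the field
S = 5 * exp(-(x - 50).^2 / (2*3^2))';         % localized input around position 50
f = @(u) 1 ./ (1 + exp(-u));                  % sigmoidal output function
tau = 10; dt = 1; u = h0 * ones(N, 1);
for k = 1:500                                 % relax the field to its attractor state
    u = u + dt * (-u + h0 + S + W * f(u)) / tau;
end
plot(x, u, x, S); legend('field activity u', 'input S'); xlabel('position');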

 

Additional Reading:
Grossberg, S (1988) Nonlinear Neural Networks: Principles, Mechanisms, and Architectures. Neural Networks, 1:17-61. (only parts 1-15)

3.2 Coordinate Transformations

Space can be represented in different coordinate frames, such as eye-centered for early visual processing and head-centered for acoustic stimuli. It appears that the brain relies on different coordinate systems to represent the space in our external environment. Thus, this lecture explains how stimuli can be transformed from one coordinate system to another.
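
As orientation for the exercise, a minimal sketch of a radial basis function network that learns a toy coordinate transformation (head-centred position = retinal position + eye position) with hidden units tuned jointly to both inputs and a linear readout; the whole setup is an assumed example, not the exercise itself.

% Radial basis function network for a coordinate transformation:
% hidden units are tuned jointly to retinal position and eye position;
% a linear readout is trained to report the head-centred position.
n_samples = 2000;
ret    = 20 * (rand(n_samples, 1) - 0.5);      % retinal positions in [-10, 10]
eyepos = 20 * (rand(n_samples, 1) - 0.5);      % eye positions in [-10, 10]
head   = ret + eyepos;                         % target: head-centred position
[cr, ce] = meshgrid(-10:2:10, -10:2:10);       % grid of basis-function centres
centers = [cr(:), ce(:)];                      % 121 hidden units
sigma = 3;                                     % width of the basis functions
Phi = exp(-(bsxfun(@minus, ret, centers(:,1)').^2 + ...
            bsxfun(@minus, eyepos, centers(:,2)').^2) / (2*sigma^2));
w = Phi \ head;                                % least-squares linear readout
pred = Phi * w;
fprintf('readout RMSE: %.3f\n', sqrt(mean((pred - head).^2)));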

 

Exercise 9: Radial Basis Function Networks (1 unit), Files: NKI_09.zip

Suggested Reading:
Cohen & Andersen (2002). Nature Rev Neurosci 3:553-562.
Pouget, A., and Snyder, L. (2000) Computational approaches to sensorimotor transformations. Nature Neuroscience. 3:1192-1198.
Deneve S, Latham PE, Pouget A. (2001) Efficient computation and cue integration with noisy population codes. Nat Neurosci., 4:826-31.
Pouget A, Deneve S, Duhamel JR. (2002) A computational perspective on the neural basis of multisensory spatial representations. Nat Rev Neurosci., 3:741-7.
Salinas E, Abbott LF (1996) A model of multiplicative neural responses in parietal cortex. Proc Natl Acad Sci U S A, 93:11956-61.


Part IV Mechanisms

4.1 Cortical magnification

In the cortex, visual space is overrepresented around the fovea, which means that much more cortical area is devoted to processing information near the centre of the visual field. A cortical magnification function models the relation between visual and cortical space, providing a way to account for this overrepresentation in models of visual perception.
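
As a small numerical illustration, the sketch below uses a commonly cited approximation of the human cortical magnification factor, M(E) = 17.3/(E + 0.75) mm/deg (Horton & Hoyt, 1991), to map eccentricity onto cortical distance; this specific function is an assumption here and may differ from the one used in the lecture.

% Cortical magnification: map visual eccentricity E (deg) to cortical
% distance (mm) using the approximation M(E) = 17.3/(E + 0.75) mm/deg.
% Cortical distance is the integral of M from 0 to E.
M = @(E) 17.3 ./ (E + 0.75);               % magnification factor (mm/deg)
E = linspace(0, 40, 400);                  % eccentricities (deg)
D = 17.3 * log(1 + E / 0.75);              % closed-form integral of M (mm)
subplot(1,2,1); plot(E, M(E)); xlabel('eccentricity (deg)'); ylabel('M (mm/deg)');
subplot(1,2,2); plot(E, D);    xlabel('eccentricity (deg)'); ylabel('cortical distance (mm)');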

Suggested Reading:
Rovamo J, Virsu V (1983) Isotropy of cortical magnification and topography of striate cortex. Vision Res 24: 283-286.


Part V Neural Simulators

5.1 Overview of neural simulators

This lecture provides an overview of neural simulators.