
Chair of Artificial Intelligence

Neurokognition II


Summer Semester 2020

Lecture: Tuesday, 13:30 - 15:00, 1/153 (Prof. F. Hamker)
Tutorial (English): Monday, 17:15 - 18:45, 1/B202 (Prof. F. Hamker, V. Forch)
Tutorial: Thursday, 11:30 - 13:00, 1/B202 (Prof. F. Hamker, V. Forch)


Registration on OPAL.


The course introduces the modeling of neurocognitive processes in the brain. Neurocognition is a research field located at the intersection of psychology, neuroscience, computer science, and physics. It serves the understanding of the brain on the one hand and the development of intelligent adaptive systems on the other. Neurokognition II examines more complex models of neuropsychological processes, with the goal of developing new algorithms for intelligent, cognitive robots. Topics include perception, memory, action control, emotions, decision making, and spatial perception. For a deeper understanding, the tutorials also include practical computer-based assignments.


Recommended prerequisites: basic knowledge of Mathematics I-IV, Neurokognition I

Examination: oral examination

Objectives: subject-specific knowledge of neurocognition


Part I Introduction

The introduction motivates the goals of the course and presents basic concepts of modeling. It further explains why computational models are useful for understanding the brain and why cognitive computational models can lead to a new approach to modeling truly intelligent agents.

The styles of computation used by biological systems are fundamentally different from those used by conventional computers: biological neural networks process information using energy-efficient, asynchronous, event-driven methods. They learn from their interactions with the environment and can flexibly produce complex behaviors. These biological abilities offer a potentially attractive alternative to conventional computing strategies.
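As a toy illustration of event-driven, spike-based processing, the sketch below simulates a leaky integrate-and-fire neuron: information leaves the neuron only as discrete spike events, not as clocked values. This is a minimal illustration, not part of the course materials; all parameters (tau, threshold, input level) are assumed for demonstration.

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Euler integration of dv/dt = (v_rest - v + I) / tau.
    Returns the membrane trace and the spike times (step indices)."""
    v = v_rest
    trace, spikes = [], []
    for step, i_t in enumerate(input_current):
        v += dt * (v_rest - v + i_t) / tau
        if v >= v_thresh:          # threshold crossing -> emit a spike event
            spikes.append(step)
            v = v_reset            # reset the membrane after the spike
        trace.append(v)
    return np.array(trace), spikes

# A constant supra-threshold input produces a regular train of spike events.
current = np.full(200, 1.5)
trace, spikes = lif_simulate(current)
```

Downstream neurons would only be updated when a spike arrives, which is what makes event-driven simulation energy- and compute-efficient.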

Neurokognition II is particularly devoted to modeling perception, cognition, and behavior in large-scale neural networks. The course introduces models of early vision, attention, object recognition, space perception, cognitive control, memory, emotion, and consciousness.

Part II Early Vision

Vision is perhaps our most important source of sensory information about the environment. The lecture "Early Vision" explains the first processing steps of visual perception.

Adelson, E. H., Bergen, J. R. (1985): Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A, 2:284-299.
DeAngelis, G. C., Ohzawa, I., Freeman, R. D. (1995): Receptive-field dynamics in the central visual pathways. Trends in Neurosciences (TINS), 18(10).

Vision starts in the retina, which is considered part of the brain. The lecture explains the concept of a receptive field and introduces simple models of early processing that model dynamic receptive fields.
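A classic model of a retinal receptive field is the difference of Gaussians (DoG): a narrow excitatory center minus a broad inhibitory surround. The sketch below, with illustrative (not physiologically fitted) sigma values, shows the characteristic behavior: near-zero response to uniform illumination, strong response at a luminance edge.

```python
import numpy as np

def dog_kernel(x, sigma_c=1.0, sigma_s=3.0):
    """ON-center DoG: excitatory center minus inhibitory surround."""
    center = np.exp(-x**2 / (2 * sigma_c**2)) / (sigma_c * np.sqrt(2 * np.pi))
    surround = np.exp(-x**2 / (2 * sigma_s**2)) / (sigma_s * np.sqrt(2 * np.pi))
    return center - surround

x = np.arange(-10, 11)
kernel = dog_kernel(x)

# Responses to a uniform field vs. a luminance edge: the DoG filter
# suppresses uniform input and responds strongly near the edge.
uniform = np.ones(100)
edge = np.concatenate([np.zeros(50), np.ones(50)])
r_uniform = np.convolve(uniform, kernel, mode='valid')
r_edge = np.convolve(edge, kernel, mode='valid')
```

Dynamic receptive fields, as treated in the lecture, additionally let the center and surround weights evolve over time; the static filter above is the simplest special case.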

Additional Reading:
Cai D, DeAngelis GC, and Freeman RD (1997) Spatiotemporal receptive field organization in the LGN of cats and kittens. J Neurophysiol 78:1045-1061.

Shape perception builds on the fact that the visual system has filters that respond optimally to oriented bars or edges; this filtering takes place in area V1, also called striate cortex. This lecture introduces the receptive fields of neurons in V1 and explains what kind of information V1 encodes with respect to shape perception.
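The standard model of a V1 simple-cell receptive field is a Gabor filter: a sinusoidal grating under a Gaussian envelope. The sketch below (all parameter values are illustrative assumptions) demonstrates orientation selectivity: the filter responds far more strongly to a grating at its preferred orientation than to an orthogonal one.

```python
import numpy as np

def gabor(size=21, theta=0.0, wavelength=8.0, sigma=4.0, phase=0.0):
    """Gabor: oriented sinusoidal carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength + phase)

def grating(size=21, theta=0.0, wavelength=8.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return np.cos(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / wavelength)

# Orientation selectivity: compare the response (inner product) to a
# preferred-orientation grating vs. an orthogonal grating.
filt = gabor(theta=0.0)
r_pref = np.abs(np.sum(filt * grating(theta=0.0)))
r_orth = np.abs(np.sum(filt * grating(theta=np.pi / 2)))
```

Exercise II.1 below works with exactly this filter family.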

Color perception starts in the retina, since we have receptors that are selective for different wavelengths of light. This lecture introduces models of color-selective receptive fields.
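A minimal sketch of cone-opponent coding, with assumed illustrative cone responses (L, M, S): the classic opponent channels combine cone signals as red-green (L - M) and blue-yellow (S minus the L/M average).

```python
import numpy as np

def opponent_channels(lms):
    """Map cone responses (L, M, S) to opponent color channels."""
    l, m, s = lms
    red_green = l - m                 # red-green opponency
    blue_yellow = s - (l + m) / 2.0   # blue-yellow opponency
    luminance = l + m                 # achromatic (luminance) channel
    return red_green, blue_yellow, luminance

# A "reddish" stimulus drives L cones more than M cones -> positive
# red-green signal, negative blue-yellow signal.
rg, by, lum = opponent_channels(np.array([0.8, 0.3, 0.1]))
```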

In the cortex, the fovea is overrepresented: much more cortical space is devoted to processing information around the center of the visual field. A cortical magnification function models the relation between visual and cortical space and thereby provides a method to account for this overrepresentation in models of visual perception.

Suggested Reading: Rovamo J, Virsu V (1983) Isotropy of cortical magnification and topography of striate cortex. Vision Res 24: 283-286.
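A common form of the magnification function is M(E) = M0 / (1 + E/E2), where E is eccentricity; integrating M gives the cortical distance corresponding to a visual distance. The sketch below uses this form with assumed parameter values (M0, E2 are illustrative, not the fitted values from the reading).

```python
import numpy as np

M0 = 8.0   # foveal magnification, mm of cortex per degree (assumed)
E2 = 2.5   # eccentricity at which M has halved, in degrees (assumed)

def magnification(ecc):
    """Cortical magnification M(E) = M0 / (1 + E/E2), in mm/deg."""
    return M0 / (1.0 + ecc / E2)

def cortical_distance(ecc):
    """Integral of M from 0 to ecc: d(E) = M0 * E2 * ln(1 + E/E2), in mm."""
    return M0 * E2 * np.log(1.0 + ecc / E2)

# The central 2 degrees claim far more cortex than 2 degrees in the periphery.
mm_center = cortical_distance(2.0) - cortical_distance(0.0)
mm_periphery = cortical_distance(22.0) - cortical_distance(20.0)
```

In models of visual perception, this mapping is used to allocate model neurons per degree of visual angle in proportion to M(E).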

Motion perception begins already in area V1 with motion-sensitive cells and then continues in the dorsal pathway in areas MT and MST.

Suggested Reading:
Adelson, E. H., Bergen, J. R. (1985): Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A, 2:284-299.
Simoncelli, E. P., Heeger, D. J. (1998): A model of neuronal responses in visual area MT. Vision Research, 38(5):743-761.
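The core idea of the Adelson-Bergen motion-energy model can be sketched in one spatial dimension: filters oriented in space-time prefer a particular velocity, and squaring and summing a quadrature pair of such filters yields a phase-invariant "motion energy". The sketch below uses illustrative parameters and simple inner products instead of full convolutions.

```python
import numpy as np

T, X = 32, 32
t, x = np.mgrid[0:T, 0:X].astype(float)

def st_filter(speed, phase):
    """Space-time Gabor tilted for a preferred speed (pixels per frame)."""
    tc, xc = T / 2, X / 2
    env = np.exp(-((t - tc)**2 / 50.0 + (x - xc)**2 / 50.0))
    return env * np.cos(2 * np.pi * ((x - xc) - speed * (t - tc)) / 10.0 + phase)

def motion_energy(stim, speed):
    even = np.sum(stim * st_filter(speed, 0.0))
    odd = np.sum(stim * st_filter(speed, np.pi / 2))
    return even**2 + odd**2   # quadrature pair -> phase-invariant energy

# A bar drifting rightward at 1 pixel per frame excites the rightward-tuned
# energy unit far more than the leftward-tuned one.
stim = np.exp(-((x - (t - T / 2) - X / 2)**2) / 8.0)
e_right = motion_energy(stim, speed=1.0)
e_left = motion_energy(stim, speed=-1.0)
```

Opponent energy (e_right - e_left) then gives a directional motion signal, which is the stage the Simoncelli-Heeger MT model builds on.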

Seeing in three dimensions requires extracting depth information from the visual scene. One cue, binocular disparity, is the primary focus.

Suggested Reading: Read JCA (2005) Early computational processing in binocular vision and depth perception. Progress in Biophysics and Molecular Biology 87:77-108.
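A toy illustration of the disparity problem (simpler than the energy models in the reading): the right eye sees a horizontally shifted copy of the left eye's image, and the shift can be recovered by finding the displacement that maximizes the correlation between corresponding rows. All details here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
left = rng.standard_normal(200)        # random 1-D "image" row (left eye)
true_disparity = 7
right = np.roll(left, true_disparity)  # right eye: shifted copy of the row

def estimate_disparity(left_row, right_row, max_d=15):
    """Pick the shift that maximizes correlation between the two rows."""
    shifts = list(range(-max_d, max_d + 1))
    scores = [np.dot(np.roll(left_row, d), right_row) for d in shifts]
    return shifts[int(np.argmax(scores))]

d_hat = estimate_disparity(left, right)
```

Disparity-energy neurons implement a local, phase- or position-shifted version of this matching within their receptive fields.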

Gain normalization appears to be a canonical neural computation in sensory systems and possibly also in other neural systems. Gain normalization is introduced, and examples of normalization in the retina, in primary visual cortex, in higher visual cortical areas, and in non-visual cortical areas are given.

Suggested Reading: Carandini M., Heeger DJ. (2012) Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13:51-62.

Part III High-level Vision

High-level vision deals with questions of how we recognize objects or scenes and how we direct processing resources to particular aspects of visual scenes (visual attention).

Attention refers to mechanisms that allow the focusing of processing resources. Experimental observations, neural principles and system-level models of attention are described.

Suggested Reading:
Beuth, F., Hamker, F. H. (2015) A mechanistic cortical microcircuit of attention for amplification, normalization and suppression. Vision Research, 116:241-257.
Reynolds JH, Heeger DJ (2009) The normalization model of attention. Neuron 61: 168-185.
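The Reynolds-Heeger model combines attention with normalization: the stimulus drive is multiplied by an attention field before being divided by a suppressive drive pooled over space. The 1-D sketch below uses illustrative widths and gains (not the paper's parameters).

```python
import numpy as np

x = np.arange(100, dtype=float)

def gaussian(center, sigma):
    return np.exp(-(x - center)**2 / (2 * sigma**2))

def response(stim_drive, attn_field, sigma=0.5, pool_sigma=10.0):
    """Attention-scaled drive, normalized by spatially pooled drive."""
    excitatory = stim_drive * attn_field            # attention scales the drive
    pool = gaussian(50, pool_sigma)
    pool /= pool.sum()                              # normalized pooling kernel
    suppressive = np.convolve(excitatory, pool, mode='same')
    return excitatory / (sigma + suppressive)

stim = gaussian(30, 3) + gaussian(70, 3)            # two identical stimuli
attn = 1.0 + 2.0 * gaussian(30, 8)                  # attend the left stimulus
attended = response(stim, attn)
neutral = response(stim, np.ones_like(x))
```

With these settings, attending the left stimulus boosts its response relative to both the unattended baseline and the right stimulus, illustrating the model's basic amplification effect.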

Part IV Cognition

Cognition deals with the question of how a system can learn and execute complex tasks and control its sensors and actions.


Exercise I.1: Tutorial on the neuro-simulator ANNarchy, Files:
Exercise II.1: Gabor filters, Files:
Exercise II.2: Depth perception, Files:, solution Part B
Exercise III.1: Object Recognition and HMAX, Files: ; Article: Serre, Wolf and Poggio (2004).
Exercise III.2: Normalization model of attention, Files: solution Part B.
Exercise III.3: Visual attention and experimental data, Files: .
Exercise III.4: Space perception, Files:
Exercise IV.1: Basal ganglia, Files:
Exercise IV.2: Hippocampus, Files: