|Lecture:||Tuesday,||13.30 - 15.00, 1/153||(Prof. F. Hamker)|
|Exercise (English):||Monday,||17.15 - 18.45, 1/B202||(Prof. F. Hamker, V. Forch)|
|Exercise:||Thursday,||11.30 - 13.00, 1/B202||(Prof. F. Hamker, V. Forch)|
Registration on OPAL: https://bildungsportal.sachsen.de/opal/auth/RepositoryEntry/23116349505.
This course introduces the modeling of neurocognitive processes in the brain. Neurocognition is a research field located at the intersection of psychology, neuroscience, computer science, and physics. It serves, on the one hand, to understand the brain and, on the other, to develop intelligent adaptive systems. Neurokognition II examines more complex models of neuropsychological processes, with the goal of developing new algorithms for intelligent, cognitive robots. Topics include perception, memory, action control, emotion, decision making, and spatial perception. To deepen understanding, the exercises also include practical computer-based assignments.
Recommended prerequisites: basic knowledge of Mathematics I to IV, Neurokognition I
Examination: oral examination
Objectives: subject-specific knowledge of neurocognition
Part I Introduction
The introduction motivates the goals of the course and basic concepts of models. It further explains why computational models are useful to understand the brain and why cognitive computational models can lead to a new approach in modeling truly intelligent agents.
The styles of computation used by biological systems are fundamentally different from those used by conventional computers: biological neural networks process information using energy-efficient, asynchronous, event-driven methods. They learn from their interactions with the environment and can flexibly produce complex behaviors. These abilities make biological computation a potentially attractive alternative to conventional computing strategies.
Neurokognition II is particularly devoted to modeling perception, cognition, and behavior in large-scale neural networks. The course introduces models of early vision, attention, object recognition, space perception, cognitive control, memory, emotion, and consciousness.
Part II Early Vision
Perhaps our most important sensory information about the environment comes from vision. The lecture "Early Vision" explains the first processing steps of visual perception. Overview:
DeAngelis, G., Ohzawa, I., Freeman, R.D. (1995): Receptive-field dynamics in the central visual pathways. TINS Vol. 18, No. 10, 1995
Vision starts in the retina, which is considered part of the brain. The lecture explains the concept of a receptive field and introduces simple models of early processing that capture dynamic receptive fields.
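As an illustration of such a receptive-field model, a retinal center-surround profile can be written as a difference of two Gaussians. This is a minimal sketch in Python/NumPy; all parameter values are illustrative assumptions, not values from the lecture or from DeAngelis et al. (1995):

```python
import numpy as np

def dog_receptive_field(size=21, sigma_center=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians model of an ON-center retinal receptive field.

    Parameter values are illustrative assumptions for this sketch.
    """
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    d2 = xx**2 + yy**2
    # Each Gaussian is normalized so it integrates to (approximately) one.
    center = np.exp(-d2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-d2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround

rf = dog_receptive_field()
# Uniform illumination is cancelled: center and surround nearly balance,
# so the response to a constant image is close to zero.
print(abs(rf.sum()) < 0.05)
```

Because the excitatory center and inhibitory surround roughly cancel, such a filter responds to local contrast (edges, spots) rather than to absolute luminance.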
Shape perception relies on visual filters that respond optimally to oriented bars or edges, a computation that takes place in area V1, also called striate cortex. This lecture introduces the receptive fields of neurons in V1 and explains what kind of information V1 encodes with respect to shape perception.
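A standard model of such an oriented V1 receptive field is the Gabor filter, a sinusoidal grating under a Gaussian envelope (this is also the topic of Exercise II.1). A minimal sketch with assumed parameter values:

```python
import numpy as np

def gabor(size=31, wavelength=8.0, theta=0.0, sigma=4.0, phase=0.0):
    """Gabor filter as a simple model of a V1 simple-cell receptive field.

    Parameter values here are illustrative assumptions.
    """
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    # Rotate coordinates so the carrier grating has orientation theta.
    x_t = xx * np.cos(theta) + yy * np.sin(theta)
    y_t = -xx * np.sin(theta) + yy * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength + phase)
    return envelope * carrier

# A vertical bar drives the vertically tuned filter more than the horizontal one.
img = np.zeros((31, 31))
img[:, 14:17] = 1.0
vertical = np.sum(gabor(theta=0.0) * img)
horizontal = np.sum(gabor(theta=np.pi / 2) * img)
print(vertical > horizontal)
```

The filter's orientation selectivity comes entirely from the rotated carrier: stimuli aligned with the grating sum constructively, while orthogonal stimuli average out under the cosine.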
Color perception starts in the retina, where receptors are selective for different wavelengths of light. This lecture introduces models of color-selective receptive fields.
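A common textbook-style simplification maps the L, M, and S cone signals onto opponent channels. The weightings below are an assumption of this sketch, not the exact model used in the lecture:

```python
def opponent_channels(l, m, s):
    """Map L, M, S cone activations to simple opponent channels.

    The weightings are a textbook-style simplification (an assumption
    of this sketch).
    """
    red_green = l - m               # +R/-G opponency
    blue_yellow = s - (l + m) / 2   # +B/-Y opponency
    luminance = l + m               # achromatic channel
    return red_green, blue_yellow, luminance

# A "reddish" stimulus drives L more than M and barely drives S.
rg, by, lum = opponent_channels(l=0.8, m=0.3, s=0.1)
print(rg > 0)   # the red-green channel signals "red"
```

Opponent coding of this kind decorrelates the heavily overlapping cone signals, which is one reason color-selective receptive fields in the retina and LGN are often described in opponent terms.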
In the cortex, visual space is overrepresented around the fovea: far more cortical surface is devoted to processing information near the center of the visual field. A cortical magnification function models the relation between visual and cortical space, providing a way to account for this overrepresentation in models of visual perception.
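One widely used form of the magnification function is inverse-linear in eccentricity, M(E) = M0 / (1 + E/E2); integrating it gives the cortical distance of an eccentricity from the foveal representation. The constants M0 and E2 below are illustrative assumptions, not values from the lecture:

```python
import numpy as np

M0 = 12.0   # assumed foveal magnification, mm of cortex per degree
E2 = 1.0    # assumed eccentricity (deg) at which M has halved

def magnification(ecc):
    """Inverse-linear cortical magnification M(E) = M0 / (1 + E/E2)."""
    return M0 / (1.0 + ecc / E2)

def cortical_distance(ecc):
    """Cortical distance from the fovea, the integral of M:
    D(E) = M0 * E2 * ln(1 + E/E2)."""
    return M0 * E2 * np.log(1.0 + ecc / E2)

# Overrepresentation of the fovea: the central 2 degrees occupy more
# cortex than the entire band from 10 to 30 degrees eccentricity.
print(cortical_distance(2.0) > cortical_distance(30.0) - cortical_distance(10.0))
```

The logarithmic form of D(E) is why cortical maps of visual space are often approximated as log-polar transforms in models of visual perception.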
Motion perception already begins in area V1 with motion-sensitive cells and then continues in the dorsal pathway in areas MT and MST.
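A classic way to build such direction selectivity is the Reichardt correlation detector, which multiplies a signal with the delayed signal of a neighbouring location. This is a minimal sketch of the principle, not the V1/MT model presented in the lecture:

```python
import numpy as np

def reichardt_response(signal, dt=1):
    """Elementary Reichardt (correlation) motion detector along one axis.

    signal: array of shape (time, space). The sign of the summed
    opponent response indicates the direction of motion.
    """
    a = signal[:-dt, :-1]   # left input, delayed
    b = signal[dt:, 1:]     # right input, current
    c = signal[:-dt, 1:]    # right input, delayed
    d = signal[dt:, :-1]    # left input, current
    # Opponent subtraction of the two mirror-symmetric half-detectors.
    return np.sum(a * b - c * d)

# A bar moving rightwards by one pixel per time step.
t, x = np.meshgrid(np.arange(20), np.arange(30), indexing="ij")
moving_right = (x == t).astype(float)
print(reichardt_response(moving_right) > 0)
```

Flipping the stimulus spatially reverses the sign of the response, which is the hallmark of an opponent direction-selective unit.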
Seeing in three dimensions requires extracting depth information from the visual scene. One method, binocular disparity, is the primary focus.
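The geometric core of depth from disparity can be stated in one line: in a simple pinhole stereo geometry, depth is Z = f * b / d, so near objects produce large disparities and far objects small ones. The baseline (roughly the human interocular distance) and focal length below are illustrative assumptions:

```python
def depth_from_disparity(disparity_px, baseline_m=0.064, focal_px=800.0):
    """Depth from binocular disparity in a pinhole stereo geometry:
    Z = f * b / d.

    Baseline (~6.4 cm, roughly the interocular distance) and focal
    length are illustrative assumptions.
    """
    return focal_px * baseline_m / disparity_px

near = depth_from_disparity(32.0)   # large disparity -> near object
far = depth_from_disparity(4.0)     # small disparity -> far object
print(near < far)
```

The inverse relation also explains why stereoscopic depth resolution degrades rapidly with distance: a fixed disparity error translates into an ever larger depth error as Z grows.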
Gain normalization appears to be a canonical neural computation in sensory systems and possibly in other neural systems as well. Gain normalization is introduced, and examples of normalization in the retina, in primary visual cortex, in higher visual cortical areas, and in non-visual cortical areas are given.
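The canonical computation can be sketched in a few lines: each neuron's driving input is divided by the pooled activity of a population. The semi-saturation constant and exponent below are assumed values for illustration:

```python
import numpy as np

def normalize(drive, sigma=1.0, n=2.0):
    """Canonical divisive normalization:
    R_i = drive_i^n / (sigma^n + sum_j drive_j^n).

    sigma (semi-saturation constant) and the exponent n are
    illustrative assumptions.
    """
    drive = np.asarray(drive, dtype=float)
    num = drive**n
    return num / (sigma**n + num.sum())

# A neuron's response to its preferred stimulus is suppressed when
# other stimuli are added to the normalization pool.
alone = normalize([10.0, 0.0, 0.0])[0]
with_context = normalize([10.0, 10.0, 10.0])[0]
print(with_context < alone)
```

This single operation reproduces several classic phenomena at once, including response saturation at high contrast and surround suppression, which is one reason it is considered canonical.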
Why does the brain develop a particular set of feature detectors for early vision? This lecture addresses how approaches that rely on learning help us better understand the coding of vision in the brain.
Olshausen and Field (2000): Vision and the Coding of Natural Images. American Scientist. 88:238-245
Simoncelli, E.P.: Vision and the statistics of the visual environment. Current Opinion in Neurobiology 2003, 13:144-149.
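The core idea behind the Olshausen & Field account is sparse coding: represent an image patch with as few active coefficients as possible under a learned dictionary. The sketch below only performs inference with ISTA-style updates on a random (not learned) dictionary; the dictionary size, step size, and sparsity penalty are all assumptions of this illustration:

```python
import numpy as np

def sparse_code(patch, dictionary, lam=0.1, steps=500, lr=0.05):
    """Infer sparse coefficients a for a patch I under a fixed dictionary Phi
    by minimizing ||I - Phi a||^2 + lam * |a|_1 (ISTA-style updates).

    A minimal sketch of the sparse-coding objective; all settings are
    assumptions, and the dictionary is random rather than learned.
    """
    a = np.zeros(dictionary.shape[1])
    for _ in range(steps):
        grad = dictionary.T @ (dictionary @ a - patch)
        a = a - lr * grad
        # Soft-thresholding enforces sparsity of the code.
        a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)
    return a

rng = np.random.default_rng(0)
phi = rng.normal(size=(64, 128))
phi /= np.linalg.norm(phi, axis=0)          # unit-norm dictionary elements
patch = 2.0 * phi[:, 3] + 1.5 * phi[:, 40]  # built from two dictionary elements
a = sparse_code(patch, phi)
print(np.argmax(np.abs(a)))  # index of the strongest recovered coefficient
```

When such a dictionary is additionally learned on natural images, the elements that emerge resemble the oriented, localized, bandpass receptive fields of V1, which is the central result this lecture builds on.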
Part III High-level Vision
High-level vision deals with questions of how we recognize objects or scenes and how we direct processing resources to particular aspects of visual scenes (visual attention).
Object recognition appears to be solved by a hierarchically organized system that progressively increases the complexity and invariance of feature detectors.
Serre, T, Wolf, L, Bileschi, S, Riesenhuber, M, Poggio, T (2007) Object recognition with cortex-like mechanisms. In: IEEE Transactions on Pattern Analysis and Machine Intelligence, 29:411-426.
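The hierarchy alternates two operations: "S" layers match templates against the input, and "C" layers pool over positions with a maximum, trading selectivity for invariance. This is a toy sketch of that HMAX idea with assumed sizes, not the Serre et al. implementation:

```python
import numpy as np

def s_layer(image, templates):
    """S ("simple") layer: template matching via valid cross-correlation,
    one response map per template."""
    h, w = templates[0].shape
    maps = []
    for t in templates:
        out = np.empty((image.shape[0] - h + 1, image.shape[1] - w + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + h, j:j + w] * t)
        maps.append(out)
    return maps

def c_layer(maps, pool=4):
    """C ("complex") layer: local max pooling builds position invariance."""
    pooled = []
    for m in maps:
        p = m[:m.shape[0] // pool * pool, :m.shape[1] // pool * pool]
        p = p.reshape(p.shape[0] // pool, pool,
                      p.shape[1] // pool, pool).max(axis=(1, 3))
        pooled.append(p)
    return pooled

# Max pooling makes the response tolerant to a small shift of the input.
img = np.zeros((16, 16)); img[4, 4] = 1.0
shifted = np.zeros((16, 16)); shifted[5, 5] = 1.0
template = np.ones((3, 3))
c1 = c_layer(s_layer(img, [template]))[0]
c2 = c_layer(s_layer(shifted, [template]))[0]
print(np.array_equal(c1, c2))
```

Stacking several such S/C pairs is what progressively increases both the complexity of the preferred features and their invariance to position and scale.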
Attention refers to mechanisms that allow the focusing of processing resources. Experimental observations, neural principles and system-level models of attention are described.
Reynolds JH, Heeger DJ (2009) The normalization model of attention. Neuron 61: 168-185.
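In the Reynolds & Heeger model, attention acts as a gain field on the stimulus drive before divisive normalization. The sketch below collapses their spatial and feature maps to a single vector and uses assumed parameters; see the paper for the full model:

```python
import numpy as np

def attention_normalization(drive, attention, sigma=1.0):
    """One-dimensional sketch of the normalization model of attention:
    the attention field scales the stimulus drive, which is then
    divisively normalized: R = (A * E) / (sigma + sum(A * E)).

    sigma and the attention gains are illustrative assumptions.
    """
    excitatory = attention * np.asarray(drive, dtype=float)
    return excitatory / (sigma + excitatory.sum())

drive = np.array([5.0, 5.0])   # two identical stimuli in the field
no_att = attention_normalization(drive, np.array([1.0, 1.0]))
att_first = attention_normalization(drive, np.array([2.0, 1.0]))
# Attending to the first stimulus boosts its response and, via the
# shared normalization pool, suppresses the unattended one.
print(att_first[0] > no_att[0] and att_first[1] < no_att[1])
```

Because the attentional gain also enters the normalization pool, the same mechanism can yield either response gain or contrast gain depending on the relative sizes of the stimulus and the attention field, which is the model's key explanatory point.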
The perception of space is crucial for systems that interact with the world. This lecture introduces the anatomical pathways of space perception. The primary focus is then the problem of "visual stability": why do we perceive a stable environment even though each eye movement changes the image on the retina?
Hamker, F. H., Zirnsak, M., Ziesche, A., Lappe, M. (2011) Computational models of spatial updating in peri-saccadic perception. Phil. Trans. R. Soc. B (2011), 366: 554-571.
Husain, M., Nachev, P. (2006) Space and the parietal cortex. Trends in Cognitive Sciences, 11:30-36.
Part IV Cognition
Cognition deals with questions of how a system can learn and execute complex tasks and exert control over its sensors and actions.
Wiecki, T.V., Frank, M.J. (2010) Neurocomputational models of motor and cognitive deficits in Parkinson's disease. Prog. Brain Res. 183:275-297.
|Exercise I.1: Tutorial on the neuro-simulator ANNarchy, Files: exerciseI.1.zip.|
| Exercise II.1: Gabor filters, Files: exerciseII.1.zip|
|Exercise II.2: Depth perception, Files: exerciseII.2.zip, solution Part B|
|Exercise III.1: Object Recognition and HMAX, Files: exerciseIII.1.zip; Article: Serre, Wolf and Poggio (2004).|
|Exercise III.2: Normalization model of attention, Files: exerciseIII.2.zip, solution Part B.|
|Exercise III.3: Visual attention and experimental data, Files: exerciseIII.3.zip.|
|Exercise III.4: Space perception, Files: exerciseIII.4.zip|
|Exercise IV.1: Basal ganglia, Files: exerciseIV.1.zip|
|Exercise IV.2: Hippocampus, Files: exerciseIV.2.zip|