
 

High dimensional computing - the upside of the curse of dimensionality

HDC Tutorial

 

 


Monday, September 23, 14:00–17:30
Location: Campus Center Seminarraum 6

Slides are available (see Schedule & Slides below)

 

 

In a nutshell

This tutorial at KI 2019 is about solving computational problems using calculations in vector spaces with thousands of dimensions. We will discuss theoretical foundations and applications, e.g., analogy mapping or how to recognize places from images of a 2800 km trip through Norway across different seasons.

 

Hands-on part

There will be a hands-on part where we demonstrate the application to mobile robot place recognition from scratch using Matlab. Everybody is invited either to simply enjoy the presentation or to code along, in Matlab or in any other suitable programming language that can work with large numerical vectors (e.g., Python).

 

Prepared Matlab code and data: zip

Code and data for other languages: zip

Topic

Humans typically gain an intuitive understanding of 2D and 3D Euclidean spaces very early in their lives. Higher dimensional spaces have some counterintuitive properties that render the generalization of many algorithms from low to high dimensional spaces useless, a phenomenon known as the curse of dimensionality. However, there is a whole class of approaches that aims at exploiting these properties, e.g., the fact that two random high dimensional vectors are very likely almost orthogonal. These approaches work in vector spaces with thousands of dimensions and are referred to as high dimensional computing, Vector Symbolic Architectures (VSA), or hypervector computing (a term that emphasizes the really large number of dimensions). They have interesting theoretical properties (e.g., they can answer Jackendoff’s challenges [1,2]) and can be applied to a wide range of tasks like analogy mapping [3], approximate inference [4], or modeling brains [5,6].
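This near-orthogonality is easy to check numerically. The following short Python sketch (our own illustration, not part of the tutorial material) samples pairs of random unit vectors and shows how their cosine similarities concentrate around zero as the dimensionality grows:

    import numpy as np

    # Sample random unit vectors and measure how close to orthogonal they are.
    rng = np.random.default_rng(42)
    for d in [3, 100, 10000]:
        x = rng.standard_normal((1000, d))
        x /= np.linalg.norm(x, axis=1, keepdims=True)  # normalize to unit length
        cos = x[:500] @ x[500:].T                      # cosines of 500x500 disjoint pairs
        print(f"d={d:6d}: mean |cos| = {np.abs(cos).mean():.3f}")

The mean absolute cosine similarity shrinks roughly like 1/sqrt(d), from about 0.5 for d=3 to below 0.01 for d=10,000: two random 10,000-dimensional vectors are almost certainly nearly orthogonal.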

 

The term “symbolic” in VSA highlights the approach of doing symbol manipulation with numerical vectors, bridging the gap between subsymbolic and symbolic processing. Distributed high dimensional representations are exceptionally robust to noise, an omnipresent problem when working with real world sensor data, e.g., in robotics. In the long term, this noise robustness may also allow the use of very power efficient stochastic devices that are prone to bit errors but extend the battery life of mobile devices. VSAs can combine manual design and learning (both one-shot learning from a single example and statistical learning), a currently actively explored field. Kanerva [7] has also (theoretically) suggested a “high dimensional computing Lisp” to address general computation tasks.
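To make the claimed noise robustness concrete, here is a minimal Python sketch (an assumed toy setup, not the tutorial’s code): it stores 1,000 random 4,000-bit symbols in a codebook and still identifies one of them correctly after 30% of its bits have been flipped:

    import numpy as np

    # Codebook of n random 4000-bit symbols.
    rng = np.random.default_rng(1)
    d, n = 4000, 1000
    codebook = rng.integers(0, 2, (n, d), dtype=np.uint8)

    # Corrupt symbol 42 by flipping 30% of its bits.
    noisy = codebook[42].copy()
    flip = rng.random(d) < 0.30
    noisy[flip] ^= 1

    # Nearest neighbor by normalized Hamming distance still recovers index 42.
    dists = np.mean(codebook != noisy, axis=1)
    print(dists.argmin(), dists.min())  # -> 42, ~0.30

Since unrelated random bitstrings differ in about 50% of their positions (with very small variance at d=4,000), even such a massive bit error rate leaves the stored symbol clearly identifiable.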

 

In a nutshell, the topic of the tutorial is an introduction to the properties of high dimensional vector spaces and to particular operators on these spaces that can be used to solve computational problems.

 

A famous example of high dimensional computing is Kanerva’s “What is the Dollar of Mexico?”:

Let’s assume we have a hypervector U (e.g., from the vector space V = {0,1}^4,000) representing the record for the USA (a set of role-filler pairs):

 

        U = (Name ⊗ USA) ⊕ (Capital ⊗ WashingtonDC) ⊕ (Monetary ⊗ Dollar)

 

Each of these words denotes a random high dimensional vector (in the above example: a 4,000-dimensional random bitstring); ⊗ and ⊕ are algebraic operators (V×V→V) on these vectors that are used to create the combined vector U. Also, we have a second hypervector M representing our knowledge about Mexico:

 

        M = (Name ⊗ Mexico) ⊕ (Capital ⊗ MexicoCity) ⊕ (Monetary ⊗ Peso)

 

Using high dimensional computing with carefully chosen operators for ⊗ and ⊕, a question like “What is the Dollar of Mexico?” can now be calculated by (U ⊗ M) ⊗ Dollar ≈ Peso.
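The whole example can be reproduced in a few lines. The following Python sketch uses the Binary Spatter Code operators as one possible concrete choice (XOR for ⊗, elementwise majority vote for ⊕); all variable names are ours:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 4000

    def bind(a, b):    # XOR binding; it is its own inverse
        return a ^ b

    def bundle(*vs):   # elementwise majority vote
        return (np.sum(vs, axis=0) > len(vs) / 2).astype(np.uint8)

    # Random atomic hypervectors for all roles and fillers.
    names = ["Name", "Capital", "Monetary", "USA", "WashingtonDC",
             "Dollar", "Mexico", "MexicoCity", "Peso"]
    hv = {n: rng.integers(0, 2, d, dtype=np.uint8) for n in names}

    U = bundle(bind(hv["Name"], hv["USA"]),
               bind(hv["Capital"], hv["WashingtonDC"]),
               bind(hv["Monetary"], hv["Dollar"]))
    M = bundle(bind(hv["Name"], hv["Mexico"]),
               bind(hv["Capital"], hv["MexicoCity"]),
               bind(hv["Monetary"], hv["Peso"]))

    # "What is the Dollar of Mexico?"
    query = bind(bind(U, M), hv["Dollar"])

    # Clean up the noisy result: nearest atomic vector by Hamming distance.
    dists = {n: np.mean(query != v) for n, v in hv.items()}
    print(min(dists, key=dists.get))  # -> Peso (distance ~0.37, all others ~0.5)

U ⊗ M contains the cross term Dollar ⊗ Peso among many noise terms; binding with Dollar turns it into a noisy version of Peso, which the nearest-neighbor search then cleans up.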

 

This topic is closely related to vector models for natural language processing and to neural-symbolic learning and reasoning. However, the background of the organizers is in mobile robotics, so the applications will be from this area and there will be some emphasis on the combination with real world sensor data.

 
[1] Jackendoff, R.: Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford University Press (2002)

[2] Gayler, R.W.: Vector symbolic architectures answer Jackendoff’s challenges for cognitive neuroscience. In: Proc. of ICCS/ASCS Int. Conf. on Cognitive Science, pp. 133–138. Sydney, Australia (2003)

[3] Kanerva, P.: Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors. Cognitive Computation 1(2), 139–159 (2009)

[4] Widdows, D., Cohen, T.: Reasoning with vectors: A continuous model for fast robust inference. Logic Journal of the IGPL (2), 141–173 (2015)

[5] Eliasmith, C., Stewart, T.C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., Rasmussen, D.: A large-scale model of the functioning brain. Science 338(6111), 1202–1205 (2012). DOI 10.1126/science.1225266

[6] Hawkins, J., Ahmad, S.: Why neurons have thousands of synapses, a theory of sequence memory in neocortex. Frontiers in Neural Circuits 10, 23 (2016). DOI 10.3389/fncir.2016.00023

[7] Kanerva, P.: Computing with 10,000-bit words. In: 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 304–310 (2014). DOI 10.1109/ALLERTON.2014.7028470

 

Target Audience

 

The target audience is researchers at any point in their career who are interested in the properties of high dimensional vector spaces. We believe a better understanding of the properties of these vector spaces can help researchers in various subfields of AI, either because they suffer from their drawbacks (the curse of dimensionality) or because there are ways to benefit from their special properties.

The tutorial is an introduction to the topic and its possible applications. No prior knowledge of high dimensional computing is required. A basic understanding of linear algebra and probability theory is helpful (undergraduate level).

The presented applications and demonstrations are mobile robotics tasks. Each task will be introduced; no prior knowledge of these tasks or of the area of mobile robotics is required.

 

Schedule & Slides

The primary goal of this tutorial is to provide an easily accessible introduction to the topic: to sensitize researchers to the special properties of high dimensional vector spaces, to spread the application of high dimensional computing, and to inspire new applications. A first lecture part provides the theoretical basis; a second hands-on part enables the participants to solve simple problems using high dimensional computing. The use cases and demonstrations will mostly be from the area of mobile robotics and include real world data, e.g., recognizing places from images of a 2800 km trip through Norway across different seasons.

 

14:00 Welcome

14:05 Introduction to high dimensional computing (pdf)

15:05 Implementations in form of Vector Symbolic Architectures (pdf)

15:30 Coffee break

16:00 Vector encodings of real world data  (pdf)

16:30 Application to robotic tasks  (pdf and pdf)

17:15 Discussion and conclusion   (pdf)

 

 

Organizers

 

Peer Neubert, Technische Universität Chemnitz

Stefan Schubert,  Technische Universität Chemnitz

Kenny Schlegel, Technische Universität Chemnitz

 


For questions please contact peer.neubert@etit.tu-chemnitz.de