
 

An Introduction to Vector Symbolic Architectures and Hyperdimensional Computing

VSA Tutorial

 

 


A tutorial at the 24th European Conference on Artificial Intelligence (ECAI 2020)

Saturday, August 29, 9:00–10:30 CEST

 

Link to Live Stream/Video

Slides (PDF)

Self-test (PDF)

Matlab code (ZIP)

 

 

In a nutshell

This tutorial is about solving computational problems using calculations in vector spaces with thousands of dimensions. We will outline the mathematical foundations, present available approaches in the form of Vector Symbolic Architectures, and show practical applications, e.g., recognizing places from images of a 2800 km trip through Norway across different seasons.

 

The tutorial will provide a basic introduction to the topic; it partially builds upon the following two papers:

 

 

 

Topic

This tutorial provides an introduction to a class of approaches that address symbolic computational problems using vector calculations. These approaches work in vector spaces with thousands of dimensions and are referred to as Vector Symbolic Architectures (VSAs), high-dimensional (or hyperdimensional) computing, or hypervector computing (to emphasize the very large number of dimensions). They are particularly interesting if the task at hand already involves high-dimensional representations (e.g., from deep neural networks), if we want to perform symbolic computations with subsymbolic entities, or if we need high fault tolerance in the representations or operations. High-dimensional vector spaces have special and sometimes counterintuitive properties. A well-known example is the curse of dimensionality: some algorithms that work well in low-dimensional spaces fail in higher-dimensional spaces.
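
As a quick, self-contained illustration of one well-known manifestation of this effect (a minimal Python/NumPy sketch for this page; the tutorial's own code examples are provided in Matlab, see the link above):

    import numpy as np

    rng = np.random.default_rng(0)

    # Distance concentration: for random points, the relative gap between
    # the nearest and the farthest neighbor of a query shrinks as the
    # number of dimensions d grows.
    for d in (2, 10, 100, 1000, 10000):
        points = rng.random((1000, d))   # 1000 uniform random points
        query = rng.random(d)
        dist = np.linalg.norm(points - query, axis=1)
        print(d, (dist.max() - dist.min()) / dist.min())
    # The printed relative contrast decreases towards 0 with growing d,
    # which is one reason nearest-neighbor-style algorithms degrade.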

 

However, there are further effects of high dimensionality that can be exploited; for example, random high-dimensional vectors are very likely almost orthogonal. This and other properties are used in the carefully designed vector operations of VSAs. Symbols are encoded in very large vectors, much larger than would be required merely to distinguish the symbols. VSAs use the additional space to introduce redundancy into the representations, usually combined with distributing information across many dimensions of the vector (e.g., no single bit represents a particular property, so a single error on that bit cannot alter the property). Very importantly, measuring the distance between vectors makes it possible to evaluate a fuzzy relation between the corresponding symbols. The operations in VSAs are mathematical operations that create, process, and preserve the fuzziness of the representations in a systematic and useful way. For instance, an addition-like operator can overlay vectors, creating a representation that is similar to the overlaid vectors.
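
Two of these properties can be checked directly (again a minimal, illustrative Python/NumPy sketch; all names are our own): random high-dimensional vectors are almost orthogonal, and an addition-like overlay of two vectors remains similar to both of them.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 10000  # typical VSA dimensionality

    def cos_sim(a, b):
        # Cosine similarity: near 0 for unrelated vectors, near 1 for similar ones.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    x = rng.standard_normal(d)
    y = rng.standard_normal(d)
    print(cos_sim(x, y))  # near 0: random vectors are almost orthogonal

    s = x + y             # addition-like overlay ("bundling")
    print(cos_sim(s, x))  # near 1/sqrt(2), i.e. ~0.71: similar to x ...
    print(cos_sim(s, y))  # ... and similar to y
    print(cos_sim(s, rng.standard_normal(d)))  # near 0: dissimilar to anything else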

 

VSAs build on seminal works by Smolensky [1], Plate [2], and Kanerva [3]. The underlying high-dimensional distributed representations share properties with representations found in biology and are used in several computational models of aspects of the brain, e.g., [4,5]. There is a variety of available VSA implementations (see [6] for an overview) and a large number of applications (e.g., model superposition in neural networks [7], approximate inference [8], analogical reasoning [9], NLP [10], long short-term memory [11], robotics [12], …).

 

This tutorial will present this class of approaches to an interested audience with little or no prior knowledge of the topic. We will outline the underlying mathematical properties of high-dimensional vector spaces that are exploited in VSAs. Several different VSA implementations are available; they differ in the underlying vector spaces (e.g., binary, real, or complex numbers; sparse or dense vectors) and in the design of the operations (with potentially large effects on their properties). We will provide an overview of existing architectures and demonstrate their strengths and weaknesses based on a set of practical experiments. We will outline existing applications and present our own experiences with using VSAs in combination with real-world sensor data in the area of mobile robotics, e.g., combinations with deep-learning visual front-ends for recognizing places from images of a 2800 km trip through Norway across different seasons.
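
As a concrete example of one such design choice (an illustrative sketch only; the tutorial materials use Matlab), the following combines dense binary vectors with element-wise XOR as binding operator, in the style of the binary architecture described by Kanerva [3]:

    import numpy as np

    rng = np.random.default_rng(1)
    d = 10000

    def rand_hv():
        # Random dense binary hypervector.
        return rng.integers(0, 2, d, dtype=np.uint8)

    def hamming_sim(a, b):
        # Fraction of matching bits: ~0.5 for unrelated vectors, 1.0 for identical ones.
        return float(np.mean(a == b))

    # Bind a role vector to a filler vector, e.g. to represent color = red.
    color, red = rand_hv(), rand_hv()
    pair = color ^ red            # binding via element-wise XOR

    # XOR is self-inverse: unbinding with the role recovers the filler exactly.
    print(hamming_sim(pair ^ color, red))  # 1.0
    # The bound pair itself is dissimilar to both of its inputs.
    print(hamming_sim(pair, color))        # ~0.5
    print(hamming_sim(pair, red))          # ~0.5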

 

[1] Smolensky, P. (1990). Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence 46(1–2), 159–216.

[2] Plate, T.A. (1994). Distributed representations and nested compositional structure. Ph.D. thesis, University of Toronto, Toronto, ON, Canada.

[3] Kanerva, P. (2009). Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors. Cognitive Computation 1(2), 139–159.

[4] Hawkins, J., Ahmad, S. (2016). Why neurons have thousands of synapses, a theory of sequence memory in neocortex. Frontiers in Neural Circuits 10, 23. DOI 10.3389/fncir.2016.00023.

[5] Eliasmith, C., Stewart, T.C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., Rasmussen, D. (2012). A large-scale model of the functioning brain. Science 338(6111), 1202–1205.

[6] Schlegel, K., Neubert, P., Protzel, P. (2020). A comparison of Vector Symbolic Architectures. CoRR, abs/2001.11797.

[7] Cheung, B., Terekhov, A., Chen, Y., Agrawal, P., Olshausen, B. (2019). Superposition of many models into one. NeurIPS.

[8] Widdows, D., Cohen, T. (2015). Reasoning with vectors: A continuous model for fast robust inference. Logic Journal of the IGPL 23(2), 141–173.

[9] Kanerva, P. (2014). Computing with 10,000-bit words. In: 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 304–310. DOI 10.1109/ALLERTON.2014.7028470.

[10] Gayler, R.W. (2003). Vector symbolic architectures answer Jackendoff's challenges for cognitive neuroscience. In: Proc. of the ICCS/ASCS International Conference on Cognitive Science, pp. 133–138, Sydney, Australia.

[11] Danihelka, I., Wayne, G., Uria, B., Kalchbrenner, N., Graves, A. (2016). Associative long short-term memory. In: Proceedings of the 33rd International Conference on Machine Learning (PMLR 48), pp. 1986–1994.

[12] Neubert, P., Schubert, S., Protzel, P. (2019). A neurologically inspired sequence processing model for mobile robot place recognition. IEEE Robotics and Automation Letters (also presented at IROS).

 

 

Target Audience

We believe that a better understanding of the properties of high-dimensional vector spaces can help researchers in various subfields of AI, either because they suffer from their drawbacks (the curse of dimensionality) or because there are ways to benefit from their special properties. The target audience is researchers at any point in their career who are interested in the properties of high-dimensional vector spaces. The tutorial is an introduction to the topic and possible applications. No prior knowledge of high-dimensional computing is required; a basic understanding of linear algebra and probability theory (undergraduate level) is helpful. The presented applications and demonstrations are mobile robotics tasks; each task will be introduced, and no prior knowledge of these tasks or of mobile robotics is required.

 

 

Schedule

The primary goal of this tutorial is to provide an easily accessible introduction to the topic: to sensitize researchers to the special properties of high-dimensional vector spaces, to promote the widespread application of high-dimensional computing, and to inspire new applications. We want to provide a theoretical basis and enable the participants to solve simple problems using high-dimensional computing. The use cases and demonstrations will mostly be from the area of mobile robotics and include real-world data, e.g., recognizing places from images of a 2800 km trip through Norway across different seasons.

 

Tentative schedule:

 

 

 

Organizers

 

Peer Neubert, Technische Universität Chemnitz

Kenny Schlegel, Technische Universität Chemnitz

Stefan Schubert, Technische Universität Chemnitz


For questions, please contact peer.neubert@etit.tu-chemnitz.de

 

 

 

 

 

 

 

 
