Professur Digital- und Schaltungstechnik
Synthetic Data Generation using 3D Computer Graphics


Update 2020/03: We are pleased to announce that THEODORE, a synthetic omnidirectional top-view indoor data set for deep transfer learning, has been accepted at WACV. More details about the data set can be found here.

For modern computer vision applications, neural networks are often the key to success. Such systems are trained on huge amounts of data with the aim of learning the different variations of objects. Depending on the domain, the required amounts of training data may not be available, and existing data sets cannot always be adapted. One approach to counter this problem is the generation of synthetic data. In the context of computer vision applications, such data can be generated using 3D computer graphics, which usually makes the creation of large amounts of data more cost-effective and efficient.

A working group of the Chair of Digital and Circuit Technology investigates various methods for generating synthetic data for computer vision applications and how these can be used in the field of AAL (Ambient Assisted Living) or in autonomous driving. A further field of research is the optimization of synthetic data towards realistic abstraction.

Fig 1: Example scenarios for synthetic indoor activities and autonomous driving

The video in figure 1 visualizes possible scenarios in the area of indoor activities and autonomous driving.

Fig 2: Synthetically generated point cloud

Figure 2 illustrates the generation of point clouds from synthetic data as it would be generated by a 3D camera system within an apartment. In addition to the two-dimensional camera images, point clouds provide spatial information and can thus support object detection and classification. Synthetically generated point clouds enrich real 3D data sets, which are needed to develop and train classification algorithms.
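The back-projection from a depth image to a point cloud described above can be sketched as follows, assuming a standard pinhole camera model; the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) and image size are illustrative values, not the calibration of any particular camera used by the group:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) to an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Example: a flat synthetic depth map, everything 2 m from the camera
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```

In a synthetic pipeline, the depth map comes directly from the renderer's z-buffer, so the resulting point cloud is exact and needs no sensor-noise correction (although noise can be added deliberately to better match real 3D cameras).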

Current research emphases and open topics include:

  • Generation of synthetic data using Unity3D, Unreal Engine and Blender

  • Domain adaptation of synthetic data using Generative Adversarial Networks and Style Transfer Methods

  • Omnidirectional images
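For the omnidirectional images mentioned above, a common choice is the equidistant fisheye projection, in which the image radius grows linearly with the angle to the optical axis (r = f·θ). A minimal sketch of this projection, with illustrative focal length and principal point (not the actual calibration of the group's cameras):

```python
import numpy as np

def project_equidistant(points, f, cx, cy):
    """Project 3D camera-frame points with the equidistant fisheye model
    r = f * theta, where theta is the angle to the optical (z) axis."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)  # angle from the optical axis
    phi = np.arctan2(y, x)                 # azimuth around the axis
    r = f * theta                          # equidistant mapping
    u = cx + r * np.cos(phi)
    v = cy + r * np.sin(phi)
    return np.stack([u, v], axis=-1)

# A point on the optical axis lands exactly on the principal point
pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
uv = project_equidistant(pts, f=300.0, cx=512.0, cy=512.0)
print(uv[0])  # [512. 512.]
```

Unlike the pinhole model, this mapping remains well defined for rays approaching 90° (and beyond) from the optical axis, which is what allows a single ceiling-mounted top-view camera to observe an entire room.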

Publications

1. Scheck, Tobias; Perez Grassi, Ana Cecilia; Hirtz, Gangolf: "Unsupervised Domain Adaptation from Synthetic to Real Images for Anchorless Object Detection". In: 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, 08.02.2021 - 10.02.2021, pp. 319-327. SCITEPRESS - Science and Technology Publications, 2021.

2. Seuffert, Julian; Perez Grassi, Ana Cecilia; Scheck, Tobias; Hirtz, Gangolf: "A Study on the Influence of Omnidirectional Distortion on CNN-based Stereo Vision". In: 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 08.02.2021 - 10.02.2021, online conference, pp. 809-816. Setúbal, Portugal: SCITEPRESS - Science and Technology Publications, 2021.

3. Scheck, Tobias; Seidel, Roman; Hirtz, Gangolf: "Learning from THEODORE: A Synthetic Omnidirectional Top-View Indoor Dataset for Deep Transfer Learning". In: 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, CO, USA, 1-5 March 2020, pp. 932-941. IEEE, 2020.

4. Scheck, Tobias; Mallandur, Adarsh; Wiede, Christian; Hirtz, Gangolf: "Where to drive: free space detection with one fisheye camera". In: Twelfth International Conference on Machine Vision (ICMV 2019), 16.11.2019 - 18.11.2019, pp. 777-786. SPIE, 2020. Volume 11433.