Forschungsseminar

The Forschungsseminar (research seminar) is a course aimed at interested students in the later stages of their studies (i.e., Master students or Bachelor students in higher semesters). Other interested listeners are always warmly welcome! The presenting students and staff members of the Professorship of Artificial Intelligence introduce current research-oriented topics. Talks are usually given in English. The seminar takes place at irregular intervals in Room 336. Please refer to the announcements on this page for the exact dates of individual sessions.

Information for Diplom and Master students

The seminar talks required by the curriculum (the "Hauptseminar" in the Diplom-IF/AIF degree programs and the "Forschungsseminar" in the Master program) can also be completed within this event. Both courses (Diplom-Hauptseminar and Master-Forschungsseminar) aim at participants independently working through research-relevant knowledge and then presenting it in a talk. Thematically, the seminars cover the field of Artificial Intelligence, with a focus on object recognition, neurocomputing on graphics cards and multi-core machines, reinforcement learning, and intelligent agents in virtual reality. Other topic proposals are equally welcome!
The seminar is arranged individually. Interested students may contact Prof. Hamker without obligation if they are interested in completing one of the two seminar courses with us.

Upcoming events

Disentangling representations of grouped observations in adversarial autoencoders

Felix Pfeiffer

Wed, 14. 11. 2018, 11:30, Room 131

Classifying the shown emotion or facial action from mere pictures of faces is a challenging task in machine learning, since simple classification requires at least reliably labeled data, which is hard to obtain in sufficient quantity. Unsupervised learning methods can at least partly avoid this dependency by finding meaningful representations. In my thesis I present an algorithm that teaches an adversarial autoencoder how to find representations of data. With clever administration of the training process, it is possible to strip information from the representation that would not be beneficial for specific tasks such as classification. This process is called disentangling, and the administrative strategy is to form groups of data. I will show the results of experiments that verify that the algorithm does what it promises, and elaborate on where its weaknesses may lie, by training an adversarial autoencoder on a colorful MNIST dataset and letting it produce disentangled representations that separate style from content.
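
To make the mechanism concrete, the following is a minimal sketch of the three-step adversarial-autoencoder training loop (after Makhzani et al.), in which a discriminator pushes the encoder's latent codes toward a chosen prior. The architecture, sizes and data are illustrative assumptions, not the thesis model.

    # Compact adversarial-autoencoder sketch; all sizes are placeholders.
    import torch
    import torch.nn as nn

    enc = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 8))
    dec = nn.Sequential(nn.Linear(8, 256), nn.ReLU(), nn.Linear(256, 784))
    disc = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

    opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), 1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), 1e-3)
    bce = nn.BCEWithLogitsLoss()

    x = torch.rand(32, 784)                      # stand-in for an image batch

    # 1) reconstruction step: encoder + decoder minimize pixel error
    opt_ae.zero_grad()
    loss_rec = ((dec(enc(x)) - x) ** 2).mean()
    loss_rec.backward()
    opt_ae.step()

    # 2) discriminator: tell prior samples apart from encoded samples
    opt_d.zero_grad()
    z_prior = torch.randn(32, 8)
    loss_d = bce(disc(z_prior), torch.ones(32, 1)) + \
             bce(disc(enc(x).detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # 3) regularization: encoder tries to fool the discriminator
    opt_ae.zero_grad()
    loss_g = bce(disc(enc(x)), torch.ones(32, 1))
    loss_g.backward()
    opt_ae.step()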

Model uncertainty estimation for a semantic segmentation network, with a real-time deployment analysis on the Nvidia Drive PX2 for autonomous vehicles

Abhishek Vivekanandan

Mon, 19. 11. 2018, 12:00, Room TBA

Autonomous vehicles require a high degree of perception capability in order to perceive the environment and predict objects therein with high precision in real time. For such cases we use semantic segmentation networks. A major challenge in using semantic segmentation is determining how confident the network is in its predictions, in other words, how trustworthy the classification outcomes are. Integrating uncertainty estimates with semantic segmentation helps us understand the confidence with which a network predicts its output. Bayesian approaches combined with dropout provide the necessary tools in deep learning to extract the uncertainty involved in a model's prediction. In Bayesian neural networks, we place a distribution over the weights, giving us a probabilistic interpretation of the classification. For such networks, multiple Monte Carlo samples are needed to generate a reliable posterior distribution from which we can infer uncertainty statistics. The serial nature of this sampling approach restricts its use in real-time environments. In this work we show, through in-depth analysis, the best places in a neural network to deploy dropout, along with the number of MC samples needed, so as to maximize the quality of the uncertainty estimates. We also exploit the parallel capabilities of GPUs to realize certain neural operations, such as convolution and dropout, directly on embedded hardware with minimal abstraction. As a result, we propose the alternative changes to the kernel functions needed to implement parallel Monte Carlo dropout sampling for real-time uncertainty estimation. Finally, we provide a brief benchmarking comparison of the kernel implementations on a CPU (Intel Xeon processor) and GPUs (Drive PX2 and Nvidia GeForce 1080Ti).
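
As an illustration of the sampling idea, the following minimal Monte Carlo dropout sketch keeps dropout stochastic at inference time and turns repeated forward passes into a predictive mean and variance. The toy network is an assumption for demonstration only, not the segmentation network analyzed in the talk.

    # Minimal MC-dropout sketch (illustrative; not the thesis network).
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        """Toy fully-convolutional net with dropout kept active at inference."""
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Dropout2d(p=0.5),                     # sampled at test time
                nn.Conv2d(16, n_classes, 3, padding=1),
            )
        def forward(self, x):
            return self.features(x)

    def mc_dropout_predict(model, x, n_samples=20):
        model.train()  # keeps dropout stochastic; freeze batch norm in practice
        with torch.no_grad():
            probs = torch.stack([model(x).softmax(dim=1)
                                 for _ in range(n_samples)])
        return probs.mean(dim=0), probs.var(dim=0)  # per-pixel mean & variance

    mean, var = mc_dropout_predict(TinySegNet(), torch.randn(1, 3, 64, 64))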

Past events

Interpreting deep neural network-based models for automotive diagnostics

Ria Armitha

Wed, 7. 11. 2018, Room 131

With the breakthrough of artificial intelligence over the last few decades and extensive improvements in deep learning methodologies, the field of deep learning has gone through major changes. AI has outdone humans in complex tasks such as object and image recognition, fault detection in vehicles, speech recognition and medical diagnosis. From a bird's-eye view, these models are algorithms that try to learn concealed patterns and relationships from the data fed into them, without any fixed rules or instructions. Although the models' prediction accuracies may be impressive, the system as a whole is a black box (non-transparent). Hence, explaining the workings of a model to the real world poses its own set of challenges. This work deals with interpreting a vehicle fault-detection model. Current fault detection approaches rely on model-based or rule-based systems. With the increasing complexity of vehicles and their subsystems, these approaches will reach their limits in detecting fault root causes in highly connected and complex systems. Furthermore, current vehicles produce rich amounts of data valuable for fault detection, which cannot be considered by current approaches. Deep neural networks (DNNs) offer great capabilities to tackle these challenges and automatically train fault detection models using in-vehicle data. However, fault detection models based on DNNs (here, CNNs and LSTMs) are black boxes, so it is nearly impossible to back-trace their outputs. Therefore, the aim of this work is to identify, implement and evaluate available approaches to interpret decisions made by DNNs applied in vehicle diagnostics. With that, decisions made by the DNN diagnostics model can be better understood, to (i) comprehend the model's outputs and thus increase model performance, and (ii) enhance its acceptability in the vehicle development domain.
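
For a flavor of what "interpreting" a DNN can mean in practice, here is a hedged sketch of one common attribution technique, gradient saliency. The abstract does not specify which methods were evaluated; the placeholder model and input below are purely illustrative.

    # Gradient-saliency sketch: the input gradient highlights which inputs
    # most affect a prediction. Placeholder model, not the diagnostics net.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    x = torch.randn(1, 20, requires_grad=True)   # stand-in sensor snapshot

    score = model(x)[0, 1]                       # logit of the "fault" class
    score.backward()
    saliency = x.grad.abs().squeeze()            # per-feature importance
    print(saliency.topk(5).indices)              # most influential inputs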

Learning the Motor Program of a Central Pattern Generator for Humanoid Robot Drawing

Deepanshu Makkar

Thu, 1. 11. 2018, Room 132

In this research project we present a framework in which a humanoid robot, NAO, acquires the parameters of a motor program in a task of drawing arcs in Cartesian space. A computational model based on a central pattern generator is used. For the purpose of drawing a scene, geometrical features such as arcs are extracted from images using computer vision algorithms; we discuss the algorithm used in the project, which considers only the features that matter for robot drawing. These arcs can be described as a feature vector. We discuss how genetic algorithms help with parameter estimation of the motor representation for a selected feature vector. This understanding of the parameters is then used to generalize the acquired motor representation over the workspace. To achieve a mapping between the feature vector and the motor program, we propose an approximation function using a multilayer perceptron (MLP). Once the network is trained, we present different scenes to the robot and it draws the sketches. It is worth noting that our proposed model generalizes the motor features for a set of joint configurations, unlike the traditional way of robot drawing, which connects intermediate points using inverse kinematics.

Cortical routines - from experimental data to neuromorphic brain-like computation

Prof. Dr. Heiko Neumann (Ulm University, Inst. of Neural Information Processing)

Tue, 30. 10. 2018, Room 1/336

A fundamental task of sensory processing is to group feature items that form a perceptual unit, e.g., shapes or objects, and to segregate them from other objects and the background. In the talk a conceptual framework is provided, which explains how perceptual grouping at early as well as higher-level cognitive stages may be implemented in cortex. Different grouping mechanisms are implemented which are attuned to basic features and feature combinations and evaluated along the forward sweep of stimulus processing. More complex combinations of items require integration of contextual information along horizontal and feedback connections to bind neurons in distributed representations via top-down response enhancement. The modulatory influence generated by such flexible dynamic grouping and prediction mechanisms is time-consuming and is primarily sequentially organized. The coordinated action of feedforward, feedback, and lateral processing motivates the view that sensory information, such as visual and auditory features, is efficiently combined and evaluated within a multiscale cognitive blackboard architecture. This architecture provides a framework to explain form and motion detection and integration, higher-order processing of articulated motion, as well as scene segmentation and figure-ground segregation of spatio-temporal inputs which are labelled by enhanced neuronal responses. In addition to the activation dynamics in the model framework, it is demonstrated how unsupervised learning mechanisms can be incorporated to automatically build early- and mid-level visual representations. Finally, it is demonstrated that the canonical circuit architecture can be mapped onto neuromorphic chip technology, facilitating low-energy non-von Neumann computation.

Neural Reflexive Controller for Humanoid Robots Walking

Rishabh Khari

Thu, 25. 10. 2018, Room 131

For nearly three decades, a great amount of research emphasis has been placed on robotic locomotion, where researchers have focused in particular on solving the problem of locomotion control for legged humanoid robots. The task of imitating human walking has been especially challenging, as bipedal humanoid robots are inherently unstable and tend to topple over. Recently, however, new machine learning algorithms have been applied to replicate sturdy, dexterous and energy-efficient human walking. Interestingly, many researchers have proposed that locomotion principles, although they run on a centralized mechanism (a central pattern generator) in conjunction with sensory feedback, can also run independently on a purely localized sensory-feedback mechanism. This thesis therefore aims at designing and evaluating two simple reflex-based neural controllers. The first controller generates a locomotion pattern for the humanoid robot by connecting the sensory feedback pathways of the ground and joint sensors to the motor-neuron outputs of the leg joints. The second controller makes use of Hebb's learning rule, first deriving locomotion patterns from the MLMP-CPG controller while simultaneously observing the sensory feedback, and finally generating motor-neuron outputs associatively. Lastly, this thesis proposes a fast switching principle in which the output to the motor neurons is, after a certain interval, swiftly transferred from the MLMP-CPG to the associative reflex controller. This is implemented to observe the adaptive behavior present in centralized locomotor systems.
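
As a rough illustration of the associative idea behind the second controller, the sketch below applies Hebb's rule to associate (stand-in) sensor readings with concurrent motor outputs; the actual controller, sensor set and CPG coupling in the thesis differ.

    # Hebbian association sketch: strengthen weights between co-active
    # sensor inputs and motor outputs. Data are random stand-ins.
    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors, n_motors = 8, 4
    W = np.zeros((n_motors, n_sensors))
    eta = 0.01                                  # learning rate

    for step in range(1000):
        s = rng.random(n_sensors)               # stand-in foot/joint sensors
        m = rng.random(n_motors)                # stand-in CPG motor output
        W += eta * np.outer(m, s)               # Hebb: co-activity -> weight

    # after training, sensor input alone drives a reflexive motor estimate
    m_reflex = W @ rng.random(n_sensors)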

Improving autoregressive deep generative models for natural speech synthesis

Ferin Thunduparambil Philipose

Wed, 24. 10. 2018, Room 132

Speech synthesis, or text-to-speech (TTS) synthesis, has been of research interest for several decades. A workable TTS system generates speech from textual input; the quality of the synthesized speech is gauged by how similar it sounds to the human voice and how easy it is to understand. A fully end-to-end neural TTS system has been set up and improved upon, with the help of the WaveNet and Tacotron deep generative models. The Tacotron network acts as a feature prediction network that outputs log-mel spectrograms, which are in turn used by WaveNet as local conditioning features. Audio quality was improved by the log-mel local conditioning and by fine-tuning hyper-parameters such as mini-batch size and learning rate. Computational effort was reduced by compressing the WaveNet network architecture.
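
For reference, extracting log-mel features of the kind used for WaveNet local conditioning might look as follows; the parameter values (80 mel bands, hop length, etc.) are illustrative assumptions, not necessarily those used in the thesis.

    # Log-mel spectrogram sketch using librosa's bundled example clip.
    import librosa
    import numpy as np

    y, sr = librosa.load(librosa.ex('trumpet'), sr=22050)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                         hop_length=256, n_mels=80)
    log_mel = np.log(np.clip(mel, 1e-5, None))   # log compression
    print(log_mel.shape)                         # (80 mel bands, n_frames)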

Fatigue detection using RNN and transfer learning

Azmi Ara

Wed, 24. 10. 2018, Room 132

Driving a car is a safety-critical activity that requires full attention; any distraction can lead to dangerous consequences such as accidents. While driving, many factors are involved, such as fatigue, drowsiness and distraction. Drowsiness is a state between alertness and sleep. It is therefore important to detect drowsiness in advance, which helps protect people from accidents. This research works toward an implicit and efficient approach to detecting different levels of drowsiness. Every driver has different driving patterns, so the developed system should be able to adapt to changes in the driver's behavior. The aim of this thesis is to contribute to the study of detecting a driver's drowsiness level while driving, through approaches that integrate two sources of sensory data to improve detection performance.

Car localization in known environments

Prafull Mohite

Tue, 2. 10. 2018, Room 131

Localization in the broad sense is a very wide topic. At present, basic localization relies on GPS sensors, which lack the accuracy needed for autonomous driving. To overcome this problem, different environmental sensors are used (typically sonar, lidar and cameras); the lidar sensor is used here because it is very accurate at depth perception. In this thesis, a Simultaneous Localization And Mapping (SLAM) approach is chosen. SLAM is a chicken-and-egg problem between localization and mapping; to solve it, we create a map of the environment before performing localization. Gmapping is used for mapping, and Adaptive Monte Carlo Localization (AMCL), which is essentially a particle filter, for localization within the map. Given a map of the environment, the algorithm estimates the position and orientation of the car as it moves and senses the environment.
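
To illustrate the particle-filter core of AMCL, here is a minimal sketch in a hypothetical 1-D corridor with known landmarks: particles are moved with the motion command, re-weighted by the measurement likelihood, and resampled.

    # Minimal particle-filter localization sketch (the idea behind AMCL).
    import numpy as np

    rng = np.random.default_rng(0)
    landmarks = np.array([2.0, 5.0, 8.0])        # assumed map
    particles = rng.uniform(0, 10, size=500)     # pose hypotheses
    weights = np.ones(500) / 500

    def move(p, u, noise=0.1):                   # motion update with noise
        return p + u + rng.normal(0, noise, p.shape)

    def sense(pos, noise=0.2):                   # noisy ranges to landmarks
        return np.abs(landmarks - pos) + rng.normal(0, noise, landmarks.shape)

    def update(p, w, z, noise=0.2):
        expected = np.abs(landmarks[None, :] - p[:, None])
        lik = np.exp(-0.5 * ((expected - z) / noise) ** 2).prod(axis=1)
        w = w * lik
        w /= w.sum()
        idx = rng.choice(len(p), len(p), p=w)    # resample likely poses
        return p[idx], np.ones(len(p)) / len(p)

    true_pos = 1.0
    for _ in range(8):
        true_pos += 0.5
        particles = move(particles, 0.5)
        particles, weights = update(particles, weights, sense(true_pos))
    print("estimate:", particles.mean(), "truth:", true_pos)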

Image anonymization using GANs

Thangapavithraa Balaji

Mon, 24. 9. 2018, Room 131

Millions of images are collected every day for applications such as scene understanding, decision making, resource allocation and policing, to ease human life. Most of these applications do not require the identity of the people in the images, and there is increasing concern about such systems invading the privacy of users and the public. On the one hand, cameras and robots can assist a lot in everyday life; on the other, the privacy of the user and the public should not be compromised. In this master thesis, a novel approach was implemented to anonymize faces in datasets, enabling privacy protection of the individuals they contain. The Generative Adversarial Network (GAN) approach was extended, and the loss function was formulated in a combined fashion. The performance of conventional image anonymization techniques such as blurring, cropping and pixelating was compared against GAN-generated images on autonomous driving applications such as object detection and semantic segmentation.

Training approaches on semantic segmentation using transfer learning, dataset quality assessment and intelligent data augmentation

Mohamed Riyazudeen Puliadi Baghdad

Mon, 24. 9. 2018, Room 131

Data sparsity is one of the key problems the automotive industry faces today. One way to overcome it is to use synthetic data generated from graphics engines or virtual-world generators, which can be leveraged to train neural networks for tasks such as autonomous driving. The features learned from synthetic data yield better performance with a suitable training approach and some real data. The number of images in the synthetic dataset, and its similarity to the real-world dataset, play a major role in transferring the learned features effectively across domains. This similarity in the distribution of the datasets was pursued through different approaches, the most effective being the Joint Adaptation Network approach. In addition, smart data augmentation can boost performance: intelligent data augmentation was achieved using conditional Generative Adversarial Networks and a color augmentation technique. With the findings of this research work, a possible solution for tackling the data sparsity problem was achieved.

Investigating Model-based Reinforcement Learning Algorithms for Continuous Robotic Control

Frank Witscher

Wed, 19. 9. 2018, Room 368

Although model-free deep reinforcement learning can solve an ever-growing range of tasks, the respective algorithms suffer from poor sample efficiency. Model-based reinforcement learning, which learns a dynamics model of the environment, promises a remedy. Recent research combines model-free algorithms with model-based approaches in order to exploit the strengths of both branches of reinforcement learning. In my defense, I give an introduction to model-based reinforcement learning and an overview of the possible uses of dynamics models as found in recent publications. We focus on environments with continuous action spaces, as encountered in robotics. The Temporal Difference Model is one such hybrid of model-free learning and model-based control; it is presented and evaluated in detail.

Sensor simulation and Depth map prediction on Automotive Fisheye camera using automotive deep learning

Deepika Gangaiah Prema

Wed, 12. 9. 2018, Room 131

The aim is to create a synthetic 3D environment in Unity that makes it possible to obtain a supervised dataset by simulating different sensors, such as lidar and a fisheye camera, within the simulation environment. This dataset will be used to develop, test and validate different machine learning algorithms for automotive use cases. A big advantage of the simulation environment is the possibility to generate data for sensors that are still under development, where the final hardware is not yet available. Another advantage is the known ground truth of the simulation environment; this is much cheaper than equipping a vehicle with those sensors, recording lots of data, and manually labeling the ground truth. The 3D environment shall include urban and highway driving scenarios with balanced object categories such as vehicles, pedestrians, trucks, terrain and street or free space, to cover all levels of autonomous driving. The simulation of a fisheye camera as well as a next-generation lidar is carried out in the same Unity 3D framework, and the generated images and point cloud data are used to build different datasets. The final goal is to use these for training different models and to test them in a real environment; qualitative tests are carried out by benchmarking the datasets with the aid of different algorithms. The aim of this thesis is to study the different approaches with which CNNs could be used for depth estimation from a single fisheye camera image (180 degree FoV) for autonomous driving.

Humanoid robot learns walking by human demonstration

Juncheng Hu

Tue, 14. 8. 2018, Room 131

In this thesis, a method for making a humanoid robot walk is developed using Q-learning based on the MLMP-CPG and wrist sensors. Machine learning has shown promise in many fields, including robotics, but supervised learning algorithms are applied most often, and supervised learners such as neural networks usually need a massive amount of training data, which is sometimes not available in real situations. Although reinforcement learning does not require much data, it needs many attempts in its environment before converging on a strategy. A humanoid robot cannot afford too many wrong attempts, because a fall may damage its joints. This thesis therefore proposes a method in which the robot learns to walk with the help of a human, avoiding accidental falls.
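
The tabular Q-learning update at the heart of such an approach can be sketched as follows; the environment, states and rewards below are hypothetical placeholders, since the thesis couples the learning with the MLMP-CPG and human assistance.

    # Tabular Q-learning sketch with a stand-in environment.
    import numpy as np

    rng = np.random.default_rng(2)
    n_states, n_actions = 10, 4
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.95, 0.1

    def step(s, a):
        # placeholder dynamics: next state and reward
        return (s + 1) % n_states, 1.0 if a == s % n_actions else 0.0

    s = 0
    for _ in range(5000):
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next, r = step(s, a)
        # bootstrap from the greedy value of the next state
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next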

Digital Twin Based Robot Control via IoT Cloud

Tauseef Al-Noor

Tue, 14. 8. 2018, Room 131

Digital Twin (DT) technology is a recent key technology for Industry-4.0-based monitoring and control of industrial manufacturing and production, and a great deal of research and development on DT-based robot control is under way. Monitoring and controlling a robot from a remote location is a complex process. In this research work, I have developed a prototype for controlling a robot using a DT and cloud computing. Different technologies and techniques related to Digital Twins were researched and analyzed to prepare an optimal solution based on this prototype. In this work, the latency of different machine-to-machine (M2M) communication protocols is observed: network protocols such as AMQP, MQTT and HTTP show large latency variations in end-to-end data transfer. Furthermore, external factors have an impact on persistent communication; for example, the data processing and throughput of a cloud computing service such as Azure is not constant in time, whereas a robot control mechanism expects a minimal, constant response time for quality of service. The main focus of this research was to minimize communication latencies for a remote robot controller in cloud-based communication. Finally, an average quality of service in the range of 2-5 seconds for persistent robot communication was achieved across different setups.
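
A simple way to observe the protocol latencies discussed here is to time a round trip through an MQTT broker. The sketch below assumes the paho-mqtt 1.x client API; the broker address and topic are placeholders.

    # Round-trip MQTT latency measurement sketch (placeholder broker/topic).
    import time
    import paho.mqtt.client as mqtt

    latencies = []

    def on_message(client, userdata, msg):
        sent = float(msg.payload)               # payload carries send time
        latencies.append(time.time() - sent)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("test.mosquitto.org", 1883)  # placeholder broker
    client.subscribe("dt/latency")
    client.loop_start()

    for _ in range(20):
        client.publish("dt/latency", str(time.time()))
        time.sleep(0.5)

    client.loop_stop()
    if latencies:
        print("mean RTT:", sum(latencies) / len(latencies), "s")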

Vision-based Mobile Robotics Obstacle Avoidance with Deep Reinforcement Learning

Zishan Ahmed

Wed, 8. 8. 2018, Room 131

Obstacle avoidance is a fundamental and challenging problem for the autonomous navigation of mobile robots. In this thesis, the problem of obstacle avoidance in simple 3D environments, where the robot has to rely solely on a single monocular camera, is considered. Inspired by the recent advances of deep reinforcement learning (DRL) in Atari games and in understanding highly complex situations in Go, the obstacle avoidance problem is tackled in this thesis as a data-driven, end-to-end deep learning approach. An approach is presented which takes raw images as input and generates control commands as output, and the differences between discrete and continuous control commands are compared. Furthermore, a method to predict depth images from monocular RGB images using conditional Generative Adversarial Networks (cGANs) is presented, and the increase in learning performance obtained by additionally fusing predicted depth images with monocular images is demonstrated.

Deep Convolutional Generative Adversarial Networks (DCGAN)

Indira Tekkali

Tue, 24. 7. 2018, Room 132

Generative Adversarial Networks (GANs) have made great progress in recent years. Most established recognition methods are supervised and depend strongly on image labels, but obtaining large numbers of image labels is expensive and time consuming. In this project, we investigate an unsupervised representation learning method, the DCGAN. We base our work on the earlier paper by Radford et al. and aim to replicate their results. Training our model on different datasets such as MNIST, CIFAR-10 and a vehicle dataset, we are able to replicate some of their results, e.g. smooth transitions in the latent space.
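
A DCGAN-style generator in the spirit of Radford et al., built from strided transposed convolutions with batch normalization and ReLU, can be sketched as follows; the layer sizes are illustrative, not the project's exact configuration.

    # DCGAN-style generator sketch: latent vector -> 32x32 image.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, z_dim=100, ngf=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(z_dim, ngf * 4, 4, 1, 0, bias=False),
                nn.BatchNorm2d(ngf * 4), nn.ReLU(True),       # 4x4
                nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf * 2), nn.ReLU(True),       # 8x8
                nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf), nn.ReLU(True),           # 16x16
                nn.ConvTranspose2d(ngf, 1, 4, 2, 1, bias=False),
                nn.Tanh(),                                    # 32x32 image
            )
        def forward(self, z):
            return self.net(z)

    z = torch.randn(16, 100, 1, 1)
    print(Generator()(z).shape)   # torch.Size([16, 1, 32, 32])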

Using Transfer Learning for Improving Navigation Capabilities of Common Cleaning Robot

Hardik Rathod

Tue, 10. 7. 2018, Room 131

Many robotic vacuum cleaners fail during the cleaning task because they get stuck under furniture, in cords, or in other objects on the floor. Once such a situation occurs, the robot is rarely able to free itself. One possible cause of this behavior is insufficient information about the environment the robot enters. In unstructured environments, recognizing objects has proven highly challenging. By analyzing the environment before the cleaning operation starts, the robot becomes aware of the objects around it, especially those that might hinder navigation. Machine learning methods were investigated and tested, as they give impressive results on object detection tasks. Taking adequate actions according to the objects in the environment helps avoid or reduce the chances of the robot getting stuck under objects, and eventually reduces the effort required of customers. The insight from this analysis was incorporated into the locomotion behavior of a dummy robot.

Vergence control on humanoid robots

Torsten Follak

Mon, 9. 7. 2018, Room 131

For orientation in 3D space, a good depth estimate is needed. This estimate is obtained through effective stereoscopic vision, where the disparity between the two eyes' images is used to derive the 3D structure. It is therefore important that both eyes fixate on the same point; this fixation is managed by vergence control. Different approaches exist to implement and use vergence control in robotics. In this talk, three of them are shown: a short overview is given of the first two, while the third is presented in detail.

Docker for machine learning

Alexander J. Knipping and Sebastian Biermann

Tue, 3. 7. 2018, Room 131

Handling software dependencies for research and/or production environments often comes with a certain amount of complexity. Libraries like TensorFlow or PyTorch don't always behave the same way across major version releases, especially in combination with various third-party libraries, different Python versions and CUDA toolkits. Several solutions such as anaconda, virtualenv or pyenv have emerged from the Python community, but managing those with regard to reproducibility and portability often feels clumsy and leads to unexpected errors, especially for system administrators. In this presentation we evaluate whether Docker containers can be a more efficient way to encapsulate project code with its dependencies: build once, ship anywhere. As a demonstration, we used Docker to train a machine learning model able to recognize 194 birds by their calls, through a variation of an existing VGG-based model trained on Google's AudioSet, using its feature extractor for our own classes. Training was then performed on over 80,000 audio files of ten to twenty seconds' length on nucleus. We will demonstrate how we used Docker in our workflow, from developing the model and training it on the nucleus node to deploying it into a production environment for users to query. The aim of our project is to give both users and system administrators an overview of how Docker works, what its benefits and costs are, and whether it is a viable option in typical machine learning workflows and environments.

Humanoid robot grasping in 3D space by learning an inverse model of a central pattern generator

Yuxiang Pan

Tue, 19. 6. 2018, Room 131

Grasping is one of the most important functions of humanoid robots, but reaching an object in the workspace requires an inverse kinematics model of the robot arm. This model can be described mathematically using the exact robot parameters, or it can be learned without prior knowledge of these parameters. The latter has the advantage that the learning algorithm can be generalized to other robots. In this thesis, we propose a method to learn the inverse kinematics model of the NAO humanoid robot using a multilayer perceptron (MLP) neural network. Robot actions are generated through the multi-layered multi-pattern central pattern generator (MLMP-CPG) model. The camera captures information about the object provided by ArUco markers, the MLP model provides the desired arm configuration to reach the object, and the CPG parameters are then calculated to move the arm from its current position into the goal position. The proposed model has been tested in simulation and on the real robot, where a soft sensory robotic gripper was used to interact with a human subject (tactile servoing). Grasping was done using both the learned inverse model and the sensory feedback.
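
The idea of learning an inverse model with an MLP can be illustrated on a planar 2-link arm, a deliberately simplified stand-in for the NAO arm and CPG parameterization used in the thesis.

    # Learning an inverse model: end-effector position -> joint angles.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    # restrict the elbow to one configuration to keep the inverse unique
    q1 = rng.uniform(-np.pi / 2, np.pi / 2, 5000)
    q2 = rng.uniform(0.2, np.pi - 0.2, 5000)
    q = np.stack([q1, q2], axis=1)

    # forward kinematics of a planar 2-link arm (unit link lengths)
    x = np.cos(q1) + np.cos(q1 + q2)
    y = np.sin(q1) + np.sin(q1 + q2)
    X = np.stack([x, y], axis=1)

    inverse = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    inverse.fit(X, q)                       # position -> joint angles
    print(inverse.predict(X[:1]), q[:1])    # should roughly agree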

Humanoid robots learn to recover perturbation during swing motion in frontal plane: mapping pushing force readings into appropriate behaviors

Ibrahim Amer

Tue, 19. 6. 2018, Room 131

This thesis presents a learning method to tune recovery actions for a humanoid robot during swinging movements, based on a central pattern generator. A continuous state space of the robot is learned through a self-organizing map, and a disturbance detection technique is proposed based on robot states and sub-states. A predefined space of recovery actions, composed of non-rhythmic patterns, is used. A hill climbing algorithm and a neural network are used to tune the non-rhythmic pattern parameters to their optimum values. The humanoid robot NAO was able to recover from disturbances with an adaptive reaction based on the disturbance amplitude. All experiments were done in Webots simulation.
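
The hill-climbing part of such tuning can be sketched in a few lines; the fitness function below is a hypothetical stand-in for the robot's stability score in simulation.

    # Simple hill-climbing sketch for tuning pattern parameters.
    import numpy as np

    rng = np.random.default_rng(4)

    def fitness(theta):
        return -np.sum((theta - 0.3) ** 2)    # placeholder stability score

    theta = rng.random(3)
    best = fitness(theta)
    for _ in range(200):
        candidate = theta + rng.normal(0, 0.05, theta.shape)
        f = fitness(candidate)
        if f > best:                          # keep only improving steps
            theta, best = candidate, f
    print(theta)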

Scene Understanding on a Humanoid Robotic Platform Using Recurrent Neural Networks

Saransh Vora

Wed, 13. 6. 2018, Room 131

Near-perfect levels of performance have been reached for object recognition using convolutional neural networks; the next step, describing the content and organization of a complex visual scene, is called scene understanding. In this thesis, a deterministic attention model trained with backpropagation has been used with two different pre-trained encoder CNNs, along with an RNN as decoder, to generate captions. The trained attention model is then used by a humanoid robot to describe the scene, representing a first step towards robotic scene understanding. The robot can not only associate words with images, but also point at the locations of the attended features and locate them in space.

Transferring deep Reinforcement Learning policies from simulations to real-world trajectory planning

Vinayakumar Murganoor

Tue, 5. 6. 2018, Room 131

Machine learning has progressed a lot in recent years, but most applications and demonstrations take place in simulated environments, especially for continuous control tasks, where reinforcement learning algorithms have proven able to produce good policies. In this project, the problem of trajectory planning is solved using reinforcement learning: an agent trained in simulation moves an RC car in the real world from any given point A to a point B, with no training in the real world itself. We also identify the minimal set of parameters that influence the agent's behavior in the real world, and list the problems encountered, and solutions found, during the transfer of the policy from simulation to the real world.

Investigating dynamics of Generative Adversarial Networks (GANs)

Vivek Bakul Maru

Tue, 29. 5. 2018, Room 131

Generative Adversarial Networks (GANs) are a recent and promising approach to generative modeling, solving problems by unsupervised learning with deep neural networks. GANs work on an adversarial principle, where two different neural networks compete with each other in order to improve. This research project aims to understand the underlying mechanism of GANs. GANs have proved to have an edge over existing generative models such as variational autoencoders and autoregressive models, but they are known to suffer from instability during training. The implementation research in this project focuses on investigating the issues surrounding GAN training and the convergence properties of the model. Apart from the vanilla GAN, the project also covers an extension of the regular GAN using convolutional neural networks, the Deep Convolutional GAN, and a very recently proposed approach, the Wasserstein GAN. Analysis of the trajectories of the loss functions gives insight into the convergence properties, and training these models on multiple datasets allowed us to compare and observe the learning of both networks in a GAN.
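
As an example of one of the investigated variants, the Wasserstein GAN replaces the usual cross-entropy objective with a critic that estimates the Wasserstein distance, with the Lipschitz constraint enforced by weight clipping as in Arjovsky et al. (2017); the network and data below are placeholders.

    # Wasserstein critic training sketch with weight clipping.
    import torch
    import torch.nn as nn

    critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

    for _ in range(100):
        real = torch.randn(64, 2) + 2.0          # stand-in "real" samples
        fake = torch.randn(64, 2)                # stand-in generator output
        opt.zero_grad()
        # critic maximizes E[f(real)] - E[f(fake)]; minimize the negative
        loss = -(critic(real).mean() - critic(fake).mean())
        loss.backward()
        opt.step()
        for p in critic.parameters():            # enforce the Lipschitz bound
            p.data.clamp_(-0.01, 0.01)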

Design and Fabrication of Complex Sensory Structure of Curving and Pressure Sensors for Soft Robotic Hand

Vishal Ghadiya

Wed, 23. 5. 2018, Room 131

This research project presents the prototype design of a complex sensory structure for a soft hand, which can easily be adapted to a soft material such as silicone. A superposition of four piezoresistive pressure sensors and one curving sensor was arranged on the inner face of each finger. The research focuses on the design of flexible pressure and curving sensors, in particular the response of a force-sensitive-resistor-based pressure sensor. Thanks to the multi-layered sensor design, the structure was able to measure the curve of the finger and the amount of tactile pressure applied by the object grasped in the hand. Sixteen pressure sensors and four curving sensors with Velostat as the piezoresistive layer were designed with a variety of electrode materials, i.e. conductive thread, with and without conductive fabric. The multilayer structure of pressure and curving sensors can be placed on the inner face of the soft hand to easily evaluate properties of the object, such as its size and stiffness.
