
Publications

Our publications are listed below, ordered by year.

S. Schwarz, C. Gaebert, B. Nieberle, U. Thomas
Virtually Guided Telemanipulation using Neural RRT-Based Planning
In Proceedings of the 4th IFSA Winter Conference on Automation, Robotics & Communications for Industry 4.0/5.0 (ARCI'2024), Innsbruck, Austria, 7-9 February 2024
DOI: 10.13140/RG.2.2.20923.18722

Telemanipulation is a widely used approach to safely perform tasks remotely. In doing so, the operator can interact with the world using a robot manipulator. A key challenge is moving the robot in a collision-free manner given only a limited view of the environment. To this end, the authors combine a haptic feedback approach with rapid path planning and gaze tracking. This allows for a guided motion that prevents collisions with the environment while maintaining the necessary degree of flexibility and reactiveness. A Neural RRT-Connect algorithm is used to compute a collision-free motion towards a desired goal pose. A virtual fixture, based on a spring-damper system, is used to generate the force that is applied by the input device. A comparison between guided and fully manual telemanipulation in a supermarket-like scenario shows that the shared control approach reduces the task execution time and improves accuracy and collision avoidance. Finally, the neural planning algorithm proves to be applicable in this scenario by generating optimized paths in under a second with a success rate of 100 %.
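
A minimal sketch of such a spring-damper virtual fixture (the gains and variable names are illustrative assumptions, not the authors' implementation):

import numpy as np

def virtual_fixture_force(x_device, v_device, x_path, k=200.0, d=15.0):
    """Spring-damper guidance force pulling the input device
    towards the nearest point on the planned path.

    x_device, v_device: 3D position/velocity of the haptic device
    x_path:             nearest point on the collision-free path
    k, d:               illustrative stiffness [N/m] and damping [Ns/m]
    """
    return k * (x_path - x_device) - d * v_device

# Example: device 2 cm off the path, moving away at 0.1 m/s
f = virtual_fixture_force(np.array([0.02, 0.0, 0.0]),
                          np.array([0.1, 0.0, 0.0]),
                          np.zeros(3))
print(f)  # -> [-5.5, 0., 0.] N, pushing back towards the path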

@INPROCEEDINGS{Schwarz2024,
author = {Schwarz, Stephan Andreas and Gaebert, Carl and Nieberle, Benedikt and Thomas, Ulrike},
title = {Virtually Guided Telemanipulation using Neural RRT-based Planning},
booktitle = {Proceedings of the 4th IFSA Winter Conference on Automation, Robotics and Communications for Industry 4.0/5.0 (ARCI 2024)},
year = {2024},
month = {02},
pages = {256-259},
publisher={IFSA Publishing, S. L.},
editor={Sergey Y. Yurish},
doi = {10.13140/RG.2.2.20923.18722}
}


C. Bandi, U. Thomas
Hand Mesh and Object Pose Reconstruction using Cross Model Autoencoder
In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, pages 183-193, Rome, Italy, 2024
DOI: 10.5220/0012370700003660

Hands and objects severely occlude each other, making it extremely challenging to estimate the hand-object pose during human-robot interactions. In this work, we propose a framework that jointly estimates the 3D hand mesh and 6D object pose in real-time. The framework shares the features of a single network with both the hand pose estimation network and the object pose estimation network. The hand pose estimation network is a parametric model that regresses the shape and pose parameters of the hand. The object pose estimation network is a cross-model variational autoencoder network for direct reconstruction of an object's 6D pose. Our method shows substantial improvement in object pose estimation on two large-scale open-source datasets.

@conference{visapp24,
author={Chaitanya Bandi and Ulrike Thomas},
title={Hand Mesh and Object Pose Reconstruction Using Cross Model Autoencoder},
booktitle={Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP},
year={2024},
pages={183-193},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012370700003660},
isbn={978-989-758-679-8},
issn={2184-4321},
}

S. Schwarz, C. Gaebert, U. Thomas
6D Dynamic Tool Compensation using Deep Neural Networks to improve Bilateral Telemanipulation
2nd Workshop Toward Robot Avatars - IEEE International Conference on Robotics and Automation (ICRA), London, UK, 2023
URL: https://www.ais.uni-bonn.de/ICRA2023AvatarWS/contributions/ICRA_2023_Avatar_WS_Schwarz.pdf

Force feedback is a crucial component to improve the accuracy and transparency in telemanipulation. Unfortunately, attached tools distort the forces measured by the force sensor. Thus, a compensation of the static and dynamic forces and torques is desired to estimate the robot's actual interactions with the environment. Due to the inaccuracy of model-based approaches, this paper presents a model-free approach to estimate the 6D forces and torques resulting from an attached tool in order to compensate the measurement of the force-torque sensor. We use a deep neural network to achieve this and compare multiple combinations of neuron counts and inputs with an existing approach. Experiments on a real telemanipulation setup show that the proposed algorithm achieves higher accuracy, with mean Euclidean errors of only [0.7307 ± 0.4974] N in force and [0.031 ± 0.02] Nm in torque. The low computation time of 0.12 ms makes it suitable for real-time applications such as telemanipulation.
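
A minimal PyTorch sketch of such a model-free compensation network; the layer sizes and the choice of inputs (here an orientation quaternion plus angular velocity and acceleration) are illustrative assumptions, not the paper's configuration:

import torch
import torch.nn as nn

class ToolWrenchNet(nn.Module):
    """Regresses the 6D tool wrench (force + torque) from the robot state
    so it can be subtracted from the force-torque sensor reading."""
    def __init__(self, n_in=10, n_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 6),  # [fx, fy, fz, tx, ty, tz]
        )

    def forward(self, state):
        return self.net(state)

# state: e.g. orientation quaternion (4) + angular velocity (3) + accel (3)
model = ToolWrenchNet()
state = torch.randn(1, 10)
measured = torch.randn(1, 6)           # raw force-torque reading
compensated = measured - model(state)  # estimated contact wrench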

@INPROCEEDINGS{Schwarz2023,
author = {Schwarz, Stephan Andreas and Gaebert, Carl and Thomas, Ulrike},
title = {6D Dynamic Tool Compensation using Deep Neural Networks to improve Bilateral Telemanipulation},
maintitle = {IEEE International Conference on Robotics and Automation (ICRA)},
booktitle = {2nd Workshop Toward Robot Avatars},
year = {2023},
month = {05},
url = {https://www.ais.uni-bonn.de/ICRA2023AvatarWS/contributions/ICRA_2023_Avatar_WS_Schwarz.pdf},
urldate = {2024-02-13}
}


H. Zhu, U. Thomas
Mechanical Design of a Biped Robot FORREST and an Extended Capture Point Based Walking Pattern Generator
Published in Robotics, an open-access journal by MDPI (Impact Factor 4.9), Special Issue on Kinematics and Design V, 2023
DOI: 10.3390/robotics12030082

In recent years, many studies have shown that soft robots with elastic actuators enable robust interaction with the environment. Compliant joints can protect mechanical systems and provide better dynamic performance, thus offering huge potential for further developments of humanoid robots. This paper proposes a new biped robot that combines a torque-sensor-based active elastic hip with a spring-based passive elastic knee and ankle. In the first part, the mechanical design is introduced; in the second part, the kinematic and dynamic capabilities are described. Furthermore, we introduce a new extended capture-point-based walking pattern generator that calculates footstep positions, which are used as input for the controller of our new biped robot. The main contributions of this article are the novel mechanical design and the extended walking pattern generator. The new design offers a unique solution for cable-driven bipeds to achieve both balancing and walking. Meanwhile, the new walking pattern generator can generate smooth desired curves, an improvement over traditional generators that use a constant zero-moment point (ZMP). A simple Cartesian controller is applied to test the performance of the walking pattern generator. Although the robot has been built, all experiments regarding the pattern generator are still in simulation using MATLAB/Simulink. The focus of this work is to analyze the mechanical design and show the capabilities of the robot by applying a new pattern generator.
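
For reference, the classical capture point of the linear inverted pendulum model that such generators extend (a textbook formula, not the paper's extension):

import math

def capture_point(x_com, xd_com, z_com, g=9.81):
    """2D capture point xi = x + xd / omega of the linear inverted
    pendulum, with omega = sqrt(g / z_com)."""
    omega = math.sqrt(g / z_com)
    return [x + xd / omega for x, xd in zip(x_com, xd_com)]

# CoM at the origin, 0.8 m high, moving forward at 0.3 m/s
print(capture_point([0.0, 0.0], [0.3, 0.0], 0.8))  # ~[0.0857, 0.0]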

n.a.


H. Zhu, U. Thomas
An Enhanced Walking Pattern Generator with Variable Height for Robot Locomotion
Published in IEEE 19th International Conference on Automation Science and Engineering (CASE), New Zealand, 2023
DOI: 10.1109/CASE56687.2023.10260344

In this paper, we introduce a novel walking pattern generator that builds on divergent component of motion (DCM) techniques for biped robots. The aim is to provide an efficient algorithm for generating robot footstep positions while walking. We built a new biped robot, Forrest, for which we developed a new gait pattern generator that is also applicable to any other biped robot. Our approach utilizes segmented zero-moment-point (ZMP) curves within each step to generate a smoother desired trajectory. In this way, we overcome the abrupt acceleration changes experienced when using traditional walking pattern generators with a constant ZMP. Additionally, our generator employs a 3-D DCM curve to plan for variable heights of the center of mass (CoM) trajectory, which is crucial for achieving a walking pattern that resembles human walking. We introduce this new gait generator and show first results on our biped robot.
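
A minimal sketch of the underlying DCM dynamics such generators exploit: the DCM diverges exponentially away from the ZMP, which backward recursion over planned footsteps makes use of. A constant ZMP per segment is assumed here for brevity, whereas the paper's contribution is precisely to segment the ZMP within each step:

import math

def dcm_forward(xi0, x_zmp, t, omega=3.5):
    """Closed-form DCM evolution for a constant ZMP:
    xi(t) = x_zmp + (xi0 - x_zmp) * exp(omega * t)."""
    return x_zmp + (xi0 - x_zmp) * math.exp(omega * t)

def dcm_backward(xi_end, x_zmp, t, omega=3.5):
    """Backward recursion: the initial DCM that reaches xi_end after t."""
    return x_zmp + (xi_end - x_zmp) * math.exp(-omega * t)

# Plan backwards from a desired end-of-step DCM over a 0.7 s step
xi0 = dcm_backward(xi_end=0.25, x_zmp=0.2, t=0.7)
print(xi0, dcm_forward(xi0, 0.2, 0.7))  # round trip reproduces 0.25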

@INPROCEEDINGS{10260344,
author={Zhu, Hongxi and Thomas, Ulrike},
booktitle={2023 IEEE 19th International Conference on Automation Science and Engineering (CASE)},
title={An Enhanced Walking Pattern Generator with Variable Height for Robot Locomotion},
year={2023},
volume={},
number={},
pages={1-7},
doi={10.1109/CASE56687.2023.10260344}
}


S. Schwarz, U. Thomas
Vision-based Shared Control for Telemanipulated Nasopharyngeal Swab Sampling
Published in International Symposium on Medical Robotics (ISMR), 2023
DOI: 10.1109/ISMR57123.2023.10130223

Telemanipulation enables people to perform tasks in dangerous environments without exposing them to any risk. This also applies to medical applications. Many infections, such as the SARS-CoV-2 virus, spread through the air and can infect the staff while, e.g., taking samples. This paper proposes a shared control algorithm for a telemanipulation system that enables medical staff to easily perform nasopharyngeal swab samplings from a safe distance while maintaining the safety of the patient. We propose a vision-based virtual fixture approach to guide the operator during the approach towards the nostril. Force feedback and velocity scaling are used to improve dexterity and safety during the sampling. We further prove the stability of the system by introducing an energy tank that ensures passivity at all times. Finally, we test the approach on a real telemanipulation setup and demonstrate the improved usability resulting from the guidance of the shared control.

@INPROCEEDINGS{10130223,
author={Schwarz, Stephan Andreas and Thomas, Ulrike},
booktitle={2023 International Symposium on Medical Robotics (ISMR)},
title={Vision-Based Shared Control for Telemanipulated Nasopharyngeal Swab Sampling},
year={2023},
volume={},
number={},
pages={1-7},
doi={10.1109/ISMR57123.2023.10130223}
}


C. Gaebert, C. Bandi, U. Thomas
Grasp Pose Generation for Human-to-Robot Handovers using Simulation-to-Reality Transfer
Accepted at 1st International Conference on Hybrid Societies, 2023

Human-to-robot handovers play an important role in collaborative tasks in industry or household assistance. Due to the vast amount of possible unknown objects, learning-based approaches have gained interest for robust and general grasp synthesis. However, obtaining real training data for such methods requires expensive human demonstrations. Simulated data, on the other hand, is easy to generate and can be randomized to cover the distribution of real-world data. The first contribution of this work is a dataset for human grasps generated in simulation. For this, we use a simulated hand and models of 10 objects from the YCB dataset [Calli et al., 2015]. It can also be easily extended to include new objects. The method thus allows for generating an arbitrary amount of training data without human interactions. Secondly, we combine a generative neural grasp generator with an evaluator model for grasp pose generation. In contrast to previous works, we obtain grasp poses from simulated RGB images, which allows for reducing the negative effects of depth sensor noise. To this end, our generator model is provided with a cropped image of the human hand and learns the distribution of grasps in the wrist system. The evaluator then narrows down the list of grasps to the most promising ones. The presented approach requires the model to extract relevant features from images instead of point clouds; a cost-efficient method for generating large amounts of training data is therefore needed. We test our approach in simulation and transfer it to a real robot system. We use the same objects as in the training dataset but also test the generalization capabilities towards new objects. The presented dataset is available for download:
https://tuc.cloud/index.php/s/g3noZD7oCqbQR9d.

n.a.


S. Kaden, C. Gaebert, U. Thomas
Towards Smooth Human-Robot Interaction using Potential Gradient-Based Sampling
Accepted at 1st International Conference on Hybrid Societies, 2023

Successful human-robot interaction calls for the fast generation of collision-free and optimized motions. To this end, sampling-based motion planning algorithms have been widely used. However, they often require long planning times to achieve optimized motions. While not a critical issue in traditional industrial applications, planning time delays or poorly optimized motions have very negative effects on human-robot cooperation. Including artificial potential fields in the sampling algorithm can drastically improve the quality and planning time of such methods. Previous works in this direction are often tailored towards minimizing distance costs such as path length. In this work, we propose a heuristic based on potential fields that can also be used with a variety of state cost functions. We demonstrate the effectiveness of our approach using two cost functions related to human-robot interaction. We achieve drastically improved results in both scenarios. This allows for reducing the total planning time and achieving a smoother interaction between human and robot.
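
A minimal sketch of the general idea: a random sample is nudged a few finite-difference gradient steps downhill on an arbitrary state cost before being handed to the planner (the step sizes and cost are illustrative assumptions, not the authors' heuristic):

import numpy as np

def improve_sample(q, cost, n_steps=5, step=0.05, eps=1e-4):
    """Nudge a sampled configuration q down the numerical gradient
    of an arbitrary state cost before inserting it into the tree."""
    q = q.copy()
    for _ in range(n_steps):
        grad = np.array([
            (cost(q + eps * e) - cost(q - eps * e)) / (2 * eps)
            for e in np.eye(len(q))])
        q -= step * grad
    return q

# Example cost: squared distance of joint 0 from its centre position
cost = lambda q: (q[0] - 0.0) ** 2
print(improve_sample(np.array([1.0, 0.5]), cost))  # joint 0 moves toward 0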

n.a.


C. Bandi, U. Thomas
Face-Based Gaze Estimation Using Residual Attention Pooling Network
In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, pages 541-549, Lisbon, Portugal, 2023
DOI: 10.5220/0011789200003417

Gaze estimation reveals a person’s intent and willingness to interact, which is an important cue in human-robot interaction applications to gain a robot’s attention. With tremendous developments in deep learning architectures and easily accessible cameras, human eye gaze estimation has received a lot of attention. Compared to traditional model-based gaze estimation methods, appearance-based methods have shown a substantial improvement in accuracy. In this work, we present an appearance-based gaze estimation architecture that adopts convolutions, residuals, and attention blocks to increase gaze accuracy further. Face and eye images are generally adopted separately or in combination for the estimation of eye gaze. In this work, we rely entirely on facial features, since the gaze can be tracked under extreme head pose variations. With the proposed architecture, we attain better than state-of-the-art accuracy on the MPIIFaceGaze dataset and the ETH-XGaze open-source benchmark.

@conference{visapp23,
author={Chaitanya Bandi and Ulrike Thomas},
title={Face-Based Gaze Estimation Using Residual Attention Pooling Network},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP},
year={2023},
pages={541-549},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011789200003417},
isbn={978-989-758-634-7},
issn={2184-4321},
}


C. Bandi, U. Thomas
A New Efficient Eye Gaze Tracker for Robotic Applications
Published in IEEE International Conference on Robotics and Automation (ICRA), 2023
DOI: 10.1109/ICRA48891.2023.10161347

Gaze estimation exposes a person's intention and willingness to interact with a robot, which is an important cue in human-robot collaborative applications to obtain a robot's attention. With enormous developments in deep learning architectures, convolution-based eye gaze estimation has received a lot of attention. Appearance-based methods have shown a significant improvement in gaze accuracy and, unlike traditional approaches, work in unconstrained environments. In this work, we introduce a new appearance-based gaze estimation architecture to boost the angular accuracy even further. We rely entirely on face images, as eye gaze can then be estimated under extreme head pose variations and at varied distances. With the proposed architecture, we achieve better than state-of-the-art accuracy of 3.809° on the MPIIFaceGaze dataset and 3.96° on the ETH-XGaze open-source benchmark. In addition, we test the eye gaze tracking in real-time robotic applications such as attention grabbing and pick-and-place.

@INPROCEEDINGS{10161347,
author={Bandi, Chaitanya and Thomas, Ulrike},
booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
title={A New Efficient Eye Gaze Tracker for Robotic Applications},
year={2023},
volume={},
number={},
pages={6153-6159},
doi={10.1109/ICRA48891.2023.10161347}
}


C. Gaebert, S. Kaden, B. Fischer, U. Thomas
Parameter Optimization for Manipulator Motion Planning using a Novel Benchmark Set
Published in IEEE International Conference on Robotics and Automation (ICRA), 2023
DOI: 10.1109/ICRA48891.2023.10160694

Sampling-based motion planning algorithms have been continuously developed for more than two decades. Apart from mobile robots, they are also widely used in manipulator motion planning. Hence, these methods play a key role in collaborative and shared workspaces. Despite numerous improvements, their performance can vary greatly depending on the chosen parameter setting. The optimal parameters depend on numerous factors such as the start state, the goal state and the complexity of the environment. Practitioners usually choose these values based on their experience and tedious trial-and-error experiments. To address this problem, recent works combine hyperparameter optimization methods with motion planning. They show that tuning the planner's parameters can lead to shorter planning times and lower costs. It is not clear, however, how well such approaches generalize to a diverse set of planning problems that include narrow passages as well as barely cluttered environments. In this work, we analyze optimized planner settings for a large set of diverse planning problems. We then provide insights into the connection between the characteristics of the planning problem and the optimal parameters. As a result, we provide a list of recommended parameters for various use-cases. Our experiments are based on a novel motion planning benchmark for manipulators which we provide at https://tuc.cloud/index.php/s/aSRXr7gTdLDefH3.

@INPROCEEDINGS{10160694,
author={Gaebert, Carl and Kaden, Sascha and Fischer, Benjamin and Thomas, Ulrike},
booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
title={Parameter Optimization for Manipulator Motion Planning using a Novel Benchmark Set},
year={2023},
volume={},
number={},
pages={9218-9223},
doi={10.1109/ICRA48891.2023.10160694}
}


S. Schwarz, U. Thomas
Human-Robot Interaction in Telemanipulation - An Overview
Accepted at 1st International Conference on Hybrid Societies, 2023

Teleoperation and haptic telemanipulation are common solutions for performing tasks from a remote distance. They are well suited to dangerous or unreachable environments, such as nuclear power plants, space missions or underwater. In recent years, especially due to the COVID-19 pandemic, telemanipulation has increasingly been used to perform tasks involving other human participants. This paper gives an overview of the state of the art regarding control concepts to improve human-robot interaction on the follower side of a telemanipulation system. In this context, system architectures and shared control approaches are considered. We also present the work done in the Collaborative Research Center 1410 regarding telemanipulation, including a safety mechanism as well as two shared control concepts to improve human-likeness, safety and mobility of the follower motion.

n.a.

Y. Ding
Fast Perception-Action Loops with Proximity Sensors for Robotic Manipulators
Dissertation, September 2022
ISBN: 978-3-8440-8762-8

Proximity sensors attached to the outer shell of robotic manipulators provide fast and occlusion-free perception capabilities of the robot's nearby environment. They offer a solution towards fenceless collaborative workspaces by closing the gap in perception between (3D depth) cameras and tactile/force sensing. The perception gap occurs at the robot's close range, where external cameras provide insufficient information due to noise, resolution, and occlusion, and where tactile sensors remain untriggered. This thesis examines the fast perception-action loop of such systems to increase safety with reactive obstacle and collision avoidance motions and proactive adaption for impact attenuation. The loop consists of three elements: proximity perception, reactive motion generation, and the proactive adaption of the robot parameters. The first part of the safety chain shows an on-robot proximity perception system. The concept behind the system is to combine two sensors: laser-based time-of-flight sensing is used for the far range, while capacitive proximity detection covers the blind areas with wide-area detection. A novel custom-designed capacitive proximity sensor is presented that is robust against different grounding conditions of obstacles, a significant issue of conventional capacitive proximity sensors. The perception system provides rich near-field information with a limited quantity of measurement points, minimizing the amount of redundant information and thus increasing responsiveness. Reactive motions require only a few data points for fast motion generation and benefit from these features, especially for collision avoidance, where instantaneous adjustments of the robot's trajectory are mandatory. This thesis proposes two methods, one based on finding an avoidance vector by sampling in orthogonal directions towards the obstacle and another one that extends quadratic optimization to integrate the avoidance task within optimization constraints. Compared to common repulsive motions for collision avoidance, the proposed motion generators are less restrictive. They make full use of the robot's redundancy for task retention and provide solutions for multi-obstacle whole-arm obstacle avoidance. The algorithms further focus on evasive motions to bypass obstacles, decreasing the risk of the robot-freezing problem: a phenomenon in which the robot gets stuck in a local minimum, stopping before obstacles in an equilibrium state of attraction towards the goal and repulsion from the obstacle. The third part addresses the issue that collisions cannot always be prevented because the required avoidance motion exceeds the robot's motion capabilities. The last safety layer relies on the anticipation of contacts with proximity sensors to enhance the effectiveness of impedance controllers for impact attenuation. The first measure modulates the stiffness of the impedance controllers as required, allowing faster, more accurate motions during regular operation while maintaining safety. A high-stiffness setup suppresses positional disturbances during regular operation of the robot for high accuracy; the stiffness decreases only before impacts, with safety as the highest priority. The second measure slightly modifies the joint configuration to decrease the effective inertia of the manipulator at the impact point.

@phdthesis{Ding2022,
month = {September},
author = {Yitao Ding},
series = {Fortschritte der Robotik / Progress in Robotics},
editor = {Ulrike Thomas},
title = {Fast Perception-Action Loops with Proximity Sensors for Robotic Manipulators},
publisher = {Shaker Verlag},
year = {2022},
keywords = {Proximity Servoing; Proximity Perception; Capacitive Proximity Sensors; Reactive Motions; Obstacle Avoidance; Collision Avoidance; Impact Attenuation},
}


C. Bandi, U. Thomas
Regression-Based 3D Hand Pose Estimation for Human-Robot Interaction
Published in Communications in Computer and Information Science book series (CCIS, Volume 1474), 2022
DOI: 10.1007/978-3-030-94893-1_24

In shared workspaces where humans and robots interact, a significant task is to hand over objects. The handover process needs to be reliable and the human must not be injured; hence, reliable tracking of human hands is necessary. To avoid collisions, we apply an encoder-decoder-based 2D and 3D keypoint regression network on color images. In this paper, we introduce a complete pipeline based on the idea of stacked and cascaded convolutional neural networks and tune the parameters of the network for real-time applications. Experiments are conducted on multiple datasets with low and high occlusions, and we evaluate the trained models on multiple datasets for the human-robot interaction test set.

@InProceedings{10.1007/978-3-030-94893-1_24,
author="Bandi, Chaitanya and Thomas, Ulrike",
editor="Bouatouch, Kadi and de Sousa, A. Augusto and Chessa, Manuela and Paljic, Alexis and Kerren, Andreas and Hurter, Christophe and Farinella, Giovanni Maria and Radeva, Petia and Braz, Jose",
title="Regression-Based 3D Hand Pose Estimation for Human-Robot Interaction",
booktitle="Computer Vision, Imaging and Computer Graphics Theory and Applications",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="507--529",
abstract="In shared workspaces where humans and robots interact, a significant task is to hand over objects. The process of hand over needs to be reliable, a human must not be injured during the process, hence reliable tracking of human hands is necessary. To avoid collision, we apply an encoder-decoder based 2D and 3D keypoint regression network on color images. In this paper, we introduce a complete pipeline with the idea of stacked and cascaded convolutional neural networks and tune the parameters of the network for real-time applications. Experiments are conducted on multiple datasets, with low and high occlusions and we evaluate the trained models on multiple datasets for the human-robot interaction test set.",
isbn="978-3-030-94893-1"
}


H. Alagi, S. Ergun, Y. Ding, T. Philip Huck, U. Thomas, H. Zangl, B. Hein
Evaluation of On-Robot Capacitive Proximity Sensors with Collision Experiments for Human-Robot Collaboration (HRC)
Published in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022
DOI: 10.1109/IROS47612.2022.9981490

A robot must comply with very restrictive safety standards in close human-robot collaboration applications. These standards limit the robot's performance because of speed reductions to avoid potentially large forces exerted on humans during collisions. On-robot capacitive proximity sensors (CPS) can serve as a solution to allow higher speeds and thus better productivity. They allow early reactive measures before contacts occur to reduce the forces during collisions. An open question in designing such systems is the selection of an adequate activation distance to trigger safety measures for a specific robot while considering latency and detection robustness. Furthermore, the systems' actual effectiveness for impact attenuation and the resulting performance gain have not been evaluated before. In this work, we define and conduct a unified test procedure based on collision experiments to determine these parameters and investigate the performance gain. Two capacitive proximity sensor systems are evaluated with this test strategy on two robots. This work can serve as a reference guide for designing, configuring and implementing future on-robot CPSs.

@INPROCEEDINGS{9981490,
author={Alagi, Hosam and Ergun, Serkan and Ding, Yitao and Huck, Tom P. and Thomas, Ulrike and Zangl, Hubert and Hein, Björn},
booktitle={2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={Evaluation of On-Robot Capacitive Proximity Sensors with Collision Experiments for Human-Robot Collaboration},
year={2022},
volume={},
number={},
pages={6716-6723},
doi={10.1109/IROS47612.2022.9981490}
}


S. Schwarz, U. Thomas
Variable Impedance Control for Safety and Usability in Telemanipulation
Published in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022
DOI: 10.1109/IROS47612.2022.9982118

In recent years, haptic telemanipulation has been introduced to control robots remotely with an input device that generates force feedback. Compliant control strategies are needed to ensure safe interaction between humans and robots. Accurate and precise manipulation requires a stiff setup of the impedance parameters, while safety demands low stiffness. This paper proposes an impedance-based control approach that combines stiff manipulation with a safety mechanism that adapts compliance when required. We introduce three system modes: operation, safety and recovery mode. If the external forces exceed a defined force threshold, the system switches to the compliant safety mode. A user input triggers the recovery process that increases the stiffness back to its nominal value. This paper suggests an energy tank, which limits the change of stiffness to ensure stability during the recovery phase.
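
A minimal sketch of the energy-tank idea described above: stiffness may only increase while the tank can cover the added spring potential (the tank size, threshold and update rule are illustrative assumptions, not the paper's formulation):

import numpy as np

class StiffnessTank:
    """Energy tank that budgets stiffness increases during recovery.
    Raising stiffness by dk at deflection e stores 0.5*dk*|e|^2 of spring
    energy, which must be withdrawn from the tank to preserve passivity."""
    def __init__(self, energy=2.0, e_min=0.05):
        self.energy, self.e_min = energy, e_min

    def allowed_stiffness_step(self, dk_requested, deflection):
        e2 = float(np.dot(deflection, deflection))
        if e2 < 1e-12:
            return dk_requested          # no deflection, no energy needed
        dk_max = 2.0 * max(self.energy - self.e_min, 0.0) / e2
        dk = min(dk_requested, dk_max)
        self.energy -= 0.5 * dk * e2     # withdraw the stored energy
        return dk

tank = StiffnessTank()
print(tank.allowed_stiffness_step(500.0, np.array([0.05, 0.0, 0.0])))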

@INPROCEEDINGS{9982118,
author={Schwarz, Stephan Andreas and Thomas, Ulrike},
booktitle={2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={Variable Impedance Control for Safety and Usability in Telemanipulation},
year={2022},
volume={},
number={},
pages={6177-6182},
doi={10.1109/IROS47612.2022.9982118}
}


C. Gaebert, U. Thomas
Learning-based Adaptive Sampling for Manipulator Motion Planning
Published in IEEE 18th International Conference on Automation Science and Engineering (CASE), 2022
DOI: 10.1109/CASE49997.2022.9926609

Fast generation of optimized robot motions is crucial for achieving fluent cooperation in shared workspaces. Established sampling-based motion planning algorithms are guaranteed to converge to an optimal solution but often deliver low-quality initial results. To this end, learning-based methods reduce planning time delays and increase motion quality. Existing methods show promising results for low-dimensional and simulated problems. In the real world, sensor noise or a change of the robot's tool can cause a distributional shift to the training data. An adaptive sampling strategy is thus required to cope with possibly suboptimal samples and ensure fast motion planning in human-robot collaboration. In this work, we present a sampling strategy for fast and efficient manipulator motion planning which is based on a conditional variational autoencoder. We test our model for three optimization objectives: path length in configuration space and workspace, as well as joint limit distances. In contrast to other works, we not only condition our model on the planning problem but also on motion progress. This allows for generating samples in the growth direction of the tree. Using our method, we obtain high-quality initial paths within less than one second of planning time.
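
A minimal PyTorch sketch of sampling from the decoder of a conditional VAE, conditioned on a planning-problem encoding plus a scalar motion progress as described above (all dimensions and the condition layout are illustrative assumptions):

import torch
import torch.nn as nn

class CVAESampler(nn.Module):
    """Decoder of a conditional VAE: maps latent noise z plus a
    condition (problem encoding + motion progress) to a joint sample."""
    def __init__(self, n_latent=4, n_cond=9, n_joints=7):
        super().__init__()
        self.n_latent = n_latent
        self.dec = nn.Sequential(
            nn.Linear(n_latent + n_cond, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_joints),
        )

    def sample(self, condition, n=32):
        # condition: (1, n_cond) tensor, broadcast over n latent draws
        z = torch.randn(n, self.n_latent)
        cond = condition.expand(n, -1)
        return self.dec(torch.cat([z, cond], dim=1))

# condition: e.g. start (4) + goal (4) encoding + progress in [0, 1]
sampler = CVAESampler()
cond = torch.cat([torch.randn(1, 8), torch.tensor([[0.3]])], dim=1)
q_samples = sampler.sample(cond)   # 32 candidate configurations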

@INPROCEEDINGS{9926609,
author={Gaebert, Carl and Thomas, Ulrike},
booktitle={2022 IEEE 18th International Conference on Automation Science and Engineering (CASE)},
title={Learning-based Adaptive Sampling for Manipulator Motion Planning},
year={2022},
volume={},
number={},
pages={715-721},
doi={10.1109/CASE49997.2022.9926609}
}


C. Gaebert, A. Djemal, H. Hellara, B. Ben Atitallah, R. Ramalingame, R. Barioul, D. Salzseiler, E. Fricke, O. Kanoun, U. Thomas
Gesture Based Symbiotic Robot Programming for Agile Production
Published in IEEE 9th International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), 2022
DOI: 10.1109/CIVEMSA53371.2022.9853686

Agile production lines call for an effective and intuitive way of programming robots. However, traditional approaches rely on providing low-level instructions using either a script-based language or a graphical user interface, which can be tedious for assembly tasks. In this work, we present an approach that generates low-level robot control commands from highly abstract communicative hand gestures. In contrast to other works, we use several abstraction layers to generate such commands with as little user input as possible. For this, we use a body-attached multi-sensor setup consisting of a pressure band, a smart glove, EMG and IMU units. Their combined signals define a multi-dimensional vector per time step. We use a Recurrent Neural Network to infer the gesture class from the pre-processed data stream. From these user inputs we generate a set of symbolic spatial relations describing the assembly process. This formal description is then used to select and execute robot skills such as grasping. Hence, we reduce the ambiguity of abstract instructions in several steps and allow for effective gesture-based robot programming. In our work we give insights into defining and detecting such gestures. In addition, we illustrate the functionality of the whole system with real-world examples.

@INPROCEEDINGS{9853686,
author={Gäbert, Carl and Djemal, Achraf and Hellara, Hiba and Atitallah, Bilel Ben and Ramalingame, Rajarajan and Barioul, Rim and Salzseiler, Dennis and Fricke, Ellen and Kanoun, Olfa and Thomas, Ulrike},
booktitle={2022 IEEE 9th International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)},
title={Gesture Based Symbiotic Robot Programming for Agile Production},
year={2022},
volume={},
number={},
pages={1-6},
doi={10.1109/CIVEMSA53371.2022.9853686}
}


H. Zhu, U. Thomas
A Novel Full State Feedback Decoupling Controller For Elastic Robot Arm
Published in IEEE International Conference on Robotics and Automation (ICRA), 2022
DOI: 10.1109/ICRA46639.2022.9812047

In this paper, a novel full state feedback approach for the control of compliantly actuated robots with nonlinear spring characteristics is presented. A multi-DOF elastic robot arm is a multi-input multi-output (MIMO) under-actuated system. With the novel controller, which is based on motor coordinate transformation and motor inertia shaping, the MIMO system can be converted into a set of decoupled single-input single-output (SISO) systems. Using a full state feedback controller, we can configure the poles of each SISO system. The controller is validated on a 3-DOF elastic robot with nonlinear spring characteristics in a MATLAB/Simulink simulation.

@INPROCEEDINGS{9812047,
author={Zhu, Hongxi and Thomas, Ulrike},
booktitle={2022 International Conference on Robotics and Automation (ICRA)},
title={A Novel Full State Feedback Decoupling Controller For Elastic Robot Arm},
year={2022},
volume={},
number={},
pages={3210-3215},
doi={10.1109/ICRA46639.2022.9812047}
}


C. Bandi, H. Kisner, U. Thomas
3D Hand and Object Pose Estimation for Real-Time Human-Robot Interaction
In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, 770-780, 2022
DOI: 10.5220/0010902400003124

Estimating 3D hand pose and object pose in real-time is essential for human-robot interaction scenarios like the handover of objects. Particularly in handover scenarios, many challenges need to be faced, such as mutual hand-object occlusions and the inference speed needed to enhance the reactiveness of robots. In this paper, we present an approach to estimate 3D hand pose and object pose in real-time using a low-cost consumer RGB-D camera for human-robot interaction scenarios. We propose a cascade-of-networks strategy to regress 2D and 3D pose features. The first network detects the objects and hands in images. The second network is an end-to-end model with independent weights that regresses 2D keypoints of hand joints and object corners, followed by a 3D wrist-centric hand and object pose regression using a novel residual graph regression network, and finally a perspective-n-point approach to solve the 6D pose of detected objects in hand. To train and evaluate our model, we also propose a small-scale 3D hand pose dataset with a new semi-automated annotation approach using a robot arm, and we demonstrate the generalizability of our model on state-of-the-art benchmarks.
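
The final perspective-n-point step can be sketched with OpenCV's standard solver, mapping regressed 2D corner keypoints and the known 3D model corners to a 6D pose; the corner values and camera intrinsics below are made up for illustration:

import numpy as np
import cv2

# Known 3D corners of the object model (object frame, metres) and the
# 2D corner keypoints regressed by the network (pixels) - dummy values.
object_corners = np.array([[-.05, -.05, 0], [.05, -.05, 0],
                           [.05, .05, 0], [-.05, .05, 0],
                           [0, 0, .08]], dtype=np.float64)
image_corners = np.array([[310, 240], [350, 242], [348, 282],
                          [308, 280], [330, 230]], dtype=np.float64)
K = np.array([[615., 0., 320.], [0., 615., 240.], [0., 0., 1.]])

ok, rvec, tvec = cv2.solvePnP(object_corners, image_corners, K,
                              distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the 6D pose
print(ok, tvec.ravel())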

@conference{visapp22,
author={Chaitanya Bandi and Hannes Kisner and Ulrike Thomas},
title={3D Hand and Object Pose Estimation for Real-time Human-robot Interaction},
booktitle={Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP},
year={2022},
pages={770-780},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010902400003124},
isbn={978-989-758-555-5},
issn={2184-4321},
}


H. Kisner, Y. Ding, U. Thomas
Chapter 4 - Capacitive Material Detection for Robotic Grasping Applications
Booktitle: Tactile Sensing, Skill Learning and Robotic Dexterous Manipulation, 2022
DOI: 10.1016/B978-0-32-390445-2.00011-8

Objects with different materials are difficult to distinguish by vision and tactile sensing alone when they are similarly shaped and colored. This is especially important in robotic grasping scenarios, where the grasping task can benefit from incorporating material properties and the ability to detect them in a contactless and, therefore, non-destructive way. The robot can adapt its grasping behavior according to the perceived specific material surfaces. We have demonstrated the use of machine learning on impedance spectra generated from capacitive proximity sensors for material detection. This book chapter provides an introduction to this topic and serves as a guideline for implementation. We also provide a more profound investigation into how this approach can be extended, with adjustments in the classification pipeline, to provide more nuanced and more detailed classifications (e.g., differentiating between different types of woods and metals). We discuss the possibilities and limitations of our approach for grasping.

@incollection{Kisner2022,
title = {Capacitive material detection with machine learning for robotic grasping applications},
editor = {Qiang Li and Shan Luo and Zhaopeng Chen and Chenguang Yang and Jianwei Zhang},
booktitle = {Tactile Sensing, Skill Learning, and Robotic Dexterous Manipulation},
publisher = {Academic Press},
pages = {59-79},
year = {2022},
isbn = {978-0-323-90445-2},
doi = {10.1016/B978-0-32-390445-2.00011-8},
url = {https://www.sciencedirect.com/science/article/pii/B9780323904452000118},
author = {Hannes Kisner and Yitao Ding and Ulrike Thomas},
keywords = {material classification, machine learning, grasp perception, capacitive sensors},
abstract = {Objects that are made of different materials are difficult to distinguish by vision and sensing alone when they are similarly shaped and colored. This is especially important in robotic grasping scenarios, where the grasping task can benefit from incorporating material properties and the ability of their detection in a contactless and, therefore, nondestructive way. The robot can adapt its grasping behavior according to the perceived specific material surfaces. We have demonstrated the use of machine learning on impedance spectra generated from capacitive proximity sensors for material detection. This book chapter provides an introduction to this topic and serves as a guideline for implementation. We also want to provide a more profound investigation into how this approach can be extended with adjustments in the classification pipeline to provide more nuanced and more detailed classifications (e.g., differentiate between different types of woods and metals). We discuss the possibilities and limitations of our approach for grasping.}
}

C. Gäbert, S. Kaden, U. Thomas
Generation of Human-like Arm Motions using Sampling-based Motion Planning
Published in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021
DOI: 10.1109/IROS51168.2021.9636068

Natural and human-like arm motions are promising features to facilitate social understanding of humanoid robots. To this end, we integrate biophysical characteristics of human arm motions into sampling-based motion planning. We show the generality of our method by evaluating it with multiple manipulators. Our first contribution is to introduce a set of cost functions to optimize for human-like arm postures during collision-free motion planning. In a subsequent step, an optimization phase is used to improve the human-likeness of the initial path. Additionally, we present an interpolation approach for generating obstacle-aware and multi-modal velocity profiles. We thus generate collision-free and human-like motions in narrow passages while allowing for natural acceleration in free space.

@INPROCEEDINGS{9636068,
author={Gäbert, Carl and Kaden, Sascha and Thomas, Ulrike},
booktitle={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={Generation of Human-like Arm Motions using Sampling-based Motion Planning},
year={2021},
volume={},
number={},
pages={2534-2541},
doi={10.1109/IROS51168.2021.9636068}
}


C. Bandi, U. Thomas
Skeleton-based Action Recognition for Human-Robot Interaction using Self-Attention Mechanism
Published in 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2021
DOI: 10.1109/FG52635.2021.9666948

Motion prediction and action recognition play an influential role in the enhancement of interactions between humans and robots. We aim to predict motions and recognize actions for an interaction-based supermarket assistance scenario. Skeleton-based prediction of human motion and action recognition methods gained a lot of attention with the help of recurrent neural networks, convolutional neural networks, and graph convolutions. For the recognition of actions, most of the proposed architectures rely on the predefined structure of the skeleton. In this work, we introduce a new small-scale dataset with actions that are possible in a supermarket interaction scenario. We propose two different self-attention-based models for the recognition of actions, learning long-range correlations without relying on a predefined skeleton structure. We evaluate the models with extensive experiments containing specific input feature encodings that enhance the motion or trajectory features for accurate prediction and recognition of actions. We validate the effectiveness of the models on the supermarket actions dataset and on a standard benchmark dataset for action recognition, NTU RGB+D.
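
For reference, the scaled dot-product self-attention block such models build on, here over a sequence of skeleton-frame features (a generic textbook building block, not the paper's exact architecture):

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a (T, d) sequence,
    e.g. T skeleton frames with d-dimensional joint features."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)          # row-wise softmax
    return w @ V                               # frames attend to all frames

rng = np.random.default_rng(0)
T, d = 30, 16                                  # 30 frames, 16 features each
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (30, 16)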

@INPROCEEDINGS{9666948,
author={Bandi, Chaitanya and Thomas, Ulrike},
booktitle={2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021)},
title={Skeleton-based Action Recognition for Human-Robot Interaction using Self-Attention Mechanism},
year={2021},
volume={},
number={},
pages={1-8},
doi={10.1109/FG52635.2021.9666948}
}


K. Kitz, U. Thomas
Neural dynamic assembly sequence planning
Published in IEEE 17th International Conference on Automation Science and Engineering (CASE), 2021
DOI: 10.1109/CASE49439.2021.9551620

The automatic generation of feasible assembly sequences from CAD data is a challenging task for several reasons. One reason is that, with an increasing number of parts in an assembly group, the number of possible sequences grows exponentially, making an exhaustive search impractical. We address this combinatorial problem by using Reinforcement Learning (Deep Q-Learning) to approximate the cost function of the assembly with an artificial neural network (ANN) and to guide the search for an asymptotically optimal solution. Assembly costs are calculated with a collision-based assembly-by-disassembly approach. The derived method is tested on assemblies of different sizes and types. The presented method provides collision-free assembly sequences very fast, due to its depth-first character, and solves small and medium tasks reliably.

@inproceedings{kitz2021neural,
title={Neural dynamic assembly sequence planning},
author={Kitz, Kristof and Thomas, Ulrike},
booktitle={2021 IEEE 17th International Conference on Automation Science and Engineering (CASE)},
pages={2063--2068},
year={2021},
organization={IEEE}
}


S. Kaden, U. Thomas
Optimizing Mobility of Robotic Arms in Collision-free Motion Planning
Published in Journal of Intelligent & Robotic Systems 102.2, 2021
DOI: 10.1007/s10846-021-01407-0

A major task in motion planning is to find paths that have a high ability to react to external influences while ensuring collision-free operation at all times. This flexibility is even more important in human-robot collaboration, since unforeseen events can occur anytime. Such an ability can be described as mobility, which is composed of two characteristics: first, the ability to manipulate, and second, the distance to joint limits. This mobility needs to be optimized while generating collision-free motions, so that the robot always retains the flexibility to evade dynamic obstacles in the future execution of generated paths. For this purpose, we present a Rapidly-exploring Random Tree (RRT) which applies additional costs and sampling methods to increase mobility. Additionally, we present two methods for the optimization of a generated path. Our first approach utilizes the built-in capabilities of the RRT*. The second method optimizes the path with the stochastic trajectory optimization for motion planning (STOMP) approach with Gaussian Mixture Models. Moreover, we evaluate the algorithms in complex simulated and real environments and demonstrate an enhancement of mobility.
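
The two mobility ingredients can be sketched directly: Yoshikawa's manipulability measure and a normalized distance-to-joint-limits margin (the combination weight is an illustrative assumption, not the paper's cost formulation):

import numpy as np

def manipulability(J):
    """Yoshikawa's measure w = sqrt(det(J J^T)); near zero at singularities."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def joint_limit_margin(q, q_min, q_max):
    """Mean normalized distance to the nearest joint limit, in [0, 0.5]."""
    span = q_max - q_min
    return np.mean(np.minimum(q - q_min, q_max - q) / span)

def mobility(J, q, q_min, q_max, alpha=0.5):
    return alpha * manipulability(J) + (1 - alpha) * joint_limit_margin(
        q, q_min, q_max)

J = np.random.default_rng(1).normal(size=(6, 7))   # 6D task, 7 joints
q = np.zeros(7)
print(mobility(J, q, q_min=-np.pi * np.ones(7), q_max=np.pi * np.ones(7)))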

@article{kaden2021optimizing,
title={Optimizing Mobility of Robotic Arms in Collision-free Motion Planning},
author={Kaden, Sascha and Thomas, Ulrike},
journal={Journal of Intelligent \& Robotic Systems},
volume={102},
number={2},
pages={1--15},
year={2021},
publisher={Springer}
}


S. Ergun, Y. Ding, H. Alagi, C. Schöffmann, B. Ubezio, G. Soti, S. Mühlbacher-Karrer, M. Rathmair, U. Thomas, B. Hein, M. Hofbaur, H. Zangl
A Unified Perception Benchmark for Capacitive Proximity Sensing Towards Safe Human-Robot Collaboration (HRC)
Published in IEEE International Conference on Robotics and Automation (ICRA), 2021
DOI: 10.1109/ICRA48506.2021.9561224

During the co-presence of human workers and robots, measures are required to avoid injuries from undesired contacts. Capacitive Proximity Sensors (CPSs) offer a cost-effective solution to cover the entire robot manipulator with fast close-range perception for HRC tasks, closing the perception gap between tactile detection and mid-range perception. CPSs do not suffer from occlusion and, compared to pure tactile or force sensing, they react earlier and allow increasing the operating speed of Collaborative Robots (Cobots) while still maintaining safety. However, since capacitive coupling to obstacles varies with their distance, shape and material properties, the projection from capacitance to actual distances is a general problem. In this work, we propose a universal benchmark test procedure for fellow researchers to evaluate their CPSs. Considering ISO/TS 15066 for Power and Force Limiting (PFL) as a reference, we derive the requirements for the specified body regions and propose a method for determining the operating speed to comply with PFL based on a pre-defined detection threshold. Finally, the benchmark test procedure is evaluated on three different concepts of CPSs from the contributing researchers, demonstrating its general applicability.

@INPROCEEDINGS{Ding2021b,
author={Ergun, Serkan and Ding, Yitao and Alagi, Hosam and Schöffmann, Christian and Ubezio, Barnaba and Soti, Gergely and Rathmair, Michael and Mühlbacher-Karrer, Stephan and Thomas, Ulrike and Hein, Björn and Hofbaur, Michael and Zangl, Hubert},
booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
title={A Unified Perception Benchmark for Capacitive Proximity Sensing Towards Safe Human-Robot Collaboration (HRC)},
year={2021},
volume={},
number={},
pages={3634-3640},
doi={10.1109/ICRA48506.2021.9561224}
}


Y. Ding, U. Thomas
Improving Safety and Accuracy of Impedance Controlled Robot Manipulators with Proximity Perception and Proactive Impact Reactions
Published in IEEE International Conference on Robotics and Automation (ICRA), 2021
DOI: 10.1109/ICRA48506.2021.9561025

We present a system which improves the safety and accuracy of impedance controlled robotic manipulators with proximity perception. Proximity servoed manipulators, which use proximity sensors attached to the robot's outer shell, have recently demonstrated robust collision avoidance abilities. Nevertheless, unwanted collisions cannot be avoided entirely. As a fallback safety mechanism, robots with joint force/torque sensing rely on impedance controllers for impact attenuation and compliant behavior. However, impedance controllers induce undesired deflections of the robot from its trajectory when it is not in contact. These deviations are more pronounced at soft configurations and when the robot grasps objects of unknown weight distribution, thus a compromise must be made between high positional accuracy and softness (safety). The proximity information allows the robot to react to anticipated impacts proactively for attenuation and damage reduction of unavoidable collisions, while still maintaining high accuracy during regular operation. This is achieved through variations of impedance parameters according to proximity measurements and motions towards safe joint configurations during the pre-impact phase.

@INPROCEEDINGS{Ding2021a,
author={Ding, Yitao and Thomas, Ulrike},
booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
title={Improving Safety and Accuracy of Impedance Controlled Robot Manipulators with Proximity Perception and Proactive Impact Reactions},
year={2021},
volume={},
number={},
pages={3816-3821},
doi={10.1109/ICRA48506.2021.9561025}
}

Y. Ding, H. Kisner, U. Thomas
Using Machine Learning for Material Detection with Capacitive Proximity Sensors
Published in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
DOI: 10.1109/IROS45743.2020.9341016

The ability to detect materials plays an important role in robotic applications. The robot can incorporate the information from contactless material detection and adapt its behavior in how it grasps an object or how it walks on specific surfaces. In this paper, we apply machine learning on impedance spectra from capacitive proximity sensors for material detection. The unique spectra of certain materials differ only slightly and are subject to noise and scaling effects during each measurement. A best-fit classification approach against pre-recorded data is therefore inaccurate. We perform classification on ten different materials and evaluate different classification algorithms, ranging from simple k-NN approaches to artificial neural networks, which are able to extract the material-specific information from the impedance spectra.
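
A minimal sketch of the simplest baseline mentioned above, k-NN on impedance spectra with scikit-learn; the synthetic spectra below merely stand in for real sensor recordings:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_materials, n_per_class, n_freqs = 10, 40, 64

# Synthetic stand-in: each material has a characteristic spectrum that
# every measurement perturbs with noise and a random scale factor.
prototypes = rng.normal(size=(n_materials, n_freqs))
X = np.vstack([p * rng.uniform(0.9, 1.1) + 0.05 * rng.normal(size=n_freqs)
               for p in prototypes for _ in range(n_per_class)])
y = np.repeat(np.arange(n_materials), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")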

@INPROCEEDINGS{Ding2020b,
author={Y. {Ding} and H. {Kisner} and T. {Kong} and U. {Thomas}},
booktitle={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={Using Machine Learning for Material Detection with Capacitive Proximity Sensors},
year={2020},
volume={},
number={},
pages={10424-10429},
doi={10.1109/IROS45743.2020.9341016}
}


J. Bonse, U. Thomas
A New Low Cost Three-Finger Gripper with Hybrid Control with Soft Touch Behavior
Accepted for the 52nd International Symposium on Robotics (ISR), 2020

Specialized grasping tools are commonly used to manipulate known objects in industrial environments. They are typically straightforward to work with, but limited in terms of possible applications. Developments in the fields of humanoid robotics and human-robot interaction, as well as a growing demand for flexibility in production, require more sophisticated grippers. Anthropomorphic grippers, like four- or five-finger hands, offer a great amount of dexterity at the expense of implementation effort. In contrast, soft-robotic grippers are a simple solution for grasping a variety of unknown objects. They are usually controlled by a single actuator, or only a few. Their disadvantage is the lack of control over the pose, since they are heavily underactuated. Thus, we designed a low-cost three-finger hand with force sensors attached to the inner fingers and a camera in the palm. The hybrid control structure fuses force, position and proximity measurements in order to achieve soft touch behaviour.

@InProceedings{Bonse2020,
author = {Bonse, Julian and Thomas, Ulrike},
title = {A New Low Cost Three-Finger Gripper with Hybrid Control with Soft Touch Behavior},
booktitle = {International Symposium on Robotics Research/Robotik 2020, Munich},
year = {2020},
note = {Accepted},
}


H. Zhu, U. Thomas
A New Compliant Leg for the Humanoid Robot Forrest
Accepted for the 52nd International Symposium on Robotics (ISR), 2020, Suggested for Best Paper Award

This paper proposes a new design for a compliant leg used in a new biped robot. Compliant joints are key for further developments of humanoid robots: they can protect mechanical systems and provide better dynamic performance. The new leg presented in this paper consists of a 1-DoF compliant knee and a 2-DoF compliant ankle.

@InProceedings{Zhu2020,
author = {Zhu, Hongxi and Thomas, Ulrike},
title = {A New Compliant Leg for the Humanoid Robot Forrest},
booktitle = {International Symposium on Robotics Research/Robotik 2020, Munich},
year = {2020},
note = {Accepted},
}


H. Kisner, M. Weissflog, U. Thomas
Using a 6D Pose Estimation to Generate Viewpoint Dependent Training Data for Deep Neural Networks
International Journal of Mechanics and Control (JoMaC), 2020

The prediction of accurate 6D poses is necessary for various automated systems. Many model-based detection pipelines use hand-crafted feature detectors. They generate nearly exact object poses but include computationally costly search algorithms. Deep learning algorithms overcome search processes by using pre-trained neural networks. State-of-the-art methods are able to accurately predict object instances in 2D images. However, their training data needs to be adapted to a specific task or environment. Incomplete training data leads to inaccurate or false predictions. The generation of datasets is tedious and time-consuming, thus open-source datasets are often used to increase the amount of training data. Mixing different datasets may lead to unnormalized distributions of objects and instances, which negatively affects the learning process. Therefore, this paper introduces a new automated approach to generate training data in new environments while the data is simultaneously evaluated with regard to normalized distributions. The proposed approach generates concise training datasets while reducing redundancy. The method combines 6D pose estimation and object instance prediction. It is evaluated in real-world scenarios.

@Article{Kisner2020,
author = {Kisner, Hannes and Weissflog, Markus and Thomas, Ulrike},
title = {Using a 6D Pose Estimation to Generate Viewpoint Dependent Training Data for Deep Neural Networks},
journal = {International Journal of Mechanics and Control (JoMaC)},
year = {2020},
pages = {13-22},
}


Y. Ding, U. Thomas
Collision Avoidance with Proximity Servoing for Redundant Serial Robot Manipulators
Published in IEEE International Conference on Robotics and Automation (ICRA), 2020
DOI: 10.1109/ICRA40945.2020.9196759

Collision avoidance is a key technology towards safe human-robot interaction; in particular, on-line, fast-reacting motions are required. Skins with proximity sensors mounted on the robot's outer shell provide an interesting approach to occlusion-free and low-latency perception. However, collision avoidance algorithms which make extensive use of these properties for fast-reacting motions have not yet been fully investigated. We present an improved collision avoidance algorithm for proximity sensing skins by formulating a quadratic optimization problem with inequality constraints to compute instantaneous optimal joint velocities. Compared to common repulsive force methods, our algorithm confines the approach velocity towards obstacles and keeps motions pointing away from obstacles unrestricted. Since with repulsive motions the robot only moves in one direction, opposite to obstacles, our approach better exploits the redundancy space to maintain the task motion and is less likely to get stuck in local minima. Furthermore, our method incorporates an active behaviour for avoiding obstacles and evaluates all potentially colliding obstacles for the whole arm, rather than just the single nearest obstacle. We demonstrate the effectiveness of our method with simulations and on real robot manipulators in comparison with commonly used repulsive force methods and our previously proposed approach.
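
A minimal sketch of the constraint formulation: a task velocity is tracked while an inequality caps the approach velocity towards each obstacle, leaving receding motions unrestricted (solved here with SciPy for illustration; the gains and Jacobians are assumptions, not the paper's implementation):

import numpy as np
from scipy.optimize import minimize

def safe_joint_velocity(J_task, v_task, J_obs, d, d_safe=0.1, gain=2.0,
                        n_joints=7):
    """min ||J_task qd - v_task||^2  s.t.  J_obs qd <= gain*(d - d_safe).
    Each row of J_obs maps joint velocity to the approach velocity towards
    one obstacle, so the constraint only limits motion *towards* it."""
    def objective(qd):
        e = J_task @ qd - v_task
        return e @ e

    cons = [{"type": "ineq",
             "fun": lambda qd, i=i: gain * (d[i] - d_safe) - J_obs[i] @ qd}
            for i in range(len(d))]
    res = minimize(objective, np.zeros(n_joints), constraints=cons)
    return res.x

J_task = np.random.default_rng(2).normal(size=(6, 7))
J_obs = np.random.default_rng(3).normal(size=(2, 7))   # two obstacles
qd = safe_joint_velocity(J_task, np.array([0.1, 0, 0, 0, 0, 0]),
                         J_obs, d=np.array([0.3, 0.08]))
print(qd.round(3))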

@INPROCEEDINGS{Ding2020a,
author={Y. {Ding} and U. {Thomas}},
booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},
title={Collision Avoidance with Proximity Servoing for Redundant Serial Robot Manipulators},
year={2020},
pages={10249-10255},
doi={10.1109/ICRA40945.2020.9196759},
}


C. Bandi, U. Thomas
Regression-based 3D Hand Pose Estimation using Heatmaps
In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020), Pages 636-643, ISBN: 978-989-758-402-2, Valletta, Malta, 2020
DOI: 10.5220/0008973206360643

3D hand pose estimation is a challenging problem in human-machine interaction applications. We introduce a simple and effective approach for 3D hand pose estimation in grasping scenarios, taking advantage of a low-cost RGB-D camera. 3D hand pose estimation plays a major role in environments where objects are handed over between human and robot hands, in order to avoid collisions and to collaborate in shared workspaces. We employ Convolutional Neural Networks (CNNs) to solve this problem. The idea of cascaded CNNs is very appropriate for real-time applications. In this paper, we introduce an architecture for direct regression of 3D normalized coordinates and a small-scale dataset for human-machine interaction applications. In the cascaded network, the first network minimizes the search space; the second network is then trained within the confined region to detect more accurate 2D heatmaps of the fingers' joint locations. Finally, 3D normalized joints are regressed directly on RGB images, and depth maps are used to lift the normalized coordinates to camera coordinates.
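
The final lifting step is ordinary pinhole back-projection; a minimal sketch, assuming calibrated intrinsics fx, fy, cx, cy (the example values are made up):

import numpy as np

def lift_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D joint location (u, v) with its depth value into
    3D camera coordinates using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example with illustrative intrinsics of a low-cost RGB-D camera:
print(lift_to_camera(u=320, v=240, depth=0.6, fx=525.0, fy=525.0, cx=319.5, cy=239.5))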

@conference{visapp20,
author={Chaitanya Bandi. and Ulrike Thomas.},
title={Regression-based 3D Hand Pose Estimation using Heatmaps},
booktitle={Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP},
year={2020},
pages={636-643},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0008973206360643},
isbn={978-989-758-402-2},
}

C. Nissler
Environment- and Self-Modeling through Camera-Based Pose Estimation
Dissertation, December 2019
ISBN: 978-3-8440-7048-4
URL: https://elib.dlr.de/132978/

Environments in which robots can assist humans both in production tasks as well as in everyday tasks will demand advanced capabilities of these robotic systems for cooperating with humans and other robots. To achieve this, robots should be able to navigate and manipulate safely in dynamic environments. As such, it is essential that a robot can accurately determine its pose (i.e., its position and orientation) in the environment based on optical sensors. However, both the map of the robot's surroundings as well as its sensors can contain inaccuracies, which can have problematic consequences. The work presented here focuses on this issue by introducing several novel computer vision-based methods. These approaches lead to a set of challenges which are addressed in this book: How accurately can a robot estimate its pose in a known environment, i.e., assuming that a precise map of its surroundings is available? Secondly, how can a model of the robot's surroundings be created if no map is known a priori? Lastly, how can this be done if neither a priori environment models nor models of the robot's internal state are available? The introduced methods are experimentally evaluated throughout this book employing different mobile robotic systems, ranging from industrial manipulators to humanoid robots. Going beyond traditional robotics, this work examines how the presented methods can also be applied to human-machine interaction. It shows that, solely by visually observing the movement of the muscles in the human forearm and by employing machine learning methods, the corresponding hand gestures can be determined, opening entirely new possibilities in the control of robotic hands and hand prostheses.

@book{dlr132978,
volume = {2},
month = {December},
author = {Christian Nissler},
series = {Fortschritte der Robotik / Progress in Robotics},
editor = {Ulrike Thomas},
title = {Environment- and Self-Modeling through Camera-Based Pose Estimation},
publisher = {Shaker Verlag},
year = {2019},
keywords = {Pose Estimation; Calibration; Camera-Camera Calibration; Localization; Hand-Eye Calibration},
url = {https://elib.dlr.de/132978/},
}


S. Kaden, U. Thomas
Maximizing Robot Manipulability along Paths in Collision-free Motion Planning
Published in 19th International Conference on Advanced Robotics (ICAR), 2019
DOI: 10.1109/ICAR46387.2019.8981591

A major task in motion planning is to find suitable movements with large manipulability while guaranteeing collision-free operation. This condition is increasingly important in the collaboration between humans and robots, as the capability of avoiding humans or dynamic obstacles must be ensured at any time. For this purpose, paths in motion planning have to be optimized with respect to manipulability and distance to obstacles, because with large manipulability the robot retains, at any time, the possibility of evading due to its greater freedom of movement. Alternatively, the robot can be pushed away by using a Cartesian impedance control. To achieve this, we have developed a combined approach. First, we introduce a Rapidly-exploring Random Tree which is extended and optimized by state costs for manipulability. Secondly, we perform an optimization using the STOMP method and Gaussian Mixture Models. With this dual approach we are able to find paths in narrow passages and simultaneously optimize the path in terms of manipulability.
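
The manipulability term such a state cost builds on is commonly Yoshikawa's measure; a short numpy sketch of a generic cost (not necessarily the paper's exact formulation):

import numpy as np

def manipulability(J):
    """Yoshikawa's manipulability measure w = sqrt(det(J J^T)).
    Larger values mean more freedom of movement."""
    return np.sqrt(np.linalg.det(J @ J.T))

def state_cost(J, eps=1e-9):
    # High cost near singularities; a cost-aware RRT can add this per state.
    return 1.0 / (manipulability(J) + eps)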

@INPROCEEDINGS{8981591,
author={S. {Kaden} and U. {Thomas}},
booktitle={2019 19th International Conference on Advanced Robotics (ICAR)},
title={Maximizing Robot Manipulability along Paths in Collision-free Motion Planning},
year={2019},
pages={105-110},
doi={10.1109/ICAR46387.2019.8981591},
month={Dec},
}


F. Müller, J. Jäkel, J. Suchý, U. Thomas
Stability of Nonlinear Time-Delay Systems Describing Human-Robot Interaction
Published in IEEE/ASME Transactions on Mechatronics, 2019
DOI: 10.1109/TMECH.2019.2939907

In this paper, we present sufficient conditions for the stability analysis of a stationary point for a special type of nonlinear time-delay systems. These conditions are suitable for analyzing systems describing physical human-robot interaction (pHRI). For this stability analysis, a new human model consisting of passive and active elements is introduced and validated. The stability conditions describe parametrization bounds for the human model and an impedance controller. The results of this paper are compared to stability conditions based on passivity, approximated time-delays, and to numerical approaches. As a result of the comparison, it is shown that our conditions are more general than the passivity condition of Colgate [1]. This includes the consideration of negative stiffness and nonlinear virtual environments. As an example, a pHRI including a nonlinear virtual environment with a polynomial structure is introduced and also successfully analyzed. These theoretical results could be used in the design of robust controllers and stability observers in pHRI.
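
For orientation (our notation, not taken from the paper), the class of systems concerned and a standard Lyapunov-Krasovskii candidate functional can be written as

\dot{x}(t) = f\bigl(x(t),\, x(t-\tau)\bigr), \qquad \tau > 0,

V(x_t) = x^{\top}(t)\, P\, x(t) + \int_{t-\tau}^{t} x^{\top}(s)\, Q\, x(s)\, \mathrm{d}s, \qquad P, Q \succ 0,

and asymptotic stability of the stationary point follows if \dot{V} can be shown to be negative along the system trajectories.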

@ARTICLE{8851257,
author={F. {Müller} and J. {Jäkel} and J. {Suchý} and U. {Thomas}},
journal={IEEE/ASME Transactions on Mechatronics},
title={Stability of Nonlinear Time-Delay Systems Describing Human-Robot Interaction},
year={2019},
pages={1-1},
keywords={physical human-robot interaction;nonlinear time-delay systems;Lyapunov-Krasovskii functional;impedance control},
doi={10.1109/TMECH.2019.2939907},
ISSN={1083-4435 (print), 1941-014X (online)},
}


Y. Ding, F. Wilhelm, U. Thomas
3D Pose Estimation of Proximity Sensors with Self-Measurement for Calibration
Proceedings of the 2nd Workshop on Proximity Perception in Robotics at IROS 2019, Macau, China
DOI: 10.5445/IR/1000105220

The increasing number of sensing modules in a proximity servoing system for robotic applications requires new calibration methods. An exact pose calibration is essential for correct obstacle detection. In this paper, we present a method for locating single proximity sensors on the surface of a robot based on the sensor's measurements of its environment, including self-measurements of the robot. The algorithm relies on stochastic sampling methods to minimize the error between measured proximity data and simulated data by altering the poses of the simulated sensors. The simulation uses a virtual 3D reproduction of the robot and its environment.
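
The sampling idea can be illustrated with a simple random-search loop; a sketch under the assumption that a callback measure_error(pose) compares real sensor readings with readings simulated from the virtual 3D model:

import numpy as np

def fit_sensor_pose(measure_error, pose0, sigma=0.01, iters=2000, rng=None):
    """Random-search flavour of the stochastic sampling idea: perturb the
    simulated sensor pose and keep perturbations that reduce the error
    between measured and simulated proximity data.
    pose0: initial pose guess, e.g. a 6-vector (x, y, z, roll, pitch, yaw).
    (Schematic illustration, not the authors' sampler.)"""
    rng = rng or np.random.default_rng()
    pose, err = np.asarray(pose0, float), measure_error(pose0)
    for _ in range(iters):
        candidate = pose + rng.normal(0.0, sigma, size=pose.shape)
        cand_err = measure_error(candidate)
        if cand_err < err:
            pose, err = candidate, cand_err
    return pose, err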

@proceedings{2019_1000105220,
editor = {Alagi, Hosam and Mühlbacher-Karrer, Stephan and {Escaida Navarro}, Stefan and Hein, Björn and Zangl, Hubert and Koyama, Keisuke},
year = {2019},
title = {Proceedings of the 2nd Workshop on Proximity Perception in Robotics at IROS 2019, Macau, China},
eventtitle = {2nd Workshop on Proximity Perception in Robotics at IROS},
eventtitleaddon = {2019},
eventdate = {2019-11-08},
venue = {Macao, Macao},
publisher = {{KIT, Karlsruhe}},
pagetotal = {14},
url = {https://www.proxelsandtaxels.org/en/},
language = {english}
}


Y. Ding, F. Wilhelm, L. Faulhammer, U. Thomas
With Proximity Servoing towards Safe Human-Robot-Interaction
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, 2019
DOI: 10.1109/IROS40897.2019.8968438

In this paper, we present a serial kinematic robot manipulator equipped with multimodal proximity sensing modules, not only on the TCP but distributed over the robot's surface. The combination of close-distance proximity information from capacitive and time-of-flight (ToF) measurements allows the robot to perform safe, reflex-like and collision-free movements in a changing environment, e.g. where humans and robots share the same workspace. Our methods rely on proximity data and combine different strategies to calculate optimal collision avoidance vectors, which are fed directly into the motion controller (proximity servoing). The strategies are prioritized: first to avoid collisions, and second to constrain the movement within the null space if kinematic redundancy is available. The movement is then optimized for fastest avoidance, best manipulability, and smallest end-effector velocity deviation. We compare our methods with common force-field based methods.
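
Constraining the avoidance motion to the null space is the textbook redundancy-resolution scheme; a compact numpy illustration (generic, not the exact prioritized strategy of the paper):

import numpy as np

def avoidance_in_nullspace(J, v_task, dq_avoid):
    """Execute the end-effector task and push the avoidance motion into
    the null space of the task Jacobian, so that it does not disturb the
    end-effector velocity."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ v_task + N @ dq_avoid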

@INPROCEEDINGS{8968438,
author={Y. {Ding} and F. {Wilhelm} and L. {Faulhammer} and U. {Thomas}},
booktitle={2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={With Proximity Servoing towards Safe Human-Robot-Interaction},
year={2019},
pages={4907-4912},
doi={10.1109/IROS40897.2019.8968438},
ISSN={2153-0858},
month={Nov},
}


H. Zhu, U. Thomas
A New Design of a Variable Stiffness Joint
2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM)
DOI: 10.1109/AIM.2019.8868648

Soft or compliant robots are the key to safe interaction between humans and robots. To protect humans and robots from impact and to adapt to different tasks, researchers have developed many different variable stiffness joints, which include springs and can adjust their stiffness between soft and rigid. The lever and the cam disc are two popular mechanisms that have been applied in many variable stiffness joints. This paper presents a new variable stiffness joint that combines these two popular mechanisms to unite their advantages and overcome their disadvantages. The paper introduces the mechanical design and model of the new variable stiffness joint. The functionality is demonstrated with a prototype, and results are also reported.

@INPROCEEDINGS{8868648,
author={H. {Zhu} and U. {Thomas}},
booktitle={2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM)},
title={A New Design of a Variable Stiffness Joint},
year={2019},
pages={223-228},
keywords={control system synthesis;human-robot interaction;variable stiffness joint;compliant robots;soft robots;Springs;Torque;Mathematical model;Shafts;Robot sensing systems;Force},
doi={10.1109/AIM.2019.8868648},
ISSN={2159-6247},
month={July},
}


C. M. Costa, G. Veiga, A. Sousa, L. Rocha, A. A. Sousa, R. Rodrigues, U. Thomas
Modeling of video projectors in OpenGL for implementing a spatial augmented reality teaching system for assembly operations
Published in IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 2019
DOI: 10.1109/ICARSC.2019.8733617

Teaching complex assembly and maintenance skills to human operators usually requires extensive reading and the help of tutors. In order to reduce the training period and avoid the need for human supervision, an immersive teaching system using spatial augmented reality was developed for guiding inexperienced operators. The system provides textual and video instructions for each task while also allowing the operator to navigate between the teaching steps and control the video playback using a bare-hands natural interaction interface that is projected into the workspace. Moreover, for helping the operator during the final validation and inspection phase, the system projects the expected 3D outline of the final product. The proposed teaching system was tested with the assembly of a starter motor and proved to be more intuitive than reading traditional user manuals. This proof-of-concept use case served to validate the fundamental technologies and approaches that were proposed to achieve an intuitive and accurate augmented reality teaching application. Among the main challenges were the proper modeling and calibration of the sensing and projection hardware, along with the 6 DoF pose estimation of objects for achieving precise overlap between the 3D rendered content and the physical world. On the other hand, the conceptualization of the information flow and how it can be conveyed on-demand to the operator was also of critical importance for ensuring a smooth and intuitive experience for the operator.
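
The projector model at the heart of such a system treats the projector as an inverse pinhole camera; a hedged sketch of mapping intrinsics to an OpenGL-style projection matrix, where the sign conventions are assumptions that must be matched to the concrete rendering pipeline:

import numpy as np

def intrinsics_to_gl_projection(fx, fy, cx, cy, w, h, near, far):
    """Build an OpenGL-style projection matrix from pinhole intrinsics.
    One common convention is used (image origin top-left, OpenGL camera
    looking along -z); signs differ between conventions, so verify
    against your pipeline before use."""
    return np.array([
        [2*fx/w, 0.0,     (w - 2*cx)/w,              0.0],
        [0.0,    2*fy/h, -(h - 2*cy)/h,              0.0],
        [0.0,    0.0,    -(far + near)/(far - near), -2*far*near/(far - near)],
        [0.0,    0.0,    -1.0,                       0.0],
    ])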

@INPROCEEDINGS{8733617,
author={C. M. {Costa} and G. {Veiga} and A. {Sousa} and L. {Rocha} and A. A. {Sousa} and R. {Rodrigues} and U. {Thomas}},
booktitle={2019 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)},
title={Modeling of video projectors in OpenGL for implementing a spatial augmented reality teaching system for assembly operations},
year={2019},
pages={1-8},
keywords={augmented reality;computer aided instruction;pose estimation;rendering (computer graphics);teaching;intuitive reality teaching application;projection hardware;video projectors;spatial augmented reality teaching system;assembly operations;teaching complex assembly;maintenance skills;human operators;extensive reading;training period;human supervision;immersive teaching system;inexperienced operators;textual instructions;video instructions;teaching steps;video playback;bare hands natural interaction interface;inspection phase;expected 3D outline;augmented reality teaching application;3D rendered content;6 DoF pose estimation;Mathematical model;Three-dimensional displays;Education;Cameras;Matrix converters;Solid modeling;Robots},
doi={10.1109/ICARSC.2019.8733617},
month={April},
}


H. Kisner, T. Schreiter, U. Thomas
Learning to Predict 2D Object Instances by Applying Model-Based 6D Pose Estimation
28th International Conference on Robotics in Alpe-Adria-Danube Region, 2019, 2nd Best Student Paper Award
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 980)
DOI: 10.1007/978-3-030-19648-6_57

Object detection and pose estimation are still very challenging tasks for robots. One common problem for many processing pipelines is the large amount of object data; e.g., it is often not known beforehand how many objects and which object classes can occur in the surrounding environment of a robot. Model-based object detection pipelines in particular often focus on a few different object classes. However, deep learning algorithms have been developed in recent years which are able to handle large amounts of data and can easily distinguish between different object classes. Their drawback is the large amount of training data needed. In general, both approaches have different advantages and disadvantages. Thus, this paper presents a new way to combine them in order to estimate 6D poses for a larger number of different object classes.
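
One way to picture the label-generation step in such a combination: project the 3D bounding box of a model, posed with the estimated 6D pose, into the image and take the 2D extremes as an instance label. A minimal numpy sketch (illustrative only, not the authors' full pipeline):

import numpy as np

def project_box(K, R, t, corners_3d):
    """K: 3x3 intrinsics, (R, t): estimated 6D object pose,
    corners_3d: 8x3 bounding-box corners of the object model.
    Returns a 2D bounding box (u_min, v_min, u_max, v_max)."""
    pts_cam = R @ corners_3d.T + t.reshape(3, 1)   # 3 x 8, camera frame
    uv = K @ pts_cam                               # 3 x 8, homogeneous
    uv = uv[:2] / uv[2]                            # perspective divide
    (u_min, v_min), (u_max, v_max) = uv.min(axis=1), uv.max(axis=1)
    return u_min, v_min, u_max, v_max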

@InProceedings{10.1007/978-3-030-19648-6_57,
author="Kisner, Hannes and Schreiter, Tim and Thomas, Ulrike",
editor="Berns, Karsten and G{\"o}rges, Daniel",
title="Learning to Predict 2D Object Instances by Applying Model-Based 6D Pose Estimation",
booktitle="Advances in Service and Industrial Robotics",
year="2020",
publisher="Springer International Publishing",
address="Cham",
pages="496--504",
isbn="978-3-030-19648-6"
}


A. C. Perzylo, B. Kahl, M. Rickert, N. Somani, C. Lehmann, A. Kuss, S. Profanter, A. B. Beck, M. Haage, M. A. Roa, O. Sornmo, S. Gestegard Robertz, U. Thomas, G. Veiga, E. A. Topp, I. Kessler, M. Danzer
SMErobotics - Smart Robots for Flexible Manufacturing
IEEE Robotics and Automation Magazine, Volume 26, Issue 1, March 2019 (published online 04 January 2019)
DOI: 10.1109/mra.2018.2879747

Current market demands require an increasingly agile production environment throughout many manufacturing branches. Traditional automation systems and industrial robots, on the other hand, are often too inflexible to provide an economically viable business case for companies with rapidly changing products. The introduction of cognitive abilities into robotic and automation systems is, therefore, a necessary step toward lean changeover and seamless human–robot collaboration. In this article, we introduce the European Union (EU)-funded research project SMErobotics, which focuses on facilitating the use of robot systems in small and medium-sized enterprises (SMEs). We analyze open challenges for this target audience and develop multiple efficient technologies to address related issues. Real-world demonstrators of several end users and from multiple application domains show the impact these smart robots can have on SMEs. This article intends to give a broad overview of the research conducted in SMErobotics. Specific details of individual topics are provided through references to our previous publications.

@ARTICLE{8601323,
author={A. Perzylo and M. Rickert and B. Kahl and N. Somani and C. Lehmann and A. Kuss and S. Profanter and A. B. Beck and M. Haage and M. R. Hansen and M. Roa-Garzon and O. Sornmo and S. Gestegard Robertz and U. Thomas and G. Veiga and E. A. Topp and I. Kessler and M. Danzer},
journal={IEEE Robotics Automation Magazine},
title={SMErobotics: Smart Robots for Flexible Manufacturing},
year={2019},
volume={26},
number={1},
pages={78-90},
keywords={Service robots;Automation;Production;Investment;Tools},
doi={10.1109/MRA.2018.2879747},
ISSN={1070-9932},
month={March},
}


R. Ramalingame, A. Lakshmanan, F. Müller, U. Thomas, O. Kanoun
Highly sensitive capacitive pressure sensors for robotic applications based on carbon nanotubes and PDMS polymer nanocomposite
Journal of Sensors and Sensor Systems, Vol. 8, pp. 87-94, February 2019
DOI: 10.5194/jsss-8-87-2019

Flexible tactile pressure sensor arrays based on multiwalled carbon nanotubes (MWCNT) and polydimethylsiloxane (PDMS) are gaining importance, especially in the field of robotics, because of the high demand for stable, flexible and sensitive sensors. Some existing concepts of nanocomposite-based pressure sensors offer better sensitivity than conventional pressure sensors but involve complicated fabrication techniques. In this article, we propose a nanocomposite-based pressure sensor that exhibits a high sensitivity of 25 % N−1, starting with a minimum load range of 0–0.01 N, and 46.8 % N−1 in the range of 0–1 N. The maximum pressure sensing range of the sensor is approximately 570 kPa. A concept of a 4×3 tactile sensor array, which could be integrated into robot fingers, is demonstrated. The high sensitivity of the pressure sensor enables precision grasping, with the ability to sense small objects with a size of 5 mm and a weight of 1 g. Another application of the pressure sensor is demonstrated in gait analysis for humanoid robots. The pressure sensor is integrated under the foot of a humanoid robot to monitor and evaluate the gait of the robot, which provides insights for optimizing the robot's self-balancing algorithm in order to maintain the posture while walking.

@Article{jsss-8-87-2019,
AUTHOR = {Ramalingame, R. and Lakshmanan, A. and M\"uller, F. and Thomas, U. and Kanoun, O.},
TITLE = {Highly sensitive capacitive pressure sensors for robotic applications based on carbon nanotubes and PDMS polymer nanocomposite},
JOURNAL = {Journal of Sensors and Sensor Systems},
VOLUME = {8},
YEAR = {2019},
NUMBER = {1},
PAGES = {87--94},
URL = {https://www.j-sens-sens-syst.net/8/87/2019/},
DOI = {10.5194/jsss-8-87-2019}
}


O. Lorenz, U. Thomas
Real Time Eye Gaze Tracking System using CNN-based Facial Features for Human Attention Measurement
In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, 598-606, 2019, Prague, Czech Republic
ISBN: 978-989-758-354-4
DOI: 10.5220/0007565305980606

Understanding human attention in various interactive scenarios is an important task for human-robot collaboration. Human communication with robots includes intuitive nonverbal behaviour such as body postures and gestures. Multiple communication channels can be used to obtain an understandable interaction between humans and robots. Usually, humans communicate in the direction of eye gaze and head orientation. In this paper, a new tracking system based on two cascaded CNNs is presented for eye gaze and head orientation tracking, enabling robots to measure the willingness of humans to interact via eye contact and gaze orientation. Based on the two consecutively cascaded CNNs, facial features are recognised, first in the face and then in the regions of the eyes. These features are detected by a geometrical method and deliver the orientation of the head to determine the eye gaze direction. Our method distinguishes between frontal and side views of faces. With a consecutive approach for each condition, the eye gaze is also detected in extreme situations. The applied CNNs have been trained on many different datasets and annotations, which improves the reliability and accuracy of the tracking system introduced here and lets it outperform previous detection algorithms. Our system operates on commonly used RGB-D images and is implemented on a GPU to achieve real-time performance. The evaluation shows that our approach operates accurately in challenging dynamic environments.
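
The geometric part of such a pipeline is commonly a PnP fit of detected facial features to a generic 3D face model; a hedged OpenCV sketch in which the 3D model coordinates are coarse, made-up values:

import numpy as np
import cv2

# Generic 3D face model points in millimetres (illustrative values only).
model_3d = np.array([(0, 0, 0),        # nose tip
                     (-30, 35, -30),   # left eye outer corner
                     (30, 35, -30),    # right eye outer corner
                     (-25, -30, -20),  # left mouth corner
                     (25, -30, -20)],  # right mouth corner
                    dtype=np.float64)

def head_orientation(landmarks_2d, K):
    """landmarks_2d: 5x2 detected pixel positions (same order as model_3d),
    K: 3x3 camera intrinsics. Returns a rotation vector (Rodrigues) or None."""
    ok, rvec, tvec = cv2.solvePnP(model_3d, landmarks_2d.astype(np.float64),
                                  K, distCoeffs=None)
    return rvec if ok else None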

@conference{visapp19,
author={Oliver Lorenz. and Ulrike Thomas.},
title={Real Time Eye Gaze Tracking System using CNN-based Facial Features for Human Attention Measurement},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP},
year={2019},
pages={598-606},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007565305980606},
isbn={978-989-758-354-4},
}

F. Müller
Assistierende virtuelle Kraftfelder bei handgeführten Robotern
Dissertation, December 2018
ISBN: 978-3-8440-6424-7

Hand-guided heavy-duty robots are used in industry to support workers in lifting heavy loads. This technology belongs to the overall concept of human-robot interaction (HRI), in which human and robot share a common workspace. The goal of this thesis is to make the operation of such robots simpler and more intuitive for the user. For this purpose, assisting force fields were developed, whose algorithm consists of a learning phase and an application phase. In the learning phase, the motion data of experienced workers are recorded within a specific work task. From these data, a virtual force field is generated in the application phase, which guides the user onto the paths of the experienced workers. Three different assisting force fields were developed: the tunnel-shaped virtual force field (TKF), the assisting virtual force field (AKF) and the AKF for anthropomorphic robot arms. The TKF acts on the end effector of the robot and is suitable for all robot types. The AKF is an extension of the TKF and influences both the position and the orientation of the end effector. This force field is used to support the users of the industrial heavy-duty robots mentioned above. To make this force field accessible to the lightweight robots widely used in HRI, it was adapted for use with anthropomorphic robot arms. In addition, the force-field-dependent variable impedance control (KF-VIR) was introduced. Due to the nonlinear feedback of the force field and the time-delayed feedback of the human caused by the reaction time, a stability analysis of the overall system, consisting of robot, human and force field, is necessary. For the human model, different approaches with active and passive parameters as well as a reaction time/dead time were presented and integrated into the overall system. The resulting overall systems were tested for stability with different methods. Two of these methods were developed in this thesis on the basis of the Lyapunov-Krasovskii functional and serve for the analytical investigation of polynomial time-delay systems. In order to also cover use cases with several users, the models and methods were adapted accordingly and likewise investigated. Among other results, these investigations yielded conservative analytical stability bounds in the parameter space. With the help of simulation studies and subsequent experimental validations, different parametrization settings of the AKF and the KF-VIR were investigated, from which parametrization guidelines for later users were derived. To investigate whether the use of the AKF improves the operation of a hand-guided robot controlled in joint space, a user study under laboratory conditions with 42 participants and a practice-oriented user study with 24 participants were carried out. In the trials with the AKF, the number of errors made by the participants was reduced by half on average. Furthermore, the results regarding trial duration, workload and user comfort also showed significant improvements with large effect sizes.
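
The guidance principle of the tunnel-shaped force field can be pictured as a spring-damper pulling towards the nearest point of a learned reference path; a minimal Python sketch with illustrative gains (not the thesis implementation):

import numpy as np

def tunnel_force(x, path_points, v=np.zeros(3), k=200.0, d=20.0):
    """Pull the end effector towards the nearest point of a learned
    reference path (spring) and damp the motion (damper).
    path_points: N x 3 polyline of recorded positions, k in N/m, d in Ns/m."""
    nearest = path_points[np.argmin(np.linalg.norm(path_points - x, axis=1))]
    return k * (nearest - x) - d * v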

@phdthesis{...,
author = {Florian M{\"u}ller},
title = {Assistierende virtuelle Kraftfelder bei handgeführten Robotern},
school = {Technische Universität Chemnitz},
year = {2018},
publisher = {Shaker Verlag},
isbn = {978-3-8440-6424-7},
month = {December},
type = {Dissertation}
}


F. Müller, J. Jäkel, U. Thomas
Stability Analysis for a Passive/Active Human Model in Physical Human-Robot Interaction with Multiple Users
International Journal of Control, Vol. 93, No. 9, pp. 2104-2119, August 2020
DOI: 10.1080/00207179.2018.1541508

Human-robot-human interaction (HRH), understood as a physical human-robot interaction (pHRI) with two humans, can be applied when lifting heavy, bulky and large-sized objects with a robot. In combination with a virtual environment, this system can become non-linear. In this article we prove sufficient stability conditions for a stationary point of such a particular type of non-linear multiple time-delay system. In addition, a new human model consisting of a passive and an active part is introduced and validated on experimental data. The derived stability conditions are applied to a single-user pHRI system including this human model. The results indicate that these conditions are very conservative. Then four approaches for the analysis of a multi-user pHRI are introduced and compared with each other. Finally, a potential HRH application with a nonlinear environment in the form of a potential force field is presented.

@article{doi:10.1080/00207179.2018.1541508,
author = {Florian M{\"u}ller and Jens J{\"a}kel and Ulrike Thomas},
title = {Stability analysis for a passive/active human model in physical human–robot interaction with multiple users},
journal = {International Journal of Control},
volume = {93},
number = {9},
pages = {2104-2119},
year = {2020},
publisher = {Taylor \& Francis},
month = {Aug},
doi = {10.1080/00207179.2018.1541508},
URL = {https://doi.org/10.1080/00207179.2018.1541508}
}


H. Zhu, U. Thomas
Ein elastisches Gelenk
Patent application, German Patent Office (Deutsches Patentamt): 10 2018 008 378.1, filed 22 October 2018

n/a

n/a


F. Müller, J. Janetzky, U. Behrnd, J. Jäkel, U. Thomas
User Force-Dependent Variable Impedance Control in Human-Robot-Interaction
IEEE International Conference on Automation Science and Engineering (CASE), Munich, Germany, August 2018, pp. 1328-1335
DOI: 10.1109/COASE.2018.8560340

In this paper a novel type of variable impedance control (VIC) is presented. The controller adjusts the impedance depending on the force input of the user; in this way it is easy to accelerate and decelerate. Additionally, the damping decreases for high velocities and vice versa. This approach can be interpreted as a combination of acceleration-dependent VIC and velocity-dependent VIC. To guarantee stability, a stability observer is introduced. The observer is based on a model which describes the combined passive and active behavior of the user. In addition, we present a user study with 45 participants in which the differences between VIC, VIC with stability observer and a pure admittance controller were investigated. The results show an improvement of the VIC with stability observer over the pure admittance controller in several categories. With both the variable impedance controller and the variable impedance controller with stability observer, the participants significantly improved their times in comparison to the pure admittance controller, while maintaining the same level of precision. The workload was also considerably smaller and user comfort increased with both controllers compared to the pure admittance controller.
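
The control idea can be pictured with a one-dimensional admittance whose damping shrinks as the user pushes harder; a minimal sketch with illustrative gains and a force-based adaptation, not the parametrization of the paper:

def admittance_step(f_user, v, m=10.0, d0=60.0, alpha=0.4, d_min=10.0, dt=0.002):
    """One integration step of a 1-DoF admittance m*a + d(f)*v = f_user,
    where damping drops with the magnitude of the user force so that
    accelerating feels light, and rises again for small forces so that
    the robot settles quickly."""
    d = max(d_min, d0 - alpha * abs(f_user))   # force-dependent damping
    a = (f_user - d * v) / m
    return v + a * dt                          # new velocity command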

@INPROCEEDINGS{8560340,
author={F. {Müller} and J. {Janetzky} and U. {Behrnd} and J. {Jäkel} and U. {Thomas}},
booktitle={2018 IEEE 14th International Conference on Automation Science and Engineering (CASE)},
title={User Force-Dependent Variable Impedance Control in Human-Robot Interaction},
year={2018},
pages={1328-1335},
keywords={damping;human-robot interaction;stability;user force-dependent variable impedance control;acceleration-dependent VIC;velocity-dependent VIC;stability observer;pure admittance controller;variable impedance controller;user comfort},
doi={10.1109/COASE.2018.8560340},
ISSN={2161-8070},
month={Aug},
}


Y. Ding, U. Thomas
A New Capacitive Proximity Sensor for Detecting Ground-Isolated Objects
Proceedings of the 1st Workshop on Proximity Perception in Robotics at IROS 2018, Madrid, Spain, pp. 7-8
DOI: 10.5445/IR/1000088104

In this work, we provide a new measurement method for detecting ground-isolated objects with capacitive sensors. Capacitive sensors are used in sensor skins for safety applications in robotics, where they serve as proximity sensors for proximity servoing. The sensors measure the electric current caused by the capacitive coupling and the changing electric field between the sensor electrode and the target. However, these sensors require a return path for the current back to the sensor in order to provide a reference potential; otherwise the targets are electrically floating and not detectable. Our approach avoids this return path by creating a virtual reference potential in the target with differential signals. We provide experimental results to show the effectiveness of our method compared to state-of-the-art measurement methods.

@inproceedings{Ding2018,
author = {Y. Ding and U. Thomas},
title = {A New Capacitive Proximity Sensor for Detecting Ground-Isolated Objects},
booktitle = {Proceedings of the 1st Workshop on Proximity Perception in Robotics at IROS 2018, Madrid, Spain},
doi = {10.5445/IR/1000088104},
pages = {7-8},
year = {2018},
month = {Aug}
}


Y. Ding, H. Zhang, U. Thomas
Capacitive Proximity Sensor Skin for Contactless Material Detection
IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, Spain, 2018
DOI: 10.1109/IROS.2018.8594376

In this paper, we present a method for contactless material detection with capacitive proximity sensing skins. The described sensor element measures proximity with a capacitance-based sensor and absolute distance based on time-of-flight (ToF). Attached to a robot, it provides information about the robot's near-field environment. Our new approach extends the current proximity and distance sensing methods and measures the characteristic impedance spectrum of an object to obtain material properties. By this, we gain material information in addition to the near-field information in a contactless and non-destructive way. This information is important not only for human-machine interaction, but also for grasping and manipulation. We evaluate our method with measurements of numerous different materials and present a solution to differentiate between them.

@INPROCEEDINGS{8594376,
author={Y. {Ding} and H. {Zhang} and U. {Thomas}},
booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={Capacitive Proximity Sensor Skin for Contactless Material Detection},
year={2018},
pages={7179-7184},
keywords={capacitance measurement;capacitive sensors;distance measurement;electric impedance measurement;frequency measurement;signal processing;time-of-flight sensors;capacitance based sensor system;characteristic impedance spectrum measurement;absolute distance based capacitance measurement capabilities;ToF sensors;human-machine-interaction;signal processing;frequency based capacitance measurement capabilities;distance sensing methods;capacitive proximity sensing skins;contactless material detection;Robot sensing systems;Impedance;Frequency measurement;Electrodes;Current measurement;Impedance measurement},
doi={10.1109/IROS.2018.8594376},
ISSN={2153-0858},
month={Oct},
}


H. Kisner, U. Thomas
Efficient Object Pose Estimation in 3D Point Clouds using Sparse Hash-Maps and Point-Pair Features
50th International Symposium on Robotics (ISR 2018), Munich, Germany
Print ISBN: 978-3-8007-4699-6

This paper presents an image processing pipeline for object pose estimation (3D translation and rotation) in 3D point clouds. In comparison to state-of-the-art algorithms, the presented approach uses sparse hash-maps in order to reduce the number of hypotheses and the computational costs as early as possible. The pipeline starts with spectral clustering to estimate object clusters. Then sparse hash-maps of point-pair features are used to generate hypotheses for each object. After that, each hypothesis is evaluated by considering the visual appearance (shape and colour) with a quality function which returns a comparable confidence value for every hypothesis. The pipeline is able to detect partially occluded as well as fully visible objects. The proposed approach is evaluated on openly available 3D datasets.
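
Point-pair features and their use as hash-map keys follow the well-known scheme popularized by Drost et al.; a compact sketch in which the quantization steps and helper names are our choices:

import numpy as np

def ppf(p1, n1, p2, n2, step_d=0.01, step_a=np.deg2rad(12)):
    """Discretized point-pair feature F = (||d||, ang(n1,d), ang(n2,d),
    ang(n1,n2)); quantizing makes it usable as a hash-map key."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    du = d / dist
    ang = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    f = (dist, ang(n1, du), ang(n2, du), ang(n1, n2))
    return (int(f[0] / step_d),) + tuple(int(a / step_a) for a in f[1:])

def build_hash_map(model_pts, model_nrm):
    """Sparse hash map: feature key -> list of model point-pair indices.
    model_pts / model_nrm: N x 3 arrays of points and normals."""
    table = {}
    n = len(model_pts)
    for i in range(n):
        for j in range(n):
            if i != j:
                key = ppf(model_pts[i], model_nrm[i], model_pts[j], model_nrm[j])
                table.setdefault(key, []).append((i, j))
    return table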

@INPROCEEDINGS{8470594,
author={H. {Kisner} and U. {Thomas}},
booktitle={ISR 2018; 50th International Symposium on Robotics},
title={Efficient Object Pose Estimation in 3D Point Clouds using Sparse Hash-Maps and Point-Pair Features},
year={2018},
pages={1-7},
month={June},
}


T. Ebinger, S. Kaden, S. Thomas, R. Andre, N. Amato, U. Thomas
A General and Flexible Search Framework for Disassembly Planning
IEEE International Conference on Robotics and Automation, Brisbane, Australia, 2018
DOI: 10.1109/ICRA.2018.8460483

In this paper we present a new general framework for disassembly sequence planning. This framework is a flexible method for the complete disassembly of an object; versatile in its nature allowing different types of search schemes (exhaustive vs. preemptive), various part separation techniques, and the ability to group parts, or not, into subassemblies to improve the solution efficiency and parallelism. This gives the new ability to approach the disassembly sequence planning problem in a truly hierarchical way. We demonstrate two different search strategies using the framework that can either yield a single solution quickly or provide a spectrum of solutions from which an optimal may be selected. We also develop a method for subassembly identification based on collision information. Our results show improved performance over an iterative motion planning based method for finding a single solution and greater functionality through hierarchical planning and optimal solution search.

@INPROCEEDINGS{8460483,
author={T. {Ebinger} and S. {Kaden} and S. {Thomas} and R. {Andre} and N. M. {Amato} and U. {Thomas}},
booktitle={2018 IEEE International Conference on Robotics and Automation (ICRA)},
title={A General and Flexible Search Framework for Disassembly Planning},
year={2018},
pages={3548-3555},
keywords={assembly planning;design for disassembly;iterative methods;search problems;iterative motion planning;collision information;subassembly identification;preemptive scheme;exhaustive scheme;search strategies;hierarchical approach;disassembly sequence planning;parallelism;part separation techniques;Planning;Trajectory;Measurement;Data structures;Search problems;Learning systems;Containers},
doi={10.1109/ICRA.2018.8460483},
ISSN={2577-087X},
month={May},
}


C. Costa, G. Veiga, A. Sousa, L. Rocha, E Oliveira, H. Cardoso, U. Thomas
Automatic Generation of Disassembly Sequences and Exploded Views from SolidWorks Symbolic Geometric Relationships
IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Portugal, 2018
DOI: 10.1109/ICARSC.2018.8374185

Planning the optimal assembly and disassembly sequence plays a critical role when optimizing the production, maintenance and recycling of products. For tackling this problem, a recursive branch-and-bound algorithm was developed for finding the optimal disassembly plan. It takes into consideration the traveling distance of a robotic end effector along with a cost penalty when it needs to be changed. The precedences and part decoupling directions are automatically computed in the proposed geometric reasoning engine by analyzing the spatial relationships present in SolidWorks assemblies. For accelerating the optimization process, a best-first search algorithm was implemented for quickly finding an initial disassembly sequence solution that is used as an upper bound for pruning most of the non-optimal tree branches. For speeding up the search further, a caching technique was developed for reusing feasible disassembly operations computed on previous search steps, reducing the computational time by more than 18%. As a final stage, our SolidWorks add-in generates an exploded view animation for allowing intuitive analysis of the best solution found. For testing our approach, the disassembly of two starter motors and a single cylinder engine was performed for assessing the capabilities and time requirements of our algorithms.
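
The pruning logic of such a branch-and-bound search can be condensed to a few lines; a schematic Python skeleton in which the geometric feasibility test and the cost model are abstracted into callbacks (precedence reasoning and subassembly handling are omitted):

def disassembly_bnb(parts, feasible_removals, move_cost, best=(float("inf"), None),
                    removed=(), cost=0.0):
    """Recursive branch-and-bound over removal orders: prune any branch
    whose accumulated cost already exceeds the best known plan.
    feasible_removals(removed) -> parts removable next (geometric test);
    move_cost(part, removed) -> cost of removing `part` in this state."""
    if cost >= best[0]:
        return best                        # prune: cannot beat incumbent
    if len(removed) == len(parts):
        return (cost, removed)             # complete plan found
    for part in feasible_removals(removed):
        best = disassembly_bnb(parts, feasible_removals, move_cost, best,
                               removed + (part,), cost + move_cost(part, removed))
    return best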

@INPROCEEDINGS{8374185,
author={C. M. Costa and G. Veiga and A. Sousa and L. Rocha and E. Oliveira and H. L. Cardoso and U. Thomas},
booktitle={2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)},
title={Automatic generation of disassembly sequences and exploded views from SolidWorks symbolic geometric relationships},
year={2018},
pages={211-218},
keywords={assembly planning;computer animation;control engineering computing;design for disassembly;end effectors;optimisation;production engineering computing;recycling;solid modelling;tree searching;disassembly sequences;robotic end effector;Solidworks symbolic geometric relationships;production planning;branch-and-bound algorithm;best-first search algorithm;caching technique;single cylinder engine;Solid modeling;Robots;Three-dimensional displays;Engines;Design automation;Recycling;Planning},
doi={10.1109/ICARSC.2018.8374185},
month={April},
}


H. Kisner, U. Thomas
Segmentation of 3D Point Clouds Using a New Spectral Clustering Algorithm Without a-Priori Knowledge
In 13th International Conference on Computer Vision Theory and Applications, Madeira, Portugal, 27-29 January 2018
DOI: 10.5220/0006549303150322

For many applications, such as pose estimation, it is important to obtain good segmentation results as a preprocessing step. Spectral clustering is an efficient method to achieve high-quality results without a-priori knowledge about the scene. Among other methods, the k-means-based spectral clustering approach and the bi-spectral clustering approach are suitable for 3D point clouds. In this paper, a new method is introduced and its results are compared to these well-known spectral clustering algorithms. When implementing spectral clustering methods, the key issues are: how to define similarity, how to build the graph Laplacian, and how to choose the number of clusters with little or no a-priori knowledge. The suggested spectral clustering approach is described and evaluated on 3D point clouds. The advantage of this approach is that no a-priori knowledge about the number of clusters or the number of objects is necessary. With this approach, high-quality segmentation results are achieved.
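
A common way to pick the cluster count without a-priori knowledge is the eigengap heuristic on the normalized graph Laplacian; a short numpy sketch of this standard heuristic (not necessarily the paper's exact criterion):

import numpy as np

def choose_k_by_eigengap(W, k_max=10):
    """W: symmetric similarity matrix of the point cloud graph.
    Build the normalized Laplacian and return the number of clusters
    suggested by the largest gap among its smallest eigenvalues."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    eigvals = np.sort(np.linalg.eigvalsh(L))[:k_max]
    gaps = np.diff(eigvals)
    return int(np.argmax(gaps)) + 1                    # k with the largest gap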

@conference{visapp18,
author={Hannes Kisner and Ulrike Thomas},
title={Segmentation of 3D Point Clouds using a New Spectral Clustering Algorithm Without a-priori Knowledge},
booktitle={Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP},
year={2018},
pages={315-322},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006549303150322},
isbn={978-989-758-290-5},
}


Y. Ding, J. Bonse, R. Andre, U. Thomas
In-hand grasp pose estimation using particle filters in combination with haptic rendering models
International Journal of Humanoid Robotics, January 2018
DOI: 10.1142/S0219843618500020

Specialized grippers used in industry are often restricted to specific tasks and objects. However, with the development of dexterous grippers, such as humanoid hands, in-hand pose estimation becomes crucial for successful manipulation, since objects change their pose during and after the grasping process. In this paper, we present a gripping system and describe a new pose estimation algorithm based on tactile sensory information in combination with haptic rendering models (HRMs). We use a 3-finger manipulator equipped with tactile force-sensing elements. A particle filter processes the tactile measurements from these sensor elements to estimate the grasp pose of an object. The algorithm evaluates hypotheses of grasp poses by comparing tactile measurements with expected tactile information from CAD-based haptic renderings, where distance values between the sensor and the 3D model are converted to forces. Our approach compares the force distribution instead of absolute forces or distance values of each taxel. The haptic rendering models of the objects allow us to estimate the pose of soft, deformable objects. In comparison to mesh-based approaches, our algorithm reduces the calculation complexity and recognizes ambiguous and geometrically impossible solutions.
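
The correction step of such a filter can be sketched as a bootstrap-filter reweighting; comparing normalized force distributions follows the idea of the abstract, while everything else here (names, the Gaussian noise model, the render_forces callback) is assumed:

import numpy as np

def reweight(particles, weights, f_measured, render_forces, sigma=0.2):
    """One particle-filter correction step for in-hand pose estimation:
    each particle is a candidate grasp pose, and its expected taxel
    forces come from the haptic rendering model."""
    f_m = f_measured / (np.linalg.norm(f_measured) + 1e-12)
    for i, pose in enumerate(particles):
        f_e = render_forces(pose)                     # expected taxel forces
        f_e = f_e / (np.linalg.norm(f_e) + 1e-12)
        weights[i] *= np.exp(-np.sum((f_m - f_e) ** 2) / (2 * sigma ** 2))
    return weights / weights.sum()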

@article{doi:10.1142/S0219843618500020,
author={Ding, Yitao and Bonse, Julian and Andre, Robert and Thomas, Ulrike},
title={In-Hand Grasping Pose Estimation Using Particle Filters in Combination with Haptic Rendering Models},
journal={International Journal of Humanoid Robotics},
volume={15},
number={01},
pages={1850002},
year={2018},
doi={10.1142/S0219843618500020},
URL={https://www.worldscientific.com/doi/abs/10.1142/S0219843618500020},
eprint={https://www.worldscientific.com/doi/pdf/10.1142/S0219843618500020},
}

U. Thomas, R. Andre, O. Lorenz
Kooperierender Autonomer Roboter in der Montage
Herbstkonferenz Gesellschaft für Arbeitswissenschaften e.V., Chemnitz, 2017

n/a

@InProceedings{Thomas:2017,
author = {Thomas, Ulrike and Andre, Robert and Lorenz, Oliver},
title = {Kooperierender {Autonomer} {Roboter} in der {Montage}},
booktitle = {Dokumentation der Herbstkonferenz - Fokus Mensch im Maschinen- und Fahrzeugbau 4.0},
date = {2017},
location = {Dortmund},
}


F. Müller, F. Weiske, J. Jäkel, U. Thomas, J. Suchý
Human-Robot Interaction with Redundant Robots Using Force-Field-Dependent Variable Impedance Control
in proceedings of IEEE International Symposium on Robotics and Intelligent Sensors, Ottawa, Canada, pp. 166-172, 2017, Finalist for Best Paper Award
DOI: 10.1109/IRIS.2017.8250116

This paper introduces an improvement of the assisting force field (AFF) concept for hand-guiding of robotic arms. The AFF guides the user to several reference paths previously learned from experienced users. The AFF concept is extended to anthropomorphic redundant robots, which are used to obtain more flexibility. The redundancy of the robot is used for collision avoidance with the robot's elbow. The motion for collision avoidance should have a low influence on the position and orientation of the end effector; a corresponding algorithm is proposed. Using the AFF, a force-field-dependent variable impedance controller (FF-VIC) is developed for reducing the settling time and improving user comfort. For investigating these proposed developments, a simulation study was performed in which user comfort and control performance were evaluated. Analyzing the simulation results, a suitable parametrization for the FF-VIC can be found which improves user comfort and settling time. Finally, the results were experimentally validated and the functionality of the collision avoidance was shown.

@INPROCEEDINGS{MuellerJaekel2017b,
author={M{\"u}ller, F. and Weiske, F. and J{\"a}kel, J. and Thomas, U. and Such{\'y}, J.},
booktitle={5th IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Ottawa},
title={Human-Robot Interaction with Redundant Robots Using Force-Field-Dependent Variable Impedance},
year={2017},
pages ={166 -- 172},
month={October}}


C. Nissler, Z.-C. Marton, H. Kisner, R. Triebel, U. Thomas
A method for hand-eye and camera-camera calibration in case of limited fields of view
in proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Vancouver, Canada, 2017
DOI: 10.1109/IROS.2017.8206478

In classical robot-camera calibration, a 6D transformation between the camera frame and the local frame of a robot is estimated by first observing a known calibration object from a number of different viewpoints and then finding transformation parameters that minimize the reprojection error. The disadvantage of this is that often not all configurations can be reached by the end-effector, which leads to an inaccurate parameter estimation. Therefore, we propose a more versatile method based on the detection of oriented visual features, in our case AprilTags. From a number of such detections collected during a defined rotation of a joint, we fit a Bingham distribution by maximizing the observation likelihood of the detected orientations. After a tilt and a second rotation, a camera-to-joint transformation can be determined. In experiments with accurate ground truth available, we evaluate our approach in terms of precision and robustness, both for hand-eye/robot-camera and for camera-camera calibration, with classical solutions serving as a baseline.

@INPROCEEDINGS{8206478,
author={C. Nissler and Z. C. Márton and H. Kisner and U. Thomas and R. Triebel},
booktitle={2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={A method for hand-eye and camera-to-camera calibration for limited fields of view},
year={2017},
pages={5868-5873},
keywords={calibration;cameras;end effectors;parameter estimation;robot vision;camera frame;camera-to-camera calibration;camera-to-joint transformation;end-effector;inaccurate parameter estimation;local frame;observation likelihood;oriented visual features;reprojection error;Calibration;Cameras;Robot kinematics;Robot vision systems;Three-dimensional displays},
doi={10.1109/IROS.2017.8206478},
month={Sept},
}


R. Andre, U. Thomas
Error robust and efficient assembly sequence planning with haptic rendering models for rigid and non-rigid assemblies
in proceedings of IEEE International Conference on Robotics and Automation, Singapore, May 29 - June 3, 2017
DOI: 10.1109/ICRA.2017.8262698

This paper presents a new approach for error robust assembly sequence planning which uses haptic rendering models (HRMs) for the representation of assemblies. Our assembly planning system uses HRMs for collision tests along mating vectors, which are generated by stereographic projection. The planner stores the vectors in 2½D distance maps, providing fast and efficient access for the later evaluation, while AND/OR-graphs contain possible sequences. Haptic rendering models facilitate the processing compared to faulty triangle meshes, providing fast and geometry-independent collision tests, as colliding parts can easily be identified and handled accordingly. In addition, part- and material-related properties can be annotated. We present a fast and simple approach for handling approximation inconsistencies, which occur due to discretization errors, based only on the properties of the haptic rendering models. The paper concludes with feasible results for various assemblies and detailed calculation times underlining the effectiveness of our approach.

@INPROCEEDINGS{8262698,
author={R. Andre and U. Thomas},
booktitle={2017 IEEE International Conference on Robotics and Automation (ICRA)},
title={Error robust and efficient assembly sequence planning with haptic rendering models for rigid and non-rigid assemblies},
year={2017},
pages={1-7},
keywords={assembly planning;computational geometry;graph theory;haptic interfaces;mesh generation;production engineering computing;rendering (computer graphics);2 1/2D distance maps;HRMs;approximation inconsistencies;discretization errors;error robust assembly sequence planning;fast geometry independent collision tests;haptic rendering models;material related properties;nonrigid assemblies;rigid assemblies;Haptic interfaces;Planning;Rendering (computer graphics);Robots;Robustness;Solid modeling;Three-dimensional displays},
doi={10.1109/ICRA.2017.8262698},
month={May},
}

F. Müller, J. Jäkel, U. Thomas, J. Suchý
Intuitive Handführung von Robotern als Handlingsysteme
at - Automatisierungstechnik, Vol. 64, No. 10, October 2016
DOI: 10.1515/auto-2016-0057

In hand-guided robot-based handling systems the user controls the movement by a force/moment sensor. Exact movement control of up to six degrees of freedom demands much experience. The article describes an approach which improves the usability by means of virtual force fields. To derive rules for the parametrization of the force fields we analyse the stability of the impedance controlled robot and additionally use simulation and experiments.

@article{MuellerJaekel2016a,
author={M{\"u}ller, F. and J{\"a}kel, J. and Thomas, U. and Such{\'y}, J.},
year = {2016},
title = {{Intuitive Handf{\"u}hrung von Robotern als Handlingsysteme}},
journal={at - Automatisierungstechnik},
volume={64},
number={10},
month={October},
pages={806 -- 815}
}


F. Müller, N. M. Fischer, J. Jäkel, U. Thomas, J. Suchý
User study for hand-guided robots with assisting force fields
1st IFAC Conference on Cyber-Physical & Human-Systems, Vol. 49, No. 32, Florianopolis, Brazil, December 2016
DOI: 10.1016/j.ifacol.2016.12.222

In this paper we present an approach for improving the hand-guiding of robotic arms, called the assisting force field (AFF). The AFF guides the user along certain reference paths, enabling the user to keep the desired position and orientation of the end effector. The reference paths are computed from learning data of experienced users. The AFF is realized by an impedance control of the robot. The main focus of this paper is to investigate how the AFF improves the handling of the robot. To this end, a user study was performed with 42 participants. The experiments were complemented by questionnaires regarding user comfort and task workload. The results of the study show a clear improvement in performance and ergonomic measures when applying the AFF.

@INPROCEEDINGS{MuellerJaekel2016b,
author={M{\"u}ller, F. and Fischer, N. M. and J{\"a}kel, J. and Thomas, U. and Such{\'y}, J.},
booktitle={1st IFAC Conference on Cyber-Physical \& Human-Systems (CPHS)},
title={User study for hand-guided robots with assisting force fields},
pages={246 -- 251},
year={2016},
volume={49},
number={32},
month={December}
}


C. Nissler, S. Büttner, Z. Marton, L. Beckmann, U. Thomas
Evaluation and Improvement of Global Pose Estimation with Multiple AprilTags for Industrial Manipulators
ETFA 2016 - IEEE 21st International Conference on Emerging Technologies and Factory Automation, Berlin, Germany, September 6 - 9, 2016
DOI: 10.1109/ETFA.2016.7733711

Given the growing importance of light-weight production materials, an increase in automation is crucial. This paper presents a prototypical setup to obtain a precise pose estimation for an industrial manipulator in a realistic production environment. We show the achievable precision using only a standard fiducial marker system (AprilTag) and a state-of-the-art camera attached to the robot. The results obtained in a typical working space of a robot cell of about 4.5 m × 4.5 m are in the range of 15 mm to 35 mm compared to ground truth provided by a laser tracker. We then show several methods of reducing this error by applying state-of-the-art optimization techniques, which reduce the error significantly to less than 10 mm compared to the laser tracker ground truth data and at the same time remove existing outliers.
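
One plausible way to realize such an error reduction (a sketch of the general technique, not necessarily the optimization used in the paper) is to stack the corners of all visible AprilTags, whose positions in the cell are assumed known from calibration, and solve a single PnP problem, which minimizes the joint reprojection error over all tags:

import numpy as np
import cv2

def fuse_tag_observations(corners_world, corners_image, K, dist):
    """corners_world: (N,3) 3D tag corner positions, corners_image: (N,2)
    matching detections; returns the camera pose minimizing reprojection error."""
    obj = np.asarray(corners_world, np.float64)
    img = np.asarray(corners_image, np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    # a RANSAC variant additionally rejects outlier detections:
    # ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, dist)
    return rvec, tvec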

@INPROCEEDINGS{7733711,
author={C. Nissler and S. Büttner and Z. C. Marton and L. Beckmann and U. Thomas},
booktitle={2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA)},
title={Evaluation and improvement of global pose estimation with multiple AprilTags for industrial manipulators},
year={2016},
pages={1-8},
keywords={industrial manipulators;pose estimation;production engineering computing;production materials;robot vision;error reduction;global pose estimation;ground truth;industrial manipulators;laser tracker;light-weight production materials;multiple AprilTags;optimization;production environment;robot cell working space;standard fiducial marker system;Cameras;End effectors;Lasers;Pose estimation;Robot vision systems;Service robots},
doi={10.1109/ETFA.2016.7733711},
month={Sept},
}


R. Andre, M. Jokesch, U. Thomas
Reliable Robot Assembly Using Haptic Rendering Models in Combination with Particle Filters
IEEE 12th International Conference on Automation Science and Engineering (CASE), Fort Worth, Texas, USA, August 2016
DOI: 10.1109/COASE.2016.7743532

In this paper we propose a method for reliable and error-tolerant assembly with impedance-controlled robots using a particle-filter-based approach. Our method applies a haptic rendering model obtained from CAD data only, with which we are able to evaluate relative object poses implemented as particles. The real-world force/torque sensor values are compared to the model-based haptic rendering information to correct pose uncertainties during assembly. We make use of the KUKA LBR iiwa's intrinsic sensors to measure the position and joint torques representing the real-world state. The particle filter is required to compensate for pose errors which exceed the assembly clearance during assembly. We show the usefulness of our approach with simulated and real-world peg-in-hole tasks.
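
A minimal sketch of the measurement update (assumed interfaces, not the authors' code): each particle is a candidate relative pose whose weight grows when the wrench predicted by the haptic rendering model matches the measured force/torque values:

import numpy as np

def update_particles(particles, weights, measured_wrench, predict_wrench, sigma=2.0):
    """predict_wrench(pose) -> 6D wrench from the haptic rendering model
    (assumed callable); sigma is an assumed measurement-noise scale."""
    for i, pose in enumerate(particles):
        err = np.linalg.norm(predict_wrench(pose) - measured_wrench)
        weights[i] *= np.exp(-0.5 * (err / sigma) ** 2)  # Gaussian likelihood
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):  # low effective size
        idx = np.random.choice(len(particles), len(particles), p=weights)
        particles[:] = [particles[k] for k in idx]         # resample
        weights[:] = 1.0 / len(particles)
    return particles, weights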

@INPROCEEDINGS{7743532,
author={R. Andre and M. Jokesch and U. Thomas},
booktitle={2016 IEEE International Conference on Automation Science and Engineering (CASE)},
title={Reliable robot assembly using haptic rendering models in combination with particle filters},
year={2016},
pages={1134-1139},
keywords={CAD;force measurement;force sensors;haptic interfaces;particle filtering (numerical methods);pose estimation;rendering (computer graphics);robotic assembly;torque measurement;CAD data;KUKA LBR iiwa intrinsic sensors;assembly clearance;error tolerant assembly;force torque sensor values;haptic rendering models;impedance controlled robots;joint torque measurement;object pose evaluation;particle filters;peg-in-hole tasks;pose error compensation;pose uncertainties;position measurement;reliable robot assembly;Force;Haptic interfaces;Robot sensing systems;Solid modeling;Surface treatment;Torque},
doi={10.1109/COASE.2016.7743532},
month={Aug},
}


K. Nottensteiner, T. Bodenmüller, M. Kassecker, M. A. Roa, A. Stemmer, T. Stouraitis, D. Seidel, U. Thomas
A Complete Automated Chain For Flexible Assembly using Recognition, Planning and Sensor-Based Execution
Proceedings of 47th International Symposium on Robotics, Munich, June 2016
Print ISBN: 978-3-8007-4231-8

This paper presents a fully automated system for the assembly of aluminum profile constructions. This high degree of automation of the entire process chain requires novel strategies in recognition, planning and execution. The system includes an assembly sequence planner integrated with a grasp planning tool, a knowledge-based reasoning method, skill-based code generation, and an error-tolerant execution engine. The modular structure of the system allows its adaptation to new products, which can prove especially useful for SMEs producing small lot sizes. The system is robust and stable, as demonstrated with the repeated execution of different geometric assemblies.

@INPROCEEDINGS{7559140,
author={K. Nottensteiner and T. Bodenmueller and M. Kassecker and M. A. Roa and A. Stemmer and T. Stouraitis and D. Seidel and U. Thomas},
booktitle={Proceedings of ISR 2016: 47th International Symposium on Robotics},
title={A Complete Automated Chain for Flexible Assembly using Recognition, Planning and Sensor-Based Execution},
year={2016},
pages={1-8},
month={June},
}


R. Andre, U. Thomas
Anytime Optimal Assembly Sequence Planning
Proceedings of 47th International Symposium on Robotics, Munich, June 2016
Print ISBN: 978-3-8007-4231-8

This paper describes an anytime optimization approach for assembly sequence planning. The well-known AND/OR-graph is applied to represent feasible assembly sequences. An optimal sequence is searched for on the basis of this graph. Since multiple cost functions exist for each assembly step, the first plan found might not be cost-optimal. The anytime approach therefore allows finding the globally cost-optimal sequence if the complete graph can be continuously parsed. In addition, the returned solution can be re-evaluated at a later time, allowing further optimizations in the case of changing production environments. The approach has been evaluated with different CAD models, each with varying graph sizes and assembly step costs.
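
The anytime behaviour can be pictured with the following sketch (assumed data structures, not the paper's implementation): the graph is expanded best-first, every complete sequence found becomes the new incumbent, and the incumbent is returned whenever planning is interrupted:

import heapq

def anytime_best_sequence(root, expand, is_complete, interrupted):
    """expand(node) -> [(step_cost, child)]; interrupted() -> bool."""
    best, best_cost = None, float("inf")
    frontier, tie = [(0.0, 0, root)], 1
    while frontier and not interrupted():
        cost, _, node = heapq.heappop(frontier)
        if cost >= best_cost:
            continue                      # cannot improve the incumbent
        if is_complete(node):
            best, best_cost = node, cost  # new incumbent, keep searching
            continue
        for step_cost, child in expand(node):
            heapq.heappush(frontier, (cost + step_cost, tie, child))
            tie += 1
    return best, best_cost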

@INPROCEEDINGS{7559139,
author={R. Andre and U. Thomas},
booktitle={Proceedings of ISR 2016: 47th International Symposium on Robotics},
title={Anytime Assembly Sequence Planning},
year={2016},
pages={1-8},
month={June},
}


A. Kolker, M. Jokesch, U. Thomas
An Optical Tactile Sensor for Measuring Force Values and Directions for Several Soft and Rigid Contacts
Proceedings of 47th International Symposium on Robotics, Munich, June 2016
Print ISBN: 978-3-8007-4231-8

Using robots to manipulate soft or fragile objects requires highly sensitive tactile sensors. For many applications, besides the force magnitude, the direction is also important. This paper extends already available ideas and implementations of 3D tactile sensors. Our sensor can detect a wide range of forces, the direction of forces and shifting forces along the sensor surface for several contact points simultaneously. This combination of capabilities in a single sensor is unique. The underlying concept is a pressure-to-light system. A camera provides images of a structure which generates geometric shapes on the images according to the externally acting forces. The shapes are well suited for image processing and serve as a reference for the forces. After describing our approach in detail, we show experiments for evaluation, e.g. applying the sensor to grasp objects carefully. Finally, future work is discussed, where we plan to bring the sensor to anthropomorphic robot hands.
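
The readout could look roughly like the following OpenCV sketch (the threshold, the linear area-to-force mapping and all names are assumptions, not the calibrated model): bright shapes in the camera image grow with normal force and shift along the surface under shear:

import numpy as np
import cv2

def read_contacts(gray_image, rest_centers, gain=0.05):
    """Returns a list of (force_magnitude, shift_vector) per contact shape;
    rest_centers are the shape centers of the unloaded sensor."""
    _, mask = cv2.threshold(gray_image, 60, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contacts = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] < 1e-6:
            continue
        center = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
        rest = min(rest_centers, key=lambda r: np.linalg.norm(r - center))
        contacts.append((gain * cv2.contourArea(c),  # assumed linear calibration
                         center - rest))             # shift encodes direction
    return contacts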

@INPROCEEDINGS{7559098,
author={A. Kolker and M. Jokesch and U. Thomas},
booktitle={Proceedings of ISR 2016: 47th International Symposium on Robotics},
title={An Optical Tactile Sensor for Measuring Force Values and Directions for Several Soft and Rigid Contacts},
year={2016},
pages={1-6},
month={June},
}

M. Jokesch, J. Suchý, A. Winkler, A. Fross, U. Thomas
Generic Algorithm for Peg-In-Hole Assembly Tasks for Pin-Alignments with Impedance Controlled Robots
ROBOT2015 - Second Iberian Conference on Robotics, Special Session on Future Industrial Robotic Systems, Lisbon, Portugal, 2015
DOI: 10.1007/978-3-319-27149-1_9

In this paper, a generic algorithm for peg-in-hole assembly tasks is suggested. It is applied in the project GINKO, where the aim is to connect electric vehicles with charging stations automatically. This paper explains an algorithm applicable to peg-in-hole tasks by means of Cartesian impedance-controlled robots. The plugging task is a specialized peg-in-hole task in which 7 pins have to be aligned simultaneously and the peg and the hole have asymmetric shapes. In addition, significant forces are required for complete insertion. The initial position is inaccurately estimated by a vision system. Hence, there are translational and rotational uncertainties between the plug, carried by the robot, and the socket, situated on the electric car. To compensate for these errors, three different steps of Cartesian impedance control are performed. To verify our approach, we evaluated the algorithm from many different start positions.
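
As a very rough sketch of such a staged strategy (the robot interface, phase names and all gains below are hypothetical, and the paper's three impedance steps may differ), the controller could lower its Cartesian stiffness for alignment and raise it again for the final insertion:

# hypothetical robot interface, for illustration only
PHASES = [
    dict(name="approach", k_trans=500.0,  k_rot=30.0, push_n=5.0),
    dict(name="align",    k_trans=150.0,  k_rot=10.0, push_n=15.0),  # contact forces align the pins
    dict(name="insert",   k_trans=1500.0, k_rot=50.0, push_n=60.0),  # overcome insertion resistance
]

def run_plugging(robot):
    for phase in PHASES:
        robot.set_cartesian_impedance(phase["k_trans"], phase["k_rot"])
        robot.push_along_tool_z(phase["push_n"])
        robot.move_until_contact_or_goal()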

@Inbook{Jokesch2016,
author="Jokesch, Michael and Such{\'y}, Jozef and Winkler, Alexander and Fross, Andr{\'e} and Thomas, Ulrike",
editor="Reis, Lu{\'i}s Paulo and Moreira, Ant{\'o}nio Paulo and Lima, Pedro U. and Montano, Luis and Mu{\~{n}}oz-Martinez, Victor",
title="Generic Algorithm for Peg-In-Hole Assembly Tasks for Pin Alignments with Impedance Controlled Robots ",
bookTitle="Robot 2015: Second Iberian Robotics Conference: Advances in Robotics, Volume 2",
year="2016",
publisher="Springer International Publishing",
address="Cham",
pages="105--117",
isbn="978-3-319-27149-1",
doi="10.1007/978-3-319-27149-1_9",
url="http://dx.doi.org/10.1007/978-3-319-27149-1_9"
}


A. Butting, B. Rumpe, C. Schulze, U. Thomas, A. Wortmann
Modeling Reusable, Platform-Independent Robot Assembly Processes
Workshop on Modeling in Robotics / Workshop on Domain-Specific Languages for Robotics, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, 2015
arXiv: 1601.02452

Smart factories that allow flexible production of highly individualized goods require flexible robots, usable in efficient assembly lines. Compliant robots can work safely in environments shared with domain experts, who have to be able to program such robots easily for arbitrary tasks. We propose a new domain-specific language and toolchain for robot assembly tasks with compliant manipulators. With the LightRocks toolchain, assembly tasks are modeled on different levels of abstraction, allowing a separation of concerns between domain experts and robotics experts: externally provided, platform-independent assembly plans are instantiated by the domain experts using models of processes and tasks. Tasks are composed of skills, which combine platform-specific action models provided by robotics experts. Thereby the toolchain supports flexible production and re-use of modeling artifacts for various assembly processes.
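
The layering can be pictured with a small Python sketch (this is not LightRocks syntax, merely a hypothetical rendering of the described separation of concerns):

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Skill:                                  # provided by robotics experts
    name: str
    actions: List[Callable[[], None]]         # platform-specific action models

@dataclass
class Task:                                   # composed by domain experts
    name: str
    skills: List[Skill] = field(default_factory=list)

    def run(self):
        for skill in self.skills:
            for action in skill.actions:
                action()                      # executed on the concrete platform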

@article{DBLP:journals/corr/ButtingRSTW16,
author = {Arvid Butting and Bernhard Rumpe and Christoph Schulze and Ulrike Thomas and Andreas Wortmann},
title = {Modeling Reusable, Platform-Independent Robot Assembly Processes},
journal = {CoRR},
volume = {abs/1601.02452},
year = {2016},
url = {http://arxiv.org/abs/1601.02452},
archivePrefix = {arXiv},
eprint = {1601.02452},
timestamp = {Mon, 13 Aug 2018 16:47:16 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/ButtingRSTW16},
bibsource = {dblp computer science bibliography, https://dblp.org}
}


U. Thomas, T. Stouraitis, M. A. Roa
Flexible Assembly through Integrated Assembly Sequence Planning and Grasp Planning
In Proceedings of the IEEE International Conference on Automation Science and Engineering (CASE), Gothenburg, Sweden, 2015
DOI: 10.1109/CoASE.2015.7294142

This paper describes an assembly sequence planner able to generate feasible sequences for building a desired assembly. The assembly planner takes geometrical, physical and mechanical constraints into account. Moreover, the planner considers the feasibility of grasps during the planning process as well as work-cell-specific constraints. The approach uses AND/OR-graphs for planning. The generation of such graphs is implemented with a specialized graph cut algorithm that employs a dynamically changing priority queue. These graphs are further evaluated by considering the feasibility of grasping sub-assemblies and individual parts during the process. The grasp planner and the sequence planner are generic, hence the proposed solution can be applied to arbitrary assemblies of rigid parts. The system has been evaluated with different configurations obtained by combining standard item profiles.
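
A conceptual sketch of the generation step (assumed data structures, not the paper's algorithm in detail): each assembly is split recursively, with candidate binary cuts drawn from a priority queue so that cheap feasible cuts are expanded first:

import heapq

def build_and_or_graph(assembly, candidate_cuts, feasible, graph):
    """assembly: hashable part set (e.g. frozenset of part ids);
    candidate_cuts(a) -> [(cost, (sub1, sub2))]; feasible checks geometric,
    physical and grasp constraints; graph maps assembly -> OR alternatives."""
    if len(assembly) == 1 or assembly in graph:
        return
    queue = [(cost, k, cut) for k, (cost, cut) in enumerate(candidate_cuts(assembly))]
    heapq.heapify(queue)                                  # cheapest cut first
    graph[assembly] = []
    while queue:
        cost, _, (sub1, sub2) = heapq.heappop(queue)
        if not feasible(sub1, sub2):
            continue
        graph[assembly].append((cost, sub1, sub2))        # one OR alternative
        build_and_or_graph(sub1, candidate_cuts, feasible, graph)
        build_and_or_graph(sub2, candidate_cuts, feasible, graph)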

@INPROCEEDINGS{7294142,
author={U. Thomas and T. Stouraitis and M. A. Roa},
booktitle={2015 IEEE International Conference on Automation Science and Engineering (CASE)},
title={Flexible assembly through integrated assembly sequence planning and grasp planning},
year={2015},
pages={586-592},
keywords={assembly planning;computer aided production planning;graph colouring;shear modulus;AND-graphs;OR-graphs;arbitrary assemblies;assembly sequence planner;dynamically changing priority queue;geometrical constraints;graph generation;grasp planning;mechanical constraints;physical constraints;planning process;rigid parts;specialized graph cut algorithm;work-cell specific constraints;Assembly;Databases;Fasteners;Force;Grasping;Planning;Robots},
doi={10.1109/CoASE.2015.7294142},
ISSN={2161-8070},
month={Aug},
}


K. Nilsson, B. Rumpe, U. Thomas, A. Wortmann
1st Workshop on Model-Driven Knowledge Engineering for Improved Software Modularity in Robotics and Automation
MDKE 2015, European Robotics Forum 2015

In domestic service robotic applications, complex tasks have to be fulfilled in close collaboration with humans. We try to integrate qualitative reasoning and human-robot interaction by bridging the gap in human and robot representations and by enabling the seamless integration of human notions in the robot's high-level control. The developed methods can also be used to abstract away low-level details of specific robot platforms. These low-level details often pose a problem in re-using software components and applying the same programs and methods in different contexts. When combined with methods for self-maintenance developed earlier, these abstractions also allow for seamlessly increasing the robustness and resilience of different robotic systems with only little effort.

@ARTICLE{Nilsson2015,
author = "Klas Nilsson and Bernhard Rumpe and Ulrike Thomas and Andreas Wortmann",
title = "1st Workshop on Model-Driven Knowledge Engineering for Improved Software Modularity in Robotics and Automation (MDKE)",
journal = "RWTH Aachen, European Robotics Forum, Vienna (Austria)",
year = "2015",
month = "March",
volume = "RWTH-2015-01968",
pages = "1-20" }