Robotics and Human Machine Interaction
Publications

Our publications are listed below by year.

2019

F. Müller, J. Jäkel, J. Suchý, U. Thomas
Stability of Nonlinear Time-Delay Systems Describing Human-Robot Interaction
Published in IEEE/ASME Transactions on Mechatronics, 2019
DOI: 10.1109/TMECH.2019.2939907

In this paper, we present sufficient conditions for the stability analysis of a stationary point for a special type of nonlinear time-delay systems. These conditions are suitable for analyzing systems describing physical human-robot interaction (pHRI). For this stability analysis, a new human model consisting of passive and active elements is introduced and validated. The stability conditions describe parametrization bounds for the human model and an impedance controller. The results of this paper are compared to stability conditions based on passivity, approximated time delays, and to numerical approaches. The comparison shows that our conditions are more general than the passivity condition of Colgate [1]; this includes the consideration of negative stiffness and nonlinear virtual environments. As an example, a pHRI including a nonlinear virtual environment with a polynomial structure is introduced and successfully analyzed. These theoretical results could be used in the design of robust controllers and stability observers in pHRI.
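
The class of models involved can be sketched in generic form; the symbols below are illustrative and not the paper's exact conditions:

```latex
% Robot under impedance control coupled with a human whose active force
% acts with reaction delay \tau (generic pHRI sketch):
M\ddot{x}(t) + D\dot{x}(t) + Kx(t) = F_h(t), \qquad
F_h(t) = -k_h x(t) - d_h \dot{x}(t) + F_a\bigl(t-\tau\bigr)
% Sufficient stability conditions for such time-delay systems are
% typically derived from a Lyapunov--Krasovskii functional:
V(x_t) = x(t)^{\top} P\, x(t) + \int_{t-\tau}^{t} x(s)^{\top} Q\, x(s)\,\mathrm{d}s,
\qquad P, Q \succ 0,\ \dot{V} < 0 \ \Rightarrow\ \text{asymptotic stability}
```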

@ARTICLE{8851257,
author={F. {Müller} and J. {Jäkel} and J. {Suchý} and U. {Thomas}},
journal={IEEE/ASME Transactions on Mechatronics},
title={Stability of Nonlinear Time-Delay Systems Describing Human-Robot Interaction},
year={2019},
volume={},
number={},
pages={1-1},
keywords={physical human-robot interaction;nonlinear time-delay systems;Lyapunov-Krasovskii functional;impedance control},
doi={10.1109/TMECH.2019.2939907},
ISSN={1083-4435 (print), 1941-014X (online)},
}


Y. Ding, F. Wilhelm, L. Faulhammer, U. Thomas
With Proximity Servoing towards Safe Human-Robot-Interaction
Accepted at the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2019

In this paper, we present a serial kinematic robot manipulator equipped with multimodal proximity sensing modules, not only at the TCP but also distributed over the robot's surface. The combination of close-distance proximity information from capacitive and time-of-flight (ToF) measurements allows the robot to perform safe, reflex-like and collision-free movements in a changing environment, e.g. where humans and robots share the same workspace. Our methods rely on proximity data and combine different strategies to calculate optimal collision avoidance vectors which are fed directly into the motion controller (proximity servoing). The strategies are prioritized: first to avoid collisions, and second to constrain the movement within the null space if kinematic redundancy is available. The movement is then optimized for fastest avoidance, best manipulability, and smallest end-effector velocity deviation. We compare our methods with common force-field-based methods.
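
The null-space part of such an avoidance strategy can be sketched as follows; the function and the toy numbers are illustrative, not the paper's controller:

```python
import numpy as np

def redundancy_resolution(J, v_ee, q_dot_avoid):
    """Track a desired end-effector velocity v_ee while pushing the arm
    away from obstacles inside the Jacobian null space (kinematic
    redundancy). J: 6xN task Jacobian, q_dot_avoid: joint velocity
    suggested by the proximity sensors."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J      # null-space projector
    return J_pinv @ v_ee + N @ q_dot_avoid

# toy 7-DoF example: the avoidance term must not disturb the end effector
J = np.random.default_rng(0).standard_normal((6, 7))
q_dot = redundancy_resolution(J, np.zeros(6), np.ones(7))
print(np.allclose(J @ q_dot, np.zeros(6)))  # True: avoidance stays in the null space
```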

n/a


H. Zhu, U. Thomas
A New Design of a Variable Stiffness Joint
Accepted at the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 2019

Soft or compliant robots are key to safe interaction between humans and robots. To protect humans and robots from impact and to adapt to different tasks, researchers have developed many variable stiffness joints, which include springs and can adjust their stiffness between soft and rigid. The lever and the cam disc are two popular mechanisms that have been applied in many variable stiffness joints. This paper presents a new variable stiffness joint that combines both mechanisms in order to retain their advantages while overcoming their disadvantages. The paper introduces the mechanical design and the model of the new joint. Its functionality is demonstrated with a prototype, and experimental results are reported.

n/a


C. M. Costa, G. Veiga, A. Sousa, L. Rocha, A. A. Sousa, R. Rodrigues, U. Thomas
Modeling of video projectors in OpenGL for implementing a spatial augmented reality teaching system for assembly operations
Published in IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 2019
DOI: 10.1109/ICARSC.2019.8733617

Teaching complex assembly and maintenance skills to human operators usually requires extensive reading and the help of tutors. In order to reduce the training period and avoid the need for human supervision, an immersive teaching system using spatial augmented reality was developed for guiding inexperienced operators. The system provides textual and video instructions for each task while also allowing the operator to navigate between the teaching steps and control the video playback using a bare-hands natural interaction interface that is projected into the workspace. Moreover, to help the operator during the final validation and inspection phase, the system projects the expected 3D outline of the final product. The proposed teaching system was tested with the assembly of a starter motor and proved to be more intuitive than reading traditional user manuals. This proof-of-concept use case served to validate the fundamental technologies and approaches proposed to achieve an intuitive and accurate augmented reality teaching application. Among the main challenges were the proper modeling and calibration of the sensing and projection hardware along with the 6 DoF pose estimation of objects for achieving precise overlap between the 3D rendered content and the physical world. Likewise, the conceptualization of the information flow and how it can be conveyed on demand was of critical importance for ensuring a smooth and intuitive experience for the operator.
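
Modeling a calibrated projector (or camera) in OpenGL comes down to mapping pinhole intrinsics to a projection matrix. The sketch below shows one standard form of this mapping; sign conventions vary between renderers, and this is not code from the paper:

```python
import numpy as np

def gl_projection(fx, fy, cx, cy, w, h, near, far):
    """OpenGL-style clip-space projection matrix built from pinhole
    intrinsics (fx, fy, cx, cy) and image size (w, h) -- a common way
    to model a calibrated camera/projector in OpenGL."""
    return np.array([
        [2 * fx / w, 0.0,        1.0 - 2 * cx / w,              0.0],
        [0.0,        2 * fy / h, 2 * cy / h - 1.0,              0.0],
        [0.0,        0.0,        -(far + near) / (far - near),
                                 -2 * far * near / (far - near)],
        [0.0,        0.0,        -1.0,                          0.0]])

# sanity check: a point on the optical axis at z = -near maps to clip depth -1
P = gl_projection(1000, 1000, 320, 240, 640, 480, 0.1, 100.0)
p = P @ np.array([0.0, 0.0, -0.1, 1.0])
print(round(p[2] / p[3], 6))  # -1.0
```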

@INPROCEEDINGS{8733617,
author={C. M. {Costa} and G. {Veiga} and A. {Sousa} and L. {Rocha} and A. A. {Sousa} and R. {Rodrigues} and U. {Thomas}},
booktitle={2019 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)},
title={Modeling of video projectors in OpenGL for implementing a spatial augmented reality teaching system for assembly operations},
year={2019},
volume={},
number={},
pages={1-8},
keywords={augmented reality;computer aided instruction;pose estimation;rendering (computer graphics);teaching;intuitive reality teaching application;projection hardware;video projectors;spatial augmented reality teaching system;assembly operations;teaching complex assembly;maintenance skills;human operators;extensive reading;training period;human supervision;immersive teaching system;inexperienced operators;textual instructions;video instructions;teaching steps;video playback;bare hands natural interaction interface;inspection phase;expected 3D outline;augmented reality teaching application;3D rendered content;6 DoF pose estimation;Mathematical model;Three-dimensional displays;Education;Cameras;Matrix converters;Solid modeling;Robots},
doi={10.1109/ICARSC.2019.8733617},
ISSN={},
month={April},}


A. C. Perzylo, B. Kahl, M. Rickert, N. Somani, C. Lehmann, A. Kuss, S. Profanter, A. B. Beck, M. Haage, M. Rath Hansen, M. T. Nibe, M. A. Roa, O. Sornmo, S. Gestegard Robertz, U. Thomas, G. Veiga, E. A. Topp, I. Kessler, M. Danzer
SMErobotics - Smart Robots for Flexible Manufacturing
Published in IEEE Robotics & Automation Magazine, vol. 26, no. 1, March 2019
DOI: 10.1109/MRA.2018.2879747

Current market demands require an increasingly agile production environment throughout many manufacturing branches. Traditional automation systems and industrial robots, on the other hand, are often too inflexible to provide an economically viable business case for companies with rapidly changing products. The introduction of cognitive abilities into robotic and automation systems is, therefore, a necessary step toward lean changeover and seamless human–robot collaboration. In this article, we introduce the European Union (EU)-funded research project SMErobotics, which focuses on facilitating the use of robot systems in small and medium-sized enterprises (SMEs). We analyze open challenges for this target audience and develop multiple efficient technologies to address related issues. Real-world demonstrators of several end users and from multiple application domains show the impact these smart robots can have on SMEs. This article intends to give a broad overview of the research conducted in SMErobotics. Specific details of individual topics are provided through references to our previous publications.

@ARTICLE{8601323,
author={A. {Perzylo} and M. {Rickert} and B. {Kahl} and N. {Somani} and C. {Lehmann} and A. {Kuss} and S. {Profanter} and A. B. {Beck} and M. {Haage} and M. {Rath Hansen} and M. T. {Nibe} and M. A. {Roa} and O. {Sornmo} and S. {Gestegard Robertz} and U. {Thomas} and G. {Veiga} and E. A. {Topp} and I. {Kessler} and M. {Danzer}},
journal={IEEE Robotics Automation Magazine},
title={SMErobotics: Smart Robots for Flexible Manufacturing},
year={2019},
volume={26},
number={1},
pages={78-90},
keywords={agile manufacturing;control engineering computing;flexible manufacturing systems;human-robot interaction;industrial robots;robotic-automation systems;agile production environment;human-robot collaboration;industrial robots;flexible manufacturing;smart robots;Service robots;Automation;Production;Investment;Human-robot interaction},
doi={10.1109/MRA.2018.2879747},
ISSN={1070-9932},
month={March},}


H. Kisner, T. Schreiter, U. Thomas
Learning to Predict 2D Object Instances by Applying Model-Based 6D Pose Estimation
28th International Conference on Robotics in Alpe-Adria-Danube Region, 2019, 2nd Best Student Paper Award
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 980)
DOI: 10.1007/978-3-030-19648-6_57

Object detection and pose estimation are still very challenging tasks for robots. One common problem for many processing pipelines is the large amount of object data, e.g. it is often not known beforehand how many objects and which object classes can occur in a robot's surrounding environment. Model-based object detection pipelines in particular often focus on a few object classes. Deep learning algorithms, in contrast, can handle large amounts of data and easily distinguish between different object classes; their drawback is the high amount of training data needed. Both approaches thus have different advantages and disadvantages, and this paper presents a new way to combine them in order to estimate 6D poses for a larger number of object classes.

@InProceedings{10.1007/978-3-030-19648-6_57,
author="Kisner, Hannes and Schreiter, Tim and Thomas, Ulrike",
editor="Berns, Karsten and G{\"o}rges, Daniel",
title="Learning to Predict 2D Object Instances by Applying Model-Based 6D Pose Estimation",
booktitle="Advances in Service and Industrial Robotics",
year="2020",
publisher="Springer International Publishing",
address="Cham",
pages="496--504",
isbn="978-3-030-19648-6"
}


R. Ramalingame, A. Lakshmanan, F. Müller, U. Thomas, O. Kanoun
Highly sensitive capacitive pressure sensors for robotic applications based on carbon nanotubes and PDMS polymer nanocomposite
Journal of Sensors and Sensor Systems, vol. 8, pp. 87-94, February 2019
DOI: 10.5194/jsss-8-87-2019

Flexible tactile pressure sensor arrays based on multiwalled carbon nanotubes (MWCNT) and polydimethylsiloxane (PDMS) are gaining importance, especially in the field of robotics, because of the high demand for stable, flexible and sensitive sensors. Some existing concepts of pressure sensors based on nanocomposites exhibit complicated fabrication techniques yet better sensitivity than conventional pressure sensors. In this article, we propose a nanocomposite-based pressure sensor that exhibits a high sensitivity of 25 % N^-1, starting with a minimum load range of 0-0.01 N, and 46.8 % N^-1 in the range of 0-1 N. The maximum pressure sensing range of the sensor is approximately 570 kPa. A concept of a 4x3 tactile sensor array, which could be integrated into robot fingers, is demonstrated. The high sensitivity of the pressure sensor enables precision grasping, with the ability to sense small objects with a size of 5 mm and a weight of 1 g. Another application of the pressure sensor is demonstrated as gait analysis for humanoid robots. The pressure sensor is integrated under the foot of a humanoid robot to monitor and evaluate the gait of the robot, which provides insights for optimizing the robot's self-balancing algorithm in order to maintain the posture while walking.

@article{...,
author = {Rajarajan Ramalingame and Amoog Lakshmanan and Florian M{\"u}ller and Ulrike Thomas and Olfa Kanoun},
title = {Highly sensitive capacitive pressure sensors for robotic applications based on carbon nanotubes and PDMS polymer nanocomposite},
journal = {Journal of Sensors and Sensor Systems},
volume = {8},
pages = {87--94},
year = {2019},
month = {February},
doi = {10.5194/jsss-8-87-2019},
URL = {https://doi.org/10.5194/jsss-8-87-2019}
}


O. Lorenz, U. Thomas
Real Time Eye Gaze Tracking System using CNN-based Facial Features for Human Attention Measurement
In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, 598-606, 2019, Prague, Czech Republic
ISBN: 978-989-758-354-4

Understanding human attention in various interactive scenarios is an important task for human-robot collaboration. Human communication with robots includes intuitive nonverbal behaviour such as body postures and gestures. Multiple communication channels can be used to obtain an understandable interaction between humans and robots. Usually, humans communicate in the direction of eye gaze and head orientation. In this paper, a new tracking system based on two cascaded CNNs is presented for eye gaze and head orientation tracking; it enables robots to measure the willingness of humans to interact via eye contact and eye gaze orientation. Based on the two consecutively cascaded CNNs, facial features are recognised, first in the face and then in the regions of the eyes. These features are detected by a geometrical method and deliver the orientation of the head to determine the eye gaze direction. Our method allows us to distinguish between front faces and side faces. With a consecutive approach for each condition, the eye gaze is also detected in extreme situations. The applied CNNs have been trained on many different datasets and annotations, which improves the reliability and accuracy of the tracking system introduced here and outperforms previous detection algorithms. Our system is applied to commonly used RGB-D images and implemented on a GPU to achieve real-time performance. The evaluation shows that our approach operates accurately in challenging dynamic environments.

@inproceedings{10.5220/0007565305980606,
author={Oliver Lorenz and Ulrike Thomas},
title={Real Time Eye Gaze Tracking System using CNN-based Facial Features for Human Attention Measurement},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP},
year={2019},
pages={598-606},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007565305980606},
isbn={978-989-758-354-4},
}

2018

F. Müller
Assistierende virtuelle Kraftfelder bei handgeführten Robotern
Dissertation, December 2018
ISBN: 978-3-8440-6424-7

Hand-guided heavy-load robots are used in industry to support workers when lifting heavy loads. This technology belongs to the broader field of human-robot interaction (HRI), in which human and robot share a common workspace. The goal of this thesis is to make the operation of such robots simpler and more intuitive for the user. To this end, assistive force fields were developed, whose algorithm consists of a learning phase and an application phase. In the learning phase, the motion data of experienced workers are recorded for a specific work task. From these data, a virtual force field is generated in the application phase that guides the user onto the paths of the experienced workers. Three different assistive force fields were developed: the tunnel-shaped virtual force field (TKF), the assistive virtual force field (AKF), and the AKF for anthropomorphic robot arms. The TKF acts on the robot's end effector and is suitable for all robot types. The AKF is an extension of the TKF and affects both the position and the orientation of the end effector. This force field is used to support the users of the industrial heavy-load robots mentioned above. To make this force field available for the lightweight robots widely used in HRI, it was adapted for use with anthropomorphic robot arms. In addition, the force-field-dependent variable impedance control (KF-VIR) was introduced. Because of the nonlinear feedback of the force field and the time-delayed feedback of the human caused by the reaction time, a stability analysis of the overall system, consisting of robot, human, and force field, is necessary. For the human model, various approaches with active and passive parameters as well as a reaction time/dead time were presented and integrated into the overall system. The resulting overall systems were tested for stability with different methods. Two of these methods were developed in this thesis based on the Lyapunov-Krasovskii functional and serve for the analytical investigation of polynomial time-delay systems. To also cover use cases with multiple users, the models and methods were adapted accordingly and likewise investigated. Among other results, these investigations yielded conservative analytical stability bounds in the parameter space. With the help of simulation studies and subsequent experimental validation, various parametrization settings of the AKF and the KF-VIR were examined, from which parametrization guidelines for future users were derived. To investigate whether the AKF improves the operation of a hand-guided robot controlled in joint space, a user study under laboratory conditions with 42 participants and a practice-oriented user study with 24 participants were conducted. In the trials with the AKF, the participants' error count was reduced by half on average. Furthermore, the results regarding task duration, workload, and user comfort also showed significant improvements with large effect sizes.
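
The idea of a tunnel-shaped virtual force field that pulls the user back onto demonstrated paths can be sketched as follows; the gain and the nearest-point attraction rule are illustrative, not the TKF's exact formulation:

```python
import numpy as np

def tunnel_force(x, path, k=50.0):
    """Tunnel-shaped virtual force field (TKF-style sketch): attract the
    end-effector position x toward the nearest point of a path recorded
    from experienced workers. Stiffness gain k is illustrative."""
    d = np.linalg.norm(path - x, axis=1)     # distance to every path sample
    nearest = path[np.argmin(d)]             # closest demonstrated point
    return k * (nearest - x)                 # spring force toward the tunnel

# demonstrated path along the x-axis; user has drifted 0.2 m sideways
path = np.array([[t, 0.0, 0.0] for t in np.linspace(0, 1, 101)])
print(tunnel_force(np.array([0.5, 0.2, 0.0]), path))
```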

@phdthesis{...,
author = {Florian M{\"u}ller},
title = {Assistierende virtuelle Kraftfelder bei handgeführten Robotern},
school={Technische Universität Chemnitz},
year = {2018},
publisher = {Shaker Verlag},
isbn= {978-3-8440-6424-7 },
month={December},
type={Dissertation}
}


F. Müller, J. Jäkel, U. Thomas
Stability Analysis for a Passive/Active Human Model in Physical Human-Robot Interaction with Multiple Users
International Journal of Control, Oct. 2018
DOI: 10.1080/00207179.2018.1541508

Human-robot-human interaction (HRH), understood as a physical human-robot interaction (pHRI) with two humans, can be applied when lifting heavy, bulky and large-sized objects with a robot. In combination with a virtual environment, this system can become nonlinear. In this article, we prove sufficient stability conditions for a stationary point of such a particular type of nonlinear multiple time-delay system. In addition, a new human model consisting of a passive and an active part is introduced and validated on experimental data. The derived stability conditions are applied to a single-user pHRI system including this human model. The results indicate that these conditions are very conservative. Then four approaches for the analysis of a multi-user pHRI are introduced and compared with each other. Finally, a potential HRH application with a nonlinear environment in the form of a potential force field is presented.

@article{doi:10.1080/00207179.2018.1541508,
author = {Florian M{\"u}ller and Jens J{\"a}kel and Ulrike Thomas},
title = {Stability analysis for a passive/active human model in physical human–robot interaction with multiple users},
journal = {International Journal of Control},
volume = {0},
number = {0},
pages = {1-16},
year = {2018},
publisher = {Taylor & Francis},
doi = {10.1080/00207179.2018.1541508},
URL = {https://doi.org/10.1080/00207179.2018.1541508}
}


H. Zhu, U. Thomas
Ein elastisches Gelenk
Patent application, German Patent Office: 10 2018 008 378.1, 22 October 2018

n/a

n/a


Y. Ding, U. Thomas
Einrichtung zur Bestimmung der elektrischen Kapazität
Patent application, German Patent Office: 10 2018 003 268.0, 19 April 2018

n/a

n/a


F. Müller, J. Janetzky, U. Behrnd, J. Jäkel, U. Thomas
User Force-Dependent Variable Impedance Control in Human-Robot-Interaction
IEEE International Conference on Automation Science and Engineering (CASE), Munich, Germany, August 2018, pp. 1328-1335
DOI: 10.1109/COASE.2018.8560340

In this paper a novel type of variable impedance control (VIC) is presented. The controller adjusts the impedance depending on the force input of the user. In this way it is easy to accelerate and decelerate. Additionally, for high velocity the damping decreases and vice versa. This approach could be interpreted as a combination of acceleration-dependent VIC and velocity-dependent VIC. To guarantee stability, a stability observer is introduced. The observer is based on a model which describes a combined passive and active behavior of the user. In addition, we present a user study with 45 participants where the differences between VIC, VIC with stability observer and a pure admittance controller were investigated. The results show an improvement of the VIC with stability observer in relation to the pure admittance controller among different categories. With both the variable impedance controller and the variable impedance controller with stability observer, the participants significantly improved their times in comparison to the pure admittance controller, while they maintained the same level of precision. Also the workload was considerably smaller and the user comfort increased with both controllers compared to the usage of the pure admittance controller.
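
A force-dependent variable admittance law of this flavor can be sketched as below; the update rule and all gains are illustrative stand-ins, not the paper's parametrization:

```python
def variable_admittance_step(x_dot, f_user, dt, m=5.0, d0=40.0, alpha=0.8):
    """One Euler step of a hedged variable-admittance law:
    m * x_ddot + d(f) * x_dot = f_user, with damping that drops as the
    user pushes harder, d(f) = d0 / (1 + alpha * |f|), so accelerating
    and decelerating both feel easy. Gains m, d0, alpha are made up."""
    d = d0 / (1.0 + alpha * abs(f_user))
    x_ddot = (f_user - d * x_dot) / m
    return x_dot + dt * x_ddot

# pushing with a constant 10 N: velocity converges toward f / d(f) = 2.25
v = 0.0
for _ in range(2000):                 # 10 s at dt = 5 ms
    v = variable_admittance_step(v, f_user=10.0, dt=0.005)
print(round(v, 3))  # 2.25
```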

@INPROCEEDINGS{MuellerJanetzky2018a,
author={M{\"u}ller, F. and Janetzky, J. and Behrnd, U. and J{\"a}kel, J. and Thomas, U.},
booktitle={14th IEEE International Conference on Automation Science and Engineering (CASE)},
title={User Force-Dependent Variable Impedance Control in Human-Robot-Interaction},
year={2018},
pages={1328-1335},
doi={10.1109/COASE.2018.8560340},
month={August}}


C.M. Costa, G. Veiga, A. Sousa, L. Rocha, U. Thomas
Automatic Planning of Disassembly Sequences: 3D Geometric Reasoning with Information Extraction from Natural Language Instruction Manuals
Submitted to Robotics and Autonomous Systems, 2018

n/a

n/a


Y. Ding, U. Thomas
A New Capacitive Proximity Sensor for Detecting Ground-Isolated Objects
Proceedings of the 1st Workshop on Proximity Perception in Robotics at IROS 2018, Madrid, Spain, pp. 7-8
DOI: 10.5445/IR/1000088104

In this work, we provide a new measurement method for detecting ground-isolated objects with capacitive sensors. Capacitive sensors find use in sensor skins for safety applications in robotics, where they serve as proximity sensors for proximity servoing. The sensors measure the electric current caused by the capacitive coupling and changing electric field between the sensor electrode and the target. However, these sensors require a return path for the current back to the sensor in order to provide a reference potential, otherwise the targets are electrically floating and not detectable. Our approach allows us to avoid this return path by creating a virtual reference potential in the target with differential signals. We provide experimental results to show the effectiveness of our method compared to state-of-the-art measurement methods.

@inproceedings{Ding2018,
author = {Y. Ding and U. Thomas},
title = {A New Capacitive Proximity Sensor for Detecting Ground-Isolated Objects},
booktitle = {Proceedings of the 1st Workshop on Proximity Perception in Robotics at IROS 2018, Madrid, Spain},
doi = {10.5445/IR/1000088104},
pages = {7-8},
year = {2018},
month = {Aug}
}


Y. Ding, H. Zhang, U. Thomas
Capacitive Proximity Sensor Skin for Contactless Material Detection
IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, Spain, 2018
DOI: 10.1109/IROS.2018.8594376

In this paper, we present a method for contactless material detection with capacitive proximity sensing skins. The described sensor element measures proximity with a capacitance-based sensor and absolute distance based on time-of-flight (ToF). Attached to a robot, it provides information about the robot's near-field environment. Our new approach extends current proximity and distance sensing methods and measures the characteristic impedance spectrum of an object to obtain material properties. In this way, we gain material information in addition to the near-field information in a contactless and non-destructive way. This information is important not only for human-machine interaction, but also for grasping and manipulation. We evaluate our method with measurements of numerous different materials and present a solution to differentiate between them.
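
The final differentiation step can be approximated by nearest-neighbour matching of impedance spectra; the reference spectra below are made up for illustration and are not the paper's measurements:

```python
import numpy as np

def classify_material(spectrum, reference):
    """Assign a measured impedance spectrum to the reference material
    with the smallest Euclidean distance -- a simple stand-in for the
    differentiation step described in the abstract."""
    return min(reference, key=lambda m: np.linalg.norm(reference[m] - spectrum))

# hypothetical normalized impedance magnitudes at four frequencies
reference = {
    'wood':  np.array([1.0, 0.8, 0.5, 0.3]),
    'metal': np.array([0.2, 0.2, 0.1, 0.1]),
}
print(classify_material(np.array([0.95, 0.75, 0.55, 0.25]), reference))  # wood
```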

@INPROCEEDINGS{Ding18,
author={Y. Ding and H. Zhang and U. Thomas},
booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems},
title={Capacitive Proximity Sensor Skin for Contactless Material Detection},
doi={10.1109/IROS.2018.8594376},
month={Oct}, }


H. Kisner, U. Thomas
Efficient Object Pose Estimation in 3D Point Clouds using Sparse Hash-Maps and Point-Pair Features
50th International Symposium on Robotics (ISR 2018), Munich, Germany
Print ISBN: 978-3-8007-4699-6

This paper presents an image processing pipeline for object pose estimation (3D translation and rotation) in 3D point clouds. In comparison to state-of-the-art algorithms, the presented approach uses sparse hash-maps in order to reduce the number of hypotheses and the computational costs as early as possible. The pipeline first applies spectral clustering to estimate object clusters. Then the sparse hash-maps of point-pair features are used to generate hypotheses for each object. After that, each hypothesis is evaluated by considering the visual appearance (shape and colour) with a quality function that returns a comparable confidence value for every hypothesis. The pipeline is able to detect partially occluded as well as fully visible objects. The proposed approach is evaluated on publicly available 3D datasets.
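
Indexing model point pairs by a quantized point-pair feature in a sparse hash map works roughly as sketched below; the quantization step and the two-point toy model are illustrative:

```python
import numpy as np
from collections import defaultdict

def ppf(p1, n1, p2, n2, step=0.05):
    """Quantized point-pair feature (distance plus three angles between
    normals and the connecting line) used as the hash-map key."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    dn = d / dist
    f = (dist,
         np.arccos(np.clip(np.dot(n1, dn), -1, 1)),
         np.arccos(np.clip(np.dot(n2, dn), -1, 1)),
         np.arccos(np.clip(np.dot(n1, n2), -1, 1)))
    return tuple(int(x / step) for x in f)

# build a sparse hash map from model pairs, then look up a scene pair
model = defaultdict(list)
pts = [(np.array([0.0, 0, 0]), np.array([0.0, 0, 1])),
       (np.array([0.1, 0, 0]), np.array([1.0, 0, 0]))]
for i, (pi, ni) in enumerate(pts):
    for j, (pj, nj) in enumerate(pts):
        if i != j:
            model[ppf(pi, ni, pj, nj)].append((i, j))

scene_key = ppf(pts[0][0], pts[0][1], pts[1][0], pts[1][1])
print(model[scene_key])  # [(0, 1)] -> candidate pairs become pose hypotheses
```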

@INPROCEEDINGS{Kisn1806:Efficient,
AUTHOR="Hannes Kisner and Ulrike Thomas",
TITLE="Efficient Object Pose Estimation in {3D} Point Clouds using Sparse {Hash-Maps} and {Point-Pair} Features",
BOOKTITLE="50th International Symposium on Robotics (ISR 2018)",
ADDRESS="Munich, Germany",
DAYS=19,
MONTH=jun,
YEAR=2018, }


T. Ebinger, S. Kaden, S. Thomas, R. Andre, N. Amato, U. Thomas
A General and Flexible Search Framework for Disassembly Planning
IEEE International Conference on Robotics and Automation, Brisbane, Australia, 2018
DOI: 10.1109/ICRA.2018.8460483

In this paper we present a new general framework for disassembly sequence planning. This framework is a flexible method for the complete disassembly of an object; versatile in its nature allowing different types of search schemes (exhaustive vs. preemptive), various part separation techniques, and the ability to group parts, or not, into subassemblies to improve the solution efficiency and parallelism. This gives the new ability to approach the disassembly sequence planning problem in a truly hierarchical way. We demonstrate two different search strategies using the framework that can either yield a single solution quickly or provide a spectrum of solutions from which an optimal may be selected. We also develop a method for subassembly identification based on collision information. Our results show improved performance over an iterative motion planning based method for finding a single solution and greater functionality through hierarchical planning and optimal solution search.

@INPROCEEDINGS{ebinger2018general,
title={A General and Flexible Search Framework for Disassembly Planning},
author={Ebinger, Timothy and Kaden, Sascha and Thomas, Shawna and Andre, Robert and Thomas, Ulrike and Amato, Nancy M},
booktitle={Proc. of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia},
year={2018}}


C. M. Costa, G. Veiga, A. Sousa, L. Rocha, E. Oliveira, H. L. Cardoso, U. Thomas
Automatic Generation of Disassembly Sequences and Exploded Views from SolidWorks Symbolic Geometric Relationships
IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Portugal, 2018
DOI: 10.1109/ICARSC.2018.8374185

Planning the optimal assembly and disassembly sequence plays a critical role when optimizing the production, maintenance and recycling of products. For tackling this problem, a recursive branch-and-bound algorithm was developed for finding the optimal disassembly plan. It takes into consideration the traveling distance of a robotic end effector along with a cost penalty when it needs to be changed. The precedences and part decoupling directions are automatically computed in the proposed geometric reasoning engine by analyzing the spatial relationships present in SolidWorks assemblies. For accelerating the optimization process, a best-first search algorithm was implemented for quickly finding an initial disassembly sequence solution that is used as an upper bound for pruning most of the non-optimal tree branches. For speeding up the search further, a caching technique was developed for reusing feasible disassembly operations computed on previous search steps, reducing the computational time by more than 18%. As a final stage, our SolidWorks add-in generates an exploded view animation for allowing intuitive analysis of the best solution found. For testing our approach, the disassembly of two starter motors and a single cylinder engine was performed for assessing the capabilities and time requirements of our algorithms.
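
Branch-and-bound over removal orders with pruning against an incumbent solution can be sketched as follows; the travel cost, the toy part set, and the hand-written precedences stand in for the paper's geometric reasoning engine:

```python
def best_disassembly(parts, pos, precedes):
    """Branch-and-bound over removal orders. Cost is the end-effector
    travel between consecutive parts (1D positions for simplicity);
    `precedes[a]` lists parts that must be removed before `a`.
    Any branch whose cost reaches the incumbent bound is pruned."""
    best = [float('inf'), None]          # [bound, order]

    def search(removed, order, here, c):
        if c >= best[0]:
            return                        # prune against the incumbent
        if len(order) == len(parts):
            best[:] = [c, order]          # new incumbent (tighter bound)
            return
        for p in parts:
            if p not in removed and all(q in removed for q in precedes.get(p, ())):
                search(removed | {p}, order + [p], pos[p], c + abs(pos[p] - here))

    search(frozenset(), [], 0.0, 0.0)
    return best[1], best[0]

# both bolts must come off before the cover
order, cost = best_disassembly(
    {'bolt1', 'bolt2', 'cover'},
    {'bolt1': 1.0, 'bolt2': 5.0, 'cover': 3.0},
    {'cover': ['bolt1', 'bolt2']})
print(order, cost)  # ['bolt1', 'bolt2', 'cover'] 7.0
```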

@INPROCEEDINGS{8374185,
author={C. M. Costa and G. Veiga and A. Sousa and L. Rocha and E. Oliveira and H. L. Cardoso and U. Thomas},
booktitle={2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)},
title={Automatic generation of disassembly sequences and exploded views from {SolidWorks} symbolic geometric relationships},
year={2018},
volume={},
number={},
pages={211-218},
keywords={assembly planning;computer animation;control engineering computing;design for disassembly;end effectors;optimisation;production engineering computing;recycling;solid modelling;tree searching;disassembly sequences;robotic end effector;Solidworks symbolic geometric relationships;production planning;branch-and-bound algorithm;best-first search algorithm;caching technique;single cylinder engine;Solid modeling;Robots;Three-dimensional displays;Engines;Design automation;Recycling;Planning},
doi={10.1109/ICARSC.2018.8374185},
ISSN={},
month={April},}


H. Kisner, U. Thomas
Segmentation of 3D Point Clouds Using a New Spectral Clustering Algorithm Without a-Priori Knowledge
In Proceedings of the 13th International Conference on Computer Vision Theory and Applications, Madeira, Portugal, 27-29 January 2018
DOI: 10.5220/0006549303150322

For many applications, such as pose estimation, it is important to obtain good segmentation results as a preprocessing step. Spectral clustering is an efficient method to achieve high-quality results without a-priori knowledge about the scene. Among the available methods, the k-means based spectral clustering approach and the bi-spectral clustering approach are suitable for 3D point clouds. In this paper, a new method is introduced and its results are compared to these well-known spectral clustering algorithms. When implementing spectral clustering methods, the key issues are: how to define similarity, how to build the graph Laplacian, and how to choose the number of clusters with little or no a-priori knowledge. The suggested spectral clustering approach is described and evaluated on 3D point clouds. The advantage of this approach is that no a-priori knowledge about the scene is necessary; in particular, neither the number of clusters nor the number of objects needs to be known. With this approach, high-quality segmentation results are achieved.

@conference{visapp18,
author={Hannes Kisner and Ulrike Thomas},
title={Segmentation of 3D Point Clouds using a New Spectral Clustering Algorithm Without a-priori Knowledge},
booktitle={Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP},
year={2018},
pages={315-322},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006549303150322},
isbn={978-989-758-290-5}, }
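As a rough illustration of choosing the cluster count without a-priori knowledge, the sketch below applies the common eigengap heuristic to a normalized graph Laplacian; the paper's own selection criterion is not reproduced here, and the synthetic point cloud, similarity width, and assignment step are all invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated synthetic 3D blobs stand in for a scene to segment.
cloud = np.vstack([
    rng.normal((0.0, 0.0, 0.0), 0.2, (30, 3)),
    rng.normal((5.0, 5.0, 5.0), 0.2, (30, 3)),
])

def spectral_segment(points, sigma=1.0, k_max=6):
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))       # Gaussian similarity graph
    np.fill_diagonal(w, 0.0)
    d = w.sum(axis=1)
    # symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2}
    l_sym = np.eye(len(points)) - w / np.sqrt(np.outer(d, d))
    evals, evecs = np.linalg.eigh(l_sym)
    # eigengap heuristic: k = position of the largest gap in the spectrum
    k = int(np.argmax(np.diff(evals[:k_max]))) + 1
    emb = evecs[:, :k]
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    # crude assignment that suffices for two well-separated blobs (k == 2):
    # label each row by the nearer of two mutually distant embedding rows
    seed_a = emb[0]
    seed_b = emb[np.argmax(((emb - seed_a) ** 2).sum(axis=1))]
    labels = (((emb - seed_b) ** 2).sum(axis=1)
              < ((emb - seed_a) ** 2).sum(axis=1)).astype(int)
    return k, labels

k, labels = spectral_segment(cloud)
```

The seed-row assignment is used instead of a sign split because, for a nearly disconnected graph, the two smallest eigenvectors may be an arbitrary rotation of the cluster indicators; rows within one cluster still coincide in the embedding.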


Y. Ding, J. Bonse, R. Andre, U. Thomas
In-hand grasp pose estimation using particle filters in combination with haptic rendering models
International Journal of Humanoid Robotics, January 2018
DOI: 10.1142/S0219843618500020

Specialized grippers used in the industry are often restricted to specific tasks and objects. However, with the development of dexterous grippers, such as humanoid hands, in-hand pose estimation becomes crucial for successful manipulations, since objects will change their pose during and after the grasping process. In this paper, we present a gripping system and describe a new pose estimation algorithm based on tactile sensory information in combination with haptic rendering models (HRMs). We use a 3-finger manipulator equipped with tactile force sensing elements. A particle filter processes the tactile measurements from these sensor elements to estimate the grasp pose of an object. The algorithm evaluates hypotheses of grasp poses by comparing tactile measurements and expected tactile information from CAD-based haptic renderings, where distance values between the sensor and 3D-model are converted to forces. Our approach compares the force distribution instead of absolute forces or distance values of each taxel. The haptic rendering models of the objects allow us to estimate the pose of soft deformable objects. In comparison to mesh-based approaches, our algorithm reduces the calculation complexity and recognizes ambiguous and geometrically impossible solutions.

@article{doi:10.1142/S0219843618500020,
author={Ding, Yitao and Bonse, Julian and Andre, Robert and Thomas, Ulrike},
title={In-Hand Grasping Pose Estimation Using Particle Filters in Combination with Haptic Rendering Models},
journal={International Journal of Humanoid Robotics},
volume={15},
number={01},
pages={1850002},
year={2018},
doi={10.1142/S0219843618500020},
URL={https://www.worldscientific.com/doi/abs/10.1142/S0219843618500020},
eprint={https://www.worldscientific.com/doi/pdf/10.1142/S0219843618500020},
}
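The core loop of such a filter — scoring pose hypotheses by comparing normalized force distributions from a rendering model against the tactile measurement — can be sketched in one dimension. The taxel geometry, the tent-shaped force profile, and all noise parameters below are invented; the stand-in `render` function replaces the paper's CAD-based haptic rendering.

```python
import numpy as np

TAXELS = np.arange(10.0)   # taxel positions along one finger (invented)
TRUE_OFFSET = 3.2          # ground-truth in-hand object offset
rng = np.random.default_rng(1)

def render(offset):
    """Stand-in for haptic rendering: tent-shaped contact force profile."""
    return np.maximum(0.0, 1.5 - np.abs(TAXELS - offset))

def normalized(f):
    s = f.sum()
    return f / s if s > 0 else f

measurement = render(TRUE_OFFSET) + rng.normal(0.0, 0.03, TAXELS.size)

particles = rng.uniform(0.0, 10.0, 500)     # pose hypotheses
for _ in range(5):
    # score hypotheses by comparing force *distributions*, not absolute
    # forces, mirroring the idea stated in the abstract
    errs = np.array([((normalized(render(p)) - normalized(measurement)) ** 2).sum()
                     for p in particles])
    weights = np.exp(-errs / 0.002)
    weights /= weights.sum()
    # multinomial resampling plus a little diffusion noise
    particles = particles[rng.choice(particles.size, particles.size, p=weights)]
    particles = particles + rng.normal(0.0, 0.05, particles.size)

estimate = float(particles.mean())
```

Comparing normalized distributions makes the score insensitive to the overall grip force, which is one stated motivation for the approach.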

2017

U.Thomas, R. Andre, O. Lorenz
Kooperierender Autonomer Roboter in der Montage
Herbstkonferenz Gesellschaft für Arbeitswissenschaften e.V., Chemnitz, 2017

n/a

@InProceedings{Thomas:2017,
author = {Thomas, Ulrike and Andre, Robert and Lorenz, Oliver},
title = {Kooperierender {Autonomer} {Roboter} in der {Montage}},
booktitle = {Dokumentation der Herbstkonferenz - Fokus Mensch im Maschinen- und Fahrzeugbau 4.0},
date = {2017},
location = {Dortmund},
}


F. Müller, F. Weiske, J. Jäkel, U. Thomas, J. Suchý
Human-Robot Interaction with Redundant Robots Using Force-Field-Dependent Variable Impedance Control
In Proceedings of the IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Ottawa, Canada, pp. 166-172, 2017, Finalist for Best Paper Award
DOI: 10.1109/IRIS.2017.8250116

This paper introduces an improvement of the assisting force field (AFF) concept for hand-guiding of robotic arms. The AFF guides the user to several reference paths previously learned from experienced users. The AFF concept is extended to anthropomorphic redundant robots, which are used to obtain more flexibility. The redundancy of the robot is used for collision avoidance with the robot's elbow. The motion for collision avoidance should have a low influence on position and orientation of the end effector. A corresponding algorithm is proposed. Using AFF, a force-field-dependent variable impedance controller (FF-VIC) is developed for reducing the settling time and improving the user comfort. For investigating these proposed developments a simulation study was performed in which user comfort and control performance were evaluated. Analyzing the simulation results, a suitable parametrization for the FF-VIC can be found which improves user comfort and settling time. Finally, the results were experimentally validated and the functionality of the collision avoidance is shown.

@INPROCEEDINGS{MuellerJaekel2017b,
author={M{\"u}ller, F. and Weiske, F. and J{\"a}kel, J. and Thomas, U. and Such{\'y}, J.},
booktitle={5th IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Ottawa},
title={Human-Robot Interaction with Redundant Robots Using Force-Field-Dependent Variable Impedance},
year={2017},
pages ={166 -- 172},
month={October}}


C. Nissler, Z.-C. Marton, H. Kisner, R. Triebel, U. Thomas
A method for hand-eye and camera-camera calibration in case of limited fields of view
In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, 2017
DOI: 10.1109/IROS.2017.8206478

In classical robot-camera calibration, a 6D transformation between the camera frame and the local frame of a robot is estimated by first observing a known calibration object from a number of different view points and then finding transformation parameters that minimize the reprojection error. The disadvantage of this is that often not all configurations can be reached by the end-effector, which leads to inaccurate parameter estimation. Therefore, we propose a more versatile method based on the detection of oriented visual features, in our case AprilTags. From a number of such detections collected during a defined rotation of a joint, we fit a Bingham distribution by maximizing the observation likelihood of the detected orientations. After a tilt and a second rotation, a camera-to-joint transformation can be determined. In experiments with accurate ground truth available, we evaluate our approach in terms of precision and robustness, both for hand-eye/robot-camera and for camera-camera calibration, with classical solutions serving as a baseline.

@INPROCEEDINGS{8206478,
author={C. Nissler and Z. C. Márton and H. Kisner and U. Thomas and R. Triebel},
booktitle={2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={A method for hand-eye and camera-to-camera calibration for limited fields of view},
year={2017},
volume={},
number={},
pages={5868-5873},
keywords={calibration;cameras;end effectors;parameter estimation;robot vision;camera frame;camera-to-camera calibration;camera-to-joint transformation;end-effector;inaccurate parameter estimation;local frame;observation likelihood;oriented visual features;reprojection error;Calibration;Cameras;Robot kinematics;Robot vision systems;Three-dimensional displays},
doi={10.1109/IROS.2017.8206478},
ISSN={},
month={Sept},}


R. Andre, U. Thomas
Error robust efficient assembly sequence planning with haptic rendering models for rigid and non-rigid assemblies
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, May 29 - June 3, 2017
DOI: 10.1109/ICRA.2017.8262698

This paper presents a new approach for error robust assembly sequence planning which uses haptic rendering models (HRMs) for the representation of assemblies. Our assembly planning system uses HRMs for collision tests along mating vectors, which are generated by stereographic projection. The planner stores the vectors in 2 1/2 D distance maps, providing fast and efficient access for the later evaluation, while AND/OR-graphs contain possible sequences. Compared to faulty triangle meshes, haptic rendering models facilitate processing by providing fast, geometry-independent collision tests, as colliding parts can easily be identified and handled accordingly. In addition, part and material related properties can be annotated. We present a fast and simple approach for handling approximation inconsistencies, which occur due to discretization errors, based only on the properties of the haptic rendering models. The paper concludes with feasible results for various assemblies and detailed calculation times, underlining the effectiveness of our approach.

@INPROCEEDINGS{8262698,
author={R. Andre and U. Thomas},
booktitle={2017 IEEE International Conference on Robotics and Automation (ICRA)},
title={Error robust and efficient assembly sequence planning with haptic rendering models for rigid and non-rigid assemblies},
year={2017},
volume={},
number={},
pages={1-7},
keywords={assembly planning;computational geometry;graph theory;haptic interfaces;mesh generation;production engineering computing;rendering (computer graphics);2 1/2D distance maps;HRMs;approximation inconsistencies;discretization errors;error robust assembly sequence planning;fast geometry independent collision tests;haptic rendering models;material related properties;nonrigid assemblies;rigid assemblies;Haptic interfaces;Planning;Rendering (computer graphics);Robots;Robustness;Solid modeling;Three-dimensional displays},
doi={10.1109/ICRA.2017.8262698},
ISSN={},
month={May},}

2016

F. Müller, J. Jäkel, U. Thomas, J. Suchý
Intuitive Handführung von Robotern als Handlingsysteme
at - Automatisierungstechnik, Vol. 64, No. 10, October 2016
DOI: 10.1515/auto-2016-0057

In hand-guided robot-based handling systems the user controls the movement via a force/moment sensor. Exact movement control of up to six degrees of freedom requires considerable experience. The article describes an approach which improves usability by means of virtual force fields. To derive rules for the parametrization of the force fields we analyse the stability of the impedance controlled robot and additionally use simulation and experiments.

@article{MuellerJaekel2016a,
author={M{\"u}ller, F. and J{\"a}kel, J. and Thomas, U. and Such{\'y}, J.},
year = {2016},
title = {{Intuitive Handf{\"u}hrung von Robotern als Handlingsysteme}},
journal={at - Automatisierungstechnik},
volume={64},
number={10},
month={October},
pages={806 -- 815}
}
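A one-dimensional sketch of the underlying idea — an impedance/admittance-controlled robot with an additional virtual force field pulling the guided end effector toward a reference — is given below. The mass, damping, and field-stiffness values are illustrative; the parametrization rules derived in the paper are not reproduced here.

```python
# 1D admittance loop: virtual mass M and damping D respond to the sum of
# the hand force (here zero) and the virtual field force pulling toward
# the reference. All parameter values are invented for illustration.
M, D, K_FIELD = 2.0, 15.0, 40.0
DT = 0.001
x_ref = 0.0                 # reference path, reduced here to a point

x, v = 0.3, 0.0             # hand-guided pose starts 0.3 m off the reference
for _ in range(5000):       # 5 s of simulated guiding
    f_hand = 0.0            # the user's force/moment input would enter here
    f_field = -K_FIELD * (x - x_ref)
    a = (f_hand + f_field - D * v) / M   # M*a + D*v = f_hand + f_field
    v += a * DT
    x += v * DT
```

With these values the loop is well damped (damping ratio about 0.84), so the guided pose settles onto the reference within a couple of seconds; the stability analysis in the article concerns exactly how far such gains may be pushed.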


F. Müller, N. M. Fischer, J. Jäkel, U. Thomas, J. Suchý
User study for hand-guided robots with assisting force fields
1st IFAC Conference on Cyber-Physical & Human-Systems, Vol. 49, No. 32, Florianopolis, Brazil, December 2016
DOI: 10.1016/j.ifacol.2016.12.222

In this paper we present an approach for improving the hand-guiding of robotic arms, called assisting force field (AFF). The AFF guides the user to certain reference paths, enabling the user to keep the desired position and orientation of the end effector. The reference paths are computed using learning data from experienced users. The AFF is realized by impedance control of the robot. The main focus of this paper is to investigate how the AFF improves the handling of the robot. For this, a user study was performed with 42 participants. The experiments were complemented by questionnaires regarding user comfort and task workload. The results of the study show a clear improvement in performance and the ergonomic quantities when applying the AFF.

@INPROCEEDINGS{MuellerJaekel2016b,
author={M{\"u}ller, F. and Fischer, N. M. and J{\"a}kel, J. and Thomas, U. and Such{\'y}, J.},
booktitle={1st Conference on Cyber-Physical \& Human-Systems (CPHS) },
title={User study for hand-guided robots with assisting force fields},
pages={246 -- 251},
year={2016},
volume={49},
number={32},
month={December}
}


C. Nissler, Z. Marton, U. Thomas
Evaluation and Improvement of Global Pose Estimation with Multiple AprilTags for Industrial Manipulators
ETFA 2016 - IEEE International Conference on Emerging Technologies and Factory Automation, Berlin, Germany, September 6-9, 2016
DOI: 10.1109/ETFA.2016.7733711

Given the advancing importance of light-weight production materials, an increase in automation is crucial. This paper presents a prototypical setup to obtain a precise pose estimate for an industrial manipulator in a realistic production environment. We show the achievable precision using only a standard fiducial marker system (AprilTag) and a state-of-the-art camera attached to the robot. The results obtained in a typical working space of a robot cell of about 4.5m × 4.5m are in the range of 15mm to 35mm compared to ground truth provided by a laser tracker. We then show several methods of reducing this error by applying state-of-the-art optimization techniques, which reduce the error significantly to less than 10mm compared to the laser tracker ground truth data and at the same time remove existing outliers.

@INPROCEEDINGS{7733711,
author={C. Nissler and S. Büttner and Z. C. Marton and L. Beckmann and U. Thomas},
booktitle={2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA)},
title={Evaluation and improvement of global pose estimation with multiple AprilTags for industrial manipulators},
year={2016},
pages={1-8},
keywords={industrial manipulators;pose estimation;production engineering computing;production materials;robot vision;error reduction;global pose estimation;ground truth;industrial manipulators;laser tracker;light-weight production materials;multiple AprilTags;optimization;production environment;robot cell working space;standard fiducial marker system;Cameras;End effectors;Lasers;Pose estimation;Robot vision systems;Service robots},
doi={10.1109/ETFA.2016.7733711},
month={Sept},
}


R. Andre, M. Jokesch, U. Thomas
Reliable Robot Assembly Using Haptic Rendering Models in Combination with Particle Filters
IEEE 12th Conference on Automation Science and Engineering (CASE), Fort Worth, Texas, USA, August 2016
DOI: 10.1109/COASE.2016.7743532

In this paper we propose a method for reliable and error tolerant assembly with impedance controlled robots, using a particle filter based approach. Our method applies a haptic rendering model obtained from CAD data only, with which we are able to evaluate relative object poses implemented as particles. The real-world force/torque sensor values are compared to the model-based haptic rendering information to correct pose uncertainties during assembly. We make use of the KUKA LBR iiwa's intrinsic sensors to measure the position and joint torques representing the real-world state. The particle filter is required to compensate pose errors which exceed the assembly clearance. We show the usefulness of our approach by simulation and real-world peg-in-hole tasks.

@INPROCEEDINGS{7743532,
author={R. Andre and M. Jokesch and U. Thomas},
booktitle={2016 IEEE International Conference on Automation Science and Engineering (CASE)},
title={Reliable robot assembly using haptic rendering models in combination with particle filters},
year={2016},
pages={1134-1139},
keywords={CAD;force measurement;force sensors;haptic interfaces;particle filtering (numerical methods);pose estimation;rendering (computer graphics);robotic assembly;torque measurement;CAD data;KUKA LBR iiwa intrinsic sensors;assembly clearance;error tolerant assembly;force torque sensor values;haptic rendering models;impedance controlled robots;joint torque measurement;object pose evaluation;particle filters;peg-in-hole tasks;pose error compensation;pose uncertainties;position measurement;reliable robot assembly;Force;Haptic interfaces;Robot sensing systems;Solid modeling;Surface treatment;Torque},
doi={10.1109/COASE.2016.7743532},
month={Aug},
}


K. Nottensteiner, T. Bodenmüller, M. Kassecker, M. Roa, D. Seidel, A. Stemmer, U. Thomas
A Complete Automated Chain For Flexible Assembly using Recognition, Planning and Sensor-Based Execution
Print ISBN: 978-3-8007-4231-8
Proceedings of 47th International Symposium on Robotics, Munich, June 2016

This paper presents a fully automated system for the assembly of aluminum profile constructions. This high degree of automation of the entire process chain requires novel strategies in recognition, planning and execution. The system includes an assembly sequence planner integrated with a grasp planning tool, a knowledge-based reasoning method, skill-based code generation, and an error tolerant execution engine. The modular structure of the system allows its adaptation to new products, which can prove especially useful for SMEs producing small lot sizes. The system is robust and stable, as demonstrated with the repeated execution of different geometric assemblies.

@INPROCEEDINGS{7559140,
author={K. Nottensteiner and T. Bodenmueller and M. Kassecker and M. A. Roa and A. Stemmer and T. Stouraitis and D. Seidel and U. Thomas},
booktitle={Proceedings of ISR 2016: 47th International Symposium on Robotics},
title={A Complete Automated Chain for Flexible Assembly using Recognition, Planning and Sensor-Based Execution},
year={2016},
pages={1-8},
month={June},
}


R. Andre, U. Thomas
Anytime Optimal Assembly Sequence Planning
Proceedings of 47th International Symposium on Robotics, Munich, June 2016
Print ISBN: 978-3-8007-4231-8

This paper describes an anytime optimization approach for assembly sequence planning. The well-known AND/OR-graph is applied to represent feasible assembly sequences. An optimal sequence is searched for on the basis of this graph. Since multiple cost functions apply to each assembly step, the first plan found might not be cost-optimal. Therefore, the anytime approach allows finding the globally cost-optimal sequence if the complete graph can be continuously parsed. In addition, the returned solution can be re-evaluated at a later time, allowing further optimizations in the case of changing production environments. The approach has been evaluated with different CAD-models, each with varying graph sizes and assembly step costs.

@INPROCEEDINGS{7559139,
author={R. Andre and U. Thomas},
booktitle={Proceedings of ISR 2016: 47th International Symposium on Robotics},
title={Anytime Assembly Sequence Planning},
year={2016},
pages={1-8},
month={June},
}
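The anytime behaviour — return the first feasible plan immediately, then keep improving while the search space is parsed further — can be sketched with a toy precedence relation and step costs (both invented; the paper's AND/OR-graph is simplified here to plain enumeration of assembly orders):

```python
from itertools import permutations

# Toy anytime search over feasible assembly orders. The position-dependent
# cost weighting makes the order matter, so the first feasible plan found
# is not necessarily the optimum. All data is illustrative.
STEP_COST = {"base": 1.0, "frame": 2.0, "axle": 4.0, "wheel": 3.0}
PRECEDES = {("base", "frame"), ("base", "axle"), ("axle", "wheel")}

def feasible(order):
    pos = {p: i for i, p in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in PRECEDES)

def plan_cost(order):
    # later steps are more expensive in this toy model
    return sum((i + 1) * STEP_COST[p] for i, p in enumerate(order))

def anytime_plans():
    """Yield every improvement; stopping early still leaves a valid plan."""
    best = float("inf")
    for order in permutations(STEP_COST):
        if feasible(order) and plan_cost(order) < best:
            best = plan_cost(order)
            yield list(order), best
```

Consuming the generator fully recovers the global optimum; breaking out of it early corresponds to stopping the anytime planner with the best plan found so far.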


A. Kolker, M. Jokesch, U. Thomas
An Optical Tactile Sensor for Measuring Force Values and Directions for Several Soft and Rigid Contacts
Proceedings of 47th International Symposium on Robotics, Munich, June 2016
Print ISBN: 978-3-8007-4231-8

Using robots to manipulate soft or fragile objects requires highly sensitive tactile sensors. For many applications, beside the force magnitude, the direction is also important. This paper extends already available ideas and implementations of 3D tactile sensors. Our sensor can detect a wide range of forces, the direction of forces, and shifting forces along the sensor surface for several contact points simultaneously. This combination of capabilities in a single sensor is unique. The underlying concept is a pressure-to-light system. A camera provides images of a structure which generates geometric shapes on the images according to the externally acting forces. The shapes are well suited to image processing and can serve as a reference for the forces. After describing our approach in detail, we show experiments for evaluation, e.g. applying the sensor to grasp objects carefully. Finally, future work is discussed, where we plan to bring the sensor to anthropomorphic robot hands.

@INPROCEEDINGS{7559098,
author={A. Kolker and M. Jokesch and U. Thomas},
booktitle={Proceedings of ISR 2016: 47th International Symposium on Robotics},
title={An Optical Tactile Sensor for Measuring Force Values and Directions for Several Soft and Rigid Contacts},
year={2016},
pages={1-6},
month={June},
}

2015

M. Jokesch, J. Suchý, A. Winkler, A. Foss, U. Thomas
Generic Algorithm for Peg-In-Hole Assembly Tasks for Pin-Alignments with Impedance Controlled Robots
ROBOT2015 - Second Iberian Conference on Robotics, Special Session on Future Industrial Robotic Systems, Lisbon, Portugal, 2015
DOI: 10.1007/978-3-319-27149-1_9

In this paper, a generic algorithm for peg-in-hole assembly tasks is suggested. It is applied in the project GINKO, where the aim is to connect electric vehicles with charging stations automatically. This paper explains an algorithm applicable to peg-in-hole tasks by means of Cartesian impedance controlled robots. The plugging task is a specialized peg-in-hole task for which 7 pins have to be aligned simultaneously and the peg and the hole have asymmetric shapes. In addition, significant forces are required for complete insertion. The initial position is inaccurately estimated by a vision system. Hence, there are translational and rotational uncertainties between the plug, carried by the robot, and the socket, situated on the E-car. To compensate for these errors, three different steps of Cartesian impedance control are performed. To verify our approach we evaluated the algorithm from many different start positions.

@Inbook{Jokesch2016,
author="Jokesch, Michael and Such{\'y}, Jozef and Winkler, Alexander and Fross, Andr{\'e} and Thomas, Ulrike",
editor="Reis, Lu{\'i}s Paulo and Moreira, Ant{\'o}nio Paulo and Lima, Pedro U. and Montano, Luis and Mu{\~{n}}oz-Martinez, Victor",
title="Generic Algorithm for Peg-In-Hole Assembly Tasks for Pin Alignments with Impedance Controlled Robots ",
bookTitle="Robot 2015: Second Iberian Robotics Conference: Advances in Robotics, Volume 2",
year="2016",
publisher="Springer International Publishing",
address="Cham",
pages="105--117",
isbn="978-3-319-27149-1",
doi="10.1007/978-3-319-27149-1_9",
url="http://dx.doi.org/10.1007/978-3-319-27149-1_9"
}
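As a loose illustration of compensating vision-based pose errors that exceed the assembly clearance, the sketch below probes along an Archimedean spiral around the initial estimate until the peg falls within the clearance. This spiral search is a common generic strategy and stands in for, rather than reproduces, the paper's three Cartesian impedance-control steps; all dimensions are invented.

```python
import math

HOLE = (0.004, -0.003)   # true socket offset from the vision estimate [m]
CLEARANCE = 0.001        # alignment tolerance [m]

def spiral_search(pitch=0.0002, dtheta=0.2, max_probes=2000):
    """Probe outward from the estimated pose until within the clearance."""
    for i in range(max_probes):
        theta = dtheta * i
        r = pitch * theta / (2 * math.pi)   # radius grows `pitch` per turn
        probe = (r * math.cos(theta), r * math.sin(theta))
        if math.dist(probe, HOLE) < CLEARANCE:
            return probe, i                 # aligned: insertion can proceed
    return None, max_probes

probe, n_probes = spiral_search()
```

Choosing the radial pitch well below the clearance guarantees that the spiral cannot step over the acceptance region, at the cost of more probes for larger initial errors.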


A. Butting, B. Rumpe, C. Schulze, U. Thomas, A. Wortmann
Modeling Reusable, Platform-Independent Robot Assembly Processes
Workshop Modeling in Robotics, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Workshop on Domain Specific Languages for Robotics, Hamburg, 2015
arXiv: 1601.02452

Smart factories that allow flexible production of highly individualized goods require flexible robots, usable in efficient assembly lines. Compliant robots can work safely in shared environments with domain experts, who have to program such robots easily for arbitrary tasks. We propose a new domain-specific language and toolchain for robot assembly tasks for compliant manipulators. With the LightRocks toolchain, assembly tasks are modeled on different levels of abstraction, allowing a separation of concerns between domain experts and robotics experts: externally provided, platform-independent assembly plans are instantiated by the domain experts using models of processes and tasks. Tasks are comprised of skills, which combine platform-specific action models provided by robotics experts. Thereby, the toolchain supports flexible production and the re-use of modeling artifacts for various assembly processes.

@article{DBLP:journals/corr/ButtingRSTW16,
author = {Arvid Butting and Bernhard Rumpe and Christoph Schulze and Ulrike Thomas and Andreas Wortmann},
title = {Modeling Reusable, Platform-Independent Robot Assembly Processes},
journal = {CoRR},
volume = {abs/1601.02452},
year = {2016},
url = {http://arxiv.org/abs/1601.02452},
archivePrefix = {arXiv},
eprint = {1601.02452},
timestamp = {Mon, 13 Aug 2018 16:47:16 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/ButtingRSTW16},
bibsource = {dblp computer science bibliography, https://dblp.org}
}


U. Thomas, T. Stouraitis, M. A. Roa
Flexible Assembly through Integrated Assembly Sequence Planning and Grasp Planning
In Proceedings of the IEEE International Conference on Automation Science and Engineering (CASE), Gothenburg, Sweden, 2015
DOI: 10.1109/CoASE.2015.7294142

This paper describes an assembly sequence planner able to generate feasible sequences for building a desired assembly. The assembly planner takes geometrical, physical and mechanical constraints into account. Moreover, the planner considers the feasibility of grasps during the planning process and takes into account work-cell specific constraints. The approach uses AND/OR-graphs for planning. The generation of such graphs is implemented by using a specialized graph cut algorithm that employs a dynamically changing priority queue. These graphs are further evaluated by considering the feasibility of grasping sub-assemblies and individual parts during the process. The grasp and the sequence planner are generic, hence the proposed solution can be applied to arbitrary assemblies of rigid parts. The system has been evaluated with different configurations obtained by the combination of standard item-profiles.

@INPROCEEDINGS{7294142,
author={U. Thomas and T. Stouraitis and M. A. Roa},
booktitle={2015 IEEE International Conference on Automation Science and Engineering (CASE)},
title={Flexible assembly through integrated assembly sequence planning and grasp planning},
year={2015},
pages={586-592},
keywords={assembly planning;computer aided production planning;graph colouring;shear modulus;AND-graphs;OR-graphs;arbitrary assemblies;assembly sequence planner;dynamically changing priority queue;geometrical constraints;graph generation;grasp planning;mechanical constraints;physical constraints;planning process;rigid parts;specialized graph cut algorithm;work-cell specific constraints;Assembly;Databases;Fasteners;Force;Grasping;Planning;Robots},
doi={10.1109/CoASE.2015.7294142},
ISSN={2161-8070},
month={Aug},
}


K. Nilsson, B. Rumpe, U. Thomas, A. Wortmann
1st Workshop on Model-Driven Knowledge Engineering for Improved Software Modularity in Robotics and Automation
MDKE 2015, European Robotics Forum 2015
URL: Link

In domestic service robotic applications, complex tasks have to be fulfilled in close collaboration with humans. We try to integrate qualitative reasoning and human-robot interaction by bridging the gap in human and robot representations and by enabling the seamless integration of human notions in the robot's high-level control. The developed methods can also be used to abstract away low-level details of specific robot platforms. These low-level details often pose a problem in re-using software components and applying the same programs and methods in different contexts. When combined with methods for self-maintenance developed earlier, these abstractions also allow for seamlessly increasing the robustness and resilience of different robotic systems with only little effort.

@ARTICLE {NilssonRumpe2015mdke,
author = "Klas Nilsson and Bernhard Rumpe and Ulrike Thomas and Andreas Wortmann",
title = "1st Workshop on Model-Driven Knowledge Engineering for Improved Software Modularity in Robotics and Automation (MDKE)",
journal = "RWTH Aachen, European Robotics Forum, Vienna (Austria)",
year = "2015",
volume = "RWTH-2015-01968",
pages = "1-20" }

2014 and earlier


2014
M. Jokesch, M. Bdiwi and J. Suchý
  Integration of vision/force robot control for transporting different shaped/colored objects from moving circular conveyor, 2014 IEEE International Symposium on Robotic and Sensors Environments (ROSE) Proceedings, Timisoara, Romania, ISBN 978-1-4799-4926-7
A. Winkler and J. Suchý
  Force Controlled Contour Following by an Industrial Robot on Unknown Objects with Tool Orientation Control, Proceedings of 8th ISR/Robotik 2014, Munich, Germany, ISBN 978-3-8007-3601-0
M. Bdiwi and J. Suchý
  Integration of Vision/Force Robot Control Using Automatic Decision System for Performing Different Successive Tasks, Proceedings of 8th ISR/Robotik 2014, Munich, Germany, ISBN 978-3-8007-3601-0
M. Bdiwi, A. Kolker and J. Suchý
  Transferring Model-Free Objects between Human Hand and Robot Hand Using Vision/Force Control, IEEE 11th Int. Multi-Conference on Systems, Signals & Devices, Castelldefels, Spain, DOI 10.1109/SSD.2014.6808785

2013
M. Bdiwi, A. Kolker, J. Suchý and A. Winkler
  Segmentation of Model-Free Objects Carried by Human Hand: Intended for Human-Robot Interaction Applications, 16th International Conference on Advanced Robotics, Montevideo, Uruguay, 2013, ISBN 978-9974-8194-8-1.
M. Bdiwi, A. Kolker, J. Suchý and A. Winkler
  Automated Assistance Robot System for Transferring Model-Free Objects From/To Human Hand Using Vision/Force Control, In: Guido Herrmann, Martin J. Pearson, Alexander Lenz, Paul Bremner, Adam Spiers and Ute Leonards (Eds.) Social Robotics (5th International Conference, ICSR 2013, Bristol, UK, October 2013, Proceedings), pp. 40-53, Springer, 2013, ISBN 978-3-319-02674-9.
A. Winkler and J. Suchý
  Force Controlled Contour Following on Unknown Objects with an Industrial Robot, IEEE International Symposium on Robotic and Sensors Environments, Washington DC, USA, 2013, pp. 208-213, ISBN 978-1-4673-2938-5.
A. Winkler and J. Suchý
  Identification and Controller Design for the Inverted Pendulum Actuated by a Position Controlled Robot, 18th International Conference on Methods and Models in Automation and Robotics, Miedzyzdroje, Poland, 2013, pp. 258-263, ISBN 978-1-4673-5507-0.
A. Winkler and J. Suchý
  Robot Force/Torque Control in Assembly Tasks, IFAC Conference on Manufacturing Modelling, Management and Control, Saint Petersburg, Russia, 2013, pp. 826-831.
M. Bdiwi, A. Kolker, J. Suchý and A. Winkler
  Real Time Visual and Force Servoing of Human Hand for Physical Human-Robot Interaction: Handing-over Unknown Objects, Workshop on Human Robot Interaction for Assistance and Industrial Robots in IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, 2013.
M. Bdiwi and J. Suchý
  Storage/Retrieval of Inaccurately Placed Objects Using Robot System and Vision/Force Feedback, Transactions on Systems, Signals & Devices, TSSD-A, Vol. 8, No. 1, pp. 1-25, 2013.
M. Bdiwi, J. Suchý and A. Winkler
  Handing-over Model-Free Objects to Human Hand with the Help of Vision/Force Robot Control, 10th IEEE International Multi-Conference on Systems, Signals & Devices, Hammamet, Tunisia, 2013.
A. Kolker, A. Winkler, M. Bdiwi and J. Suchý
  Robot Visual Servoing Using the Example of the Inverted Pendulum, 10th IEEE International Multi-Conference on Systems, Signals & Devices, Hammamet, Tunisia, 2013.
A. Kolker, A. Winkler and J. Suchý
  Stabilization of the Robot Mounted Inverted Pendulum by Vision Control, In: Lucia Pachnikova and Mikulas Hajduk (Eds.) Robotics in Theory and Practice, pp. 7-17, Trans Tech Publications, 2013, ISSN 1660-9336.
H. Koch, A. König, A. Weigl-Seitz, K. Kleinmann and J. Suchý
  Multisensor Contour Following With Vision, Force, and Acceleration Sensors for an Industrial Robot, IEEE Transactions on Instrumentation and Measurement, Vol. 62, No. 2, pp. 268-280, 2013.

2012
A. Winkler and J. Suchý
  Position Feedback in Force Control of Industrial Manipulators - An Experimental Comparison with Basic Algorithms, IEEE International Symposium on Robotic and Sensors Environments, Magdeburg, 2012, pages 31-36, ISBN 978-1-4673-2704-6.
M. Bdiwi and J. Suchý
  Library Automation Using Different Structures of Vision-Force Robot Control and Automatic Decision System, IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Portugal, 2012.
L. Böhme and J. Suchý
  Development of the Fragmented-Motion-Segment Concept for Flexible Joint Robots to Raise Energy Efficiency in Handling Tasks, 10th International IFAC Symposium on Robot Control, Dubrovnik, Croatia, 2012, pages 575-582.
A. Winkler and J. Suchý
  Dynamic Collision Avoidance of Industrial Cooperating Robots Using Virtual Force Fields, 10th International IFAC Symposium on Robot Control, Dubrovnik, Croatia, 2012, pages 265-270.
M. Bdiwi and J. Suchý
  Automatic Decision System for the Structure of Vision-Force Robotic Control, 10th International IFAC Symposium on Robot Control, Dubrovnik, Croatia, 2012, pages 172-177.
H. Koch, A. König, K. Kleinmann, A. Weigl-Seitz and J. Suchý
  Filtering and Corner Detection in Predictive Robotic Contour Following, 7th German Conference on Robotics (Robotik 2012), Munich, 2012, pages 502-507, VDI Verlag, ISBN 978-3-8007-3418-4.
A. Winkler and J. Suchý
  Robot Programming for Surface Finishing based on CAD Model Including External Axes, 7th German Conference on Robotics (Robotik 2012), Munich, 2012, pages 422-427, VDI Verlag, ISBN 978-3-8007-3418-4.
M. Bdiwi and J. Suchý
  Integrated Vision/Force Robot System for Shelving and Retrieving of Imprecisely Placed Objects, 7th German Conference on Robotics (Robotik 2012), Munich, 2012, pages 30-35, VDI Verlag, ISBN 978-3-8007-3418-4.
M. Bdiwi and J. Suchý
  Robot Control System with Integrated Vision/Force Feedback for Automated Sorting System, IEEE International Conference on Technologies for Practical Robot Applications (TePRA), Woburn, MA, USA, 2012.
M. Bdiwi and J. Suchý
  Storage/retrieval of inaccurately placed objects using robot system and vision/force feedback, 9th IEEE International Multi-Conference on Systems, Signals & Devices, Chemnitz, 2012.
H. Koch and J. Suchý
  Multisensor-Konturverfolgung an nachgiebigen Objekten, 46. Regelungstechnisches Kolloquium in Boppard, talk, 2012.

2011
H. Koch, A. König, K. Kleinmann, A. Weigl-Seitz and J. Suchý
  Multisensor robotic contour following on deformable objects, IASTED International Conference on Robotics (Robo 2011), Pittsburgh, USA, pages 150-156, 2011.
M. Bdiwi, J. Suchý and A. Winkler
  Kombinierte Kraftregelung und Visual Servoing eines Roboters mit einem offenen Steuerungssystem, Scientific Reports, IWKM Moderne Automatisierungstechnik/Robotik, Hochschule Mittweida (FH), 2011, ISSN 1437-7624.
L. Böhme
  Die vorteilhafte Nutzung einer anpassbaren Nachgiebigkeit in einem Robotergelenk, Scientific Reports, IWKM Moderne Automatisierungstechnik/Robotik, Hochschule Mittweida (FH), 2011, ISSN 1437-7624.
H. Koch, A. König, A. Weigl-Seitz, K. Kleinmann and J. Suchý
  Force, acceleration and vision sensor fusion for contour following tasks with an industrial robot, IEEE International Symposium on Robotic and Sensors Environments (ROSE), Montreal, Canada, pages 1-6, 2011.
A. Winkler and J. Suchý
  Vision Based Collision Avoidance of Industrial Robots, 18th IFAC World Congress, Milan, Italy, pages 9452-9457, 2011.
H. Koch, A. König, K. Kleinmann, A. Weigl-Seitz and J. Suchý
  Predictive robotic contour following using laser-camera-triangulation, IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Budapest, Hungary, pages 422-427, 2011.
A. Winkler and J. Suchý
  Virtuelle Kraftfelder zwischen Roboterarmen zur Kollisionsvermeidung, VDI-Berichte 2143 (Automation 2011), pages 249-252, 2011, ISBN 978-3-18-092143-3.
M. Bdiwi, A. Winkler, J. Suchý and G. Zschocke
  Traded and shared vision-force robot control for improved impact control, 8th IEEE International Multi-Conference on Systems, Signals & Devices, Sousse, Tunisia, 2011.

2010
A. Winkler, L. Böhme and J. Suchý
  Cooperation of Robots Under Force/Torque Control, 55. Internationales Wissenschaftliches Kolloquium, pages 414-419, Ilmenau, 2010, ISBN 978-3-938843-53-6.
A. Winkler and J. Suchý
  Sensor Guided Robot Motions Using the Example of the Inverted Pendulum, Joint Conference on Robotics - 41st International Symposium on Robotics and 6th German Conference on Robotics, pages 69-75, Munich, 2010, ISBN 978-3-8007-3273-9.
A. Winkler and J. Suchý
  Kollisionsvermeidung bei stationären Robotern mit Hilfe künstlicher Kraftfelder, 44. Regelungstechnisches Kolloquium in Boppard, talk, 2010.

2009
J. Suchý and A. Winkler
  Mensch-Roboter-Interaktion mit Hilfe von Kraft-/Momentsensoren, Innovationsforum: Anwendung der Haptik in der robotergestützten Chirurgie, Magdeburg, talk, 2009.
A. Winkler and J. Suchý
  Erecting and Balancing of the Inverted Pendulum by an Industrial Robot, 9th International IFAC Symposium on Robot Control, Gifu, Japan, 2009.
A. Winkler and J. Suchý
  Intuitive Collision Avoidance of Robots Using Charge Generated Virtual Force Fields, In: Torsten Kröger and Friedrich M. Wahl (eds.) Advances in Robotics Research, pages 77-87, Springer, 2009, ISBN 978-3-642-01212-9.

2008
A. Winkler and J. Suchý
  Einsatz eines 12D-Kraft-/Momentsensors an einem Industrieroboter, Scientific Reports No. 6, IWKM Industrielle Steuerungen/Robotik, pages 48-51, Hochschule Mittweida (FH), 2008, ISSN 1437-7624.
A. Winkler
  Kraftbasierte Mensch-Roboter-Interaktion, VDM-Verlag, 2008, ISBN 978-3-639-07286-0.
J. Suchý and A. Winkler
  Beispiele zum Einsatz von Kraft-/Momentsensorik an Industrierobotern, 4. IPA-Workshop Bearbeiten mit Industrierobotern / Technologien-Anwendungen-Trends, Fraunhofer-Institut für Produktionstechnik und Automatisierung IPA Stuttgart, talk, 2008.
A. Winkler and J. Suchý
  Lastidentifikation und Messung dynamischer Kräfte und Momente mit einem 12D-Kraft-/Momentsensor, In: VDI Berichte 2012 (Robotik 2008), pages 33-36, VDI Verlag, 2008, ISBN 978-3-18-092012-2.

2007
A. Winkler and J. Suchý
  Dynamic Force/Torque Measurement Using a 12DOF Sensor, 20th IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, pages 1870-1875, 2007.
A. Winkler and J. Suchý
  Possibilities of force based interaction with robot manipulators, In: Nilanjan Sarkar (ed.) Human-Robot-Interaction, I-Tech Education and Publishing, 2007, ISBN 978-3-902613-13-4.
A. Winkler and J. Suchý
  Position Based Force Control of an Industrial Manipulator, 52. Internationales Wissenschaftliches Kolloquium, pages 151-156, Ilmenau, 2007, ISBN 978-3-939473-17-6.

2006
A. Winkler and J. Suchý
  Force-guided motions of a 6-d.o.f. industrial robot with a joint space approach, Advanced Robotics, Vol. 20, No. 9, pages 1067-1084, 2006. Link
A. Winkler
  Ein Beitrag zur kraftbasierten Mensch-Roboter-Interaktion. Dissertation, Technische Universität Chemnitz, 2006. Link
A. Winkler and J. Suchý
  An Approach to Compliant Motion of an Industrial Manipulator, 8th International IFAC Symposium on Robot Control, Bologna, Italy, 2006.
A. Winkler and J. Suchý
  Sensorless Force Guided Motions of an Industrial Robot Based on Motor Currents, Joint Conference on Robotics - 37th International Symposium on Robotics and 4th German Conference on Robotics, Munich, 2006, ISBN 3-18-091956-6.

2005
A. Winkler and J. Suchý
  Ausgewählte Anwendungen zum Einsatz von Kraft-/Momentregelung an Industrierobotern, Scientific Reports No. 11, IWKM Robotik, pages 1-4, Hochschule Mittweida (FH), 2005, ISSN 1437-7624.
A. Winkler and J. Suchý
  Novel Joint Space Force Guidance Algorithm with Laboratory Robot System, 16th IFAC World Congress, Prague, Czech Republic, 2005.
A. Winkler and J. Suchý
  Kraftgeführte Bewegungen mit einem STÄUBLI-Industrieroboter, AT&P journal PLUS6, pages 94-98, 2005, ISSN 1336-5010.
A. Winkler and J. Suchý
  Kraftgeführte Bewegung stationärer Roboter durch Steuerung im Gelenkraum, 39. Regelungstechnisches Kolloquium in Boppard, talk, 2005.

2004
A. Winkler and J. Suchý
  Kraft-/Momentregelung von Industrierobotern, Scientific Reports No. 4, 1. Workshop Robotik, pages 22-30, Hochschule Mittweida (FH), 2004, ISBN 3-18-091841-1.
A. Winkler and J. Suchý
  Kraftgeführte Bewegungen mit einem Standardindustrieroboter, In: VDI Berichte 1841 (Robotik 2004), pages 527-534, VDI Verlag, 2004, ISBN 3-18-091841-1.

Prof. Dr.-Ing. Ulrike Thomas (until 2014)

Fusing color and geometry information for understanding cluttered scenes
U. Thomas, S. Kriegel, M. Suppa
In IEEE/RSJ International Conference on Intelligent Robots and Systems, Workshop on Clutter, 2014
2014
Efficient Assembly Sequence Planning
U. Thomas
Technical report, DLR, Institut für Robotik und Mechatronik, December 2013
2013
A New Skill Based Robot Programming Language Using UML/P Statecharts
U. Thomas, G. Hirzinger, B. Rumpe, C. Schulze, A. Wortmann
In IEEE International Conference on Robotics and Automation, Karlsruhe, May 2013
2013
LightRocks: Modelling Cognitive Behaviour for Robots with Action Primitives using the UML/P
B. Rumpe, C. Schulz, A. Wortmann, U. Thomas
In Workshop on Formal Composition of Motion Primitives, Cyber-Physical Systems Week, Philadelphia, USA, April 2013
2013
Stable Pose Estimation Using Ransac with Triple Point Feature Hash Maps and Symmetry Exploration
U. Thomas
In IAPR International Conference on Machine Vision Applications, Kyoto, Japan, May 2013
2013
Planning sensor feedback for assembly skills by using sensor state space graphs
U. Thomas, F. M. Wahl
In International Conference on Intelligent Robotics and Applications, Montreal, Canada, October 2012
2012
Real-time Localization of Objects in Time-of-flight Depth Images
U. Thomas
In International Conference on Computer Vision Theory and Applications, Rome, Italy, February 2012
2012
A new probabilistic path planning algorithm for (dis) assembly tasks
U. Thomas, R. Iser
In ISR/Robotik 2010, Munich, June 2010
2010
Efficient Image Data Processing Based on an Airborne Distributed System Architecture
O. Meynberg, D. Rosenbaum, F. Kurz, J. Leitloff, U. Thomas
In Canadian Geomatics Conference 2010, ISPRS Commission I, Calgary, Canada, October 2010
2010
A Unified Notation for Serial, Parallel and Hybrid Kinematic Structures
U. Thomas, F. M. Wahl
In D. Schütz and F. M. Wahl, editors, Robotic Systems for Handling and Assembly, Springer Tracts in Advanced Robotics, volume 67, Springer, November 2010
2010
Assembly planning and task planning—two prerequisites for automated robot programming
U. Thomas, F. M. Wahl
In D. Schütz and F. M. Wahl, editors, Robotic Systems for Handling and Assembly, volume 67 of Springer Tracts in Advanced Robotics, Springer, November 2010
2010
A new software/hardware architecture for real time image processing of wide area airborne camera images
U. Thomas, D. Rosenbaum, F. Kurz, S. Suri, P. Reinartz
Journal of Real-Time Image Processing, 4(3):229–244, 2009
2009
Automatisierte Programmierung von Robotern für Montageaufgaben
U. Thomas
In D. Wagner, editor, Ausgezeichnete Informatikdissertationen 2008. GI-Edition, 2009
2009
Near real time airborne monitoring system for disaster and traffic applications
F. Kurz, D. Rosenbaum, U. Thomas, J. Leitloff, G. Palubinskas, K. Zeller, P. Reinartz
In ISPRS Workshop, Hannover, 2009
2009
Definition and execution of a generic assembly programming paradigm
J. Maaß, S. Molkenstruck, U. Thomas, J. Hesselbach, F. Wahl
Assembly Automation Journal, Emerald, 2008
2008
Automatic traffic monitoring from an airborne wide angle camera system
D. Rosenbaum, B. Charmette, F. Kurz, S. Suri, U. Thomas, P. Reinartz
In XXI ISPRS Congress, Beijing, China, July 2008
2008
Automatisierte Programmierung von Robotern für Montageaufgaben
U. Thomas
PhD thesis, Technische Universität Braunschweig, 2008
2008
GPU-based orthorectification of digital airborne camera images in real time
U. Thomas, F. Kurz, D. Rosenbaum, R. Mueller, P. Reinartz
In Proceedings of the XXI ISPRS Congress, Beijing, China, July 2008
2008
Near Real Time Processing of DSM from Airborne Digital Camera System for Disaster Monitoring
F. Kurz, V. Ebner, D. Rosenbaum, U. Thomas, P. Reinartz
In XXXVII Comm. IV, XXI Congress of the ISPRS, Beijing, China, July 2008
2008
Towards Automatic Near Real-Time Traffic Monitoring with an Airborne Wide Angle Camera System
D. Rosenbaum, F. Kurz, U. Thomas, S. Suri, P. Reinartz
European Transport Research, December 2008
2008
Towards Automated Robot Programming
S. Molkenstruck, U. Thomas, F. M. Wahl
In 3rd International Conference of the Collaborative Research Center 562, Braunschweig, April 2008
2008
Automated Generation of Skill Primitive Nets for Assembly
U. Thomas, S. Molkenstruck, F. M. Wahl
In VDI/Robotik 2008, Munich, June 2008
2008
Multi sensor fusion in robot assembly using particle filters
U. Thomas, S. Molkenstruck, R. Iser, F. M. Wahl
In IEEE International Conference on Robotics and Automation, Rome, Italy, May 2007
2007
Autonomous execution of automatically planned robot tasks based on force torque maps
U. Thomas, A. Movshyn, F. M. Wahl
In International Symposium Robotics Research/VDI Robotik, Munich, May 2006
2006
Towards a new concept of robot programming in high speed assembly applications
U. Thomas, F. M. Wahl, J. Maass, J. Hesselbach
In IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Canada, 2005
2005
Towards Automated Robot Programming
U. Thomas, F. M. Wahl
In 2nd International Conference of the Collaborative Research Center 562, Braunschweig, May 2005
2005
PROSA - A Generic Control Architecture for Parallel Robots
N. Kohn, M. Kolbus, T. Reisinger, K. Diethers, J. Steiner, U. Thomas
In Mechatronics and Robotics, Aachen, September 2004
2004
Compliant motion programming: The task frame formalism revisited
T. Kröger, B. Finkemeyer, U. Thomas, F. M. Wahl
In Mechatronics and Robotics, Aachen, September 2004
2004
An integrative approach for multi-sensor based robot task programming
U. Thomas, J. Florke, S. Detering, F. M. Wahl
In IEEE Conference on Robotics and Automation, New Orleans, USA, May 2004
2004
A General and Uniform Notation for any Kinematic Structure
U. Thomas, F. Wahl
In Mechatronics and Robotics, Aachen, September 2004
2004
Error-tolerant execution of complex robot tasks based on skill primitives
U. Thomas, B. Finkemeyer, T. Kröger, F. M. Wahl
In IEEE International Conference on Robotics and Automation, Taipei, Taiwan, September 2003
2003
Efficient assembly sequence planning using stereographical projections of c-space obstacles
U. Thomas, M. Barrenscheen, F. M. Wahl
In IEEE International Symposium on Assembly and Task Planning, Besançon, France, July 2003. Best Paper Award
2003
A new framework for task oriented sensor based robot programming and verification
K. Diethers, T. Firley, T. Kröger, U. Thomas
In IEEE International Conference on Advanced Robotics, Portugal, June 2003
2003
Eine Systematik zur universellen Beschreibung für serielle, parallele und hybride Roboterstrukturen
U. Thomas, I. Maciuszek, F. M. Wahl
In VDI/Robotik, Ludwigsburg, June 2002
2002
Robot programming-From simple moves to complex robot tasks
F. M. Wahl, U. Thomas
In F. M. Wahl and U. Thomas, editors, First International Colloquium of the Collaborative Research Centre 562, Braunschweig, May 2002
2002
A unified notation for serial, parallel, and hybrid kinematic structures
U. Thomas, I. Maciuszek, F. M. Wahl
In IEEE International Conference on Robotics and Automation, Washington D.C., USA, May 2002
2002
Sensorbasierte Ausführung von Roboteraufgaben auf der Basis von Aktionsprimitiven
U. Thomas, W. An, F. M. Wahl
In VDI/Robotik, Ludwigsburg, June 2002
2002
A system for automatic planning, evaluation and execution of assembly sequences for industrial robots
U. Thomas, F. M. Wahl
In Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2001
2001

Press Articles