Timebased local OctoMaps
Autonomous navigation of our rough-terrain rovers requires a good representation of their immediate surroundings. To achieve this, we fuse several of their sensors into one representation called an OctoMap. However, moving obstacles can produce artifacts in the map, leading to regions that are wrongly marked as untraversable. Furthermore, the map itself keeps growing as new places are discovered, even though we are only interested in the rovers' immediate surroundings.
Our approach to both problems is the use of timestamps within the map: if a region has not been updated within a given interval, it is either set to free space or deleted from the map. The first option is an existing solution; the second reflects our new alternative. The proposed approach is provided as open source.
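The two strategies above can be illustrated with a minimal sketch. This is not the actual OctoMap code (which is C++ and operates on an octree); the class name `TimedVoxelMap` and its methods are hypothetical, using a flat dictionary of voxels purely to show the idea of stamping each update and handling outdated voxels.

```python
class TimedVoxelMap:
    """Hypothetical sketch of a timestamped voxel map: every update
    stamps the voxel, and outdated voxels are either degraded to
    free space (existing option) or deleted (proposed alternative)."""

    def __init__(self, timeout):
        self.timeout = timeout   # seconds a voxel may stay un-refreshed
        self.voxels = {}         # (x, y, z) -> (occupied, last_update)

    def update(self, key, occupied, now):
        # Each sensor update also refreshes the voxel's timestamp.
        self.voxels[key] = (occupied, now)

    def degrade_outdated(self, now):
        # Existing option: outdated voxels are set to free space
        # but remain in the map, still consuming memory.
        for key, (occ, ts) in self.voxels.items():
            if now - ts > self.timeout:
                self.voxels[key] = (False, ts)

    def delete_outdated(self, now):
        # Proposed alternative: outdated voxels are removed entirely,
        # keeping the map local and saving memory.
        self.voxels = {k: v for k, v in self.voxels.items()
                       if now - v[1] <= self.timeout}
```

For example, a voxel touched once by a passing rover at time 0 and never refreshed would, after the timeout, be turned into free space by `degrade_outdated` or dropped from the map by `delete_outdated`.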
In this experiment, one rover is standing still while a second rover slowly crosses its close surroundings. Since our visual depth sensors are tilted downwards, only the lower part of the driving rover is updated almost immediately. In contrast, the upper part is only sensed by the slowly turning laser, creating several artifacts. Since both timebased implementations show similar results, only our implementation is compared to the regular OctoMap implementation. As shown in the image and in the video, the timebased implementations remove artifacts as expected; only artifacts too recent to be degraded remain in the map for a short time.
In this experiment, one rover drives along a rectangular path of approximately 90 m × 40 m while mapping its surroundings. The final results of both implementations are alike regarding visible, occupied voxels and are therefore not explicitly shown. However, a distinct difference appears when comparing free voxels: here our implementation clearly saves memory.
- (2016) Properties of timebased local OctoMaps. In Proc. of Intl. Conf. on Intelligent Robots and Systems (IROS) Workshop on State Estimation and Terrain Perception for All Terrain Mobile Robots.
Source code of our extension for degrading voxels:
The basic framework and its description can be found here:
Please contact M.Sc. Peter Weissig for any comments or questions.