
Appearance Change Prediction for Long-Term Autonomy in Changing Environments

Changing environments pose a serious problem for current robotic systems aiming at long-term operation. While place recognition systems perform reasonably well in static or low-dynamic environments, the severe appearance changes that occur between day and night, between seasons, or under different local weather conditions remain a challenge.

We propose to learn to predict the changes in an environment. Our key insight is that the occurring scene changes are in part systematic and repeatable, and therefore predictable.

State-of-the-art approaches to place recognition attempt to directly match two scenes even if they were observed under extremely different environmental conditions. This is error-prone and leads to poor recognition results. Instead, we propose to predict how the query scene (the winter image) would appear under the same environmental conditions as the database images (summer). This prediction uses a dictionary that exploits the systematic nature of the seasonal changes and is learned from training data.
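To illustrate where the prediction fits into a place recognition pipeline, here is a minimal matching sketch. Both predict_appearance and global_descriptor are hypothetical placeholders: the former stands for the SP-ACP prediction described below, the latter for any holistic image descriptor used for matching; neither names the actual implementation.

```python
import numpy as np

def best_match(query_img, database_imgs, predict_appearance, global_descriptor):
    """Return the index of the database image most similar to the query.

    Instead of matching the raw winter query against the summer database,
    the query is first translated into the database's condition.
    """
    predicted = predict_appearance(query_img)        # e.g. winter -> summer
    q = global_descriptor(predicted)
    distances = [np.linalg.norm(q - global_descriptor(img)) for img in database_imgs]
    return int(np.argmin(distances))
```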

Superpixel-based Appearance Change Prediction (SP-ACP)

SP-ACP is a first implementation of this idea. It combines a training phase and a prediction phase:

Step I: SP-ACP training

SP-ACP learning a dictionary between images under different environmental conditions (e.g. winter and summer).

During training, the images are first segmented into superpixels and a descriptor is calculated for each superpixel. These descriptors are then clustered to obtain a vocabulary of visual words for each condition. In a final step, a dictionary that translates between both vocabularies is learned. This is possible because pixel-accurate correspondences between the training images are known.
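The following is a minimal sketch of this training pipeline, assuming pixel-aligned winter/summer image pairs. The SLIC segmentation, the mean-colour superpixel descriptor, the k-means vocabularies, and all names and parameters (superpixel_descriptors, train_spacp, vocab_size) are simplifying assumptions made for illustration, not the exact choices of the published system.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def superpixel_descriptors(image, n_segments=500):
    """Segment an RGB image into superpixels; return the label map and one descriptor per segment."""
    labels = slic(image, n_segments=n_segments, compactness=10)
    descriptors = np.array([image[labels == seg].mean(axis=0)   # mean colour as a stand-in descriptor
                            for seg in np.unique(labels)])
    return labels, descriptors

def train_spacp(winter_images, summer_images, vocab_size=100):
    """Learn one vocabulary per condition and a winter->summer translation dictionary."""
    pairs = [(superpixel_descriptors(w), superpixel_descriptors(s))
             for w, s in zip(winter_images, summer_images)]

    # One vocabulary of visual words per condition.
    winter_vocab = KMeans(n_clusters=vocab_size, n_init=10).fit(
        np.vstack([w_desc for (_, w_desc), _ in pairs]))
    summer_vocab = KMeans(n_clusters=vocab_size, n_init=10).fit(
        np.vstack([s_desc for _, (_, s_desc) in pairs]))

    # Count which summer word appears at the location of each winter superpixel;
    # this exploits the pixel-accurate alignment of the training pairs.
    counts = np.zeros((vocab_size, vocab_size))
    for (w_labels, w_desc), (s_labels, s_desc) in pairs:
        w_words = winter_vocab.predict(w_desc)
        s_words = summer_vocab.predict(s_desc)
        s_segments = list(np.unique(s_labels))
        for i, seg in enumerate(np.unique(w_labels)):
            ys, xs = np.nonzero(w_labels == seg)
            cy, cx = int(ys.mean()), int(xs.mean())      # approximate overlap via the superpixel centroid
            s_idx = s_segments.index(s_labels[cy, cx])   # overlapping summer superpixel
            counts[w_words[i], s_words[s_idx]] += 1

    # Translation dictionary: for each winter word, the most frequently co-occurring summer word.
    dictionary = counts.argmax(axis=1)
    return winter_vocab, summer_vocab, dictionary
```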

Step II: SP-ACP prediction

SP-ACP predicting the appearance of a query image under different environmental conditions: How would the current winter scene appear in summer?

To predict how a winter query image would appear in summer, the image is first segmented into superpixels and a descriptor is calculated for each of these segments. Based on this descriptor, each superpixel is assigned to one of the visual words of the winter vocabulary. This word image representation can then be translated into the vocabulary of the target condition (e.g. summer) through the dictionary learned during the training phase. The result of the process is a synthesized image that predicts the appearance of the winter query scene in summer.
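A correspondingly simplified sketch of the prediction step, reusing superpixel_descriptors and the outputs of the hypothetical train_spacp above. Painting each superpixel with the cluster centre of its translated summer word is only a stand-in for the image synthesis described in the papers.

```python
import numpy as np

def predict_summer(winter_img, winter_vocab, summer_vocab, dictionary):
    """Predict how a winter image would appear under summer conditions."""
    labels, descriptors = superpixel_descriptors(winter_img)
    winter_words = winter_vocab.predict(descriptors)   # word id per superpixel
    summer_words = dictionary[winter_words]            # translate via the learned dictionary
    prediction = np.zeros(winter_img.shape, dtype=float)
    for seg, word in zip(np.unique(labels), summer_words):
        # Paint the superpixel with the appearance of its predicted summer word.
        prediction[labels == seg] = summer_vocab.cluster_centers_[word]
    return prediction
```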

SP-ACP results

The following videos accompanied our 2013 ECMR paper and demonstrate the prediction capabilities of the first version of the proposed system on the Nordland dataset. The original video footage was produced by NRKbeta and is available under a Creative Commons licence. For further results, please have a look at the publications at the bottom of this page.

Related Publications

Neubert, P. (2015) Superpixels and their Application for Visual Place Recognition in Changing Environments. Dissertation, TU Chemnitz.

Neubert, P., Sünderhauf, N. & Protzel, P. (2015) Superpixel-based appearance change prediction for long-term navigation across seasons. Robotics and Autonomous Systems, Vol. 69:15-27. DOI: 10.1016/j.robot.2014.08.005

Neubert, P., Sünderhauf, N. & Protzel, P. (2013) Appearance Change Prediction for Long-Term Navigation Across Seasons. In Proc. of European Conference on Mobile Robots (ECMR)

Sünderhauf, N., Neubert, P. & Protzel, P. (2013) Predicting the Change -- A Step Towards Life-Long Operation in Everyday Environments. In Proc. of Robotics: Science and Systems (RSS) Robotics Challenges and Vision Workshop. RCV 2013 3rd Best Paper Award Winner

Sünderhauf, N., Lange, S. & Protzel, P. (2013) Incremental Sensor Fusion in Factor Graphs with Unknown Delays. In Proc. of ESA Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA)