Abstract
In this paper, the scientific outcomes of the 2016 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society are discussed. The 2016 Contest was an open-topic competition based on a multitemporal and multimodal dataset, which included a temporal pair of very high resolution panchromatic and multispectral Deimos-2 images and a video captured by the Iris camera on board the International Space Station. The problems addressed and the techniques proposed by the participants in the Contest spanned a rather broad range of topics, mixing ideas and methodologies from remote sensing, video processing, and computer vision. In particular, the winning team developed a deep learning method to jointly address spatial scene labeling and temporal activity modeling using the available image and video data. The second-place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection. The methodological key ideas of both these approaches and the main results of the corresponding experimental validation are discussed in this paper.
| Original language | English |
|---|---|
| Article number | 7948767 |
| Pages (from-to) | 3435-3447 |
| Number of pages | 13 |
| Journal | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
| Volume | 10 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 2017 |
Keywords
- Change detection
- convolutional neural networks (CNN)
- deep learning
- image analysis and data fusion
- multimodal
- multiresolution
- multisource
- random fields
- tracking
- video from space