Super-resolution-based Change Detection Network with Stacked Attention
Module for Images with Different Resolutions
- URL: http://arxiv.org/abs/2103.00188v1
- Date: Sat, 27 Feb 2021 11:17:40 GMT
- Title: Super-resolution-based Change Detection Network with Stacked Attention
Module for Images with Different Resolutions
- Authors: Mengxi Liu, Qian Shi, Andrea Marinoni, Da He, Xiaoping Liu, Liangpei
Zhang
- Abstract summary: Change detection plays a vital role in ecological protection and urban planning.
Traditional subpixel-based methods for change detection using images with different resolutions may lead to substantial error accumulation.
We propose a super-resolution-based change detection network (SRCDNet) with a stacked attention module.
- Score: 20.88671966047938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Change detection, which aims to distinguish surface changes based on
bi-temporal images, plays a vital role in ecological protection and urban
planning. Since high-resolution (HR) images typically cannot be acquired
continuously over time, bi-temporal images with different resolutions are often
adopted for change detection in practical applications. Traditional
subpixel-based methods for change detection using images with different
resolutions may lead to substantial error accumulation when HR images are
employed; this is because of intraclass heterogeneity and interclass
similarity. Therefore, it is necessary to develop a novel change detection
method for images with different resolutions that is more suitable for HR
images. To this end, we propose a super-resolution-based change detection
network (SRCDNet) with a stacked attention module. The SRCDNet employs a
super-resolution (SR) module containing a generator and a discriminator to
directly
learn SR images through adversarial learning and overcome the resolution
difference between bi-temporal images. To enhance the useful information in
multi-scale features, a stacked attention module consisting of five
convolutional block attention modules (CBAMs) is integrated into the feature
extractor. The final change map is obtained through a metric learning-based
change decision module, wherein a distance map between bi-temporal features is
calculated. The experimental results demonstrate the superiority of the
proposed method, which not only outperforms all baselines, with the highest F1
scores of 87.40% on the building change detection dataset and 92.94% on the
change detection dataset, but also obtains the best accuracies in experiments
performed with images having a 4x and 8x resolution difference. The source code
of SRCDNet will be available at https://github.com/liumency/SRCDNet.
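As a reading aid, the following is a minimal PyTorch sketch of a stacked attention module built from five CBAMs, following the standard CBAM formulation (channel attention followed by spatial attention). The backbone, channel widths, and the exact placement of the five CBAMs are illustrative assumptions rather than the authors' released implementation; refer to the repository above for the actual SRCDNet code.

```python
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Convolutional block attention module: channel attention, then spatial attention."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Shared MLP (1x1 convs) applied to average- and max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention over concatenated channel-wise mean and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))


class StackedAttentionExtractor(nn.Module):
    """Toy Siamese-style feature extractor with one CBAM after each of five stages."""

    def __init__(self, in_ch: int = 3, widths=(32, 64, 128, 256, 256)):
        super().__init__()
        stages, cbams, prev = [], [], in_ch
        for w in widths:  # five downsampling stages -> five stacked CBAMs
            stages.append(nn.Sequential(
                nn.Conv2d(prev, w, 3, stride=2, padding=1),
                nn.BatchNorm2d(w),
                nn.ReLU(inplace=True),
            ))
            cbams.append(CBAM(w))
            prev = w
        self.stages = nn.ModuleList(stages)
        self.cbams = nn.ModuleList(cbams)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for stage, cbam in zip(self.stages, self.cbams):
            x = cbam(stage(x))  # refine each scale before feeding the next stage
        return x
```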
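The metric learning-based change decision described in the abstract can be sketched similarly: a pixel-wise Euclidean distance map between the bi-temporal features, a contrastive-style loss during training, and a threshold at inference. The margin, threshold, and bilinear upsampling below are assumptions for illustration, not values taken from the paper.

```python
import torch
import torch.nn.functional as F


def distance_map(feat_t1: torch.Tensor, feat_t2: torch.Tensor, out_size) -> torch.Tensor:
    """Per-pixel Euclidean distance between bi-temporal features, upsampled to image size."""
    d = torch.norm(feat_t1 - feat_t2, p=2, dim=1, keepdim=True)  # (B, 1, h, w)
    return F.interpolate(d, size=out_size, mode="bilinear", align_corners=False)


def contrastive_loss(dist: torch.Tensor, label: torch.Tensor, margin: float = 2.0) -> torch.Tensor:
    """Pull unchanged pixels (label 0) together; push changed pixels (label 1) beyond `margin`."""
    unchanged = (1.0 - label) * dist.pow(2)
    changed = label * torch.clamp(margin - dist, min=0.0).pow(2)
    return (unchanged + changed).mean()


# Hypothetical usage with the extractor sketched above (names and threshold are illustrative):
# extractor = StackedAttentionExtractor()
# f1, f2 = extractor(img_t1), extractor(sr_img_t2)      # sr_img_t2: SR generator output
# dist = distance_map(f1, f2, out_size=img_t1.shape[-2:])
# change_map = (dist > 1.0).float()                      # arbitrary decision threshold
```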
Related papers
- Parameter-Inverted Image Pyramid Networks [49.35689698870247]
We propose a novel network architecture known as Parameter-Inverted Image Pyramid Networks (PIIP).
Our core idea is to use models with different parameter sizes to process different resolution levels of the image pyramid.
PIIP achieves superior performance in tasks such as object detection, segmentation, and image classification.
arXiv Detail & Related papers (2024-06-06T17:59:10Z)
- BD-MSA: Body decouple VHR Remote Sensing Image Change Detection method
guided by multi-scale feature information aggregation [4.659935767219465]
The purpose of remote sensing image change detection (RSCD) is to detect differences between bi-temporal images taken at the same place.
Deep learning has been extensively applied to RSCD tasks, yielding significant results in change recognition.
arXiv Detail & Related papers (2024-01-09T02:53:06Z)
- TransY-Net: Learning Fully Transformer Networks for Change Detection of
Remote Sensing Images [64.63004710817239]
We propose a novel Transformer-based learning framework named TransY-Net for remote sensing image CD.
It improves the feature extraction from a global view and combines multi-level visual features in a pyramid manner.
Our proposed method achieves a new state-of-the-art performance on four optical and two SAR image CD benchmarks.
arXiv Detail & Related papers (2023-10-22T07:42:19Z)
- A Dual Attentive Generative Adversarial Network for Remote Sensing Image
Change Detection [6.906936669510404]
We propose a dual attentive generative adversarial network (DAGAN) for very high-resolution remote sensing image change detection tasks.
On the LEVIR dataset, the DAGAN framework achieves better performance than advanced methods, with 85.01% mean IoU and 91.48% mean F1 score.
arXiv Detail & Related papers (2023-10-03T08:26:27Z)
- Continuous Cross-resolution Remote Sensing Image Change Detection [28.466756872079472]
Real-world applications raise the need for cross-resolution change detection, i.e., CD based on bi-temporal images with different spatial resolutions.
We propose scale-invariant learning to enforce the model to consistently predict HR results given synthesized samples of varying resolution differences.
Our method significantly outperforms several vanilla CD methods and two cross-resolution CD methods on three datasets.
arXiv Detail & Related papers (2023-05-24T04:57:24Z)
- IDAN: Image Difference Attention Network for Change Detection [3.5366052026723547]
We propose a novel image difference attention network (IDAN) for remote sensing image change detection.
IDAN considers the differences in regional and edge features of images and thus optimizes the extracted image features.
The experimental results demonstrate that the F1-score of IDAN improves by 1.62% and 1.98% over the baseline model on the WHU and LEVIR-CD datasets, respectively.
arXiv Detail & Related papers (2022-08-17T13:46:13Z)
- Dual UNet: a novel Siamese network for change detection with cascade
differential fusion [4.651756476458979]
We propose a novel Siamese neural network for change detection task, namely Dual-UNet.
In contrast to previous methods that encode the bi-temporal images individually, we design an encoder differential-attention module to focus on the spatial difference relationships of pixels.
Experiments demonstrate that the proposed approach consistently outperforms the most advanced methods on popular seasonal change detection datasets.
arXiv Detail & Related papers (2022-08-12T14:24:09Z)
- Learning Resolution-Adaptive Representations for Cross-Resolution Person
Re-Identification [49.57112924976762]
The cross-resolution person re-identification problem aims to match low-resolution (LR) query identity images against high-resolution (HR) gallery images.
It is a challenging and practical problem since the query images often suffer from resolution degradation due to the different capturing conditions from real-world cameras.
This paper explores an alternative SR-free paradigm to directly compare HR and LR images via a dynamic metric, which is adaptive to the resolution of a query image.
arXiv Detail & Related papers (2022-07-09T03:49:51Z)
- Semantic Change Detection with Asymmetric Siamese Networks [71.28665116793138]
Given two aerial images, semantic change detection aims to locate the land-cover variations and identify their change types with pixel-wise boundaries.
This problem is vital in many earth vision related tasks, such as precise urban planning and natural resource management.
We present an asymmetric siamese network (ASN) to locate and identify semantic changes through feature pairs obtained from modules of widely different structures.
arXiv Detail & Related papers (2020-10-12T13:26:30Z)
- MuCAN: Multi-Correspondence Aggregation Network for Video
Super-Resolution [63.02785017714131]
Video super-resolution (VSR) aims to utilize multiple low-resolution frames to generate a high-resolution prediction for each frame.
Inter- and intra-frames are the key sources for exploiting temporal and spatial information.
We build an effective multi-correspondence aggregation network (MuCAN) for VSR.
arXiv Detail & Related papers (2020-07-23T05:41:27Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.