IDAN: Image Difference Attention Network for Change Detection
- URL: http://arxiv.org/abs/2208.08292v1
- Date: Wed, 17 Aug 2022 13:46:13 GMT
- Title: IDAN: Image Difference Attention Network for Change Detection
- Authors: Hongkun Liu, Zican Hu, Qichen Ding, Xueyun Chen
- Abstract summary: We propose a novel image difference attention network (IDAN) for remote sensing image change detection.
IDAN considers the differences in regional and edge features of images and thus optimizes the extracted image features.
The experimental results demonstrate that the F1-score of IDAN improves by 1.62% and 1.98% over the baseline model on the WHU and LEVIR-CD datasets, respectively.
- Score: 3.5366052026723547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Remote sensing image change detection is of great importance in disaster
assessment and urban planning. The mainstream method is to use encoder-decoder
models to detect the change region of two input images. Since the change
content of remote sensing images has the characteristics of wide scale range
and variety, it is necessary to improve the detection accuracy of the network
by incorporating attention mechanisms, which commonly include the
Squeeze-and-Excitation block, Non-local and Convolutional Block Attention
Module, among others. These methods consider the importance of different
location features between channels or within channels, but fail to perceive the
differences between input images. In this paper, we propose a novel image
difference attention network (IDAN). In the image preprocessing stage, we use a
pre-trained model to extract the feature differences between two input images
to obtain the feature difference map (FD-map), and Canny edge detection to
obtain the edge difference map (ED-map). In the image feature extracting stage,
the FD-map and ED-map are input to the feature difference attention module and
edge compensation module, respectively, to optimize the features extracted by
IDAN. Finally, the change detection result is obtained through the feature
difference operation. IDAN comprehensively considers the differences in
regional and edge features of images and thus optimizes the extracted image
features. The experimental results demonstrate that the F1-score of IDAN
improves by 1.62% and 1.98% over the baseline model on the WHU and LEVIR-CD
datasets, respectively.
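The preprocessing stage described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual module: the function names are hypothetical, the FD-map here is simply a normalized per-pixel magnitude of the backbone feature difference used as a spatial attention weight, and the ED-map substitutes a plain gradient-magnitude difference for the Canny detector the paper uses.

```python
import numpy as np

def feature_difference_attention(feat_a, feat_b):
    """FD-map sketch: reweight (C, H, W) backbone features of the two
    bi-temporal images by the normalized magnitude of their difference."""
    # Per-pixel magnitude of the feature difference, averaged over channels
    fd_map = np.abs(feat_a - feat_b).mean(axis=0, keepdims=True)  # (1, H, W)
    # Normalize to [0, 1] so it can act as a spatial attention weight
    fd_map = (fd_map - fd_map.min()) / (fd_map.max() - fd_map.min() + 1e-8)
    # Emphasize changed regions in both feature maps
    return feat_a * fd_map, feat_b * fd_map

def edge_difference_map(img_a, img_b):
    """ED-map sketch: difference of gradient magnitudes of the two images
    (the paper uses the Canny edge detector instead)."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)
    return np.abs(grad_mag(img_a) - grad_mag(img_b))

rng = np.random.default_rng(0)
feat_a = rng.standard_normal((8, 16, 16))
feat_b = feat_a.copy()
feat_b[:, 4:8, 4:8] += 2.0  # simulate a changed region
wa, wb = feature_difference_attention(feat_a, feat_b)
ed = edge_difference_map(np.zeros((16, 16)), np.eye(16))
```

In IDAN the FD-map and ED-map feed dedicated attention and edge-compensation modules rather than a simple elementwise reweighting, but the sketch shows the shape of the idea: changed regions receive larger weights before the final feature-difference operation.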
Related papers
- Enhancing Perception of Key Changes in Remote Sensing Image Change Captioning [49.24306593078429]
We propose a novel framework for remote sensing image change captioning, guided by key change features and instruction tuning (KCFI).
KCFI includes a ViTs encoder for extracting bi-temporal remote sensing image features, a key feature perceiver for identifying critical change areas, and a pixel-level change detection decoder.
To validate the effectiveness of our approach, we compare it against several state-of-the-art change captioning methods on the LEVIR-CC dataset.
arXiv Detail & Related papers (2024-09-19T09:33:33Z) - Siamese Meets Diffusion Network: SMDNet for Enhanced Change Detection in
High-Resolution RS Imagery [7.767708235606408]
We propose a new network, the Siamese Meets Diffusion Network (SMDNet).
This network combines the Siam-U2Net Feature Differential (SU-FDE) and the denoising diffusion implicit model to improve the accuracy of image edge change detection.
Our method's combination of feature extraction and diffusion models demonstrates effectiveness in change detection in remote sensing images.
arXiv Detail & Related papers (2024-01-17T16:48:55Z) - BD-MSA: Body decouple VHR Remote Sensing Image Change Detection method
guided by multi-scale feature information aggregation [4.659935767219465]
The purpose of remote sensing image change detection (RSCD) is to detect differences between bi-temporal images taken at the same place.
Deep learning has been extensively applied to RSCD tasks, yielding significant results in recognition accuracy.
arXiv Detail & Related papers (2024-01-09T02:53:06Z) - Frequency Domain Modality-invariant Feature Learning for
Visible-infrared Person Re-Identification [79.9402521412239]
We propose a novel Frequency Domain modality-invariant feature learning framework (FDMNet) to reduce modality discrepancy from the frequency domain perspective.
Our framework introduces two novel modules, namely the Instance-Adaptive Amplitude Filter (IAF) and the Phase-Preserving Normalization (PPNorm).
arXiv Detail & Related papers (2024-01-03T17:11:27Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - TransY-Net: Learning Fully Transformer Networks for Change Detection of
Remote Sensing Images [64.63004710817239]
We propose a novel Transformer-based learning framework named TransY-Net for remote sensing image CD.
It improves the feature extraction from a global view and combines multi-level visual features in a pyramid manner.
Our proposed method achieves a new state-of-the-art performance on four optical and two SAR image CD benchmarks.
arXiv Detail & Related papers (2023-10-22T07:42:19Z) - VcT: Visual change Transformer for Remote Sensing Image Change Detection [16.778418602705287]
We propose a novel Visual change Transformer (VcT) model for visual change detection problem.
Top-K reliable tokens can be mined from the map and refined by using the clustering algorithm.
Extensive experiments on multiple benchmark datasets validated the effectiveness of our proposed VcT model.
arXiv Detail & Related papers (2023-10-17T17:25:31Z) - Dual-UNet: A Novel Siamese Network for Change Detection with Cascade
Differential Fusion [4.651756476458979]
We propose a novel Siamese neural network for change detection task, namely Dual-UNet.
In contrast to previous methods that encode the bitemporal images individually, we design an encoder differential-attention module to focus on the spatial difference relationships of pixels.
Experiments demonstrate that the proposed approach consistently outperforms the most advanced methods on popular seasonal change detection datasets.
arXiv Detail & Related papers (2022-08-12T14:24:09Z) - Learning Hierarchical Graph Representation for Image Manipulation
Detection [50.04902159383709]
The objective of image manipulation detection is to identify and locate the manipulated regions in the images.
Recent approaches mostly adopt the sophisticated Convolutional Neural Networks (CNNs) to capture the tampering artifacts left in the images.
We propose a hierarchical Graph Convolutional Network (HGCN-Net), which consists of two parallel branches.
arXiv Detail & Related papers (2022-01-15T01:54:25Z) - Semantic Change Detection with Asymmetric Siamese Networks [71.28665116793138]
Given two aerial images, semantic change detection aims to locate the land-cover variations and identify their change types with pixel-wise boundaries.
This problem is vital in many earth vision related tasks, such as precise urban planning and natural resource management.
We present an asymmetric siamese network (ASN) to locate and identify semantic changes through feature pairs obtained from modules of widely different structures.
arXiv Detail & Related papers (2020-10-12T13:26:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.