Explicit Change Relation Learning for Change Detection in VHR Remote
Sensing Images
- URL: http://arxiv.org/abs/2311.07993v1
- Date: Tue, 14 Nov 2023 08:47:38 GMT
- Title: Explicit Change Relation Learning for Change Detection in VHR Remote
Sensing Images
- Authors: Dalong Zheng, Zebin Wu, Jia Liu, Chih-Cheng Hung, and Zhihui Wei
- Abstract summary: We propose a network architecture NAME for the explicit mining of change relation features.
The change features of change detection should be divided into pre-changed image features, post-changed image features and change relation features.
Our network performs better, in terms of F1, IoU, and OA, than existing advanced networks for change detection.
- Score: 12.228675703851733
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Change detection has long been a task of central interest in the
interpretation of remote sensing images. It is essentially a binary
classification task with two inputs, between which a change relationship
exists. At present, the mining of change relationship features is usually
implicit in network architectures that contain single-branch or two-branch
encoders. However, lacking a hand-crafted prior design for change relationship
features, these networks cannot learn enough change semantic information and
therefore lose change detection accuracy. We thus propose a network
architecture NAME for the explicit mining of change relation features. In our
view, the change features for change detection should be divided into
pre-changed image features, post-changed image features, and change relation
features. To fully mine these three kinds of change features, we propose a
triple-branch network combining a transformer and a convolutional neural
network (CNN) to extract and fuse them from the perspectives of global and
local information, respectively. In addition, we design a continuous change
relation (CCR) branch to further obtain continuous and detailed change
relation features and thereby improve the change discrimination capability of
the model. Experimental results show that our network outperforms existing
advanced change detection networks, in terms of F1, IoU, and OA, on four
public very high-resolution (VHR) remote sensing datasets. Our source code is
available at
https://github.com/DalongZ/NAME.
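The abstract's three-way decomposition of change features can be sketched minimally. The toy example below uses plain NumPy, with an element-wise absolute difference standing in for the learned change relation branch; that operator, the feature shapes, and concatenation as the fusion step are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bi-temporal "feature maps": (channels, height, width)
pre_feat = rng.standard_normal((8, 16, 16))   # pre-changed image features
post_feat = rng.standard_normal((8, 16, 16))  # post-changed image features

# Stand-in for the change relation branch: here simply the element-wise
# absolute difference (the paper instead learns these features explicitly).
relation_feat = np.abs(post_feat - pre_feat)

# Fuse the three kinds of change features by channel concatenation.
fused = np.concatenate([pre_feat, post_feat, relation_feat], axis=0)
print(fused.shape)  # (24, 16, 16)
```

The point of the sketch is only that the relation features are computed and carried as a third, explicit stream rather than left implicit inside a shared encoder.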
Related papers
- Enhancing Perception of Key Changes in Remote Sensing Image Change Captioning [49.24306593078429]
We propose KCFI, a novel framework for remote sensing image change captioning guided by key change features and instruction tuning.
KCFI includes a ViT encoder for extracting bi-temporal remote sensing image features, a key feature perceiver for identifying critical change areas, and a pixel-level change detection decoder.
To validate the effectiveness of our approach, we compare it against several state-of-the-art change captioning methods on the LEVIR-CC dataset.
arXiv Detail & Related papers (2024-09-19T09:33:33Z)
- ChangeBind: A Hybrid Change Encoder for Remote Sensing Change Detection [16.62779899494721]
Change detection (CD) is a fundamental task in remote sensing (RS) that aims to detect semantic changes between the same geographical regions at different timestamps.
We propose an effective Siamese-based framework to encode the semantic changes occurring in the bi-temporal RS images.
arXiv Detail & Related papers (2024-04-26T17:47:14Z)
- ELGC-Net: Efficient Local-Global Context Aggregation for Remote Sensing Change Detection [65.59969454655996]
We propose an efficient change detection framework, ELGC-Net, which leverages rich contextual information to precisely estimate change regions.
Our proposed ELGC-Net sets a new state-of-the-art performance in remote sensing change detection benchmarks.
We also introduce ELGC-Net-LW, a lighter variant with significantly reduced computational complexity, suitable for resource-constrained settings.
arXiv Detail & Related papers (2024-03-26T17:46:25Z)
- MS-Former: Memory-Supported Transformer for Weakly Supervised Change Detection with Patch-Level Annotations [50.79913333804232]
We propose a memory-supported transformer (MS-Former) for weakly supervised change detection.
MS-Former consists of a bi-directional attention block (BAB) and a patch-level supervision scheme (PSS).
Experimental results on three benchmark datasets demonstrate the effectiveness of our proposed method in the change detection task.
arXiv Detail & Related papers (2023-11-16T09:57:29Z)
- TransY-Net: Learning Fully Transformer Networks for Change Detection of Remote Sensing Images [64.63004710817239]
We propose a novel Transformer-based learning framework named TransY-Net for remote sensing image CD.
It improves the feature extraction from a global view and combines multi-level visual features in a pyramid manner.
Our proposed method achieves a new state-of-the-art performance on four optical and two SAR image CD benchmarks.
arXiv Detail & Related papers (2023-10-22T07:42:19Z)
- VcT: Visual change Transformer for Remote Sensing Image Change Detection [16.778418602705287]
We propose a novel Visual change Transformer (VcT) model for the visual change detection problem.
Top-K reliable tokens can be mined from the map and refined using a clustering algorithm.
Extensive experiments on multiple benchmark datasets validated the effectiveness of our proposed VcT model.
arXiv Detail & Related papers (2023-10-17T17:25:31Z)
- SwinV2DNet: Pyramid and Self-Supervision Compounded Feature Learning for Remote Sensing Images Change Detection [12.727650696327878]
We propose an end-to-end compounded dense network, SwinV2DNet, to inherit the advantages of the transformer and the CNN.
It captures the change relationship features through the densely connected Swin V2 backbone.
It provides the low-level pre-changed and post-changed features through a CNN branch.
arXiv Detail & Related papers (2023-08-22T03:31:52Z)
- Vision Transformer with Convolutions Architecture Search [72.70461709267497]
We propose an architecture search method, Vision Transformer with Convolutions Architecture Search (VTCAS).
The high-performance backbone network searched by VTCAS introduces the desirable features of convolutional neural networks into the Transformer architecture.
It enhances the robustness of the neural network for object recognition, especially in low-illumination indoor scenes.
arXiv Detail & Related papers (2022-03-20T02:59:51Z)
- Semantic Change Detection with Asymmetric Siamese Networks [71.28665116793138]
Given two aerial images, semantic change detection aims to locate the land-cover variations and identify their change types with pixel-wise boundaries.
This problem is vital in many earth vision related tasks, such as precise urban planning and natural resource management.
We present an asymmetric siamese network (ASN) to locate and identify semantic changes through feature pairs obtained from modules of widely different structures.
arXiv Detail & Related papers (2020-10-12T13:26:30Z)
- Looking for change? Roll the Dice and demand Attention [0.0]
We propose a reliable deep learning framework for the task of semantic change detection in high-resolution aerial images.
Our framework consists of a new loss function, new attention modules, new feature extraction building blocks, and a new backbone architecture.
We validate our approach by showing excellent performance and achieving state-of-the-art scores (F1 and Intersection over Union, hereafter IoU) on two building change detection datasets.
arXiv Detail & Related papers (2020-09-04T08:30:25Z)
- DASNet: Dual attentive fully convolutional siamese networks for change detection of high resolution satellite images [17.839181739760676]
The research objective is to identify the change information of interest and filter out irrelevant change information as interference factors.
Recently, the rise of deep learning has provided new tools for change detection, which have yielded impressive results.
We propose a new method, namely, dual attentive fully convolutional Siamese networks (DASNet) for change detection in high-resolution images.
arXiv Detail & Related papers (2020-03-07T16:57:10Z)
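Several of the entries above (ChangeBind, ASN, DASNet) rely on Siamese encoders: the same weights process both temporal images so their features live in a comparable space, and any feature difference reflects image change. The following toy sketch uses a single shared linear map as a hypothetical stand-in for the full encoder; the shapes and the plain difference are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# One shared projection stands in for the Siamese encoder's shared weights.
shared_w = rng.standard_normal((4, 8))

def encode(image_vec: np.ndarray) -> np.ndarray:
    """Apply the shared encoder weights to one temporal image."""
    return shared_w @ image_vec

img_t1 = rng.standard_normal(8)  # pre-change image (flattened toy input)
img_t2 = rng.standard_normal(8)  # post-change image

feat_t1, feat_t2 = encode(img_t1), encode(img_t2)

# Because the weights are shared, identical inputs yield identical features,
# so differences between feat_t1 and feat_t2 are attributable to change.
change_map = feat_t2 - feat_t1
```

In the actual networks the shared map is a deep CNN or transformer backbone and the comparison is learned (e.g. via attention) rather than a raw subtraction, but the weight-sharing principle is the same.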
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.