From W-Net to CDGAN: Bi-temporal Change Detection via Deep Learning Techniques
- URL: http://arxiv.org/abs/2003.06583v1
- Date: Sat, 14 Mar 2020 09:24:08 GMT
- Title: From W-Net to CDGAN: Bi-temporal Change Detection via Deep Learning Techniques
- Authors: Bin Hou, Qingjie Liu, Heng Wang, and Yunhong Wang
- Abstract summary: We propose an end-to-end dual-branch architecture, termed the W-Net, with each branch taking one of the two bi-temporal images as input.
We also apply the recently popular Generative Adversarial Network (GAN), in which our W-Net serves as the Generator.
To train our networks and to facilitate future research, we construct a large-scale dataset by collecting images from Google Earth.
- Score: 43.58400031452662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional change detection methods usually follow the image differencing,
change feature extraction and classification framework, and their performance
is limited by such simple image-domain differencing and by hand-crafted features.
Recently, the success of deep convolutional neural networks (CNNs) has spread widely
across the whole field of computer vision owing to their powerful representation
abilities. In this paper, we therefore address the remote sensing image change
detection problem with deep learning techniques. We first propose an end-to-end
dual-branch architecture, termed the W-Net, with each branch taking as input one
of the two bi-temporal images, as in traditional change detection models. In this
way, CNN features with stronger representational power can be obtained to boost
the final detection performance. Moreover, the W-Net performs differencing in the
feature domain rather than in the traditional image domain, which greatly alleviates
the loss of information useful for determining the changes. Furthermore, by
reformulating change detection as an image translation problem, we apply the
recently popular Generative Adversarial Network (GAN), in which our W-Net serves
as the Generator, leading to a new GAN architecture for change detection which we
call CDGAN. To train our networks and to facilitate future research, we construct
a large-scale dataset by collecting images from Google Earth and provide carefully
and manually annotated ground truths. Experiments show that our proposed methods
can provide fine-grained change detection results superior to existing
state-of-the-art baselines.
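The abstract describes the W-Net only at a high level: two encoder branches, one per bi-temporal image, with differencing applied to the encoded features rather than to the raw images. The sketch below (PyTorch assumed) illustrates that idea; the layer counts, channel widths, use of independent rather than shared branch weights, and the decoder design are illustrative assumptions, not the paper's actual architecture.

```python
"""Minimal sketch of a W-Net-style dual-branch change detector (assumptions noted above)."""
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with BatchNorm/ReLU; an assumed building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class DualBranchChangeNet(nn.Module):
    """Two encoder branches (one per bi-temporal image); differencing is done on the
    encoded features instead of the raw images, and a small decoder fuses the
    multi-scale difference features into a change map."""

    def __init__(self, in_ch=3, widths=(32, 64, 128)):
        super().__init__()
        # Each branch is an independent encoder (weight sharing would be another valid choice).
        self.enc_t1, self.enc_t2 = nn.ModuleList(), nn.ModuleList()
        prev = in_ch
        for w in widths:
            self.enc_t1.append(conv_block(prev, w))
            self.enc_t2.append(conv_block(prev, w))
            prev = w
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # Decoder blocks consume the upsampled features concatenated with shallower difference maps.
        rev = list(reversed(widths))
        self.dec = nn.ModuleList(
            conv_block(rev[i] + rev[i + 1], rev[i + 1]) for i in range(len(rev) - 1)
        )
        self.head = nn.Conv2d(widths[0], 1, kernel_size=1)  # 1-channel change map

    def forward(self, img_t1, img_t2):
        diffs, f1, f2 = [], img_t1, img_t2
        for i, (b1, b2) in enumerate(zip(self.enc_t1, self.enc_t2)):
            f1, f2 = b1(f1), b2(f2)
            diffs.append(torch.abs(f1 - f2))          # feature-domain differencing
            if i < len(self.enc_t1) - 1:
                f1, f2 = self.pool(f1), self.pool(f2)
        x = diffs[-1]
        for dec, skip in zip(self.dec, reversed(diffs[:-1])):
            x = dec(torch.cat([self.up(x), skip], dim=1))  # fuse multi-scale difference features
        return torch.sigmoid(self.head(x))            # per-pixel change probability


if __name__ == "__main__":
    net = DualBranchChangeNet()
    t1, t2 = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
    print(net(t1, t2).shape)  # expected: torch.Size([1, 1, 256, 256])
```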
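The abstract likewise does not spell out how CDGAN is trained. A standard conditional-GAN objective for this setup, with the W-Net as generator G, a discriminator D judging (bi-temporal pair, change map) tuples, ground-truth change map y, a per-pixel supervised term, and a weighting lambda (all of which are assumptions beyond what the abstract states), would read roughly as:

```latex
\min_G \max_D \;
\mathbb{E}_{(x_1, x_2, y)}\big[\log D(x_1, x_2, y)\big]
+ \mathbb{E}_{(x_1, x_2)}\big[\log\big(1 - D(x_1, x_2, G(x_1, x_2))\big)\big]
+ \lambda\, \mathcal{L}_{\mathrm{seg}}\big(G(x_1, x_2),\, y\big)
```

Here \(\mathcal{L}_{\mathrm{seg}}\) stands for a per-pixel supervised loss (e.g., cross-entropy between the generated and ground-truth change maps); the exact loss terms and weighting used in the paper would need to be taken from the full text.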
Related papers
- ChangeBind: A Hybrid Change Encoder for Remote Sensing Change Detection [16.62779899494721]
Change detection (CD) is a fundamental task in remote sensing (RS) which aims to detect the semantic changes between images of the same geographical region captured at different time stamps.
We propose an effective Siamese-based framework to encode the semantic changes occurring in the bi-temporal RS images.
arXiv Detail & Related papers (2024-04-26T17:47:14Z)
- Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery [6.5026098921977145]
We design the Change Guiding Network (CGNet) to address the insufficient representation of change features.
CGNet generates change maps with rich semantic information to guide multi-scale feature fusion.
A self-attention module named Change Guide Module (CGM) can effectively capture the long-distance dependency among pixels.
arXiv Detail & Related papers (2024-04-14T08:09:33Z)
- ELGC-Net: Efficient Local-Global Context Aggregation for Remote Sensing Change Detection [65.59969454655996]
We propose an efficient change detection framework, ELGC-Net, which leverages rich contextual information to precisely estimate change regions.
Our proposed ELGC-Net achieves state-of-the-art performance on remote sensing change detection benchmarks.
We also introduce ELGC-Net-LW, a lighter variant with significantly reduced computational complexity, suitable for resource-constrained settings.
arXiv Detail & Related papers (2024-03-26T17:46:25Z)
- TransY-Net: Learning Fully Transformer Networks for Change Detection of Remote Sensing Images [64.63004710817239]
We propose a novel Transformer-based learning framework named TransY-Net for remote sensing image CD.
It improves the feature extraction from a global view and combines multi-level visual features in a pyramid manner.
Our proposed method achieves a new state-of-the-art performance on four optical and two SAR image CD benchmarks.
arXiv Detail & Related papers (2023-10-22T07:42:19Z)
- VcT: Visual change Transformer for Remote Sensing Image Change Detection [16.778418602705287]
We propose a novel Visual change Transformer (VcT) model for visual change detection problem.
Top-K reliable tokens are mined from the map and refined using a clustering algorithm.
Extensive experiments on multiple benchmark datasets validated the effectiveness of our proposed VcT model.
arXiv Detail & Related papers (2023-10-17T17:25:31Z)
- Dual UNet: A Novel Siamese Network for Change Detection with Cascade Differential Fusion [4.651756476458979]
We propose a novel Siamese neural network for change detection task, namely Dual-UNet.
In contrast to previous methods that encode the bi-temporal images individually, we design an encoder differential-attention module to focus on the spatial difference relationships of pixels.
Experiments demonstrate that the proposed approach consistently outperforms the most advanced methods on popular seasonal change detection datasets.
arXiv Detail & Related papers (2022-08-12T14:24:09Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we prove that dynamically adapting network architectures tailored to each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- D-Unet: A Dual-encoder U-Net for Image Splicing Forgery Detection and Localization [108.8592577019391]
Image splicing forgery detection is a global binary classification task that distinguishes the tampered and non-tampered regions by image fingerprints.
We propose a novel network called dual-encoder U-Net (D-Unet) for image splicing forgery detection, which employs an unfixed encoder and a fixed encoder.
In an experimental comparison with state-of-the-art methods, D-Unet outperformed the other methods in both image-level and pixel-level detection.
arXiv Detail & Related papers (2020-12-03T10:54:02Z)
- Unsupervised Change Detection in Satellite Images with Generative Adversarial Network [20.81970476609318]
We propose a novel change detection framework utilizing a Generative Adversarial Network (GAN) to generate better coregistered images.
The optimized GAN model produces better coregistered images in which changes can be easily spotted, and the change map is then obtained through a comparison strategy.
arXiv Detail & Related papers (2020-09-08T10:26:04Z)
- Ventral-Dorsal Neural Networks: Object Detection via Selective Attention [51.79577908317031]
We propose a new framework called Ventral-Dorsal Networks (VDNets).
Inspired by the structure of the human visual system, we propose the integration of a "Ventral Network" and a "Dorsal Network".
Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches.
arXiv Detail & Related papers (2020-05-15T23:57:36Z)