Deep Siamese Domain Adaptation Convolutional Neural Network for
Cross-domain Change Detection in Multispectral Images
- URL: http://arxiv.org/abs/2004.05745v1
- Date: Mon, 13 Apr 2020 02:15:04 GMT
- Title: Deep Siamese Domain Adaptation Convolutional Neural Network for
Cross-domain Change Detection in Multispectral Images
- Authors: Hongruixuan Chen and Chen Wu and Bo Du and Liangpei Zhang
- Abstract summary: We propose a novel deep siamese domain adaptation convolutional neural network (DSDANet) architecture for cross-domain change detection.
To the best of our knowledge, it is the first time that such a domain adaptation-based deep network is proposed for change detection.
- Score: 28.683734356006262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep learning has achieved promising performance in the change
detection task. However, deep models are task-specific and data set bias often
exists, so it is difficult to transfer a network trained on one multi-temporal
data set (source domain) to another multi-temporal data set with very limited
(or even no) labeled data (target domain). In this paper, we propose a novel
deep siamese domain adaptation convolutional neural network (DSDANet)
architecture for cross-domain change detection. In DSDANet, a siamese
convolutional neural network first extracts spatial-spectral features from
multi-temporal images. Then, through multiple kernel maximum mean discrepancy
(MK-MMD), the learned feature representation is embedded into a reproducing
kernel Hilbert space (RKHS), in which the distributions of the two domains can
be explicitly matched. By optimizing the network parameters and kernel
coefficients with the source labeled data and target unlabeled data, DSDANet
can learn a transferable feature representation that bridges the discrepancy
between the two domains. To the best of our knowledge, this is the first time
that such a domain adaptation-based deep network has been proposed for change
detection. Theoretical analysis and experimental results demonstrate the
effectiveness and potential of the proposed method.
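The MK-MMD criterion described above can be sketched as follows. This is a minimal NumPy illustration of a multiple-kernel MMD estimate with a Gaussian kernel family; the fixed, hand-picked bandwidths are an assumption for illustration, not the learned kernel coefficients or the network architecture of the paper:

```python
import numpy as np

def _sq_dists(a, b):
    # Pairwise squared Euclidean distances, shape (len(a), len(b)).
    return ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)

def mk_mmd(source, target, bandwidths=(1.0, 2.0, 4.0)):
    """Biased multiple-kernel MMD^2 estimate between two feature batches.

    `source` and `target` are (n_samples, n_features) arrays of learned
    features; each Gaussian bandwidth contributes one kernel to the sum.
    Zero when the two batches are identical; grows as the feature
    distributions drift apart.
    """
    mmd2 = 0.0
    for bw in bandwidths:
        k = lambda d: np.exp(-d / (2.0 * bw ** 2))
        mmd2 += (k(_sq_dists(source, source)).mean()
                 + k(_sq_dists(target, target)).mean()
                 - 2.0 * k(_sq_dists(source, target)).mean())
    return mmd2
```

In a training loop this quantity would be added to the source-domain classification loss, so that minimizing the total loss pulls the source and target feature distributions together in the RKHS.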
Related papers
- Compositional Semantic Mix for Domain Adaptation in Point Cloud
Segmentation [65.78246406460305]
Compositional semantic mixing represents the first unsupervised domain adaptation technique for point cloud segmentation.
We present a two-branch symmetric network architecture capable of concurrently processing point clouds from a source domain (e.g. synthetic) and point clouds from a target domain (e.g. real-world).
arXiv Detail & Related papers (2023-08-28T14:43:36Z)
- Dsfer-Net: A Deep Supervision and Feature Retrieval Network for Bitemporal
Change Detection Using Modern Hopfield Networks [35.415260892693745]
We propose a Deep Supervision and FEature Retrieval network (Dsfer-Net) for bitemporal change detection.
Specifically, the highly representative deep features of bitemporal images are jointly extracted through a fully convolutional Siamese network.
Our end-to-end network establishes a novel framework by aggregating retrieved features and feature pairs from different layers.
arXiv Detail & Related papers (2023-04-03T16:01:03Z)
- Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation [86.02485817444216]
We introduce Multi-Prompt Alignment (MPA), a simple yet efficient framework for multi-source UDA.
MPA denoises the learned prompts through an auto-encoding process and aligns them by maximizing the agreement of all the reconstructed prompts.
Experiments show that MPA achieves state-of-the-art results on three popular datasets with an impressive average accuracy of 54.1% on DomainNet.
arXiv Detail & Related papers (2022-09-30T03:40:10Z)
- TDACNN: Target-domain-free Domain Adaptation Convolutional Neural Network
for Drift Compensation in Gas Sensors [6.451060076703026]
In this paper, deep learning based on a target-domain-free domain adaptation convolutional neural network (TDACNN) is proposed.
The main concept is that CNNs extract not only the domain-specific features of samples but also the domain-invariant features underlying both the source and target domains.
Experiments on two drift datasets under different settings demonstrate the superiority of TDACNN compared with several state-of-the-art methods.
arXiv Detail & Related papers (2021-10-14T16:30:17Z)
- MD-CSDNetwork: Multi-Domain Cross Stitched Network for Deepfake
Detection [80.83725644958633]
Current deepfake generation methods leave discriminative artifacts in the frequency spectrum of fake images and videos.
We present a novel approach, termed as MD-CSDNetwork, for combining the features in the spatial and frequency domains to mine a shared discriminative representation.
arXiv Detail & Related papers (2021-09-15T14:11:53Z)
- Multi-Source Domain Adaptation for Object Detection [52.87890831055648]
We propose a unified Faster R-CNN based framework, termed Divide-and-Merge Spindle Network (DMSN)
DMSN can simultaneously enhance domain invariance and preserve discriminative power.
We develop a novel pseudo learning algorithm to approximate optimal parameters of pseudo target subset.
arXiv Detail & Related papers (2021-06-30T03:17:20Z)
- DSDANet: Deep Siamese Domain Adaptation Convolutional Neural Network for
Cross-domain Change Detection [44.05317423742678]
We propose a novel deep siamese domain adaptation convolutional neural network architecture for cross-domain change detection.
To the best of our knowledge, it is the first time that such a domain adaptation-based deep network is proposed for change detection.
arXiv Detail & Related papers (2020-06-16T15:00:54Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.