An End-to-end Supervised Domain Adaptation Framework for Cross-Domain
Change Detection
- URL: http://arxiv.org/abs/2204.00154v1
- Date: Fri, 1 Apr 2022 01:35:30 GMT
- Title: An End-to-end Supervised Domain Adaptation Framework for Cross-Domain
Change Detection
- Authors: Jia Liu, Wenjie Xuan, Yuhang Gan, Juhua Liu, Bo Du
- Abstract summary: We propose an end-to-end Supervised Domain Adaptation framework for cross-domain Change Detection.
Our SDACD presents collaborative adaptations from both image and feature perspectives with supervised learning.
Our framework pushes several representative baseline models up to new State-Of-The-Art records.
- Score: 29.70695339406896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing deep learning-based change detection methods try to elaborately
design complicated neural networks with powerful feature representations, but
ignore the universal domain shift induced by time-varying land cover changes,
including luminance fluctuations and season changes between pre-event and
post-event images, thereby producing sub-optimal results. In this paper, we
propose an end-to-end Supervised Domain Adaptation framework for cross-domain
Change Detection, namely SDACD, to effectively alleviate the domain shift
between bi-temporal images for better change predictions. Specifically, our
SDACD presents collaborative adaptations from both image and feature
perspectives with supervised learning. Image adaptation exploits generative
adversarial learning with cycle-consistency constraints to perform cross-domain
style transformation, effectively narrowing the domain gap in a two-sided
generation fashion. As for feature adaptation, we extract domain-invariant
features to align different feature distributions in the feature space, which
could further reduce the domain gap of cross-domain images. To further improve
the performance, we combine three types of bi-temporal images for the final
change prediction, including the initial input bi-temporal images and two
generated bi-temporal images from the pre-event and post-event domains.
Extensive experiments and analyses on two benchmarks demonstrate the
effectiveness and universality of our proposed framework. Notably, our
framework pushes several representative baseline models up to new
State-Of-The-Art records, achieving 97.34% and 92.36% on the CDD and WHU
building datasets, respectively. The source code and models are publicly
available at https://github.com/Perfect-You/SDACD.
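The image-adaptation step described in the abstract relies on a standard cycle-consistency constraint: an image translated to the other temporal domain and back should reconstruct the original. A minimal sketch of that loss is below; the generator callables `g_pre2post`/`g_post2pre` and the weight `lam` are illustrative placeholders, not the authors' actual networks or hyperparameters.

```python
import numpy as np

def cycle_consistency_loss(x_pre, x_post, g_pre2post, g_post2pre, lam=10.0):
    """L1 cycle-consistency loss for two-sided cross-domain style transfer.

    g_pre2post / g_post2pre stand in for the generator networks; here they
    are any callables mapping an image array to one of the same shape.
    """
    # Translate each image to the other domain and back again.
    cycled_pre = g_post2pre(g_pre2post(x_pre))
    cycled_post = g_pre2post(g_post2pre(x_post))
    # Reconstructions should match the originals (cycle consistency).
    return lam * (np.abs(cycled_pre - x_pre).mean()
                  + np.abs(cycled_post - x_post).mean())

# Toy check with identity "generators": reconstruction is perfect,
# so the loss is exactly zero.
img = np.random.rand(3, 64, 64)
identity = lambda x: x
print(cycle_consistency_loss(img, img, identity, identity))  # 0.0
```

In the full framework this term would be trained jointly with adversarial losses on both generated domains and the supervised change-detection loss.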
Related papers
- Self-supervised Domain-agnostic Domain Adaptation for Satellite Images [18.151134198549574]
We propose a self-supervised domain-agnostic domain adaptation (SS(DA)2) method to perform domain adaptation without such a domain definition.
We first design a contrastive generative adversarial loss to train a generative network to perform image-to-image translation between any two satellite image patches.
Then, we improve the generalizability of the downstream models by augmenting the training data with different testing spectral characteristics.
arXiv Detail & Related papers (2023-09-20T07:37:23Z)
- PiPa: Pixel- and Patch-wise Self-supervised Learning for Domain Adaptative Semantic Segmentation [100.6343963798169]
Unsupervised Domain Adaptation (UDA) aims to enhance the generalization of the learned model to other domains.
We propose a unified pixel- and patch-wise self-supervised learning framework, called PiPa, for domain adaptive semantic segmentation.
arXiv Detail & Related papers (2022-11-14T18:31:24Z)
- PIT: Position-Invariant Transform for Cross-FoV Domain Adaptation [53.428312630479816]
We observe that the Field of View (FoV) gap induces noticeable instance appearance differences between the source and target domains.
Motivated by these observations, we propose the Position-Invariant Transform (PIT) to better align images in different domains.
arXiv Detail & Related papers (2021-08-16T15:16:47Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- RPCL: A Framework for Improving Cross-Domain Detection with Auxiliary Tasks [74.10747285807315]
Cross-Domain Detection (XDD) aims to train an object detector using labeled images from a source domain that performs well in a target domain where only unlabeled images are available.
This paper provides a complementary solution to align domains by learning the same auxiliary tasks in both domains simultaneously.
arXiv Detail & Related papers (2021-04-18T02:56:19Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.