Causal Prototype-inspired Contrast Adaptation for Unsupervised Domain
Adaptive Semantic Segmentation of High-resolution Remote Sensing Imagery
- URL: http://arxiv.org/abs/2403.03704v1
- Date: Wed, 6 Mar 2024 13:39:18 GMT
- Title: Causal Prototype-inspired Contrast Adaptation for Unsupervised Domain
Adaptive Semantic Segmentation of High-resolution Remote Sensing Imagery
- Authors: Jingru Zhu, Ya Guo, Geng Sun, Liang Hong and Jie Chen
- Abstract summary: We propose a causal prototype-inspired contrast adaptation (CPCA) method to explore the invariant causal mechanisms between different HRSI domains and their semantic labels.
It disentangles causal features and bias features from the source and target domain images through a causal feature disentanglement module.
To further de-correlate causal and bias features, a causal intervention module is introduced to intervene on the bias features to generate counterfactual unbiased samples.
- Score: 8.3316355693186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic segmentation of high-resolution remote sensing imagery (HRSI)
suffers from domain shift, which degrades model performance on unseen domains.
Unsupervised domain adaptive (UDA) semantic segmentation aims to adapt a
segmentation model trained on a labeled source domain to an unlabeled target
domain. However, existing UDA semantic segmentation models tend to align pixels
or features based on statistical information correlated with labels in the
source and target domain data, and make predictions accordingly, which leads to
uncertain and fragile prediction results. In this paper, we propose a causal
prototype-inspired contrast adaptation (CPCA) method to explore the invariant
causal mechanisms between different HRSI domains and their semantic labels. It
first disentangles causal features and bias features from the source and target
domain images through a causal feature disentanglement module. Then, a causal
prototypical contrast module is used to learn domain-invariant causal features.
To further de-correlate causal and bias features, a causal intervention module
is introduced to intervene on the bias features and generate counterfactual
unbiased samples. By forcing the causal features to satisfy the principles of
separability, invariance, and intervention, CPCA can simulate the causal
factors of the source and target domains and make decisions on the target
domain based on the causal features, which yields improved generalization
ability. Extensive experiments on three cross-domain tasks show that CPCA is
remarkably superior to state-of-the-art methods.
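The causal prototypical contrast module described above pulls each feature toward the prototype of its own class and away from the prototypes of other classes. The abstract does not give the exact loss, so the following is a minimal, hypothetical sketch of a prototype-based InfoNCE-style contrast (names such as `class_prototypes`, `proto_contrast_loss`, and the temperature `tau` are illustrative assumptions, not the paper's API):

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def class_prototypes(features, labels):
    # Mean feature vector per class: a simple stand-in for the
    # "causal prototypes" mentioned in the abstract.
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(f))
        for i, x in enumerate(f):
            acc[i] += x
        counts[y] = counts.get(y, 0) + 1
    return {y: [x / counts[y] for x in s] for y, s in sums.items()}

def proto_contrast_loss(feature, label, prototypes, tau=0.1):
    # InfoNCE-style objective: maximize similarity to the prototype of
    # the feature's own class relative to all other class prototypes.
    sims = {y: cosine(feature, p) / tau for y, p in prototypes.items()}
    m = max(sims.values())  # subtract max for numerical stability
    denom = sum(math.exp(s - m) for s in sims.values())
    return -(sims[label] - m) + math.log(denom)
```

A feature close to its own class prototype yields a lower loss than the same feature assigned the wrong label, which is the behavior a prototypical contrast term is meant to enforce.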
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment
Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments show the great performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z) - Cross Contrasting Feature Perturbation for Domain Generalization [11.863319505696184]
Domain generalization aims to learn a robust model from source domains that generalizes well on unseen target domains.
Recent studies focus on generating novel domain samples or features to diversify distributions complementary to source domains.
We propose an online one-stage Cross Contrasting Feature Perturbation framework to simulate domain shift.
arXiv Detail & Related papers (2023-07-24T03:27:41Z) - CILF:Causality Inspired Learning Framework for Out-of-Distribution
Vehicle Trajectory Prediction [0.0]
Trajectory prediction is critical for autonomous driving vehicles.
Most existing methods tend to model the correlation between the history trajectory (input) and the future trajectory (output).
arXiv Detail & Related papers (2023-07-11T05:21:28Z) - Towards Source-free Domain Adaptive Semantic Segmentation via Importance-aware and Prototype-contrast Learning [26.544837987747766]
We propose an end-to-end source-free domain adaptation semantic segmentation method via Importance-Aware and Prototype-Contrast learning.
The proposed IAPC framework effectively extracts domain-invariant knowledge from the well-trained source model and learns domain-specific knowledge from the unlabeled target domain.
arXiv Detail & Related papers (2023-06-02T15:09:19Z) - Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z) - Decompose to Adapt: Cross-domain Object Detection via Feature
Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method is demonstrated to be effective with wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z) - AFAN: Augmented Feature Alignment Network for Cross-Domain Object
Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z) - Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z) - Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from source domain to target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers the source-specifics.
We create counterfactual features that distinguish the domain-specifics from domain-sharable part.
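Creating counterfactual features by recombining the domain-sharable part of one sample with the domain-specific part of another is a recurring idea in these interventional approaches. As a purely illustrative sketch (the function name and rotation scheme are assumptions, not from any of the listed papers), one simple recombination looks like:

```python
def counterfactual_samples(causal_feats, bias_feats):
    # Rotate the bias (domain-specific) features by one position so every
    # causal (domain-sharable) part is recombined with a *different*
    # sample's bias part, yielding counterfactual feature vectors.
    n = len(bias_feats)
    rotated = [bias_feats[(i + 1) % n] for i in range(n)]
    return [c + b for c, b in zip(causal_feats, rotated)]
```

Training on such recombined samples discourages the classifier from relying on the bias part, since the same causal content now appears alongside many different domain-specific contexts.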
arXiv Detail & Related papers (2020-11-07T09:53:13Z) - Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z) - Domain Adaptive Object Detection via Asymmetric Tri-way Faster-RCNN [15.976076198305414]
Unsupervised domain adaptive object detection aims to reduce the disparity between domains, where the source domain is label-rich while the target domain is label-agnostic.
The asymmetric structure, consisting of a chief net and an independent ancillary net, essentially overcomes the source risk collapse caused by parameter sharing.
The adaptation safety of the proposed ATF detector is guaranteed.
arXiv Detail & Related papers (2020-07-03T09:30:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.