Continuously Indexed Domain Adaptation
- URL: http://arxiv.org/abs/2007.01807v2
- Date: Sun, 30 Aug 2020 02:31:43 GMT
- Title: Continuously Indexed Domain Adaptation
- Authors: Hao Wang and Hao He and Dina Katabi
- Abstract summary: We propose the first method for continuously indexed domain adaptation.
Our approach combines traditional adversarial adaptation with a novel discriminator that models the encoding-conditioned domain index distribution.
Our empirical results show that our approach outperforms the state-of-the-art domain adaptation methods on both synthetic and real-world medical datasets.
- Score: 24.09142831355124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing domain adaptation focuses on transferring knowledge between domains
with categorical indices (e.g., between datasets A and B). However, many tasks
involve continuously indexed domains. For example, in medical applications, one
often needs to transfer disease analysis and prediction across patients of
different ages, where age acts as a continuous domain index. Such tasks are
challenging for prior domain adaptation methods since they ignore the
underlying relation among domains. In this paper, we propose the first method
for continuously indexed domain adaptation. Our approach combines traditional
adversarial adaptation with a novel discriminator that models the
encoding-conditioned domain index distribution. Our theoretical analysis
demonstrates the value of leveraging the domain index to generate invariant
features across a continuous range of domains. Our empirical results show that
our approach outperforms the state-of-the-art domain adaptation methods on both
synthetic and real-world medical datasets.
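The min-max structure described above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: the linear encoder, predictor, and discriminator are hypothetical stand-ins for neural networks, and regressing the index with a squared error is a point-estimate simplification of modeling the full encoding-conditioned index distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: inputs x, binary labels y, continuous domain index u (e.g. age).
n, d_in, d_enc = 64, 5, 3
x = rng.normal(size=(n, d_in))
u = rng.uniform(20.0, 80.0, size=(n, 1))        # continuous domain index
y = (x[:, :1] + 0.01 * u > 0.5).astype(float)   # labels drift with the index

# Hypothetical linear stand-ins for encoder E, predictor F, discriminator D.
W_E = rng.normal(size=(d_in, d_enc))
W_F = rng.normal(size=(d_enc, 1))
W_D = rng.normal(size=(d_enc, 1))

e = x @ W_E                                # encodings
p = 1.0 / (1.0 + np.exp(-(e @ W_F)))       # predicted label probabilities
u_hat = e @ W_D                            # discriminator regresses the index

eps = 1e-7
pred_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
disc_loss = np.mean((u_hat - u) ** 2)      # D minimizes this regression error
lam = 1e-4                                 # illustrative trade-off weight
enc_loss = pred_loss - lam * disc_loss     # E minimizes the task loss while
                                           # maximizing D's error (min-max game)
```

At equilibrium of this game, the encodings carry no information about the domain index, which is the sense in which the features become invariant across the continuous range of domains.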
Related papers
- FairDomain: Achieving Fairness in Cross-Domain Medical Image Segmentation and Classification [24.985944558474166]
This paper presents a pioneering systemic study into fairness under domain shifts.
We employ state-of-the-art domain adaptation (DA) and generalization (DG) algorithms for both medical segmentation and classification tasks.
We also introduce a novel plug-and-play fair identity attention (FIA) module that adapts to various DA and DG algorithms to improve fairness by using self-attention to adjust feature importance based on demographic attributes.
arXiv Detail & Related papers (2024-07-11T18:52:32Z) - Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments show the great performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z) - DAOT: Domain-Agnostically Aligned Optimal Transport for Domain-Adaptive Crowd Counting [35.83485358725357]
Domain adaptation is commonly employed in crowd counting to bridge the domain gaps between different datasets.
Existing domain adaptation methods tend to focus on inter-dataset differences while overlooking the intra-domain differences within the same dataset.
We propose a Domain-agnostically Aligned Optimal Transport (DAOT) strategy that aligns domain-agnostic factors between domains.
arXiv Detail & Related papers (2023-08-10T02:59:40Z) - Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
arXiv Detail & Related papers (2023-04-07T15:46:38Z) - Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D^3G to learn domain-specific models.
Our results show that D^3G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z) - Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation [8.46755868848403]
We propose an adversarial variational Bayesian framework that infers domain indices from multi-domain data.
Our theoretical analysis shows that our framework finds the optimal domain index at equilibrium.
arXiv Detail & Related papers (2023-02-06T04:38:14Z) - Single-domain Generalization in Medical Image Segmentation via Test-time Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates the semantic shape prior information of segmentation that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z) - Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z) - Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
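A rough sketch of the gradual self-training idea behind this line of work: pseudo-label each successively shifted domain with the current model, then refit on those pseudo-labels. The 1-D threshold model and synthetic shift schedule below are purely illustrative assumptions, not AuxSelfTrain's actual algorithm or data.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_threshold(x, y):
    # Midpoint of the two class means: a stand-in for retraining any model.
    return (x[y == 0].mean() + x[y == 1].mean()) / 2.0

def make_domain(shift, n=200):
    # Two 1-D Gaussian classes whose means drift by `shift` from the source.
    x = np.concatenate([rng.normal(0.0 + shift, 0.2, n),
                        rng.normal(1.0 + shift, 0.2, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

# Labeled source domain, then unlabeled domains drifting toward the target.
x_src, y_src = make_domain(0.0)
thr = fit_threshold(x_src, y_src)

for k in range(1, 4):                       # intermediate domains, then target
    x_k, _ = make_domain(0.25 * k)          # true labels unavailable here
    pseudo = (x_k > thr).astype(int)        # pseudo-label with current model
    thr = fit_threshold(x_k, pseudo)        # retrain on its own pseudo-labels

x_tgt, y_tgt = make_domain(0.75)
adapted_acc = np.mean((x_tgt > thr).astype(int) == y_tgt)
```

Because each intermediate shift is small, the pseudo-labels stay mostly correct at every step, so the decision boundary tracks the drift; applying the source threshold directly to the final target would misclassify a large fraction of one class.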
arXiv Detail & Related papers (2021-06-18T03:15:25Z) - Learning to Learn with Variational Information Bottleneck for Domain Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations by the proposed principle of meta variational information bottleneck, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.