Enhancing Evolving Domain Generalization through Dynamic Latent
Representations
- URL: http://arxiv.org/abs/2401.08464v1
- Date: Tue, 16 Jan 2024 16:16:42 GMT
- Title: Enhancing Evolving Domain Generalization through Dynamic Latent
Representations
- Authors: Binghui Xie, Yongqiang Chen, Jiaqi Wang, Kaiwen Zhou, Bo Han, Wei
Meng, James Cheng
- Abstract summary: We propose a new framework called Mutual Information-Based Sequential Autoencoders (MISTS).
MISTS learns both dynamic and invariant features by imposing information-theoretic constraints on sequential autoencoders.
Our experimental results on both synthetic and real-world datasets demonstrate that MISTS succeeds in capturing both evolving and invariant information.
- Score: 47.3810472814143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization is a critical challenge for machine learning systems.
Prior domain generalization methods focus on extracting domain-invariant
features across several stationary domains to enable generalization to new
domains. However, in non-stationary tasks where new domains evolve in an
underlying continuous structure, such as time, merely extracting the invariant
features is insufficient for generalization to the evolving new domains.
Nevertheless, it is non-trivial to learn both evolving and invariant features
within a single model due to their conflicts. To bridge this gap, we build
causal models to characterize the distribution shifts concerning the two
patterns, and propose to learn both dynamic and invariant features via a new
framework called Mutual Information-Based Sequential Autoencoders (MISTS).
MISTS imposes information-theoretic constraints on sequential autoencoders to
disentangle the dynamic and invariant features, and leverages a domain-adaptive
classifier to make predictions based on both evolving and invariant
information. Our experimental results on both synthetic and real-world datasets
demonstrate that MISTS succeeds in capturing both evolving and invariant
information, and presents promising results in evolving domain generalization
tasks.
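As a concrete illustration, here is a minimal sketch of the MISTS idea in PyTorch. The module layout, latent sizes, and the correlation-based penalty are illustrative assumptions, not the authors' code; the paper's actual disentanglement constraints are mutual-information-based.

```python
# Hypothetical sketch of a MISTS-style sequential autoencoder (not the authors' code).
# Dynamic features z_d evolve over the domain index (e.g., time); invariant
# features z_i are shared across domains.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MISTSSketch(nn.Module):
    def __init__(self, x_dim, z_dyn=16, z_inv=16, hidden=64, n_classes=2):
        super().__init__()
        # A GRU captures the continuous structure along which domains evolve.
        self.rnn = nn.GRU(x_dim, hidden, batch_first=True)
        self.to_dyn = nn.Linear(hidden, z_dyn)   # per-step (evolving) latent
        self.to_inv = nn.Linear(hidden, z_inv)   # pooled (invariant) latent
        self.decoder = nn.Sequential(
            nn.Linear(z_dyn + z_inv, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))
        # Domain-adaptive classifier: conditions on both kinds of features.
        self.classifier = nn.Linear(z_dyn + z_inv, n_classes)

    def forward(self, x):                  # x: (batch, time, x_dim)
        h, _ = self.rnn(x)
        z_d = self.to_dyn(h)               # (batch, time, z_dyn), evolves per step
        z_i = self.to_inv(h.mean(dim=1))   # (batch, z_inv), shared across steps
        z_i_rep = z_i.unsqueeze(1).expand(-1, x.size(1), -1)
        z = torch.cat([z_d, z_i_rep], dim=-1)
        return self.decoder(z), self.classifier(z), z_d, z_i

def disentangle_penalty(z_d, z_i):
    # Crude proxy for the paper's MI constraints: penalize squared
    # cross-correlation between centered dynamic and invariant codes.
    zd = z_d.mean(dim=1)                   # pool over time
    zd = zd - zd.mean(0)
    zi = z_i - z_i.mean(0)
    c = (zd.T @ zi) / zd.size(0)
    return (c ** 2).mean()

def loss_fn(x, y, model, lam=1.0):
    x_hat, logits, z_d, z_i = model(x)
    rec = F.mse_loss(x_hat, x)             # sequential-autoencoder reconstruction
    cls = F.cross_entropy(logits.mean(dim=1), y)
    return rec + cls + lam * disentangle_penalty(z_d, z_i)
```

The key structural choice is that z_d is computed per time step, so it can track the evolving domains, while z_i is pooled across the sequence, forcing it to be stable across domains.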
Related papers
- Causality-inspired Latent Feature Augmentation for Single Domain Generalization [13.735443005394773]
Single domain generalization (Single-DG) aims to train a generalizable model on a single source domain that performs well on unknown target domains.
In this data-scarce setting, expanding the coverage of the source domain and finding intrinsic causal features across different distributions are key to enhancing the model's generalization ability.
We propose a novel causality-inspired latent feature augmentation method for Single-DG by learning the meta-knowledge of feature-level transformation based on causal learning and interventions.
arXiv Detail & Related papers (2024-06-10T02:42:25Z) - HCVP: Leveraging Hierarchical Contrastive Visual Prompt for Domain
Generalization [69.33162366130887]
Domain Generalization (DG) endeavors to create machine learning models that excel in unseen scenarios by learning invariant features.
We introduce a novel method designed to supplement the model with domain-level and task-specific characteristics.
This approach aims to guide the model in more effectively separating invariant features from specific characteristics, thereby boosting generalization.
arXiv Detail & Related papers (2024-01-18T04:23:21Z) - Algorithmic Fairness Generalization under Covariate and Dependence Shifts Simultaneously [28.24666589680547]
We introduce a simple but effective approach for learning a fair and invariant classifier.
By augmenting various synthetic data domains through the model, the classifier is learned on the source domains.
It can then be generalized to unknown target domains, maintaining both predictive performance and fairness.
arXiv Detail & Related papers (2023-11-23T05:52:00Z) - Domain Generalization In Robust Invariant Representation [10.132611239890345]
In this paper, we investigate the generalization of invariant representations on out-of-distribution data.
We show that the invariant model learns unstructured latent representations that are robust to distribution shifts.
arXiv Detail & Related papers (2023-04-07T00:58:30Z) - Learning to Learn Domain-invariant Parameters for Domain Generalization [29.821634033299855]
Domain generalization (DG) aims to overcome distribution shift by capturing domain-invariant representations from source domains.
We propose two modules: Domain Decoupling and Combination (DDC) and Domain-invariance-guided Backpropagation (DIGB).
Our proposed method has achieved state-of-the-art performance with strong generalization capability.
arXiv Detail & Related papers (2022-11-04T07:19:34Z) - Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z) - TAL: Two-stream Adaptive Learning for Generalizable Person
Re-identification [115.31432027711202]
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose two-stream adaptive learning (TAL) to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
arXiv Detail & Related papers (2021-11-29T01:27:42Z) - Instrumental Variable-Driven Domain Generalization with Unobserved
Confounders [53.735614014067394]
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) that removes the bias of the unobserved confounders with two-stage learning (a minimal sketch of this two-stage procedure follows this list).
In the first stage, it learns the conditional distribution of the input features of one domain given the input features of another domain.
In the second stage, it estimates the feature-label relationship by predicting labels with the learned conditional distribution.
arXiv Detail & Related papers (2021-10-04T13:32:57Z) - Learning to Learn with Variational Information Bottleneck for Domain
Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations via a proposed principle of meta variational information bottleneck, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)