Generalizing to Evolving Domains with Latent Structure-Aware Sequential
Autoencoder
- URL: http://arxiv.org/abs/2205.07649v1
- Date: Mon, 16 May 2022 13:11:29 GMT
- Title: Generalizing to Evolving Domains with Latent Structure-Aware Sequential
Autoencoder
- Authors: Tiexin Qin and Shiqi Wang and Haoliang Li
- Abstract summary: We introduce a probabilistic framework called Latent Structure-aware Sequential Autoencoder (LSSAE) to tackle the problem of evolving domain generalization.
Experimental results on both synthetic and real-world datasets show that LSSAE achieves superior performance.
- Score: 32.46804768486719
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization aims to improve the generalization capability of
machine learning systems to out-of-distribution (OOD) data. Existing domain
generalization techniques assume stationary and discrete environments when
tackling the generalization issue caused by OOD data. However, many real-world
tasks in non-stationary environments (e.g. self-driving car systems, sensor
measurements) involve more complex, continuously evolving domain drift, which
raises new challenges for the problem of domain generalization. In this paper,
we formulate this setting as the problem of evolving domain generalization.
Specifically, we introduce a probabilistic framework called Latent
Structure-aware Sequential Autoencoder (LSSAE) that tackles evolving domain
generalization by exploring the underlying continuous structure in the latent
space of deep neural networks, where we aim to identify the two major factors,
covariate shift and concept shift, that account for distribution shift in
non-stationary environments. Experimental results on both synthetic and
real-world datasets show that LSSAE achieves superior performance in the
evolving domain generalization setting.
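The abstract above describes LSSAE as factoring distribution shift into two latent variables, one for covariate shift and one for concept shift, whose priors evolve across sequential domains. The following is a minimal structural sketch of that idea only, not the authors' implementation: the dimensions, the linear encoder/decoder, and the linear prior dynamics are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, W, b):
    return x @ W + b

class TinyLSSAE:
    """Structural sketch: each sequential domain gets two latents,
    z_v (covariate shift) and z_c (concept shift), whose priors
    evolve over the domain index. Illustrative only."""

    def __init__(self, x_dim=8, z_dim=4):
        self.z_dim = z_dim
        self.enc_W = rng.standard_normal((x_dim, 2 * z_dim)) * 0.1
        self.enc_b = np.zeros(2 * z_dim)
        self.dec_W = rng.standard_normal((2 * z_dim, x_dim)) * 0.1
        self.dec_b = np.zeros(x_dim)
        # simple linear dynamics standing in for the evolving priors
        self.A_v = np.eye(z_dim) * 0.9
        self.A_c = np.eye(z_dim) * 0.9

    def encode(self, x):
        h = linear(x, self.enc_W, self.enc_b)
        return h[: self.z_dim], h[self.z_dim :]   # z_v, z_c

    def decode(self, z_v, z_c):
        return linear(np.concatenate([z_v, z_c]), self.dec_W, self.dec_b)

    def prior_step(self, z_v, z_c):
        # the priors for domain t+1 evolve from the latents at domain t
        return self.A_v @ z_v, self.A_c @ z_c

model = TinyLSSAE()
x_seq = rng.standard_normal((5, 8))   # observations from 5 sequential domains
z_v_prior = np.zeros(model.z_dim)
z_c_prior = np.zeros(model.z_dim)
recons = []
for x in x_seq:
    z_v, z_c = model.encode(x)        # posterior latents from the data
    recons.append(model.decode(z_v, z_c))
    z_v_prior, z_c_prior = model.prior_step(z_v, z_c)
# extrapolate one step beyond the last observed domain
x_future = model.decode(z_v_prior, z_c_prior)
```

In the actual probabilistic framework the encoder would output distribution parameters and training would optimize a variational objective; the sketch keeps only the two-latent, evolving-prior structure that the abstract highlights.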
Related papers
- PointDGMamba: Domain Generalization of Point Cloud Classification via Generalized State Space Model [77.00221501105788]
Domain Generalization (DG) has been recently explored to improve the generalizability of point cloud classification (PCC) models toward unseen domains.
We present the first work that studies the generalizability of state space models (SSMs) in DG PCC.
We propose a novel framework, PointDGMamba, that excels in strong generalizability toward unseen domains.
arXiv Detail & Related papers (2024-08-24T12:53:48Z) - Generalizing across Temporal Domains with Koopman Operators [15.839454056986446]
In this study, we contribute novel theoretical results showing that aligning conditional distributions leads to a reduction of generalization bounds.
Our analysis serves as a key motivation for solving the Temporal Domain Generalization (TDG) problem through the application of Koopman Neural Operators.
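The Koopman-operator view above treats temporal domain drift as linear dynamics in a lifted embedding space. A minimal sketch of that idea using the standard least-squares (DMD-style) Koopman estimate on synthetic embeddings; the dimensions, data, and variable names are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Suppose each temporal domain t is summarized by an embedding g_t, and
# the drift is (approximately) linear in that lifted space: g_{t+1} = K g_t.
d, T = 4, 30
K_true = np.eye(d) + 0.05 * rng.standard_normal((d, d))
G = [rng.standard_normal(d)]
for _ in range(T - 1):
    G.append(K_true @ G[-1])
G = np.stack(G)                       # (T, d) sequence of domain embeddings

# Fit K by least squares over consecutive pairs (g_t, g_{t+1}).
X, Y = G[:-1].T, G[1:].T
K_hat = Y @ np.linalg.pinv(X)

# Extrapolate to the next, unseen temporal domain.
g_next = K_hat @ G[-1]
err = np.linalg.norm(K_hat - K_true)
```

Because the synthetic dynamics here are exactly linear, the least-squares estimate recovers the operator up to numerical error; real domain drift would only be approximately linear in the learned embedding space.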
arXiv Detail & Related papers (2024-02-12T17:45:40Z) - Enhancing Evolving Domain Generalization through Dynamic Latent
Representations [47.3810472814143]
We propose Mutual Information-Based Sequential Autoencoders (MISTS), a new framework that learns both dynamic and invariant features.
Our experimental results on both synthetic and real-world datasets demonstrate that MISTS succeeds in capturing both evolving and invariant information.
arXiv Detail & Related papers (2024-01-16T16:16:42Z) - Complementary Domain Adaptation and Generalization for Unsupervised
Continual Domain Shift Learning [4.921899151930171]
Unsupervised continual domain shift learning is a significant challenge in real-world applications.
We propose Complementary Domain Adaptation and Generalization (CoDAG), a simple yet effective learning framework.
Our approach is model-agnostic, meaning that it is compatible with any existing domain adaptation and generalization algorithms.
arXiv Detail & Related papers (2023-03-28T09:05:15Z) - Foresee What You Will Learn: Data Augmentation for Domain Generalization
in Non-Stationary Environments [14.344721944207599]
Existing domain generalization methods aim to learn a model that performs well even on unseen domains.
We propose Directional Domain Augmentation (DDA), which simulates the unseen target features by mapping source data as augmentations through a domain transformer.
We evaluate the proposed method on both synthetic and real-world datasets, and empirical results show that our approach outperforms existing methods.
arXiv Detail & Related papers (2023-01-19T01:51:37Z) - Localized Adversarial Domain Generalization [83.4195658745378]
Adversarial domain generalization is a popular approach to domain generalization.
We propose localized adversarial domain generalization with space compactness maintenance (LADG).
We conduct comprehensive experiments on the Wilds DG benchmark to validate our approach.
arXiv Detail & Related papers (2022-05-09T08:30:31Z) - Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z) - Unsupervised Domain Generalization for Person Re-identification: A
Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
arXiv Detail & Related papers (2021-11-30T02:35:51Z) - Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).
Based on the transformation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all summaries) and is not responsible for any consequences arising from its use.