Generalizing across Temporal Domains with Koopman Operators
- URL: http://arxiv.org/abs/2402.07834v2
- Date: Thu, 15 Feb 2024 18:28:51 GMT
- Title: Generalizing across Temporal Domains with Koopman Operators
- Authors: Qiuhao Zeng, Wei Wang, Fan Zhou, Gezheng Xu, Ruizhi Pu, Changjian
Shui, Christian Gagne, Shichun Yang, Boyu Wang, Charles X. Ling
- Abstract summary: In this study, we contribute novel theoretical results showing that aligning conditional distributions leads to the reduction of generalization bounds.
Our analysis serves as a key motivation for solving the Temporal Domain Generalization (TDG) problem through the application of Koopman Neural Operators.
- Score: 15.839454056986446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of domain generalization, the task of constructing a predictive
model capable of generalizing to a target domain without access to target data
remains challenging. This problem becomes further complicated when considering
evolving dynamics between domains. While various approaches have been proposed
to address this issue, a comprehensive understanding of the underlying
generalization theory is still lacking. In this study, we contribute novel
theoretical results showing that aligning conditional distributions leads to
the reduction of generalization bounds. Our analysis serves as a key
motivation for solving
the Temporal Domain Generalization (TDG) problem through the application of
Koopman Neural Operators, resulting in Temporal Koopman Networks (TKNets). By
employing Koopman Operators, we effectively address the time-evolving
distributions encountered in TDG using the principles of Koopman theory, where
measurement functions are sought to establish linear transition relations
between evolving domains. Through empirical evaluations conducted on synthetic
and real-world datasets, we validate the effectiveness of our proposed
approach.
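To make the Koopman view concrete, the following is a minimal, illustrative sketch, not the authors' TKNets implementation: the network sizes, the moment-matching loss, and the toy drifting-domain data are all assumptions. A learned measurement function g lifts samples into an observable space in which the transition from one temporal domain to the next is modeled by a single linear operator K, trained so that K g(x_t) matches g(x_{t+1}).
```python
# Minimal sketch of the Koopman idea (illustrative; not the authors' TKNets):
# learn a measurement network g and a linear operator K such that K g(x_t)
# approximates g(x_{t+1}) for samples from consecutive temporal domains.
import torch
import torch.nn as nn

class KoopmanSketch(nn.Module):
    def __init__(self, in_dim, obs_dim):
        super().__init__()
        # g: nonlinear measurement (observable) function.
        self.g = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim)
        )
        # K: linear transition between consecutive domains in observable space.
        self.K = nn.Linear(obs_dim, obs_dim, bias=False)

    def forward(self, x_t, x_next):
        # Advance domain t's observables with K; embed domain t+1 directly.
        return self.K(self.g(x_t)), self.g(x_next)

# Toy data: a sequence of domains whose mean drifts over time (an assumption).
torch.manual_seed(0)
domains = [torch.randn(128, 8) + 0.3 * t for t in range(6)]

model = KoopmanSketch(in_dim=8, obs_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    loss = 0.0
    for x_t, x_next in zip(domains[:-1], domains[1:]):
        pred, target = model(x_t, x_next)
        # Match the distribution of advanced observables to the next
        # domain's observables (here crudely: first and second moments).
        loss = loss + (pred.mean(0) - target.mean(0)).pow(2).sum() \
                    + (pred.var(0) - target.var(0)).pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```
At test time one would advance the latest observed domain's observables through K to anticipate the unseen next domain; the paper's actual objective aligns conditional (per-class) distributions rather than the raw moments used in this sketch.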
Related papers
- Continuous Temporal Domain Generalization [17.529690717937267]
Temporal Domain Generalization (TDG) addresses the challenge of training predictive models under temporally varying data distributions.
This work formalizes the concept of Continuous Temporal Domain Generalization (CTDG), where domains arise from a continuously evolving process and are observed at arbitrary time points.
arXiv Detail & Related papers (2024-05-25T05:52:04Z) - Improving Generalization with Domain Convex Game [32.07275105040802]
Domain generalization (DG) aims to alleviate the poor generalization capability of deep neural networks by learning a model from multiple source domains.
A classical solution to DG is domain augmentation, the common belief being that diversifying the source domains is conducive to out-of-distribution generalization.
Our explorations reveal that the correlation between model generalization and the diversity of domains may not be strictly positive, which limits the effectiveness of domain augmentation.
arXiv Detail & Related papers (2023-03-23T14:27:49Z) - Label Alignment Regularization for Distribution Shift [63.228879525056904]
Recent work has highlighted the label alignment property (LAP) in supervised learning, where the vector of all labels in the dataset is mostly in the span of the top few singular vectors of the data matrix.
We propose a regularization method for unsupervised domain adaptation that encourages alignment between the predictions in the target domain and its top singular vectors (see the sketch after this list).
We report improved performance over domain adaptation baselines in well-known tasks such as MNIST-USPS domain adaptation and cross-lingual sentiment analysis.
arXiv Detail & Related papers (2022-11-27T22:54:48Z) - Evolving Domain Generalization [14.072505551647813]
We formulate and study the evolving domain generalization (EDG) scenario, which exploits not only the source data but also their evolving pattern to generate a model for the unseen task.
Our theoretical result reveals the benefits of modeling the relation between two consecutive tasks by learning a globally consistent directional mapping function.
In practice, our analysis also suggests solving the EDG problem in a meta-learning manner, which leads to the directional network, the first method for the EDG problem.
arXiv Detail & Related papers (2022-05-31T18:28:15Z) - Generalizing to Evolving Domains with Latent Structure-Aware Sequential
Autoencoder [32.46804768486719]
We introduce a probabilistic framework called Latent Structure-aware Sequential Autoencoder (LSSAE) to tackle the problem of evolving domain generalization.
Experimental results on both synthetic and real-world datasets show that LSSAE achieves superior performance.
arXiv Detail & Related papers (2022-05-16T13:11:29Z) - Localized Adversarial Domain Generalization [83.4195658745378]
Adversarial domain generalization is a popular approach to domain generalization.
We propose localized adversarial domain generalization with space compactness maintenance (LADG).
We conduct comprehensive experiments on the WILDS DG benchmark to validate our approach.
arXiv Detail & Related papers (2022-05-09T08:30:31Z) - Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard domain generalization benchmarks reveal that COMEN surpasses state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z) - Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as a constrained optimization problem, called Disentanglement-constrained Domain Generalization (DDG).
Based on this formulation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
arXiv Detail & Related papers (2021-11-27T07:36:32Z) - Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z) - Learning to Learn with Variational Information Bottleneck for Domain
Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations via the proposed meta variational information bottleneck principle, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)