Prompting-based Temporal Domain Generalization
- URL: http://arxiv.org/abs/2310.02473v2
- Date: Thu, 15 Feb 2024 19:22:06 GMT
- Title: Prompting-based Temporal Domain Generalization
- Authors: Sepidehsadat Hosseini, Mengyao Zhai, Hossein Hajimirsadegh, Frederick Tung
- Abstract summary: This paper presents a novel prompting-based approach to temporal domain generalization.
Our method adapts a trained model to temporal drift by learning global prompts, domain-specific prompts, and drift-aware prompts.
Experiments on classification, regression, and time series forecasting tasks demonstrate the generality of the proposed approach.
- Score: 10.377683220196873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning traditionally assumes that the training and testing data are
distributed independently and identically. However, in many real-world
settings, the data distribution can shift over time, leading to poor
generalization of trained models in future time periods. This paper presents a
novel prompting-based approach to temporal domain generalization that is
parameter-efficient, time-efficient, and does not require access to future data
during training. Our method adapts a trained model to temporal drift by
learning global prompts, domain-specific prompts, and drift-aware prompts that
capture underlying temporal dynamics. Experiments on classification,
regression, and time series forecasting tasks demonstrate the generality of the
proposed approach. The code repository will be publicly shared.
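Since the code is not yet available, the following is a minimal, hypothetical sketch of how the three prompt types might be composed around a frozen backbone. The module names and the GRU-based drift extrapolator are our assumptions, not the authors' design: a shared global prompt is always prepended, a per-period domain prompt is used during training, and at test time a drift-aware prompt is extrapolated from the sequence of domain prompts.

```python
# Minimal, hypothetical sketch (not the authors' released code): composing
# global, domain-specific, and drift-aware prompts for a frozen backbone.
from typing import Optional

import torch
import torch.nn as nn

class TemporalPrompts(nn.Module):
    def __init__(self, num_domains: int, prompt_len: int, dim: int):
        super().__init__()
        # Prompt shared across all time domains.
        self.global_prompt = nn.Parameter(0.02 * torch.randn(prompt_len, dim))
        # One learned prompt per observed training domain (time period).
        self.domain_prompts = nn.Parameter(0.02 * torch.randn(num_domains, prompt_len, dim))
        # Assumption: a small GRU reads the sequence of domain prompts and
        # extrapolates a drift-aware prompt for an unseen future domain.
        self.drift_rnn = nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)

    def drift_prompt(self) -> torch.Tensor:
        # Treat each prompt position as a length-`num_domains` sequence.
        seq = self.domain_prompts.permute(1, 0, 2)  # (prompt_len, num_domains, dim)
        _, h = self.drift_rnn(seq)                  # h: (1, prompt_len, dim)
        return h.squeeze(0)                         # (prompt_len, dim)

    def forward(self, x: torch.Tensor, domain: Optional[int] = None) -> torch.Tensor:
        # x: (batch, seq_len, dim) token embeddings for the frozen backbone.
        b = x.size(0)
        parts = [self.global_prompt.expand(b, -1, -1)]
        if domain is not None:  # training: the time domain is known
            parts.append(self.domain_prompts[domain].expand(b, -1, -1))
        else:                   # testing: extrapolate past the last domain
            parts.append(self.drift_prompt().expand(b, -1, -1))
        return torch.cat(parts + [x], dim=1)  # prompts prepended to the tokens

prompts = TemporalPrompts(num_domains=5, prompt_len=4, dim=32)
x = torch.randn(8, 16, 32)
print(prompts(x, domain=2).shape)  # torch.Size([8, 24, 32])
print(prompts(x).shape)            # torch.Size([8, 24, 32])
```

Only the prompt parameters and the drift extrapolator are trained, which is one plausible reading of the paper's parameter-efficiency claim.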
Related papers
- Tackling Data Heterogeneity in Federated Time Series Forecasting [61.021413959988216]
Time series forecasting plays a critical role in various real-world applications, including energy consumption prediction, disease transmission monitoring, and weather forecasting.
Most existing methods rely on a centralized training paradigm, where large amounts of data are collected from distributed devices to a central cloud server.
We propose a novel framework, Fed-TREND, to address data heterogeneity by generating informative synthetic data as auxiliary knowledge carriers.
arXiv Detail & Related papers (2024-11-24T04:56:45Z)
- A Practitioner's Guide to Continual Multimodal Pretraining [83.63894495064855]
Multimodal foundation models serve numerous applications at the intersection of vision and language.
To keep models updated, research into continual pretraining mainly explores scenarios with either infrequent, indiscriminate updates on large-scale new data, or frequent, sample-level updates.
We introduce FoMo-in-Flux, a continual multimodal pretraining benchmark with realistic compute constraints and practical deployment requirements.
arXiv Detail & Related papers (2024-08-26T17:59:01Z)
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
In light of increasing privacy concerns, we propose a Parameter-efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic Prediction [22.265095967530296]
FlashST is a framework that adapts pre-trained models to the specific characteristics of diverse downstream datasets.
It captures the distribution shift between pre-training and downstream data, facilitating effective adaptation to diverse scenarios (a sketch of the underlying prompt-tuning pattern follows this entry).
Empirical evaluations demonstrate the effectiveness of FlashST across different scenarios.
arXiv Detail & Related papers (2024-05-28T07:18:52Z)
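Below is a hedged sketch of the general prompt-tuning pattern FlashST exemplifies, assuming a frozen pretrained transformer. `PromptTuner` and its input-conditioned prompt network are illustrative names, not FlashST's actual architecture:

```python
# Sketch of input-conditioned prompt-tuning for adapting a frozen
# pretrained model to a downstream dataset (names are illustrative).
import torch
import torch.nn as nn

class PromptTuner(nn.Module):
    def __init__(self, backbone: nn.Module, dim: int, prompt_len: int = 4):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # freeze pretrained weights
            p.requires_grad_(False)
        # Lightweight network: maps a summary of the input to prompt tokens,
        # letting the prompt reflect the downstream data distribution.
        self.prompt_net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, prompt_len * dim)
        )
        self.prompt_len, self.dim = prompt_len, dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        summary = x.mean(dim=1)  # (batch, dim)
        prompt = self.prompt_net(summary).view(-1, self.prompt_len, self.dim)
        return self.backbone(torch.cat([prompt, x], dim=1))

backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True), num_layers=2
)
model = PromptTuner(backbone, dim=32)
out = model(torch.randn(8, 16, 32))
print(out.shape)  # torch.Size([8, 20, 32]); prompt tokens included in output
```

Only `prompt_net` receives gradient updates on downstream data, which is what makes this style of adaptation cheap relative to full fine-tuning.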
- A Survey on Diffusion Models for Time Series and Spatio-Temporal Data [92.1255811066468]
We review the use of diffusion models for time series and spatio-temporal data, categorizing them by model, task type, data modality, and practical application domain.
We categorize diffusion models into unconditioned and conditioned types, and discuss time series and spatio-temporal data separately.
Our survey extensively covers their applications in various fields, including healthcare, recommendation, climate, energy, audio, and transportation.
arXiv Detail & Related papers (2024-04-29T17:19:40Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks (a simplified sketch follows this entry).
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
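A rough sketch of Siamese-style time-series pre-training in the spirit of TimeSiam, with the details simplified: a shared encoder sees a past and a current subseries from the same series, and a masked-reconstruction loss ties them together. The module and loss below are our assumptions, not the authors' implementation:

```python
# Simplified Siamese-style pre-training sketch (not TimeSiam's code): a
# shared encoder sees past and current subseries; the model reconstructs
# the masked current subseries using the past view as context.
import torch
import torch.nn as nn

class SiamesePretrainer(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.encoder = nn.GRU(1, dim, batch_first=True)  # shared (Siamese) encoder
        self.decoder = nn.Linear(2 * dim, 1)             # reconstructs masked values

    def forward(self, past: torch.Tensor, current: torch.Tensor, mask: torch.Tensor):
        # past, current: (batch, len, 1); mask: (batch, len, 1) with 1 = hidden.
        _, h_past = self.encoder(past)                   # summary of the past view
        enc_cur, _ = self.encoder(current * (1 - mask))  # encode masked current view
        ctx = h_past.squeeze(0).unsqueeze(1).expand_as(enc_cur)
        recon = self.decoder(torch.cat([enc_cur, ctx], dim=-1))
        # Self-supervised loss: reconstruct only the masked positions.
        return ((recon - current) ** 2 * mask).sum() / mask.sum().clamp(min=1)

series = torch.randn(8, 48, 1)
past, current = series[:, :24], series[:, 24:]
mask = (torch.rand(8, 24, 1) < 0.3).float()
loss = SiamesePretrainer()(past, current, mask)
loss.backward()
```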
- Mixed moving average field guided learning for spatio-temporal data [0.0]
We define a novel spatio-temporal embedding and a theory-guided machine learning approach to make ensemble forecasts.
We use Lipschitz predictors to determine fixed-time and any-time PAC Bayesian bounds in the batch learning setting.
We then test the performance of our learning methodology using linear predictors and data sets simulated from a spatio-temporal Ornstein-Uhlenbeck process.
arXiv Detail & Related papers (2023-01-02T16:11:05Z)
- Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time [69.77704012415845]
Temporal shifts can considerably degrade the performance of machine learning models deployed in the real world.
We benchmark 13 prior approaches, including methods in domain generalization, continual learning, self-supervised learning, and ensemble learning.
Under both evaluation strategies, we observe an average performance drop of 20% from in-distribution to out-of-distribution data.
arXiv Detail & Related papers (2022-11-25T17:07:53Z)
- Self-Adaptive Forecasting for Improved Deep Learning on Non-Stationary Time-Series [20.958959332978726]
SAF integrates a self-adaptation stage prior to forecasting, based on 'backcasting' (a sketch follows this entry).
Our method enables efficient adaptation of encoded representations to evolving distributions, leading to superior generalization.
On synthetic and real-world datasets in domains where time-series data are known to be notoriously non-stationary, such as healthcare and finance, we demonstrate a significant benefit of SAF.
arXiv Detail & Related papers (2022-02-04T21:54:10Z)
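A hedged sketch of the self-adaptive idea described above: before issuing a forecast, take a few gradient steps on a self-supervised backcasting loss so the encoder adapts to the current window's distribution. All names and the exact adaptation recipe here are illustrative, not SAF's published method:

```python
# Test-time self-adaptation sketch (simplified; not SAF's code): adapt the
# encoder with a "backcasting" loss (predicting the early part of the
# window from the recent part), then forecast with the adapted encoder.
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, dim: int = 32, horizon: int = 8, back: int = 8):
        super().__init__()
        self.encoder = nn.GRU(1, dim, batch_first=True)
        self.forecast_head = nn.Linear(dim, horizon)  # future values
        self.backcast_head = nn.Linear(dim, back)     # earlier values (self-supervised)

    def encode(self, x):
        _, h = self.encoder(x)
        return h.squeeze(0)

def predict_with_adaptation(model, window, back=8, steps=3, lr=1e-3):
    # window: (batch, len, 1). Hold out the first `back` points as targets.
    target = window[:, :back, 0]
    recent = window[:, back:]
    opt = torch.optim.SGD(model.encoder.parameters(), lr=lr)
    for _ in range(steps):  # test-time self-adaptation on the backcast loss
        opt.zero_grad()
        loss = ((model.backcast_head(model.encode(recent)) - target) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():  # forecast with the adapted encoder
        return model.forecast_head(model.encode(window))

model = Forecaster()
print(predict_with_adaptation(model, torch.randn(4, 32, 1)).shape)  # (4, 8)
```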
- Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time [26.261277497201565]
In several real-world applications, machine learning models are deployed to make predictions on data whose distribution changes gradually over time.
We propose a simple method that starts with a model with time-sensitive parameters but regularizes its temporal complexity using a Gradient Interpolation (GI) loss (sketched after this entry).
arXiv Detail & Related papers (2021-08-15T11:20:10Z)
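The Gradient Interpolation idea above can be sketched as follows, under our own simplifying assumptions (the paper's exact formulation differs): the model takes time as an input, and training additionally supervises a first-order Taylor extrapolation of the prediction to a nearby time, penalizing needlessly complex dependence on time.

```python
# Hedged sketch of a gradient-interpolation-style objective (simplified
# from the paper): supervise both the prediction at time t and its
# first-order Taylor extrapolation to a nearby time t + delta.
import torch
import torch.nn as nn

class TimeConditionedModel(nn.Module):
    def __init__(self, in_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim + 1, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, x, t):
        # Time enters as an extra input feature: time-sensitive parameters.
        return self.net(torch.cat([x, t], dim=-1))

def gi_loss(model, x, t, y, delta_scale=0.5):
    t = t.clone().requires_grad_(True)
    pred = model(x, t)
    # d(pred)/dt, kept in the graph so the regularizer trains the model.
    (dpdt,) = torch.autograd.grad(pred.sum(), t, create_graph=True)
    delta = (torch.rand_like(t) - 0.5) * 2 * delta_scale
    interp = pred + delta * dpdt           # first-order prediction at t + delta
    mse = nn.functional.mse_loss
    return mse(pred, y) + mse(interp, y)   # fit now, and under small time shifts

model = TimeConditionedModel()
x, t, y = torch.randn(32, 16), torch.rand(32, 1), torch.randn(32, 1)
loss = gi_loss(model, x, t, y)
loss.backward()
```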