Lightweight Online Adaption for Time Series Foundation Model Forecasts
- URL: http://arxiv.org/abs/2502.12920v2
- Date: Wed, 26 Mar 2025 21:36:47 GMT
- Title: Lightweight Online Adaption for Time Series Foundation Model Forecasts
- Authors: Thomas L. Lee, William Toner, Rajkarn Singh, Artjom Joosen, Martin Asenov,
- Abstract summary: AdapTS is a lightweight mechanism for the online adaption of FM forecasts in response to online feedback. We evaluate the performance of AdapTS in conjunction with several recent FMs across a suite of standard time series datasets.
- Score: 0.32622301272834525
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Foundation models (FMs) have emerged as a promising approach for time series forecasting. While effective, FMs typically remain fixed during deployment due to the high computational costs of learning them online. Consequently, deployed FMs fail to adapt their forecasts to current data characteristics, despite the availability of online feedback from newly arriving data. This raises the question of whether FM performance can be enhanced by the efficient usage of this feedback. We propose AdapTS to answer this question. AdapTS is a lightweight mechanism for the online adaption of FM forecasts in response to online feedback. AdapTS consists of two parts: a) the AdapTS-Forecaster which is used to learn the current data distribution; and b) the AdapTS-Weighter which is used to combine the forecasts of the FM and the AdapTS-Forecaster. We evaluate the performance of AdapTS in conjunction with several recent FMs across a suite of standard time series datasets. In all of our experiments we find that using AdapTS improves performance. This work demonstrates how efficient usage of online feedback can be used to improve FM forecasts.
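The two-part design lends itself to a compact implementation. Below is a minimal, illustrative NumPy sketch of the idea only: a cheap linear forecaster refit on a sliding window stands in for the AdapTS-Forecaster, and an inverse-error convex combination stands in for the AdapTS-Weighter. The paper's actual components, and the hyperparameters shown here, are assumptions of this sketch.

```python
import numpy as np

class SlidingWindowLinearForecaster:
    """Stand-in for the AdapTS-Forecaster: a ridge-regularised linear map
    from the last `context` points to the next `horizon` points, refit
    cheaply on a sliding window as feedback arrives."""
    def __init__(self, context, horizon, window=512, lam=1e-3):
        self.context, self.horizon, self.window, self.lam = context, horizon, window, lam
        self.W = np.zeros((horizon, context))

    def update(self, series):
        s = np.asarray(series)[-self.window:]
        n = len(s) - self.context - self.horizon + 1
        X = np.stack([s[t:t + self.context] for t in range(n)])
        Y = np.stack([s[t + self.context:t + self.context + self.horizon] for t in range(n)])
        # Closed-form ridge solution: W = Y^T X (X^T X + lam I)^{-1}
        self.W = (Y.T @ X) @ np.linalg.inv(X.T @ X + self.lam * np.eye(self.context))

    def forecast(self, context):
        return self.W @ np.asarray(context)[-self.context:]

def weighted_forecast(fm_fc, ad_fc, fm_err, ad_err, eps=1e-8):
    """Stand-in for the AdapTS-Weighter: convex combination with weights
    proportional to each forecaster's inverse recent error."""
    w_fm, w_ad = 1.0 / (fm_err + eps), 1.0 / (ad_err + eps)
    return (w_fm * fm_fc + w_ad * ad_fc) / (w_fm + w_ad)
```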
Related papers
- Performance of Zero-Shot Time Series Foundation Models on Cloud Data [0.32622301272834525]
Time series foundation models (FMs) have emerged as a popular paradigm for zero-shot multi-domain forecasting. We demonstrate that many well-known FMs fail to generate meaningful or accurate zero-shot forecasts in this setting. We also illustrate a number of interesting pathologies, including instances where FMs suddenly output seemingly erratic, random-looking forecasts.
arXiv Detail & Related papers (2025-02-18T15:28:02Z)
- AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting [10.899510048905926]
We present adapters for managing intricate dependencies among features and quantifying uncertainty in predictions.
Experiments conducted on both synthetic and real-world datasets confirm the efficacy of adapters.
Our framework, AdaPTS, positions adapters as a modular, scalable, and effective solution.
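A hedged sketch of the adapter pattern described above: a fixed invertible linear map carries the multivariate series into latent channels, each channel is forecast by the frozen univariate FM, and the inverse map returns to feature space. AdaPTS learns its adapters and also quantifies uncertainty; both are omitted from this simplification, and `fm_forecast`, `A`, and `A_inv` are placeholder names.

```python
import numpy as np

def adapted_forecast(fm_forecast, X, A, A_inv):
    """Apply a univariate FM to each latent channel of a multivariate
    series. X: (time, d) history; A, A_inv: (d, d) feature map and its
    inverse; fm_forecast: callable mapping a 1-D history to a 1-D forecast."""
    Z = X @ A                                # project into latent channels
    Z_hat = np.stack([fm_forecast(Z[:, j])   # univariate forecast per channel
                      for j in range(Z.shape[1])], axis=1)
    return Z_hat @ A_inv                     # map forecasts back to features
```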
arXiv Detail & Related papers (2025-02-14T15:46:19Z)
- Fine-Tuning Foundation Models with Federated Learning for Privacy Preserving Medical Time Series Forecasting [0.32985979395737786]
Federated Learning (FL) provides a decentralized machine learning approach, where multiple devices or servers collaboratively train a model without sharing their raw data. In this paper, we fine-tune time series FMs with Electrocardiogram (ECG) and Impedance Cardiography (ICG) data using different FL techniques. Our empirical results demonstrate that while FL can be effective for fine-tuning FMs on time series forecasting tasks, its benefits depend on the data distribution across clients.
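For reference, the baseline aggregation step common to the FL techniques compared in such studies is FedAvg; a minimal sketch follows (not this paper's specific pipeline):

```python
import numpy as np

def fedavg(client_states, client_sizes):
    """FedAvg: average client parameter dicts, weighted by local dataset
    size. client_states: list of {name: np.ndarray}; client_sizes: list of int."""
    total = float(sum(client_sizes))
    return {name: sum((n / total) * state[name]
                      for state, n in zip(client_states, client_sizes))
            for name in client_states[0]}
```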
arXiv Detail & Related papers (2025-02-13T20:01:15Z)
- Battling the Non-stationarity in Time Series Forecasting via Test-time Adaptation
We introduce a pioneering test-time adaptation framework tailored for time series forecasting (TSF). TAFAS, the proposed approach to TSF-TTA, flexibly adapts source forecasters to continuously shifting test distributions while preserving the core semantic information learned during pre-training. The novel utilization of partially-observed ground truth and a gated calibration module enables proactive, robust, and model-agnostic adaptation of source forecasters.
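A toy illustration of the partially-observed-ground-truth idea: TAFAS's gated calibration module is learned and model-agnostic, whereas the fixed gate and mean-shift correction below are simplifying assumptions.

```python
import numpy as np

def calibrate_with_partial_truth(forecast, observed_prefix, gate=0.5):
    """Once the first k steps of the horizon have been observed, estimate
    the forecast bias on those steps and shift the not-yet-realised part
    of the forecast by a gated fraction of that bias."""
    k = len(observed_prefix)
    bias = np.mean(np.asarray(observed_prefix) - forecast[:k])
    adjusted = np.array(forecast, dtype=float)
    adjusted[k:] += gate * bias
    return adjusted
```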
arXiv Detail & Related papers (2025-01-09T04:59:15Z)
- Enabling Time-series Foundation Model for Building Energy Forecasting via Contrastive Curriculum Learning
We study the adaptation of foundation models (FMs) to building energy forecasting tasks. We propose a new contrastive curriculum learning-based training method. Experiments show that our method can improve the zero/few-shot performance by 14.6% compared to existing FMs.
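The paper's curriculum schedule is not reproduced here, but the contrastive component typically builds on an InfoNCE-style objective; a generic NumPy version for matched (anchor, positive) pairs:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor should score its own positive higher than
    every other sample in the batch. anchors, positives: (n, d) embeddings."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))                 # diagonal = true pairs
```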
arXiv Detail & Related papers (2024-12-23T05:07:06Z)
- Skip Tuning: Pre-trained Vision-Language Models are Effective and Efficient Adapters Themselves
We propose Skip Tuning as a novel paradigm for adapting vision-language models to downstream tasks. Unlike existing PT or adapter-based methods, Skip Tuning applies Layer-wise Skipping (LSkip) and Class-wise Skipping (CSkip) upon the FT baseline without introducing extra context vectors or adapter modules.
arXiv Detail & Related papers (2024-12-16T07:33:23Z)
- FlowTS: Time Series Generation via Rectified Flow [67.41208519939626]
FlowTS is an ODE-based model that leverages rectified flow with straight-line transport in probability space. In the unconditional setting, FlowTS achieves state-of-the-art performance, with context FID scores of 0.019 and 0.011 on the Stock and ETTh datasets. In the conditional setting, it achieves superior performance in solar forecasting.
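Rectified flow itself has a simple training recipe, sketched below in generic form (FlowTS's architecture and time-series conditioning are not shown): interpolate on a straight line between noise and data, and regress the constant velocity along that line.

```python
import numpy as np

def rectified_flow_batch(x1, rng=None):
    """Build one rectified-flow training batch. x1: (batch, dim) data.
    Returns interpolated points, times, and the straight-line velocity
    targets that the network v(x_t, t) is trained to regress."""
    rng = rng or np.random.default_rng()
    x0 = rng.standard_normal(x1.shape)        # noise endpoint of the path
    t = rng.uniform(size=(x1.shape[0], 1))    # per-sample time in [0, 1]
    xt = (1.0 - t) * x0 + t * x1              # straight-line interpolation
    v_target = x1 - x0                        # constant velocity along the line
    return xt, t, v_target
```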
arXiv Detail & Related papers (2024-11-12T03:03:23Z)
- Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models [68.23649978697027]
Forecast-PEFT is a fine-tuning strategy that freezes the majority of the model's parameters, focusing adjustments on newly introduced prompts and adapters.
Our experiments show that Forecast-PEFT outperforms traditional full fine-tuning methods in motion prediction tasks.
Forecast-FT further improves prediction performance, showing up to a 9.6% improvement over conventional baseline methods.
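The freezing pattern behind such PEFT strategies is generic; a minimal PyTorch sketch follows (the `prompt`/`adapter` name filter is an assumption, not Forecast-PEFT's actual module names):

```python
import torch.nn as nn

def freeze_for_peft(model: nn.Module, trainable_keys=("prompt", "adapter")):
    """Freeze the pre-trained backbone; leave only parameters whose names
    match the given keys trainable, then report the trainable fraction."""
    for name, param in model.named_parameters():
        param.requires_grad = any(key in name for key in trainable_keys)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable parameters: {trainable}/{total} ({100 * trainable / total:.2f}%)")
```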
arXiv Detail & Related papers (2024-07-28T19:18:59Z)
- FedPFT: Federated Proxy Fine-Tuning of Foundation Models [55.58899993272904]
Adapting Foundation Models (FMs) to downstream tasks through Federated Learning (FL) emerges as a promising strategy for protecting both data privacy and valuable FMs.
Existing methods fine-tune FMs by allocating sub-FMs to clients in FL, leading to suboptimal performance due to insufficient tuning and inevitable error accumulation in the gradients.
We propose Federated Proxy Fine-Tuning (FedPFT), a novel method that enhances FM adaptation to downstream tasks through FL via two key modules.
arXiv Detail & Related papers (2024-04-17T16:30:06Z)
- To Cool or not to Cool? Temperature Network Meets Large Foundation Models via DRO [68.69840111477367]
We present a principled framework for learning a small yet generalizable temperature prediction network (TempNet) to improve large foundation models (LFMs).
Our experiments on LLMs and CLIP models demonstrate that TempNet greatly improves the performance of existing solutions or models.
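For context, temperature enters as a divisor on the logits before normalisation; TempNet's contribution is predicting a per-input temperature rather than fixing one globally. A minimal sketch of the scaling step only (the prediction network itself is omitted):

```python
import numpy as np

def temperature_scaled_softmax(logits, tau):
    """Softmax with temperature tau (scalar or per-row array); smaller tau
    sharpens the distribution, larger tau flattens it."""
    z = logits / tau
    z -= z.max(axis=-1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```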
arXiv Detail & Related papers (2024-04-06T09:55:03Z)
- Adapting to Length Shift: FlexiLength Network for Trajectory Prediction [53.637837706712794]
Trajectory prediction plays an important role in various applications, including autonomous driving, robotics, and scene understanding.
Existing approaches mainly focus on developing compact neural networks to increase prediction precision on public datasets, typically employing a standardized input duration.
We introduce a general and effective framework, the FlexiLength Network (FLN), to enhance the robustness of existing trajectory prediction models against varying observation periods.
arXiv Detail & Related papers (2024-03-31T17:18:57Z)
- Not All Attention is Needed: Parameter and Computation Efficient Transfer Learning for Multi-modal Large Language Models [73.48675708831328]
We propose a novel parameter- and computation-efficient tuning method for Multi-modal Large Language Models (MLLMs).
The Efficient Attention Skipping (EAS) method evaluates attention redundancy and skips the less important multi-head attention (MHA) modules to speed up inference.
The experiments show that EAS not only retains high performance and parameter efficiency, but also greatly speeds up inference.
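EAS's redundancy criterion is learned during tuning; as a purely illustrative proxy, one can rank attention blocks by how little they change their input and mark the lowest-ranked as skip candidates:

```python
import numpy as np

def attention_redundancy(layer_inputs, layer_outputs):
    """Heuristic redundancy score per attention block: the relative change
    an MHA applies to its input. Near-zero scores suggest skippable blocks.
    (Illustrative only; not EAS's actual criterion.)"""
    return [float(np.linalg.norm(out - inp) / np.linalg.norm(inp))
            for inp, out in zip(layer_inputs, layer_outputs)]
```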
arXiv Detail & Related papers (2024-03-22T14:20:34Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
With the rise of Foundation Models (FMs), however, the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions [57.91211653592199]
The intersection of Foundation Model (FM) and Federated Learning (FL) presents a unique opportunity to unlock new possibilities for real-world applications. On the one hand, FL, as a collaborative learning paradigm, helps address challenges in FM development by expanding data availability. On the other hand, FM, equipped with pre-trained knowledge and exceptional performance, can serve as a robust starting point for FL.
arXiv Detail & Related papers (2023-06-27T15:15:55Z)
- Improving and generalizing flow-based generative models with minibatch optimal transport [90.01613198337833]
We introduce the generalized conditional flow matching (CFM) technique for continuous normalizing flows (CNFs).
CFM features a stable regression objective like that used to train the flow in diffusion models but enjoys the efficient inference of deterministic flow models.
A variant of our objective is optimal transport CFM (OT-CFM), which creates simpler flows that are more stable to train and lead to faster inference.
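The OT-CFM pairing step can be sketched directly: re-pair the noise and data samples within a minibatch to minimise total squared transport cost before building the interpolation paths (the training loop is omitted):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minibatch_ot_pairing(x0, x1):
    """Re-pair noise samples x0 with data samples x1 (both (batch, dim))
    via the minibatch optimal-transport assignment, which straightens the
    conditional flows relative to random pairing."""
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)   # minimal total squared cost
    return x0[rows], x1[cols]
```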
arXiv Detail & Related papers (2023-02-01T14:47:17Z)
- Boosting Factorization Machines via Saliency-Guided Mixup [125.15872106335692]
We present MixFM, inspired by Mixup, to generate auxiliary training data that boosts Factorization Machines (FMs).
We also put forward a novel Factorization Machine powered by Saliency-guided Mixup (denoted SMFM).
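Vanilla mixup, the starting point for MixFM, is a two-line recipe (the saliency guidance that distinguishes SMFM is not reproduced here):

```python
import numpy as np

def mixup(x, y, alpha=0.2, rng=None):
    """Mixup: convex-combine random pairs of examples and their labels
    with a Beta(alpha, alpha)-distributed coefficient."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    idx = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[idx], lam * y + (1 - lam) * y[idx]
```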
arXiv Detail & Related papers (2022-06-17T09:49:00Z)
- Wireless-Enabled Asynchronous Federated Fourier Neural Network for Turbulence Prediction in Urban Air Mobility (UAM) [101.80862265018033]
Urban air mobility (UAM) has been proposed, in which vertical takeoff and landing (VTOL) aircraft are used to provide a ride-hailing service.
In UAM, aircraft operate in designated airspaces known as corridors that link the aerodromes.
A reliable communication network between GBSs and aircraft enables UAM to adequately utilize the airspace.
arXiv Detail & Related papers (2021-12-26T14:41:52Z)
- Leaf-FM: A Learnable Feature Generation Factorization Machine for Click-Through Rate Prediction [2.412497918389292]
We propose the Leaf-FM model, based on FMs, which generates new features from the original feature embeddings by learning transformation functions automatically.
Experiments are conducted on three real-world datasets, and the results show that Leaf-FM outperforms standard FMs by a large margin.
arXiv Detail & Related papers (2021-07-26T08:29:18Z)
- $FM^2$: Field-matrixed Factorization Machines for Recommender Systems [9.461169933697379]
We propose a novel approach to model the field information effectively and efficiently.
The proposed approach is a direct improvement of FwFM and is named Field-matrixed Factorization Machines (FmFM).
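For reference, the classic FM prediction underlying this line of work scores pairwise feature interactions as sum_{i<j} <v_i, v_j> x_i x_j, computable in O(kn) time; FmFM's refinement, as the name suggests, inserts a learned field-pair matrix into each interaction. A sketch of the classic term:

```python
import numpy as np

def fm_pairwise(x, V):
    """Second-order FM term via the O(kn) identity:
    sum_{i<j} <v_i, v_j> x_i x_j
        = 0.5 * (||sum_i x_i v_i||^2 - sum_i ||x_i v_i||^2).
    x: (n,) feature values; V: (n, k) factor embeddings."""
    xv = x[:, None] * V                  # (n, k): per-feature scaled factors
    s = xv.sum(axis=0)                   # sum_i x_i v_i
    return 0.5 * (s @ s - (xv * xv).sum())
```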
arXiv Detail & Related papers (2021-02-20T00:03:37Z)
- FMA-ETA: Estimating Travel Time Entirely Based on FFN With Attention [88.33372574562824]
We propose a novel framework for estimated time of arrival (ETA) based on feed-forward networks (FFNs): FFN with Multi-factor self-Attention (FMA-ETA).
A novel multi-factor self-attention mechanism is proposed to handle different categories of features and aggregate the information purposefully.
Experiments show FMA-ETA is competitive with state-of-the-art methods in terms of prediction accuracy, with significantly better inference speed.
arXiv Detail & Related papers (2020-06-07T08:10:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.