Lightweight Online Adaption for Time Series Foundation Model Forecasts
- URL: http://arxiv.org/abs/2502.12920v1
- Date: Tue, 18 Feb 2025 15:01:02 GMT
- Title: Lightweight Online Adaption for Time Series Foundation Model Forecasts
- Authors: Thomas L. Lee, William Toner, Rajkarn Singh, Artjom Joosem, Martin Asenov,
- Abstract summary: AdapTS is a lightweight mechanism for the online adaption of FM forecasts in response to online feedback.
We evaluate the performance of AdapTS in conjunction with several recent FMs across a suite of standard time series datasets.
- Abstract: Foundation models (FMs) have emerged as a promising approach for time series forecasting. While effective, FMs typically remain fixed during deployment due to the high computational costs of learning them online. Consequently, deployed FMs fail to adapt their forecasts to current data characteristics, despite the availability of online feedback from newly arriving data. This raises the question of whether FM performance can be enhanced by the efficient usage of this feedback. We propose AdapTS to answer this question. AdapTS is a lightweight mechanism for the online adaption of FM forecasts in response to online feedback. AdapTS consists of two parts: a) the AdapTS-Forecaster which is used to learn the current data distribution; and b) the AdapTS-Weighter which is used to combine the forecasts of the FM and the AdapTS-Forecaster. We evaluate the performance of AdapTS in conjunction with several recent FMs across a suite of standard time series datasets. In all of our experiments we find that using AdapTS improves performance. This work demonstrates how efficient usage of online feedback can be used to improve FM forecasts.
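The two-part design described in the abstract can be pictured with a minimal sketch: an online weighter mixes the FM's forecast with a lightweight forecaster's, shifting weight toward whichever expert has recently been more accurate. The exponential-weights update below is an illustrative stand-in under that assumption, not the paper's actual AdapTS-Weighter.

```python
import numpy as np

class OnlineForecastWeighter:
    """Combine two forecasters' predictions with weights updated from
    online feedback. A generic exponential-weights sketch; the real
    AdapTS-Weighter may differ in its loss and update rule."""

    def __init__(self, eta=0.1):
        self.eta = eta                 # learning rate for the weight update
        self.log_w = np.zeros(2)       # log-weights for [FM, online forecaster]

    def combine(self, fm_forecast, online_forecast):
        # Softmax over log-weights gives a convex combination of the experts.
        w = np.exp(self.log_w - self.log_w.max())
        w /= w.sum()
        return w[0] * fm_forecast + w[1] * online_forecast

    def update(self, fm_forecast, online_forecast, actual):
        # Penalize each expert by its squared error on the realized values,
        # so the weight drifts toward the currently better forecaster.
        losses = np.array([
            np.mean((fm_forecast - actual) ** 2),
            np.mean((online_forecast - actual) ** 2),
        ])
        self.log_w -= self.eta * losses
```

With this scheme, a persistently biased FM forecast is gradually down-weighted in favor of the online forecaster as feedback accumulates, without ever touching the FM's parameters.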
Related papers
- Performance of Zero-Shot Time Series Foundation Models on Cloud Data
Time series foundation models (FMs) have emerged as a popular paradigm for zero-shot multi-domain forecasting.
We demonstrate that many well-known FMs fail to generate meaningful or accurate zero-shot forecasts in this setting.
We also illustrate a number of interesting pathologies, including instances where FMs suddenly output seemingly erratic, random-looking forecasts.
arXiv Detail & Related papers (2025-02-18T15:28:02Z)
- AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting
We present adapters for managing intricate dependencies among features and quantifying uncertainty in predictions.
Experiments conducted on both synthetic and real-world datasets confirm the efficacy of adapters.
Our framework, AdaPTS, positions adapters as a modular, scalable, and effective solution.
arXiv Detail & Related papers (2025-02-14T15:46:19Z)
- Battling the Non-stationarity in Time Series Forecasting via Test-time Adaptation
We introduce a pioneering test-time adaptation (TTA) framework tailored for time series forecasting (TSF).
TAFAS, the proposed approach to TSF-TTA, flexibly adapts source forecasters to continuously shifting test distributions while preserving the core semantic information learned during pre-training.
The novel use of partially observed ground truth and a gated calibration module enables proactive, robust, and model-agnostic adaptation of source forecasters.
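One way to picture the partially-observed-ground-truth idea: as the forecast horizon unfolds, the first few true values arrive before the horizon ends, and the error on that observed prefix can calibrate the remaining forecast. The function below is a simplified illustration; the constant `gate` and the bias-shift correction stand in for TAFAS's learned gated calibration module.

```python
import numpy as np

def gated_prefix_calibration(forecast, observed_prefix, gate=0.5):
    """Adjust the unobserved tail of a forecast using the error measured
    on the partially observed prefix of its horizon. Illustrative only:
    the gate here is a fixed scalar, not a learned module."""
    k = len(observed_prefix)
    # Mean error of the forecast on the steps whose ground truth has arrived.
    bias = np.mean(observed_prefix - forecast[:k])
    calibrated = forecast.copy()
    # Shift the remaining horizon by a gated fraction of the observed bias.
    calibrated[k:] += gate * bias
    return calibrated
```

Because the correction only uses feedback that has already arrived, it is model-agnostic: the source forecaster itself stays frozen.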
arXiv Detail & Related papers (2025-01-09T04:59:15Z)
- Enabling Time-series Foundation Model for Building Energy Forecasting via Contrastive Curriculum Learning
We study the adaptation of foundation models (FMs) to building energy forecasting tasks.
We propose a new contrastive curriculum learning-based training method.
Experiments show that our method can improve the zero/few-shot performance by 14.6% compared to the existing FMs.
arXiv Detail & Related papers (2024-12-23T05:07:06Z)
- Skip Tuning: Pre-trained Vision-Language Models are Effective and Efficient Adapters Themselves
We propose Skip Tuning as a novel paradigm for adapting vision-language models to downstream tasks.
Unlike existing prompt tuning (PT) or adapter-based methods, Skip Tuning applies Layer-wise Skipping (LSkip) and Class-wise Skipping (CSkip) upon the fine-tuning (FT) baseline without introducing extra context vectors or adapter modules.
arXiv Detail & Related papers (2024-12-16T07:33:23Z)
- Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models
Forecast-PEFT is a fine-tuning strategy that freezes the majority of the model's parameters, focusing adjustments on newly introduced prompts and adapters.
Our experiments show that Forecast-PEFT outperforms traditional full fine-tuning methods in motion prediction tasks.
Forecast-FT further improves prediction performance, achieving up to a 9.6% improvement over conventional baseline methods.
arXiv Detail & Related papers (2024-07-28T19:18:59Z)
- Adapting to Length Shift: FlexiLength Network for Trajectory Prediction
Trajectory prediction plays an important role in various applications, including autonomous driving, robotics, and scene understanding.
Existing approaches mainly focus on developing compact neural networks to increase prediction precision on public datasets, typically employing a standardized input duration.
We introduce a general and effective framework, the FlexiLength Network (FLN), to enhance the robustness of existing trajectory prediction models against varying observation periods.
arXiv Detail & Related papers (2024-03-31T17:18:57Z)
- Not All Attention is Needed: Parameter and Computation Efficient Transfer Learning for Multi-modal Large Language Models
We propose a novel parameter- and computation-efficient tuning method for Multi-modal Large Language Models (MLLMs).
The Efficient Attention Skipping (EAS) method evaluates attention redundancy and skips the less important multi-head attention (MHA) layers.
The experiments show that EAS not only retains high performance and parameter efficiency, but also substantially accelerates inference.
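The skipping step can be sketched concretely: given a per-layer redundancy score, keep only the least-redundant attention layers and bypass the rest at inference time. The ranking-and-mask scheme below is a hypothetical illustration of the general idea, not EAS's actual redundancy metric or selection rule.

```python
import numpy as np

def skip_mask(redundancy, keep_ratio=0.5):
    """Keep the keep_ratio fraction of attention layers with the LOWEST
    redundancy scores; mark the rest to be skipped at inference time.
    Illustrative only: how redundancy is measured is the key design
    question in the actual method."""
    n_keep = max(1, int(round(keep_ratio * len(redundancy))))
    order = np.argsort(redundancy)           # least redundant first
    mask = np.zeros(len(redundancy), dtype=bool)
    mask[order[:n_keep]] = True              # True = run this MHA layer
    return mask
```

At inference, a transformer block whose mask entry is False would simply pass its input through, saving the attention computation for that layer.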
arXiv Detail & Related papers (2024-03-22T14:20:34Z)
- Wireless-Enabled Asynchronous Federated Fourier Neural Network for Turbulence Prediction in Urban Air Mobility (UAM)
Urban air mobility (UAM) has been proposed as a paradigm in which vertical takeoff and landing (VTOL) aircraft provide a ride-hailing service.
In UAM, aircraft operate in designated airspaces known as corridors that link the aerodromes.
A reliable communication network between ground base stations (GBSs) and aircraft enables UAM to adequately utilize the airspace.
arXiv Detail & Related papers (2021-12-26T14:41:52Z)
- Leaf-FM: A Learnable Feature Generation Factorization Machine for Click-Through Rate Prediction
We propose Leaf-FM, a model based on factorization machines (FMs) that generates new features from the original feature embeddings by learning the transformation functions automatically.
Experiments are conducted on three real-world datasets and the results show Leaf-FM model outperforms standard FMs by a large margin.
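For context, the standard second-order factorization machine that Leaf-FM builds on scores an input as a bias, a linear term, and all pairwise feature interactions factorized through latent vectors. The sketch below implements that baseline prediction (not Leaf-FM's learned feature generation) using the well-known O(kn) reformulation of the pairwise sum.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction:
        y = w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j
    The pairwise term is computed with the identity
        0.5 * sum_f ((V^T x)_f^2 - ((V^2)^T x^2)_f),
    which avoids the explicit O(n^2) double sum.
    x: (n,) features, w0: scalar bias, w: (n,) weights, V: (n, k) factors."""
    linear = w0 + x @ w
    interactions = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return linear + interactions
```

Leaf-FM's contribution, per the summary above, is to learn transformations of the embedded features feeding into this kind of model rather than to change the scoring form itself.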
arXiv Detail & Related papers (2021-07-26T08:29:18Z)
- FMA-ETA: Estimating Travel Time Entirely Based on FFN With Attention
We propose FMA-ETA, a novel framework for estimated time of arrival (ETA) prediction based on a feed-forward network (FFN) with multi-factor self-attention.
The novel multi-factor self-attention mechanism is proposed to handle different categories of features and aggregate their information purposefully.
Experiments show FMA-ETA is competitive with state-of-the-art methods in terms of prediction accuracy, with significantly better inference speed.
arXiv Detail & Related papers (2020-06-07T08:10:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.