OneNet: Enhancing Time Series Forecasting Models under Concept Drift by
Online Ensembling
- URL: http://arxiv.org/abs/2309.12659v1
- Date: Fri, 22 Sep 2023 06:59:14 GMT
- Title: OneNet: Enhancing Time Series Forecasting Models under Concept Drift by
Online Ensembling
- Authors: Yi-Fan Zhang, Qingsong Wen, Xue Wang, Weiqi Chen, Liang Sun, Zhang
Zhang, Liang Wang, Rong Jin, Tieniu Tan
- Abstract summary: We propose \textbf{On}line \textbf{e}nsembling \textbf{Net}work (OneNet) to address the concept drifting problem.
OneNet reduces online forecasting error by more than $\mathbf{50\%}$ compared to the State-Of-The-Art (SOTA) method.
- Score: 65.93805881841119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online updating of time series forecasting models aims to address the concept
drifting problem by efficiently updating forecasting models based on streaming
data. Many algorithms are designed for online time series forecasting, with
some exploiting cross-variable dependency while others assume independence
among variables. Given every data assumption has its own pros and cons in
online time series modeling, we propose \textbf{On}line \textbf{e}nsembling
\textbf{Net}work (OneNet). It dynamically updates and combines two models, with
one focusing on modeling the dependency across the time dimension and the other
on cross-variate dependency. Our method incorporates a reinforcement
learning-based approach into the traditional online convex programming
framework, allowing for the linear combination of the two models with
dynamically adjusted weights. OneNet addresses the main shortcoming of
classical online learning methods that tend to be slow in adapting to the
concept drift. Empirical results show that OneNet reduces online forecasting
error by more than $\mathbf{50\%}$ compared to the State-Of-The-Art (SOTA)
method. The code is available at \url{https://github.com/yfzhang114/OneNet}.
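As a rough illustration of the online convex programming component described above, the linear combination of the two forecasters with loss-driven weight updates might look like the following sketch. The exponentiated-gradient update and squared-error loss are assumptions for illustration; OneNet additionally adjusts the weights with an RL-based term, which this sketch omits.

```python
import numpy as np

class OnlineEnsemble:
    """Linear combination of two forecasters whose weights are updated
    online from observed losses (exponentiated-gradient sketch only;
    the RL-based adjustment in OneNet is not reproduced here)."""

    def __init__(self, lr=1.0):
        self.w = np.array([0.5, 0.5])  # start from a uniform combination
        self.lr = lr

    def combine(self, pred_time, pred_var):
        # Forecast = weighted sum of the time-dependency model's and the
        # cross-variate-dependency model's predictions.
        return self.w[0] * pred_time + self.w[1] * pred_var

    def update(self, pred_time, pred_var, target):
        # Per-model squared-error losses once the target is observed.
        losses = np.array([
            np.mean((pred_time - target) ** 2),
            np.mean((pred_var - target) ** 2),
        ])
        # Multiplicative (exponentiated-gradient) step, renormalized so
        # the weights stay on the probability simplex.
        self.w = self.w * np.exp(-self.lr * losses)
        self.w = self.w / self.w.sum()
```

Over a stream, `combine` is called before each target arrives and `update` afterwards, so the model with persistently lower loss accumulates weight.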
Related papers
- Addressing Concept Shift in Online Time Series Forecasting: Detect-then-Adapt [37.98336090671441]
It proposes Concept \textbf{D}rift \textbf{D}etection an\textbf{d} \textbf{A}daptation (D3A), which first detects drifting concepts and then aggressively adapts the current model to them for rapid adaptation.
It helps mitigate the data distribution gap, a critical factor contributing to train-test performance inconsistency.
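The detect-then-adapt idea can be illustrated with a minimal drift test. The threshold rule below is a generic z-score heuristic on recent forecast errors, not the test statistic D3A actually uses:

```python
import numpy as np

def drift_detected(recent_errors, baseline_errors, threshold=3.0):
    """Flag concept drift when the mean error on a recent window exceeds
    the baseline mean by `threshold` baseline standard deviations.
    A generic z-score rule, not D3A's actual test statistic."""
    mu = np.mean(baseline_errors)
    sigma = np.std(baseline_errors) + 1e-8  # guard against zero variance
    return bool(np.mean(recent_errors) > mu + threshold * sigma)
```

Once drift is flagged, the current model would then be aggressively fine-tuned on the drifted data, as the summary describes.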
arXiv Detail & Related papers (2024-03-22T04:44:43Z) - Federated Topic Model and Model Pruning Based on Variational Autoencoder [14.737942599204064]
Federated topic modeling allows multiple parties to jointly train models while protecting data privacy.
This paper proposes a method to establish a federated topic model while ensuring the privacy of each node, and use neural network model pruning to accelerate the model.
Experimental results show that the federated topic model pruning can greatly accelerate the model training speed while ensuring the model's performance.
arXiv Detail & Related papers (2023-11-01T06:00:14Z) - OFTER: An Online Pipeline for Time Series Forecasting [3.9962751777898955]
OFTER is a time series forecasting pipeline tailored for mid-sized multivariate time series.
It is specifically designed for online tasks, has an interpretable output, and is able to outperform several state-of-the-art baselines.
The computational efficiency of the algorithm, its online nature, and its ability to operate in low signal-to-noise regimes render OFTER an ideal approach for financial time series problems.
arXiv Detail & Related papers (2023-04-08T00:18:03Z) - Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z) - A Study of Joint Graph Inference and Forecasting [13.340967777671565]
We study a recent class of models which uses graph neural networks (GNNs) to improve forecasting in multivariate time series.
By parameterizing a graph in a differentiable way, the models aim to improve forecasting quality.
arXiv Detail & Related papers (2021-09-10T16:34:35Z) - Stabilizing Equilibrium Models by Jacobian Regularization [151.78151873928027]
Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single nonlinear layer.
We propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models.
We show that this regularization adds only minimal computational cost, significantly stabilizes the fixed-point convergence in both forward and backward passes, and scales well to high-dimensional, realistic domains.
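A toy version of an equilibrium layer, together with a Hutchinson-style estimate of the squared Frobenius norm of the fixed-point Jacobian (the quantity such a regularizer penalizes), might look like this. Finite-difference Jacobian-vector products stand in for the autodiff used in training, and `f` is an illustrative single layer, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z, x, W, U):
    """Illustrative single implicit layer: the DEQ output solves z = f(z, x)."""
    return np.tanh(W @ z + U @ x)

def fixed_point(x, W, U, iters=200):
    """Solve z = f(z, x) by plain iteration (valid when f contracts in z)."""
    z = np.zeros(W.shape[0])
    for _ in range(iters):
        z = f(z, x, W, U)
    return z

def jac_frob_sq_estimate(z, x, W, U, n_probe=16, eps=1e-5):
    """Hutchinson estimate of ||df/dz||_F^2 at the fixed point: for
    v ~ N(0, I), E[||Jv||^2] = tr(J^T J).  Finite differences stand in
    for autodiff Jacobian-vector products."""
    total = 0.0
    for _ in range(n_probe):
        v = rng.standard_normal(z.shape)
        jv = (f(z + eps * v, x, W, U) - f(z, x, W, U)) / eps
        total += jv @ jv
    return total / n_probe
```

During training, this scalar would be added to the task loss as a penalty, keeping the fixed-point map well-conditioned in both the forward and backward passes.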
arXiv Detail & Related papers (2021-06-28T00:14:11Z) - Radflow: A Recurrent, Aggregated, and Decomposable Model for Networks of
Time Series [77.47313102926017]
Radflow is a novel model for networks of time series that influence each other.
It embodies three key ideas: a recurrent neural network to obtain node embeddings that depend on time, the aggregation of the flow of influence from neighboring nodes with multi-head attention, and the multi-layer decomposition of time series.
We show that Radflow can learn different trends and seasonal patterns, that it is robust to missing nodes and edges, and that correlated temporal patterns among network neighbors reflect influence strength.
arXiv Detail & Related papers (2021-02-15T00:57:28Z) - Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z) - Reinforcement Learning based dynamic weighing of Ensemble Models for
Time Series Forecasting [0.8399688944263843]
It is known that prediction accuracy improves when the models selected for data modelling are distinct (linear/non-linear, static/dynamic) and independent (minimally correlated).
Various approaches suggested in the literature to weigh the ensemble models use a static set of weights.
To address this issue, a Reinforcement Learning (RL) approach is proposed to dynamically assign and update the weights of each model at different time instants.
arXiv Detail & Related papers (2020-08-20T10:40:42Z) - Connecting the Dots: Multivariate Time Series Forecasting with Graph
Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
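One common way to extract uni-directed relations from learned node embeddings, as the graph learning module above does, is an antisymmetric similarity score followed by ReLU and top-k sparsification. The construction below is a sketch along those lines and may differ from the paper's exact parameterization:

```python
import numpy as np

def learn_adjacency(E1, E2, alpha=3.0, k=2):
    """Build a uni-directed adjacency matrix from two node-embedding
    matrices of shape (n_nodes, dim).  The antisymmetric score makes at
    most one of A[i, j], A[j, i] positive; top-k keeps only the
    strongest neighbors per node.  A sketch of this style of
    graph-learning module, not the paper's exact parameterization."""
    scores = np.tanh(alpha * (E1 @ E2.T - E2 @ E1.T))
    A = np.maximum(scores, 0.0)  # ReLU keeps one direction per node pair
    out = np.zeros_like(A)
    top = np.argsort(-A, axis=1)[:, :k]  # indices of the k largest per row
    for i, cols in enumerate(top):
        out[i, cols] = A[i, cols]
    return out
```

In a full model, the resulting sparse adjacency would feed the GNN's message passing, and the embeddings `E1`, `E2` would be trained end to end with the forecasting loss.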
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.