Continuous Evolution Pool: Taming Recurring Concept Drift in Online Time Series Forecasting
- URL: http://arxiv.org/abs/2506.14790v1
- Date: Wed, 28 May 2025 03:27:49 GMT
- Title: Continuous Evolution Pool: Taming Recurring Concept Drift in Online Time Series Forecasting
- Authors: Tianxiang Zhan, Ming Jin, Yuanpeng He, Yuxuan Liang, Yong Deng, Shirui Pan
- Abstract summary: Continuous Evolution Pool (CEP) is a pooling mechanism that stores different instances of forecasters for different concepts. CEP effectively retains the knowledge of different concepts. In the scenario of online forecasting with recurring concepts, CEP significantly enhances the prediction results.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recurring concept drift, a type of concept drift in which previously observed data patterns reappear after some time, is one of the most prevalent types of concept drift in time series. As time progresses, concept drift occurs and previously encountered concepts are forgotten, leading to a decline in the accuracy of online predictions. Existing solutions employ parameter updating techniques to delay forgetting; however, this may result in the loss of some previously learned knowledge while neglecting the exploration of knowledge retention mechanisms. To retain all conceptual knowledge and fully utilize it when concepts recur, we propose the Continuous Evolution Pool (CEP), a pooling mechanism that stores different instances of forecasters for different concepts. Our method first selects the forecaster nearest to the test sample and then learns the features from its neighboring samples - a process we refer to as retrieval. If there are insufficient neighboring samples, this indicates that a new concept has emerged, and a new forecaster evolves from the current nearest one and is added to the pool to store the knowledge of that concept. Simultaneously, an elimination mechanism clears outdated knowledge to maintain the prediction quality of the forecasters. Experiments on models of different architectures and eight real datasets demonstrate that CEP effectively retains the knowledge of different concepts. In the scenario of online forecasting with recurring concepts, CEP significantly enhances the prediction results.
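The retrieval, evolution, and elimination steps described in the abstract can be sketched as follows. This is a minimal, hypothetical interpretation, not the paper's actual implementation: the distance metric, neighborhood radius, neighbor threshold, pool size, and the dict-based pool structure are all assumptions made for illustration.

```python
import copy
import numpy as np

class ContinuousEvolutionPool:
    """Minimal sketch of a CEP-style forecaster pool (hypothetical API).

    One forecaster instance is stored per concept. For each test sample,
    the nearest forecaster is retrieved; if too few stored samples lie
    within the neighborhood radius, a new concept is assumed and a new
    forecaster is evolved (copied) from the nearest one.
    """

    def __init__(self, base_forecaster, radius=1.0, min_neighbors=1, max_size=8):
        self.base = base_forecaster      # prototype forecaster to clone
        self.radius = radius             # neighborhood radius for retrieval
        self.min_neighbors = min_neighbors
        self.max_size = max_size         # pool capacity before elimination
        self.pool = []                   # entries: {"samples", "model", "last_used"}
        self.t = 0                       # logical time for recency tracking

    def _nearest(self, x):
        """Return the pool entry whose stored samples are closest to x."""
        return min(self.pool,
                   key=lambda e: min(np.linalg.norm(x - s) for s in e["samples"]))

    def select(self, x):
        """Retrieve the forecaster for sample x, evolving a new one if needed."""
        self.t += 1
        if not self.pool:
            entry = {"samples": [x], "model": copy.deepcopy(self.base),
                     "last_used": self.t}
            self.pool.append(entry)
            return entry["model"]
        entry = self._nearest(x)
        neighbors = sum(np.linalg.norm(x - s) <= self.radius
                        for s in entry["samples"])
        if neighbors < self.min_neighbors:
            # Insufficient neighbors: a new concept has emerged, so evolve
            # a fresh forecaster from the nearest one and add it to the pool.
            entry = {"samples": [x], "model": copy.deepcopy(entry["model"]),
                     "last_used": self.t}
            self.pool.append(entry)
            self._eliminate()
        else:
            entry["samples"].append(x)
            entry["last_used"] = self.t
        return entry["model"]

    def _eliminate(self):
        """Elimination: drop the least recently used concept when full."""
        if len(self.pool) > self.max_size:
            self.pool.remove(min(self.pool, key=lambda e: e["last_used"]))
```

The least-recently-used elimination policy here is one plausible choice; the paper's actual criterion for clearing outdated knowledge may differ.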
Related papers
- I Predict Therefore I Am: Is Next Token Prediction Enough to Learn Human-Interpretable Concepts from Data? [76.15163242945813]
Large language models (LLMs) have led many to conclude that they exhibit a form of intelligence. We introduce a novel generative model that generates tokens on the basis of human-interpretable concepts represented as latent discrete variables.
arXiv Detail & Related papers (2025-03-12T01:21:17Z) - Diverse Concept Proposals for Concept Bottleneck Models [23.395270888378594]
Concept bottleneck models are interpretable predictive models that are often used in domains where model trust is a key priority, such as healthcare. Our proposed approach identifies a number of predictive concepts that explain the data. By offering multiple alternative explanations, we allow the human expert to choose the one that best aligns with their expectation.
arXiv Detail & Related papers (2024-12-24T00:12:34Z) - Proactive Model Adaptation Against Concept Drift for Online Time Series Forecasting [23.50574069148193]
We present a novel proactive model adaptation framework for online time series forecasting. Proceed first estimates the concept drift between the recently used training samples and the current test sample. It then employs an adaptation generator to efficiently translate the estimated drift into parameter adjustments.
arXiv Detail & Related papers (2024-12-11T14:57:10Z) - Concept-driven Off Policy Evaluation [2.789652596206117]
We develop a family of concept-based OPE estimators, proving that they remain unbiased and reduce variance when concepts are known and predefined. Experiments with synthetic and real-world datasets show that both known and learned concept-based estimators significantly improve OPE performance. Unlike other OPE methods, concept-based estimators are easily interpretable and allow for targeted interventions on specific concepts, further enhancing the quality of these estimators.
arXiv Detail & Related papers (2024-11-28T22:15:06Z) - MulCPred: Learning Multi-modal Concepts for Explainable Pedestrian Action Prediction [57.483718822429346]
MulCPred is proposed to explain its predictions based on multi-modal concepts represented by training samples.
MulCPred is evaluated on multiple datasets and tasks.
arXiv Detail & Related papers (2024-09-14T14:15:28Z) - Online Drift Detection with Maximum Concept Discrepancy [13.48123472458282]
We propose MCD-DD, a novel concept drift detection method based on maximum concept discrepancy.
Our method can adaptively identify varying forms of concept drift by contrastive learning of concept embeddings.
arXiv Detail & Related papers (2024-07-07T13:57:50Z) - Predictive Churn with the Set of Good Models [61.00058053669447]
This paper explores connections between two seemingly unrelated concepts of predictive inconsistency. The first, known as predictive multiplicity, occurs when models that perform similarly produce conflicting predictions for individual samples. The second concept, predictive churn, examines the differences in individual predictions before and after model updates.
arXiv Detail & Related papers (2024-02-12T16:15:25Z) - Performative Time-Series Forecasting [64.03865043422597]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective. We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts. We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z) - Handling Concept Drift in Global Time Series Forecasting [10.732102570751392]
We propose two new concept drift handling methods, namely Error Contribution Weighting (ECW) and Gradient Descent Weighting (GDW).
These methods use two forecasting models, separately trained with the most recent series and all series; the weighted average of the forecasts provided by the two models is taken as the final forecast.
arXiv Detail & Related papers (2023-04-04T03:46:25Z) - Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs using minimum computational expense.
arXiv Detail & Related papers (2022-04-05T12:52:45Z)
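Among the related works above, the ECW/GDW approach combines two forecasting models by a weighted average of their forecasts. A minimal sketch of this idea, assuming weights inversely proportional to each model's recent error (the actual ECW and GDW weighting schemes in that paper may differ):

```python
import numpy as np

def error_contribution_weights(err_recent_model, err_global_model, eps=1e-8):
    """Weight each model inversely to its recent forecasting error,
    returning a convex combination (weights sum to 1)."""
    inv = np.array([1.0 / (err_recent_model + eps),
                    1.0 / (err_global_model + eps)])
    return inv / inv.sum()

def combine_forecasts(f_recent, f_global, w):
    """Final forecast: weighted average of the two models' forecasts."""
    return w[0] * f_recent + w[1] * f_global
```

For example, if the model trained on the most recent series has recent error 1.0 and the model trained on all series has error 3.0, the recent model receives three times the weight of the global model.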
This list is automatically generated from the titles and abstracts of the papers in this site.