EAMDrift: An interpretable self retrain model for time series
- URL: http://arxiv.org/abs/2305.19837v1
- Date: Wed, 31 May 2023 13:25:26 GMT
- Title: EAMDrift: An interpretable self retrain model for time series
- Authors: Gonçalo Mateus, Cláudia Soares, João Leitão, António Rodrigues
- Abstract summary: We present EAMDrift, a novel method that combines forecasts from multiple individual predictors by weighting each prediction according to a performance metric.
EAMDrift is designed to automatically adapt to out-of-distribution patterns in data and identify the most appropriate models to use at each moment.
Our study on real-world datasets shows that EAMDrift outperforms individual baseline models by 20% and achieves comparable accuracy results to non-interpretable ensemble models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of machine learning for time series prediction has become
increasingly popular across various industries thanks to the availability of
time series data and advancements in machine learning algorithms. However,
traditional methods for time series forecasting rely on pre-optimized models
that are ill-equipped to handle unpredictable patterns in data. In this paper,
we present EAMDrift, a novel method that combines forecasts from multiple
individual predictors by weighting each prediction according to a performance
metric. EAMDrift is designed to automatically adapt to out-of-distribution
patterns in data and identify the most appropriate models to use at each moment
through interpretable mechanisms, which include an automatic retraining
process. Specifically, we encode different concepts with different models, each
functioning as an observer of specific behaviors. The activation of the
overall model then identifies which subset of the concept observers is
detecting concepts in the data. This activation is interpretable and based on
learned rules, allowing the study of relations among input variables. Our
study on real-world
datasets shows that EAMDrift outperforms individual baseline models by 20% and
achieves comparable accuracy results to non-interpretable ensemble models.
These findings demonstrate the efficacy of EAMDrift for time-series prediction
and highlight the importance of interpretability in machine learning models.
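The paper's code is not reproduced here, but its core combination rule can be sketched: each base predictor is scored on a sliding window of recent observations, predictions are blended with weights derived from that score, and a drift check triggers retraining. The sketch below is a minimal illustration under our own assumptions (inverse-MAE softmax weighting, a fixed error threshold as the drift trigger, and a fit/predict interface for the base models); none of these names or choices come from the authors' implementation.

# Minimal sketch of a performance-weighted forecasting ensemble with
# drift-triggered retraining, in the spirit of EAMDrift. The weighting rule
# (softmax over negative sliding-window MAE) and the fixed retrain threshold
# are illustrative assumptions, not the paper's learned activation rules.
from collections import deque

import numpy as np

class WeightedEnsemble:
    def __init__(self, models, window=50, retrain_threshold=2.0):
        self.models = models  # base predictors, one per encoded concept
        self.retrain_threshold = retrain_threshold
        # each model's absolute errors over the most recent `window` steps
        self.errors = [deque(maxlen=window) for _ in models]

    def weights(self):
        # lower recent error -> exponentially larger weight
        mae = np.array([np.mean(e) if e else 0.0 for e in self.errors])
        w = np.exp(-mae)
        return w / w.sum()

    def predict(self, x):
        preds = np.array([m.predict(x) for m in self.models])
        return float(self.weights() @ preds)

    def update(self, x, y_true, recent_history):
        # score every model on the newly observed point
        for m, errs in zip(self.models, self.errors):
            errs.append(abs(m.predict(x) - y_true))
        # crude out-of-distribution check: when even the best model's recent
        # MAE exceeds a fixed threshold, refit all models on recent data
        best_mae = min(np.mean(e) for e in self.errors)
        if best_mae > self.retrain_threshold:
            for m in self.models:
                m.fit(recent_history)

In the paper itself, the activation that selects among concept observers is learned and rule-based rather than a hard threshold, which is what makes the selection interpretable.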
Related papers
- Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z)
- Recency-Weighted Temporally-Segmented Ensemble for Time-Series Modeling [0.0]
Time-series modeling in process industries faces the challenge of dealing with complex, multi-faceted, and evolving data characteristics.
We introduce the Recency-Weighted Temporally-Segmented (ReWTS) ensemble model, a novel chunk-based approach for multi-step forecasting.
We present a comparative analysis, utilizing two years of data from a wastewater treatment plant and a drinking water treatment plant in Norway.
arXiv Detail & Related papers (2024-03-04T16:00:35Z)
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- Machine Learning Algorithms for Time Series Analysis and Forecasting [0.0]
Time series data is being used everywhere, from sales records to patients' health evolution metrics.
Various statistical and deep learning models have been considered, notably ARIMA, Prophet, and LSTMs.
Our work can be used by anyone to develop a good understanding of the forecasting process and to identify various state-of-the-art models in use today.
arXiv Detail & Related papers (2022-11-25T22:12:03Z)
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Instead, we are given access to a set of expert models and their predictions, alongside limited information about the dataset used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- Test-time Collective Prediction [73.74982509510961]
Multiple parties in a machine learning setting wish to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- The Effectiveness of Discretization in Forecasting: An Empirical Study on Neural Time Series Models [15.281725756608981]
We investigate the effect of data input and output transformations on the predictive performance of neural forecasting architectures.
We find that binning almost always improves performance compared to using normalized real-valued inputs (a minimal sketch of such a binning step appears after this list).
arXiv Detail & Related papers (2020-05-20T15:09:28Z)
- For2For: Learning to forecast from forecasts [1.6752182911522522]
This paper presents a time series forecasting framework which combines standard forecasting methods and a machine learning model.
Tested on the M4 competition dataset, this approach outperforms all submissions for quarterly series, and is more accurate than all but the winning algorithm for monthly series.
arXiv Detail & Related papers (2020-01-14T03:06:53Z)
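For the discretization study listed above, the binning transform it evaluates can be illustrated with a short sketch: real-valued observations are mapped to quantile-bin tokens before modeling and mapped back through bin centers afterwards. The bin count and quantile scheme below are our own assumptions for the illustration, not the paper's exact configuration.

# Illustrative quantile binning of a real-valued series into integer
# tokens, with the inverse mapping via bin centers. The bin count (32)
# is an assumed choice for this sketch, not the paper's configuration.
import numpy as np

def fit_bins(series, n_bins=32):
    # place bin edges at empirical quantiles so bins are roughly equally populated
    edges = np.quantile(series, np.linspace(0.0, 1.0, n_bins + 1))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return edges, centers

def discretize(series, edges):
    # token in {0, ..., n_bins - 1} for every observation
    idx = np.searchsorted(edges, series, side="right") - 1
    return np.clip(idx, 0, len(edges) - 2)

def dediscretize(tokens, centers):
    # map predicted tokens back to representative real values
    return centers[tokens]

series = np.random.default_rng(0).normal(size=1000)
edges, centers = fit_bins(series)
tokens = discretize(series, edges)   # a model consumes these integer inputs
reconstruction = dediscretize(tokens, centers)

A forecaster then operates on the token sequence (for example, through an embedding layer and a categorical output), and its predictions are decoded back to real values with dediscretize.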
This list is automatically generated from the titles and abstracts of the papers on this site.