Surrogate Modeling for Explainable Predictive Time Series Corrections
- URL: http://arxiv.org/abs/2412.19897v2
- Date: Wed, 15 Jan 2025 19:51:44 GMT
- Title: Surrogate Modeling for Explainable Predictive Time Series Corrections
- Authors: Alfredo Lopez, Florian Sobieczky
- Abstract summary: An initially non-interpretable predictive model is used to improve the forecast of a classical time-series 'base model'.
'Explainability' of the correction is provided by fitting the base model again to the data from which the error prediction has been removed (subtracted).
We provide illustrative examples to demonstrate the potential of the method to discover and explain underlying patterns in the data.
- Abstract: We introduce a local surrogate approach for explainable time-series forecasting. An initially non-interpretable predictive model is used to improve the forecast of a classical time-series 'base model'. 'Explainability' of the correction is provided by fitting the base model again to the data from which the error prediction has been removed (subtracted), yielding a difference in the model parameters which can be interpreted. We provide illustrative examples to demonstrate the potential of the method to discover and explain underlying patterns in the data.
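As a concrete illustration of the workflow the abstract describes, here is a minimal Python sketch: an interpretable base model is fit, a black-box regressor learns to predict the base model's residuals, the predicted error is subtracted from the data, and the base model is refit so the parameter shift can be read off. The AR(1) base model, the gradient-boosting corrector, and the lagged features are our illustrative assumptions, not the authors' exact choices.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy series: AR(1) dynamics plus a periodic pattern the base model cannot capture.
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + 0.5 * np.sin(2 * np.pi * t / 50) + rng.normal(scale=0.1)

def fit_ar1(series):
    """Least-squares AR(1) fit; returns the autoregressive coefficient."""
    x, z = series[:-1], series[1:]
    return float(x @ z / (x @ x))

# 1) Fit the interpretable base model and compute its one-step residuals.
phi = fit_ar1(y)
resid = y[1:] - phi * y[:-1]

# 2) Train a non-interpretable surrogate to predict the base model's errors
#    (lagged value and phase features are an illustrative assumption).
X = np.column_stack([y[:-1], np.arange(1, n) % 50])
corrector = GradientBoostingRegressor().fit(X, resid)

# 3) Subtract the predicted error from the data and refit the base model.
y_corrected = y.copy()
y_corrected[1:] -= corrector.predict(X)
phi_corrected = fit_ar1(y_corrected)

# 4) The shift in the base-model parameter is the interpretable summary.
print(f"AR coefficient before: {phi:.3f}, after correction: {phi_corrected:.3f}")
```

The interpretation step then compares the two parameter sets: a change in the AR coefficient after removing the predicted error indicates which part of the dynamics the black-box correction was absorbing.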
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
We develop an influence functions framework to address these challenges.
arXiv Detail & Related papers (2024-10-17T17:59:02Z)
- Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this 'model completion' learning approach can be more effective than estimand approaches.
arXiv Detail & Related papers (2024-08-26T08:39:09Z)
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications; a toy illustration of churn follows this entry.
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
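A toy illustration of the churn notion from the entry above. Obtaining two near-optimal models by bootstrap resampling is our stand-in for the paper's Rashomon-set machinery:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Two near-optimal models trained on bootstrap resamples of the same data.
X, y = make_classification(n_samples=2000, random_state=0)
rng = np.random.default_rng(0)

models = []
for seed in (1, 2):
    idx = rng.integers(0, len(X), size=len(X))
    models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

# Churn: the fraction of inputs on which the two models disagree,
# even though both achieve similar accuracy.
preds = [m.predict(X) for m in models]
churn = np.mean(preds[0] != preds[1])
print(f"accuracies: {[round(m.score(X, y), 3) for m in models]}, churn: {churn:.3f}")
```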
- Deep Non-Parametric Time Series Forecaster [19.800783133682955]
The proposed approach does not assume any parametric form for the predictive distribution and instead generates predictions by sampling from the empirical distribution according to a tunable strategy.
We develop a global version of the proposed method that automatically learns the sampling strategy by exploiting the information across multiple related time series; a minimal sketch follows this entry.
arXiv Detail & Related papers (2023-12-22T12:46:30Z)
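A minimal sketch of the non-parametric idea from the entry above: forecasts are drawn by resampling the empirical distribution of past one-step changes, with a window parameter standing in for the paper's tunable sampling strategy (the function name and the differencing scheme are our assumptions):

```python
import numpy as np

def sample_forecast(history, horizon, n_paths=1000, window=None, rng=None):
    """Sample future trajectories from the empirical distribution of past
    one-step changes; `window` is a stand-in for a tunable sampling strategy."""
    rng = rng or np.random.default_rng()
    past = history if window is None else history[-window:]
    steps = rng.choice(np.diff(past), size=(n_paths, horizon), replace=True)
    return history[-1] + np.cumsum(steps, axis=1)

history = np.cumsum(np.random.default_rng(0).normal(size=200))  # toy random walk
paths = sample_forecast(history, horizon=10)
point = paths.mean(axis=0)                       # point forecast
band = np.percentile(paths, [10, 90], axis=0)    # empirical uncertainty band
print(point[:3], band.shape)
```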
- Prediction Errors for Penalized Regressions based on Generalized Approximate Message Passing [0.0]
We derive the forms of estimators for the prediction errors: the $C_p$ criterion, information criteria, and the leave-one-out cross-validation (LOOCV) error.
In the framework of GAMP, we show that the information criteria can be expressed using the variance of the estimates; a generic LOOCV sketch follows this entry.
arXiv Detail & Related papers (2022-06-26T09:42:39Z)
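For orientation, a generic LOOCV computation for a penalized linear model, using the standard linear-smoother shortcut $e_i / (1 - H_{ii})$. This illustrates the quantity being estimated, not the GAMP derivation itself:

```python
import numpy as np

# LOOCV error for ridge regression via the linear-smoother shortcut
# e_i / (1 - H_ii), which avoids refitting the model n times.
rng = np.random.default_rng(0)
n, p, lam = 100, 10, 1.0
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=0.5, size=n)

H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # hat matrix
resid = y - H @ y
loocv = np.mean((resid / (1 - np.diag(H))) ** 2)
print(f"LOOCV estimate of prediction error: {loocv:.4f}")
```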
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models given only a few examples exhibit strong prediction bias across labels.
Although few-shot fine-tuning can mitigate the prediction bias, our analysis shows models gain performance improvement by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z)
- EDDA: Explanation-driven Data Augmentation to Improve Model and Explanation Alignment [12.729179495550557]
We seek a methodology that can improve alignment between model predictions and explanation methods.
We achieve this through a novel explanation-driven data augmentation (EDDA) method.
This is based on the simple motivating principle that occluding regions salient for the model's prediction should decrease the model's confidence in that prediction; a sketch of this check follows the entry.
arXiv Detail & Related papers (2021-05-29T00:42:42Z)
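A small sketch of the motivating principle from the EDDA entry above: occlude the inputs an attribution marks as most salient and check that the model's confidence drops. The helper name and the toy logistic model are our assumptions; this is only the sanity check, not the augmentation procedure itself:

```python
import numpy as np

def occlusion_consistency(predict_proba, x, saliency, top_frac=0.1, fill=0.0):
    """Occluding the most salient inputs should lower predicted confidence;
    a positive return value means the explanation and model are aligned."""
    k = max(1, int(top_frac * x.size))
    top = np.argsort(saliency.ravel())[-k:]   # most salient positions
    x_occ = x.copy().ravel()
    x_occ[top] = fill                         # occlude them
    return predict_proba(x.ravel()) - predict_proba(x_occ)

# Toy model whose confidence is driven by the first feature.
w = np.zeros(20); w[0] = 3.0
predict_proba = lambda x: 1 / (1 + np.exp(-(w @ x)))
x = np.ones(20)
saliency = np.abs(w * x)  # gradient-times-input style attribution
print(occlusion_consistency(predict_proba, x, saliency))  # > 0: aligned
```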
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- Explainable boosted linear regression for time series forecasting [0.1876920697241348]
Time series forecasting involves collecting and analyzing past observations to develop a model to extrapolate such observations into the future.
We propose the explainable boosted linear regression (EBLR) algorithm for time series forecasting; a minimal residual-boosting sketch follows this entry.
arXiv Detail & Related papers (2020-09-18T22:31:42Z)
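To make the boosting idea concrete, a minimal residual-boosting sketch with linear base learners. This is generic boosted linear regression under our assumptions, not the EBLR algorithm itself, which additionally extracts interpretable nonlinear features:

```python
import numpy as np

def boosted_linear_forecast(X, y, n_rounds=5, lr=0.5):
    """Toy residual boosting: each round fits ordinary least squares
    to the current residuals and adds a damped correction."""
    coefs, pred = [], np.zeros(len(y))
    Xb = np.column_stack([np.ones(len(y)), X])   # add intercept column
    for _ in range(n_rounds):
        beta, *_ = np.linalg.lstsq(Xb, y - pred, rcond=None)
        coefs.append(beta)                        # interpretable per-round weights
        pred += lr * Xb @ beta
    return coefs, pred

rng = np.random.default_rng(0)
t = np.arange(200.0)
y = 0.02 * t + np.sin(t / 8) + rng.normal(scale=0.1, size=200)
X = np.column_stack([t, np.sin(t / 8)])          # candidate features
coefs, fit = boosted_linear_forecast(X, y)
print(f"in-sample RMSE: {np.sqrt(np.mean((y - fit) ** 2)):.3f}")
```

Because every base learner is linear, the accumulated coefficients remain directly readable, which is the sense in which boosted linear regression stays explainable.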