Interpreting Time Series Forecasts with LIME and SHAP: A Case Study on the Air Passengers Dataset
- URL: http://arxiv.org/abs/2508.12253v1
- Date: Sun, 17 Aug 2025 06:22:29 GMT
- Title: Interpreting Time Series Forecasts with LIME and SHAP: A Case Study on the Air Passengers Dataset
- Authors: Manish Shukla
- Abstract summary: Time-series forecasting underpins critical decisions across aviation, energy, retail and health. This paper presents a unified framework for interpreting time-series forecasts using local interpretable model-agnostic explanations. We show that a small set of lagged features -- particularly the twelve-month lag -- and seasonal encodings explain most forecast variance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time-series forecasting underpins critical decisions across aviation, energy, retail and health. Classical autoregressive integrated moving average (ARIMA) models offer interpretability via coefficients but struggle with nonlinearities, whereas tree-based machine-learning models such as XGBoost deliver high accuracy but are often opaque. This paper presents a unified framework for interpreting time-series forecasts using local interpretable model-agnostic explanations (LIME) and SHapley additive exPlanations (SHAP). We convert a univariate series into a leakage-free supervised learning problem, train a gradient-boosted tree alongside an ARIMA baseline and apply post-hoc explainability. Using the Air Passengers dataset as a case study, we show that a small set of lagged features -- particularly the twelve-month lag -- and seasonal encodings explain most forecast variance. We contribute: (i) a methodology for applying LIME and SHAP to time series without violating chronology; (ii) theoretical exposition of the underlying algorithms; (iii) empirical evaluation with extensive analysis; and (iv) guidelines for practitioners.
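The leakage-free supervised framing described in the abstract can be sketched as follows: each row's features are strictly past values (here lags 1 and 12, the lags the paper highlights), and the train/test split respects chronology so no future information reaches the model. The helper name and lag choices are illustrative, not the paper's code.

```python
import numpy as np

def make_supervised(series, lags=(1, 12)):
    """Turn a univariate series into a lag-feature matrix.

    Row t uses only values strictly before t, so no future
    information leaks into the features (hypothetical helper).
    """
    max_lag = max(lags)
    X = np.column_stack(
        [series[max_lag - l : len(series) - l] for l in lags]
    )
    y = series[max_lag:]
    return X, y

# Chronological split: train on the past, test on the future.
series = np.arange(36, dtype=float)  # stand-in for monthly passenger counts
X, y = make_supervised(series, lags=(1, 12))
split = int(0.8 * len(y))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```

A gradient-boosted tree fit on `X_train, y_train` can then be passed to LIME or SHAP post hoc, exactly because the feature matrix is an ordinary tabular object.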
Related papers
- Towards Accurate and Interpretable Time-series Forecasting: A Polynomial Learning Approach [14.252057458281909]
Time series forecasting enables early warning and has driven asset performance management from traditional planned maintenance to predictive maintenance. The lack of interpretability in forecasting methods undermines users' trust and complicates development. This paper proposes the interpretable polynomial learning (IPL) method, which integrates interpretability into the model structure.
arXiv Detail & Related papers (2026-03-03T11:59:46Z) - DistDF: Time-Series Forecasting Needs Joint-Distribution Wasserstein Alignment [92.70019102733453]
Training time-series forecast models requires aligning the conditional distribution of model forecasts with that of the label sequence. We propose DistDF, which achieves alignment by alternatively minimizing a discrepancy between the conditional forecast and label distributions.
arXiv Detail & Related papers (2025-10-28T16:09:59Z) - A Unified Frequency Domain Decomposition Framework for Interpretable and Robust Time Series Forecasting [81.73338008264115]
Current approaches for time series forecasting, whether in the time or frequency domain, predominantly use deep learning models based on linear layers or transformers. We propose FIRE, a unified frequency domain decomposition framework that provides a mathematical abstraction for diverse types of time series. FIRE consistently outperforms state-of-the-art models on long-term forecasting benchmarks.
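A generic frequency-domain decomposition can be sketched as below: split a series into a low-frequency component plus a residual via the FFT. This is a minimal illustration of the general idea, not FIRE's actual decomposition; the function name and cutoff are assumptions.

```python
import numpy as np

def freq_decompose(series, keep=3):
    """Split a series into low-frequency structure and residual."""
    spec = np.fft.rfft(series)
    low = spec.copy()
    low[keep:] = 0                      # zero out all but the lowest bins
    trend = np.fft.irfft(low, n=len(series))
    return trend, series - trend

t = np.arange(48)
series = 0.1 * t + np.sin(2 * np.pi * t / 12)  # trend + yearly seasonality
trend, resid = freq_decompose(series, keep=2)
```

By construction the two components sum back to the original series, which is what makes such decompositions attractive for interpretability.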
arXiv Detail & Related papers (2025-10-11T09:59:25Z) - PAX-TS: Model-agnostic multi-granular explanations for time series forecasting via localized perturbations [1.1247041770660733]
PAX-TS is a model-agnostic post-hoc algorithm to explain time series forecasting models and their forecasts. Our method is based on localized input perturbations and results in multi-granular explanations. We identify 6 classes of patterns that repeatedly occur across different datasets and algorithms.
arXiv Detail & Related papers (2025-08-26T12:31:53Z) - Bridging the Last Mile of Prediction: Enhancing Time Series Forecasting with Conditional Guided Flow Matching [9.465542901469815]
Conditional Guided Flow Matching (CGFM) is a model-agnostic framework that extends flow matching by integrating outputs from an auxiliary predictive model. CGFM incorporates historical data as both conditions and guidance, uses two-sided conditional paths, and employs affine paths to expand the path space. Experiments across datasets and baselines show CGFM consistently outperforms state-of-the-art models, advancing forecasting.
arXiv Detail & Related papers (2025-07-09T18:03:31Z) - XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
Evaluating forecast NLEs is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z) - Counterfactual Explanations for Time Series Forecasting [14.03870816983583]
We formulate the novel problem of counterfactual generation for time series forecasting, and propose an algorithm, called ForecastCF.
ForecastCF solves the problem by applying gradient-based perturbations to the original time series.
Our results show that ForecastCF outperforms the baseline in terms of counterfactual validity and data manifold closeness.
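The gradient-based perturbation idea behind ForecastCF can be illustrated in miniature: nudge the input series along the gradient of a loss measuring how far the forecast is from a desired target. The linear forecaster, learning rate, and step count below are stand-ins, not the paper's algorithm.

```python
import numpy as np

# A toy linear forecaster: forecast = w . x (stand-in for a trained model).
w = np.array([0.2, 0.3, 0.5])

def forecast(x):
    return w @ x

def counterfactual(x, target, lr=0.1, steps=200):
    """Perturb input x so the forecast moves toward `target`."""
    x = x.astype(float).copy()
    for _ in range(steps):
        # Gradient of (forecast(x) - target)^2 with respect to x.
        grad = 2 * (forecast(x) - target) * w
        x -= lr * grad
    return x

x0 = np.array([1.0, 2.0, 3.0])
x_cf = counterfactual(x0, target=5.0)
```

A real method would add constraints keeping `x_cf` close to the data manifold; this sketch shows only the perturbation loop.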
arXiv Detail & Related papers (2023-10-12T08:51:59Z) - Mixed moving average field guided learning for spatio-temporal data [0.0]
We define a novel Bayesian-temporal embedding and a theory-guided machine learning approach to make ensemble forecasts.
We use Lipschitz predictors to determine fixed-time and any-time PAC bounds in the batch learning setting.
We then test the performance of our learning methodology by using linear predictors and data sets simulated from a spatio-temporal Ornstein-Uhlenbeck process.
arXiv Detail & Related papers (2023-01-02T16:11:05Z) - Generic Temporal Reasoning with Differential Analysis and Explanation [61.96034987217583]
We introduce a novel task named TODAY that bridges the gap with temporal differential analysis.
TODAY evaluates whether systems can correctly understand the effect of incremental changes.
We show that TODAY's supervision style and explanation annotations can be used in joint learning.
arXiv Detail & Related papers (2022-12-20T17:40:03Z) - TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z) - Simultaneously Reconciled Quantile Forecasting of Hierarchically Related Time Series [11.004159006784977]
We propose a flexible nonlinear model that optimizes quantile regression loss coupled with suitable regularization terms to maintain consistency of forecasts across hierarchies.
The theoretical framework introduced herein can be applied to any forecasting model with an underlying differentiable loss function.
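The pinball (quantile) loss such a model minimizes, together with a toy hierarchy-consistency penalty, can be sketched as follows. The reconciliation term here is an illustrative squared mismatch between a parent forecast and the sum of its children, not the paper's exact regularizer.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Standard quantile-regression (pinball) loss at quantile q."""
    err = y_true - y_pred
    return np.mean(np.maximum(q * err, (q - 1) * err))

def reconciliation_penalty(parent_pred, child_preds):
    """Illustrative penalty: parent forecast should equal sum of children."""
    return np.mean((parent_pred - child_preds.sum(axis=0)) ** 2)

y = np.array([3.0, 5.0, 7.0])
pred = np.array([2.5, 5.5, 6.0])
loss = pinball_loss(y, pred, q=0.9)

parent = np.array([10.0])
children = np.array([[4.0], [5.0]])
pen = reconciliation_penalty(parent, children)
```

Because both terms are differentiable, their weighted sum can serve as the training objective for any gradient-trained forecaster, which is the property the framework relies on.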
arXiv Detail & Related papers (2021-02-25T00:59:01Z) - Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.