Explainable Artificial Intelligence for Economic Time Series: A Comprehensive Review and a Systematic Taxonomy of Methods and Concepts
- URL: http://arxiv.org/abs/2512.12506v1
- Date: Sun, 14 Dec 2025 00:45:30 GMT
- Title: Explainable Artificial Intelligence for Economic Time Series: A Comprehensive Review and a Systematic Taxonomy of Methods and Concepts
- Authors: Agustín García-García, Pablo Hidalgo, Julio E. Sandubete
- Abstract summary: This survey reviews and organizes the growing literature on XAI for economic time series. We propose a taxonomy that classifies methods by (i) explanation mechanism and (ii) time-series compatibility. We synthesize time-series-specific adaptations that reduce lag fragmentation and computational cost.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable Artificial Intelligence (XAI) is increasingly required in computational economics, where machine-learning forecasters can outperform classical econometric models but remain difficult to audit and use for policy. This survey reviews and organizes the growing literature on XAI for economic time series, where autocorrelation, non-stationarity, seasonality, mixed frequencies, and regime shifts can make standard explanation techniques unreliable or economically implausible. We propose a taxonomy that classifies methods by (i) explanation mechanism: propagation-based approaches (e.g., Integrated Gradients, Layer-wise Relevance Propagation), perturbation and game-theoretic attribution (e.g., permutation importance, LIME, SHAP), and function-based global tools (e.g., Accumulated Local Effects); (ii) time-series compatibility, including preservation of temporal dependence, stability over time, and respect for data-generating constraints. We synthesize time-series-specific adaptations such as vector- and window-based formulations (e.g., Vector SHAP, WindowSHAP) that reduce lag fragmentation and computational cost while improving interpretability. We also connect explainability to causal inference and policy analysis through interventional attributions (Causal Shapley values) and constrained counterfactual reasoning. Finally, we discuss intrinsically interpretable architectures (notably attention-based transformers) and provide guidance for decision-grade applications such as nowcasting, stress testing, and regime monitoring, emphasizing attribution uncertainty and explanation dynamics as indicators of structural change.
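As a concrete illustration of the "time-series compatibility" axis, the sketch below shows window-level permutation importance on a toy autoregressive series: lag columns are permuted jointly as blocks rather than one at a time, the same intuition behind the Vector SHAP and WindowSHAP adaptations mentioned above. This is a minimal sketch with invented data and names, not code from any surveyed paper.

```python
# Illustrative sketch: window-level permutation importance for a lag-feature
# forecaster. Permuting whole lag windows jointly (instead of each lag column
# independently) avoids scoring highly collinear lags in isolation -- the
# "lag fragmentation" the abstract refers to. All names here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Toy AR(2)-style series turned into a supervised problem with 8 lag features.
T, n_lags = 600, 8
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(scale=0.5)
X = np.column_stack([y[n_lags - k - 1:T - k - 1] for k in range(n_lags)])
target = y[n_lags:]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, target)
base_mse = mean_squared_error(target, model.predict(X))

# Group the 8 lag columns into contiguous windows of 4 and permute each
# window as a block, preserving within-window temporal structure.
windows = [list(range(0, 4)), list(range(4, 8))]
for w, cols in enumerate(windows):
    Xp = X.copy()
    perm = rng.permutation(len(Xp))
    Xp[:, cols] = X[perm][:, cols]          # joint permutation of the window
    drop = mean_squared_error(target, model.predict(Xp)) - base_mse
    print(f"window {w} (lags {cols[0]+1}-{cols[-1]+1}): MSE increase = {drop:.4f}")
```

On this toy data-generating process, permuting the window containing lags 1-2 degrades the fit far more than permuting the distant-lag window, which is the kind of window-level attribution the survey discusses.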
Related papers
- Towards Worst-Case Guarantees with Scale-Aware Interpretability
Neural networks organize information according to the hierarchical, multi-scale structure of natural data. We propose a unifying research agenda, "scale-aware interpretability", to develop formal machinery and interpretability tools.
arXiv Detail & Related papers (2026-02-05T01:22:31Z)
- Interpretable Hybrid Deep Q-Learning Framework for IoT-Based Food Spoilage Prediction with Synthetic Data Generation and Hardware Validation
The need for an intelligent, real-time spoilage prediction system has become critical in modern IoT-driven food supply chains. We propose a hybrid reinforcement learning framework integrating Long Short-Term Memory (LSTM) and Recurrent Neural Networks (RNN) for enhanced spoilage prediction.
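As a rough, hedged illustration of the recurrent component such a pipeline might contain (a generic sketch with invented sizes and names, not the authors' hybrid Deep Q-Learning model):

```python
# Generic LSTM sequence classifier for sensor windows -> spoilage risk.
# Purely illustrative; not the paper's hybrid framework.
import torch
import torch.nn as nn

class SpoilageLSTM(nn.Module):
    def __init__(self, n_sensors=4, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, n_sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # classify from the last hidden state

model = SpoilageLSTM()
dummy = torch.randn(8, 24, 4)              # 8 windows, 24 steps, 4 sensors
logits = model(dummy)
print(logits.shape)                        # torch.Size([8, 2])
```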
arXiv Detail & Related papers (2025-12-22T12:59:48Z)
- Fair and Explainable Credit-Scoring under Concept Drift: Adaptive Explanation Frameworks for Evolving Populations
We develop adaptive explanation frameworks that recalibrate interpretability and fairness in dynamically evolving credit models. Results show that adaptive methods, particularly rebaselined and surrogate-based explanations, substantially improve temporal stability and reduce disparate impact across demographic groups without degrading predictive accuracy. These findings establish adaptive explainability as a practical mechanism for sustaining transparency, accountability, and ethical reliability in data-driven credit systems.
arXiv Detail & Related papers (2025-11-05T19:14:43Z)
- From Physics to Machine Learning and Back: Part II - Learning and Observational Bias in PHM
This review examines how incorporating learning and observational biases through physics-informed modeling and data strategies can guide models toward physically consistent and reliable predictions. Fast adaptation methods, including meta-learning and few-shot learning, are reviewed alongside domain generalization techniques.
arXiv Detail & Related papers (2025-09-25T14:15:43Z)
- A Survey of Reasoning and Agentic Systems in Time Series with Large Language Models
Time series reasoning treats time as a first-class axis and incorporates intermediate evidence directly into the answer. This survey defines the problem and organizes the literature by reasoning topology with three families: direct reasoning in one step, linear chain reasoning with explicit intermediates, and branch-structured reasoning.
arXiv Detail & Related papers (2025-09-15T04:39:50Z)
- Learning Time-Aware Causal Representation for Model Generalization in Evolving Domains
We develop a time-aware structural causal model (SCM) that incorporates dynamic causal factors and causal mechanism drifts. We show that our method can yield the optimal causal predictor for each time domain. Results on both synthetic and real-world datasets show that SYNC achieves superior temporal generalization performance.
arXiv Detail & Related papers (2025-06-21T14:05:37Z)
- Topology-Aware Conformal Prediction for Stream Networks
We propose Spatio-Temporal Adaptive Conformal Inference (CISTA), a novel framework that integrates network topology and temporal dynamics into conformal prediction. Our results show that CISTA effectively balances prediction efficiency and coverage, outperforming existing conformal prediction methods for stream networks.
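As background for this entry, the sketch below shows the plain split-conformal recipe that topology-aware and adaptive variants such as CISTA extend; the data, model, and names are illustrative, and the paper's network topology and temporal adaptivity are deliberately omitted.

```python
# Plain split conformal prediction for regression -- the base recipe that
# topology-aware / adaptive variants build on. Illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.8, size=500)

# Split the data: fit on one part, calibrate residuals on the held-out part.
Xtr, ytr, Xcal, ycal = X[:300], y[:300], X[300:], y[300:]
model = LinearRegression().fit(Xtr, ytr)
scores = np.abs(ycal - model.predict(Xcal))        # nonconformity scores

alpha = 0.1                                         # target 90% coverage
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

x_new = rng.normal(size=(1, 3))
pred = model.predict(x_new)[0]
print(f"90% interval: [{pred - q:.2f}, {pred + q:.2f}]")
```

The finite-sample quantile correction `(n + 1)(1 - alpha) / n` is what gives the marginal coverage guarantee under exchangeability; handling temporally dependent stream data is precisely where adaptive extensions come in.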
arXiv Detail & Related papers (2025-03-06T21:21:15Z)
- Time-varying Factor Augmented Vector Autoregression with Grouped Sparse Autoencoder
We introduce a Grouped Sparse autoencoder that employs the Spike-and-Slab Lasso prior. We incorporate time-varying parameters into the VAR component to better capture evolving economic dynamics. Our empirical application to the US economy demonstrates that the Grouped Sparse autoencoder produces more interpretable factors.
arXiv Detail & Related papers (2025-03-06T12:37:55Z)
- Coarse Set Theory for AI Ethics and Decision-Making: A Mathematical Framework for Granular Evaluations
Coarse Ethics (CE) is a theoretical framework that justifies coarse-grained evaluations, such as letter grades or warning labels, as ethically appropriate under cognitive and contextual constraints. This paper introduces Coarse Set Theory (CST), a novel mathematical framework that models coarse-grained decision-making using totally ordered structures and coarse partitions. CST defines hierarchical relations among sets and uses information-theoretic tools, such as Kullback-Leibler divergence, to quantify the trade-off between simplification and information loss.
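The KL-based trade-off is easy to make concrete. The toy computation below (an illustrative construction, not the paper's exact formalism) compares a fine-grained score distribution with the reconstruction implied by a coarse partition.

```python
# Toy example: information lost when a fine-grained score distribution is
# summarized by coarse bins (e.g., letter grades). We compare the original
# distribution P with the reconstruction Q that spreads each bin's mass
# uniformly over its members, and measure KL(P || Q).
import numpy as np

p = np.array([0.05, 0.10, 0.20, 0.30, 0.20, 0.10, 0.05])  # scores 0..6
bins = [(0, 1, 2), (3,), (4, 5, 6)]                        # a coarse partition

q = np.empty_like(p)
for members in bins:
    mass = sum(p[m] for m in members)
    for m in members:
        q[m] = mass / len(members)          # uniform within each bin

kl = float(np.sum(p * np.log(p / q)))
print(f"KL(P || Q) = {kl:.4f} nats")        # > 0 unless P is uniform per bin
```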
arXiv Detail & Related papers (2025-02-11T08:18:37Z)
- Generic Temporal Reasoning with Differential Analysis and Explanation
We introduce a novel task named TODAY that bridges the gap with temporal differential analysis.
TODAY evaluates whether systems can correctly understand the effect of incremental changes.
We show that TODAY's supervision style and explanation annotations can be used in joint learning.
arXiv Detail & Related papers (2022-12-20T17:40:03Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines named X-MOP, which allow selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Counterfactual Explanations as Interventions in Latent Space
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations (see the sketch after this entry).
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
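To make the latent-space idea concrete, here is a bare-bones sketch of counterfactual search in a learned latent space; the decoder, classifier, and every name are invented for demonstration, and this is not the CEILS implementation.

```python
# Generic latent-space counterfactual search: optimize a latent code so the
# decoded point flips a classifier's decision while staying close to the
# original. Illustrative only -- not the CEILS method.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_in, d_lat = 6, 2

# Stand-ins for a pretrained decoder and classifier.
decoder = nn.Sequential(nn.Linear(d_lat, 16), nn.ReLU(), nn.Linear(16, d_in))
clf = nn.Sequential(nn.Linear(d_in, 16), nn.ReLU(), nn.Linear(16, 1))

z = torch.zeros(1, d_lat, requires_grad=True)   # latent code of the instance
z0 = z.detach().clone()
opt = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    opt.zero_grad()
    x_cf = decoder(z)
    # Push the classifier score toward the positive class while keeping the
    # intervention in latent space small (proximity term).
    loss = nn.functional.softplus(-clf(x_cf)).mean() + 0.1 * ((z - z0) ** 2).sum()
    loss.backward()
    opt.step()

print("counterfactual score:", torch.sigmoid(clf(decoder(z))).item())
```

Searching in latent rather than input space is what lets such methods keep counterfactuals on the data manifold, which is the feasibility concern raised in the blurb above.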
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.