Feature Importance Explanations for Temporal Black-Box Models
- URL: http://arxiv.org/abs/2102.11934v1
- Date: Tue, 23 Feb 2021 20:41:07 GMT
- Title: Feature Importance Explanations for Temporal Black-Box Models
- Authors: Akshay Sood and Mark Craven
- Abstract summary: We propose TIME, a method to explain models that are inherently temporal in nature.
Our approach uses a model-agnostic permutation-based approach to analyze global feature importance.
- Score: 3.655021726150369
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Models in the supervised learning framework may capture rich and complex
representations over the features that are hard for humans to interpret.
Existing methods to explain such models are often specific to architectures and
data where the features do not have a time-varying component. In this work, we
propose TIME, a method to explain models that are inherently temporal in
nature. Our approach (i) uses a model-agnostic permutation-based approach to
analyze global feature importance, (ii) identifies the importance of salient
features with respect to their temporal ordering as well as localized windows
of influence, and (iii) uses hypothesis testing to provide statistical rigor.
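The core idea in (i), model-agnostic permutation-based global feature importance for temporal inputs, can be illustrated with a minimal sketch. This is not the TIME algorithm itself (which additionally localizes importance to temporal windows and adds hypothesis testing); the names `temporal_permutation_importance`, `SumModel`, and `neg_mse` are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_permutation_importance(model, X, y, metric, n_repeats=10):
    """Global importance for a black-box temporal model via permutation.

    X has shape (n_samples, n_timesteps, n_features). For each feature,
    its entire series is shuffled across samples (preserving within-series
    temporal order) and the resulting drop in the metric is recorded.
    """
    baseline = metric(y, model.predict(X))
    n_samples, _, n_features = X.shape
    drops = np.zeros((n_features, n_repeats))
    for j in range(n_features):
        for r in range(n_repeats):
            Xp = X.copy()
            # Break the feature-target association for feature j only.
            Xp[:, :, j] = X[rng.permutation(n_samples), :, j]
            drops[j, r] = baseline - metric(y, model.predict(Xp))
    return drops.mean(axis=1)

class SumModel:
    """Toy stand-in for a trained temporal model: sums feature 0 over time."""
    def predict(self, X):
        return X[:, :, 0].sum(axis=1)

X = rng.normal(size=(200, 5, 3))
y = X[:, :, 0].sum(axis=1)
neg_mse = lambda y_true, y_pred: -np.mean((y_true - y_pred) ** 2)
imp = temporal_permutation_importance(SumModel(), X, y, neg_mse)
```

Here only feature 0 drives the target, so permuting it degrades the metric while permuting the other features leaves it unchanged; repeated permutations yield a distribution of metric drops, which is what a hypothesis test could then be run over.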
Related papers
- SynTSBench: Rethinking Temporal Pattern Learning in Deep Learning Models for Time Series [11.314952720053464]
We propose a synthetic data-driven evaluation paradigm, SynTSBench, for time series forecasting models. Our framework isolates confounding factors and establishes an interpretable evaluation system with three core analytical dimensions. Our experiments show that current deep learning models do not universally approach optimal baselines across all types of temporal features.
arXiv Detail & Related papers (2025-10-23T06:59:38Z) - Understanding the Implicit Biases of Design Choices for Time Series Foundation Models [90.894232610821]
Time series foundation models (TSFMs) are a class of potentially powerful, general-purpose tools for time series forecasting and related temporal tasks. Their behavior is strongly shaped by subtle inductive biases in their design. We show how these biases can be intuitive or very counterintuitive, depending on properties of the model and data.
arXiv Detail & Related papers (2025-10-22T04:42:35Z) - ProtoTS: Learning Hierarchical Prototypes for Explainable Time Series Forecasting [25.219624871510376]
We propose ProtoTS, a novel interpretable forecasting framework that achieves both high accuracy and transparent decision-making. ProtoTS computes instance-prototype similarity based on a denoised representation that preserves abundant heterogeneous information. Experiments on multiple realistic benchmarks, including a newly released LOF dataset, show that ProtoTS not only exceeds existing methods in forecast accuracy but also delivers expert-steerable interpretations.
arXiv Detail & Related papers (2025-09-27T07:10:21Z) - On Identifying Why and When Foundation Models Perform Well on Time-Series Forecasting Using Automated Explanations and Rating [7.375605655806626]
Time-series forecasting models (TSFM) have evolved from classical statistical methods to sophisticated foundation models. This work addresses concerns by combining traditional explainable AI (XAI) methods with Rating Driven Explanations (RDE). We evaluate four distinct model architectures: ARIMA, Gradient Boosting, Chronos (time-series specific foundation model), and Llama (general-purpose; both fine-tuned and base models).
arXiv Detail & Related papers (2025-08-28T05:27:45Z) - Tailored Architectures for Time Series Forecasting: Evaluating Deep Learning Models on Gaussian Process-Generated Data [0.5573267589690007]
Research aims at uncovering clear connections between time series characteristics and particular models. We present TimeFlex, a new model that incorporates a modular architecture tailored to handle diverse temporal dynamics. This model is compared to current state-of-the-art models, offering a deeper understanding of how models perform under varied time series conditions.
arXiv Detail & Related papers (2025-06-10T16:46:02Z) - Dynamic Modes as Time Representation for Spatiotemporal Forecasting [19.551966701918236]
The proposed approach employs Dynamic Mode Decomposition (DMD) to extract temporal modes directly from observed data. Experiments on urban mobility, highway traffic, and climate show that the DMD-based embedding consistently improves long-horizon forecasting accuracy, reduces residual correlation, and enhances temporal generalization.
arXiv Detail & Related papers (2025-06-01T23:16:39Z) - Foundation Models for Time Series: A Survey [0.27835153780240135]
Transformer-based foundation models have emerged as a dominant paradigm in time series analysis.
This survey introduces a novel taxonomy to categorize them across several dimensions.
arXiv Detail & Related papers (2025-04-05T01:27:55Z) - On the importance of structural identifiability for machine learning with partially observed dynamical systems [0.7864304771129751]
We use structural identifiability analysis to explicitly relate parameter configurations that are associated with identical system outputs.
Our results demonstrate the importance of accounting for structural identifiability, a topic that has received relatively little attention from the machine learning community.
arXiv Detail & Related papers (2025-02-06T15:06:52Z) - XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
However, evaluating forecast natural language explanations (NLEs) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z) - Embedded feature selection in LSTM networks with multi-objective evolutionary ensemble learning for time series forecasting [49.1574468325115]
We present a novel feature selection method embedded in Long Short-Term Memory networks.
Our approach optimizes the weights and biases of the LSTM in a partitioned manner.
Experimental evaluations on air quality time series data from Italy and southeast Spain demonstrate that our method substantially improves the generalization ability of conventional LSTMs.
arXiv Detail & Related papers (2023-12-29T08:42:10Z) - TimeTuner: Diagnosing Time Representations for Time-Series Forecasting with Counterfactual Explanations [3.8357850372472915]
This paper contributes a novel visual analytics framework, namely TimeTuner, to help analysts understand how model behaviors are associated with localized, stationarity, and correlations of time-series representations.
We show that TimeTuner can help characterize time-series representations and guide the feature engineering processes.
arXiv Detail & Related papers (2023-07-19T11:40:15Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency [26.99599329431296]
We present TimeX, a time series consistency model for training explainers.
TimeX trains an interpretable surrogate to mimic the behavior of a pretrained time series model.
We evaluate TimeX on eight synthetic and real-world datasets and compare its performance against state-of-the-art interpretability methods.
arXiv Detail & Related papers (2023-06-03T13:25:26Z) - ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce the powerful model class of "Denoising Diffusion Probabilistic Models" (DDPMs) for chirographic data.
Our model, named "ChiroDiff", being non-autoregressive, learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z) - Neural Superstatistics for Bayesian Estimation of Dynamic Cognitive Models [2.7391842773173334]
We develop a simulation-based deep learning method for Bayesian inference, which can recover both time-varying and time-invariant parameters.
Our results show that the deep learning approach is very efficient in capturing the temporal dynamics of the model.
arXiv Detail & Related papers (2022-11-23T17:42:53Z) - Temporal Relevance Analysis for Video Action Models [70.39411261685963]
We first propose a new approach to quantify the temporal relationships between frames captured by CNN-based action models.
We then conduct comprehensive experiments and in-depth analysis to provide a better understanding of how temporal modeling is affected.
arXiv Detail & Related papers (2022-04-25T19:06:48Z) - This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation [17.485732906337507]
We present a case study of the self-explaining network, ProtoPNet, in the presence of a spectrum of artifacts.
We introduce a novel method for generating more precise model-aware explanations.
In order to obtain a clean dataset, we propose to use multi-view clustering strategies for segregating the artifact images.
arXiv Detail & Related papers (2021-08-27T09:55:53Z) - Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more valuable to understand a model's properties and which parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.