Interpretability in Deep Time Series Models Demands Semantic Alignment
- URL: http://arxiv.org/abs/2602.02239v1
- Date: Mon, 02 Feb 2026 15:48:30 GMT
- Title: Interpretability in Deep Time Series Models Demands Semantic Alignment
- Authors: Giovanni De Felice, Riccardo D'Elia, Alberto Termine, Pietro Barbiero, Giuseppe Marra, Silvia Santini
- Abstract summary: We argue that interpretability in deep time series models should pursue semantic alignment. Once established, semantic alignment must be preserved under temporal evolution. We outline a blueprint for semantically aligned deep time series models, identify properties that support trust, and discuss implications for model design.
- Score: 19.12673689717747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep time series models continue to improve predictive performance, yet their deployment remains limited by their black-box nature. In response, existing interpretability approaches in the field continue to focus on explaining internal model computations, without addressing whether those computations align with how a human would reason about the studied phenomenon. Instead, we argue that interpretability in deep time series models should pursue semantic alignment: predictions should be expressed in terms of variables that are meaningful to the end user, mediated by spatial and temporal mechanisms that admit user-dependent constraints. In this paper, we formalize this requirement and further demand that, once established, semantic alignment be preserved under temporal evolution: a constraint with no analog in static settings. Building on this definition, we outline a blueprint for semantically aligned deep time series models, identify properties that support trust, and discuss implications for model design.
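The abstract states the blueprint only at a high level. As a rough, hedged illustration of what "predictions expressed in terms of user-meaningful variables, mediated by temporal mechanisms that admit constraints" could look like in code, the PyTorch sketch below routes all predictive information through a small, named concept layer and an explicit concept-transition step; the concept names, layer sizes, and linear readout are illustrative assumptions, not the authors' design.

```python
# Hedged sketch (not the paper's architecture): a forecaster whose intermediate
# state is expressed in user-meaningful concept variables and evolved by an
# explicit, constrainable temporal mechanism.
import torch
import torch.nn as nn

CONCEPTS = ["trend", "seasonal_level", "volatility"]  # hypothetical user-defined variables

class SemanticallyAlignedForecaster(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        # Raw observations -> activations of named concepts (the "semantic" layer).
        self.concept_encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.to_concepts = nn.Linear(hidden, len(CONCEPTS))
        # Concepts at time t -> concepts at the next step: a temporal mechanism the
        # user can inspect or constrain (e.g. by zeroing entries of its weight matrix).
        self.concept_transition = nn.Linear(len(CONCEPTS), len(CONCEPTS))
        # The prediction is linear in the concepts, so attributions are immediate.
        self.readout = nn.Linear(len(CONCEPTS), 1)

    def forward(self, x: torch.Tensor):
        # x: (batch, time, n_features)
        h, _ = self.concept_encoder(x)
        concepts_now = self.to_concepts(h[:, -1])              # concepts at the last step
        concepts_next = self.concept_transition(concepts_now)  # evolved concepts
        y_hat = self.readout(concepts_next)
        return y_hat, dict(zip(CONCEPTS, concepts_next.unbind(dim=-1)))

model = SemanticallyAlignedForecaster(n_features=4)
y_hat, named_concepts = model(torch.randn(8, 24, 4))
print(y_hat.shape, list(named_concepts))  # torch.Size([8, 1]) ['trend', 'seasonal_level', 'volatility']
```

Because the readout is linear in the named concepts, a forecast can be attributed directly to "trend" or "volatility", and constraining the transition matrix restricts which concepts may influence one another over time, which is one way the user-dependent constraints mentioned in the abstract could be imposed.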
Related papers
- From Observations to States: Latent Time Series Forecasting [65.98504021691666]
We propose Latent Time Series Forecasting (LatentTSF), a novel paradigm that shifts TSF from observation regression to latent state prediction. Specifically, LatentTSF employs an AutoEncoder to project observations at each time step into a higher-dimensional latent state space. Our proposed latent objectives implicitly maximize mutual information between predicted latent states and ground-truth states and observations.
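As a hedged sketch of the general idea described above (a per-step AutoEncoder lifting observations into a higher-dimensional latent space, with forecasting carried out on the latent states), the following PyTorch snippet is an assumption-laden simplification: the layer sizes, GRU dynamics, and roll-forward loop are illustrative and do not reproduce LatentTSF's actual architecture or its mutual-information objectives.

```python
# Simplified latent-state forecasting sketch (assumptions only, not LatentTSF).
import torch
import torch.nn as nn

class LatentForecaster(nn.Module):
    def __init__(self, n_features: int = 3, latent_dim: int = 16, horizon: int = 4):
        super().__init__()
        self.horizon = horizon
        # Per-time-step autoencoder: observation -> higher-dimensional latent state.
        self.encode = nn.Sequential(nn.Linear(n_features, latent_dim), nn.ReLU(),
                                    nn.Linear(latent_dim, latent_dim))
        self.decode = nn.Linear(latent_dim, n_features)
        # Forecasting happens in latent space rather than on raw observations.
        self.dynamics = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.step = nn.Linear(latent_dim, latent_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features)
        z = self.encode(x)                      # (batch, time, latent_dim)
        _, h = self.dynamics(z)                 # summary of the latent trajectory
        z_t = h[-1]                             # (batch, latent_dim)
        preds = []
        for _ in range(self.horizon):           # roll the latent state forward
            z_t = self.step(z_t)
            preds.append(self.decode(z_t))      # map back to observation space
        return torch.stack(preds, dim=1)        # (batch, horizon, n_features)

out = LatentForecaster()(torch.randn(2, 48, 3))
print(out.shape)  # torch.Size([2, 4, 3])
```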
arXiv Detail & Related papers (2026-01-30T20:39:44Z)
- Temporal Concept Dynamics in Diffusion Models via Prompt-Conditioned Interventions [70.87254264798341]
PCI is a training-free and model-agnostic framework for analyzing concept dynamics through diffusion time. It reveals diverse temporal behaviors across diffusion models, in which certain phases of the trajectory are more favorable to specific concepts even within the same concept type.
arXiv Detail & Related papers (2025-12-09T11:05:08Z)
- When, How Long and How Much? Interpretable Neural Networks for Time Series Regression by Learning to Mask and Aggregate [16.533105886716804]
Time series extrinsic regression (TSER) refers to the task of predicting a continuous target variable from an input time series. The new approach learns a compact set of human-understandable concepts without requiring any annotations.
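The summary above suggests a mask-then-aggregate style of interpretable regression. A minimal hedged sketch in that spirit follows; the convolutional masker, the number of concepts, and the mean aggregation are assumptions rather than the paper's actual model.

```python
# Hedged mask-and-aggregate sketch for time series extrinsic regression.
import torch
import torch.nn as nn

class MaskAggregateRegressor(nn.Module):
    def __init__(self, n_concepts: int = 3):
        super().__init__()
        # One temporal soft mask per concept: answers "when" and "how long".
        self.masker = nn.Conv1d(1, n_concepts, kernel_size=5, padding=2)
        # The final prediction is linear in the aggregated concept values.
        self.head = nn.Linear(n_concepts, 1)

    def forward(self, x: torch.Tensor):
        # x: (batch, time) univariate series
        m = torch.sigmoid(self.masker(x.unsqueeze(1)))   # (batch, n_concepts, time)
        # "How much": average of the series under each mask.
        concepts = (m * x.unsqueeze(1)).sum(-1) / (m.sum(-1) + 1e-8)
        return self.head(concepts), m                    # prediction + inspectable masks

y_hat, masks = MaskAggregateRegressor()(torch.randn(4, 100))
print(y_hat.shape, masks.shape)  # torch.Size([4, 1]) torch.Size([4, 3, 100])
```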
arXiv Detail & Related papers (2025-12-03T09:01:41Z)
- Priors in Time: Missing Inductive Biases for Language Model Interpretability [58.07412640266836]
We show that Sparse Autoencoders impose priors that assume independence of concepts across time, implying stationarity. We introduce a new interpretability objective -- Temporal Feature Analysis -- which possesses a temporal inductive bias to decompose representations at a given time into two parts. Our results underscore the need for inductive biases that match the data in designing robust interpretability tools.
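As a hedged illustration of the decomposition idea (splitting the representation at time t into a part predictable from the past and a residual, novel part), the snippet below uses a plain least-squares one-step predictor over toy hidden states; this is a simplified stand-in for the Temporal Feature Analysis objective, not its actual formulation.

```python
# Decompose each hidden state into past-predictable and novel components
# (simplified stand-in for a temporally-biased interpretability objective).
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(500, 8))          # toy hidden states: (time, dim)

past, present = H[:-1], H[1:]
# Linear map that best predicts h_t from h_{t-1} (ordinary least squares).
W, *_ = np.linalg.lstsq(past, present, rcond=None)

predictable = past @ W                 # part of h_t explained by the past
novel = present - predictable          # part that stationarity-style priors miss

ratio = np.linalg.norm(predictable) / np.linalg.norm(present)
print(f"fraction of representation norm explained by the past: {ratio:.2f}")
```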
arXiv Detail & Related papers (2025-11-03T18:43:48Z)
- Dissociating model architectures from inference computations [0.6906005491572401]
We show how auto-regressive and deep temporal models differ in their treatment of non-Markovian sequence modelling. We demonstrate that deep temporal computations can be mimicked by autoregressive models that structure context access during iterative inference.
arXiv Detail & Related papers (2025-07-21T16:30:42Z)
- Enforcing Interpretability in Time Series Transformers: A Concept Bottleneck Framework [2.8470354623829577]
We develop a framework based on Concept Bottleneck Models to enforce interpretability of time series Transformers.
We modify the training objective to encourage a model to develop representations similar to predefined interpretable concepts.
We find that model performance remains largely unaffected, while interpretability improves substantially.
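A minimal hedged sketch of this kind of modified training objective: the task loss is augmented with a term that pulls designated hidden units toward predefined concept labels. The GRU encoder, the concept annotations, and the trade-off weight are illustrative assumptions, not the paper's exact setup.

```python
# Concept-bottleneck-style objective: task loss + concept-alignment loss.
import torch
import torch.nn as nn

encoder = nn.GRU(input_size=5, hidden_size=16, batch_first=True)
concept_head = nn.Linear(16, 3)   # 3 predefined interpretable concepts
task_head = nn.Linear(3, 1)       # the prediction is read off the concepts

x = torch.randn(32, 60, 5)        # toy batch: (batch, time, features)
y = torch.randn(32, 1)            # regression target
c = torch.rand(32, 3)             # concept annotations in [0, 1] (assumed available)

_, h = encoder(x)
concepts = torch.sigmoid(concept_head(h[-1]))
y_hat = task_head(concepts)

task_loss = nn.functional.mse_loss(y_hat, y)
concept_loss = nn.functional.binary_cross_entropy(concepts, c)
loss = task_loss + 0.5 * concept_loss     # 0.5 is an arbitrary trade-off weight
loss.backward()
print(float(task_loss), float(concept_loss))
```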
arXiv Detail & Related papers (2024-10-08T14:22:40Z)
- Sequential Representation Learning via Static-Dynamic Conditional Disentanglement [58.19137637859017]
This paper explores self-supervised disentangled representation learning within sequential data, focusing on separating time-independent and time-varying factors in videos.
We propose a new model that breaks the usual independence assumption between those factors by explicitly accounting for the causal relationship between the static/dynamic variables.
Experiments show that the proposed approach outperforms previous complex state-of-the-art techniques in scenarios where the dynamics of a scene are influenced by its content.
arXiv Detail & Related papers (2024-08-10T17:04:39Z)
- Self-Interpretable Time Series Prediction with Counterfactual Explanations [4.658166900129066]
Interpretable time series prediction is crucial for safety-critical areas such as healthcare and autonomous driving.
Most existing methods focus on interpreting predictions by assigning importance scores to segments of the time series.
We develop a self-interpretable model, dubbed Counterfactual Time Series (CounTS), which generates counterfactual and actionable explanations for time series predictions.
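As a hedged illustration of counterfactual explanation for time series predictors in general (a generic gradient-based search, not the CounTS architecture), the snippet below looks for a small, sparse perturbation of the input that moves a toy model's prediction to a target value; the most-edited time steps then serve as an actionable explanation.

```python
# Generic gradient-based counterfactual search on a toy time series model.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(50, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(1, 50)                         # original series: 50 time steps
target = model(x).detach() + 1.0               # desired counterfactual outcome

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    pred = model(x + delta)
    # Reach the target while keeping the change to the input small and sparse.
    loss = (pred - target).pow(2).mean() + 0.1 * delta.abs().mean()
    loss.backward()
    opt.step()

print("prediction moved from", round(float(model(x)), 3),
      "to", round(float(model(x + delta)), 3))
print("most-edited time steps:", delta.abs().squeeze().topk(3).indices.tolist())
```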
arXiv Detail & Related papers (2023-06-09T16:42:52Z)
- Uncertainty in Real-Time Semantic Segmentation on Embedded Systems [23.45104074322328]
Applications of semantic segmentation models in areas such as autonomous vehicles and human-computer interaction require real-time predictive capabilities. The challenge of real-time operation is amplified by the need to run on resource-constrained hardware. This paper addresses this by combining deep feature extraction from pre-trained models with Bayesian regression and moment propagation for uncertainty-aware predictions.
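A hedged sketch of the "Bayesian regression on frozen deep features" part of this recipe: closed-form Bayesian linear regression over fixed feature vectors yields a predictive mean and variance per input. The random features below stand in for a pre-trained backbone, and the paper's moment-propagation step is omitted.

```python
# Closed-form Bayesian linear regression on fixed (stand-in) deep features.
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.normal(size=(200, 32))             # frozen features for 200 inputs
w_true = rng.normal(size=32)
y = Phi @ w_true + 0.1 * rng.normal(size=200)

alpha, beta = 1.0, 100.0                     # prior precision, noise precision
A = alpha * np.eye(32) + beta * Phi.T @ Phi  # posterior precision over weights
mean_w = beta * np.linalg.solve(A, Phi.T @ y)

phi_new = rng.normal(size=32)                # features of a new input
pred_mean = phi_new @ mean_w
pred_var = 1.0 / beta + phi_new @ np.linalg.solve(A, phi_new)
print(f"prediction {pred_mean:.2f} +/- {np.sqrt(pred_var):.2f}")
```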
arXiv Detail & Related papers (2022-12-20T07:32:12Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
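As a hedged, much-simplified illustration of the copula idea behind TACTiS (not its attention-based decoder), the snippet below maps two series with very different marginals to normal scores via empirical ranks and estimates a Gaussian-copula correlation between them.

```python
# Simplified Gaussian-copula dependence estimate between two series.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(2)
# Two dependent series with very different marginal distributions.
latent = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=1000)
x = np.exp(latent[:, 0])               # log-normal marginal
y = latent[:, 1] ** 3                  # heavy-tailed marginal

def normal_scores(v):
    # Empirical CDF -> uniform -> standard normal quantiles.
    u = rankdata(v) / (len(v) + 1)
    return norm.ppf(u)

z = np.column_stack([normal_scores(x), normal_scores(y)])
copula_corr = np.corrcoef(z, rowvar=False)[0, 1]
print(f"raw correlation {np.corrcoef(x, y)[0, 1]:.2f} vs copula correlation {copula_corr:.2f}")
```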
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the phase space's properties.
Our approach is as competitive as, or better than, most state-of-the-art strategies.
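For context, the classical construction such a network would be learning implicitly is the time-delay (Takens) embedding; a minimal sketch follows, with the delay and embedding dimension chosen arbitrarily for a toy signal.

```python
# Time-delay (Takens) embedding: lift a scalar series into a reconstructed phase space.
import numpy as np

t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.5 * np.sin(0.31 * t)        # toy quasi-periodic signal

def delay_embed(series, dim=3, tau=25):
    # Row i is (x[i], x[i+tau], ..., x[i+(dim-1)*tau]).
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

phase_space = delay_embed(x)
print(phase_space.shape)                      # (3950, 3) reconstructed trajectory
```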
arXiv Detail & Related papers (2020-06-19T21:04:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.