On the Necessity of Multi-Domain Explanation: An Uncertainty Principle Approach for Deep Time Series Models
- URL: http://arxiv.org/abs/2506.03267v1
- Date: Tue, 03 Jun 2025 18:00:28 GMT
- Title: On the Necessity of Multi-Domain Explanation: An Uncertainty Principle Approach for Deep Time Series Models
- Authors: Shahbaz Rezaei, Avishai Halev, Xin Liu
- Abstract summary: A prevailing approach to explaining time series models is to generate attributions in the time domain. In this paper, we demonstrate that in certain cases, XAI methods can generate attributions that highlight fundamentally different features in the time and frequency domains. This suggests that both domains' attributions should be presented to achieve a more comprehensive interpretation.
- Score: 7.666215650141892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A prevailing approach to explaining time series models is to generate attributions in the time domain. A recent development in time series XAI is the concept of explanation spaces, where any model trained in the time domain can be interpreted with any existing XAI method in alternative domains, such as frequency. The prevailing practice is to present XAI attributions either in the time domain or in the domain where the attribution is most sparse. In this paper, we demonstrate that in certain cases, XAI methods can generate attributions that highlight fundamentally different features in the time and frequency domains that are not direct counterparts of one another. This suggests that both domains' attributions should be presented to achieve a more comprehensive interpretation, demonstrating the necessity of multi-domain explanation. To quantify when such cases arise, we introduce the uncertainty principle (UP), originally developed in quantum mechanics and later studied in harmonic analysis and signal processing, to the XAI literature. This principle establishes a lower bound on how much a signal can be simultaneously localized in both the time and frequency domains. By leveraging this concept, we assess whether attributions in the time and frequency domains violate this bound, indicating that they emphasize distinct features. In other words, the UP provides a sufficient condition that the time- and frequency-domain explanations do not match and hence should both be presented to the end user. We validate the effectiveness of this approach across various deep learning models, XAI methods, and a wide range of classification and forecasting datasets. The frequent occurrence of UP violations across datasets and XAI methods highlights the limitations of existing approaches that focus solely on time-domain explanations and underscores the need for multi-domain explanations as a new paradigm.
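The bound check described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, the squared magnitude of each attribution vector is treated as a probability distribution over its domain, and the continuous Heisenberg-Gabor limit sigma_t * sigma_f >= 1/(4*pi) is used as the threshold, which for finite discrete signals is only an approximation.

```python
import numpy as np

def localization_spread(x, positions):
    """Standard deviation of |x|^2 treated as a distribution over positions."""
    p = np.abs(x) ** 2
    p = p / p.sum()
    mean = (positions * p).sum()
    return np.sqrt(((positions - mean) ** 2 * p).sum())

def up_violation(time_attr, freq_attr, fs=1.0, bound=1.0 / (4 * np.pi)):
    """Heuristic check: are the time- and frequency-domain attributions
    jointly more localized than the uncertainty bound permits?

    A violation is a sufficient condition that the two explanations
    emphasize distinct features and should both be shown to the user.
    """
    n = len(time_attr)
    t = np.arange(n) / fs                  # time stamps of the input samples
    f = np.fft.rfftfreq(n, d=1.0 / fs)     # frequency bins of the rFFT grid
    sigma_t = localization_spread(time_attr, t)
    sigma_f = localization_spread(freq_attr, f)
    product = sigma_t * sigma_f
    return product < bound, product
```

For example, a time attribution concentrated on a single sample combined with a frequency attribution concentrated on a single bin yields a spread product of zero, below the bound, flagging the case for multi-domain presentation; two near-uniform attributions do not.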
Related papers
- Cross-Domain Conditional Diffusion Models for Time Series Imputation [34.400748070956006]
Cross-domain time series imputation is an underexplored data-centric research task. Existing approaches primarily focus on the single-domain setting. Our proposed solution integrates shared spectral components from both domains while retaining domain-specific temporal structures.
arXiv Detail & Related papers (2025-06-14T09:09:07Z)
- Multivariate Long-term Time Series Forecasting with Fourier Neural Filter [55.09326865401653]
We introduce FNF as the backbone and DBD as the architecture to provide excellent learning capabilities and optimal learning pathways for spatial-temporal modeling. We show that FNF unifies local time-domain and global frequency-domain information processing within a single backbone that extends naturally to spatial modeling.
arXiv Detail & Related papers (2025-06-10T18:40:20Z)
- Time series saliency maps: explaining models across multiple domains [3.9018723423306003]
We introduce Cross-domain Integrated Gradients, a generalization of Integrated Gradients. Our method enables feature attributions on any domain that can be formulated as an invertible, differentiable transformation of the time domain. These results demonstrate the ability of Cross-domain Integrated Gradients to provide semantically meaningful insights into time-series models.
arXiv Detail & Related papers (2025-05-19T13:31:35Z)
- General Time-series Model for Universal Knowledge Representation of Multivariate Time-Series data [61.163542597764796]
We show that time series with different time granularities (or corresponding frequency resolutions) exhibit distinct joint distributions in the frequency domain. A novel Fourier knowledge attention mechanism is proposed to enable learning time-aware representations from both the temporal and frequency domains. An autoregressive blank-infilling pre-training framework is incorporated into time series analysis for the first time, leading to a task-agnostic generative pre-training strategy.
arXiv Detail & Related papers (2025-02-05T15:20:04Z)
- Learning Latent Spaces for Domain Generalization in Time Series Forecasting [60.29403194508811]
Time series forecasting is vital in many real-world applications, yet developing models that generalize well to unseen relevant domains remains underexplored. We propose a framework for domain generalization in time series forecasting by mining the latent factors that govern temporal dependencies across domains. Our approach uses a decomposition-based architecture with a new Conditional $\beta$-Variational Autoencoder (VAE), wherein time series data is first decomposed into trend-cyclical and seasonal components.
arXiv Detail & Related papers (2024-12-15T12:41:53Z)
- Towards Generalisable Time Series Understanding Across Domains [10.350643783811174]
We introduce a novel pre-training paradigm specifically designed to handle time series heterogeneity. We propose a tokeniser with learnable domain signatures, a dual masking strategy, and a normalised cross-correlation loss. Our code and pre-trained weights are available at https://www.oetu.com/oetu/otis.
arXiv Detail & Related papers (2024-10-09T17:09:30Z)
- Time is Not Enough: Time-Frequency based Explanation for Time-Series Black-Box Models [12.575427166236844]
We present Spectral eXplanation (SpectralX), an XAI framework that provides time-frequency explanations for time-series black-box classifiers.
We also introduce Feature Importance Approximations (FIA), a new perturbation-based XAI method.
arXiv Detail & Related papers (2024-08-07T08:51:10Z)
- FreqRISE: Explaining time series using frequency masking [10.076947813982876]
Time-series data are fundamentally important for many critical domains. Current methods for obtaining saliency maps assume localized information in the raw input space. We propose FreqRISE, which uses masking-based methods to produce explanations in the frequency and time-frequency domains.
arXiv Detail & Related papers (2024-06-19T14:19:59Z)
- UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting [59.11817101030137]
This research advocates for a unified model paradigm that transcends domain boundaries.
Learning an effective cross-domain model presents the following challenges.
We propose UniTime for effective cross-domain time series learning.
arXiv Detail & Related papers (2023-10-15T06:30:22Z)
- Context-aware Domain Adaptation for Time Series Anomaly Detection [69.3488037353497]
Time series anomaly detection is a challenging task with a wide range of real-world applications.
Recent efforts have been devoted to time series domain adaptation to leverage knowledge from similar domains.
We propose a framework that combines context sampling and anomaly detection into a joint learning procedure.
arXiv Detail & Related papers (2023-04-15T02:28:58Z)
- Frequency Decomposition to Tap the Potential of Single Domain for Generalization [10.555462823983122]
Domain generalization is a must-have characteristic of general artificial intelligence.
In this paper, it is shown that domain-invariant features can be contained in the training samples of a single source domain.
A new method that learns through multiple domains is proposed.
arXiv Detail & Related papers (2023-04-14T17:15:47Z)
- Explainable AI for Time Series via Virtual Inspection Layers [11.879170124003252]
In this work, we put forward a virtual inspection layer that transforms the time series into an interpretable representation.
In this way, we extend the applicability of a family of XAI methods to domains (e.g. speech) where the input is only interpretable after a transformation.
We demonstrate the usefulness of DFT-LRP in various time series classification settings such as audio and electronic health records.
arXiv Detail & Related papers (2023-03-11T10:20:47Z)
- Time Series Analysis via Network Science: Concepts and Algorithms [62.997667081978825]
This review provides a comprehensive overview of existing mapping methods for transforming time series into networks.
We describe the main conceptual approaches, provide authoritative references and give insight into their advantages and limitations in a unified notation and language.
Although still very recent, this research area has much potential and with this survey we intend to pave the way for future research on the topic.
arXiv Detail & Related papers (2021-10-11T13:33:18Z)
- timeXplain -- A Framework for Explaining the Predictions of Time Series Classifiers [3.6433472230928428]
We present novel domain mappings for the time domain, frequency domain, and time series statistics.
We analyze their explicative power as well as their limits.
We employ a novel evaluation metric to experimentally compare timeXplain to several model-specific explanation approaches.
arXiv Detail & Related papers (2020-07-15T10:32:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.