Explainable AI for Time Series via Virtual Inspection Layers
- URL: http://arxiv.org/abs/2303.06365v1
- Date: Sat, 11 Mar 2023 10:20:47 GMT
- Title: Explainable AI for Time Series via Virtual Inspection Layers
- Authors: Johanna Vielhaben, Sebastian Lapuschkin, Grégoire Montavon, Wojciech Samek
- Abstract summary: In this work, we put forward a virtual inspection layer that transforms the time series into an interpretable representation.
In this way, we extend the applicability of a family of XAI methods to domains (e.g. speech) where the input is only interpretable after a transformation.
We demonstrate the usefulness of DFT-LRP in various time series classification settings like audio and electronic health records.
- Score: 11.879170124003252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of eXplainable Artificial Intelligence (XAI) has greatly advanced
in recent years, but progress has mainly been made in computer vision and
natural language processing. For time series, where the input is often not
interpretable, only limited research on XAI is available. In this work, we put
forward a virtual inspection layer that transforms the time series into an
interpretable representation and allows relevance attributions to be propagated
to this representation via local XAI methods such as layer-wise relevance
propagation (LRP). In this way, we extend the applicability of a family of XAI
methods to domains (e.g. speech) where the input is only interpretable after a
transformation. Here, we focus on the Fourier transformation, which is
prominently applied in the interpretation of time series, and on LRP, and refer
to our method as DFT-LRP. We demonstrate the usefulness of DFT-LRP in various
time series classification settings such as audio and electronic health
records. We showcase how DFT-LRP reveals differences in the classification
strategies of models trained in different domains (e.g., time vs. frequency
domain) or helps to discover how models act on spurious correlations in the data.
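For intuition, the core idea of the abstract can be illustrated with a short numerical sketch: the DFT is a linear, invertible map, so relevance computed in the time domain can be redistributed onto frequency components by applying an LRP rule to the inverse transform, as if a DFT/inverse-DFT pair had been inserted as a virtual layer in front of the model. The sketch below is a simplified, assumption-laden illustration (gradient*input stands in for a full LRP backward pass, and all function names are hypothetical), not the authors' reference implementation.
```python
# Minimal sketch of a DFT "virtual inspection layer", assuming a numpy setting.
import numpy as np


def inverse_rfft_matrix(n):
    """Matrix B with x = B @ y, where y stacks the real and imaginary parts
    of the one-sided DFT coefficients of a real signal x of length n."""
    m = n // 2 + 1
    B = np.zeros((n, 2 * m))
    for k in range(m):
        e = np.zeros(m, dtype=complex)
        e[k] = 1.0
        B[:, k] = np.fft.irfft(e, n)        # response to the real part of bin k
        e[k] = 1.0j
        B[:, m + k] = np.fft.irfft(e, n)    # response to the imaginary part of bin k
    return B


def dft_lrp(relevance_time, signal, eps=1e-9):
    """Redistribute time-domain relevance onto DFT bins via an LRP-epsilon rule
    applied to the linear inverse transform (a sketch, not the paper's code)."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    y = np.concatenate([spectrum.real, spectrum.imag])
    B = inverse_rfft_matrix(n)
    z = B * y                                # contributions z[i, j] = B[i, j] * y[j]
    denom = z.sum(axis=1, keepdims=True)     # reconstructs the signal (up to fp error)
    denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)
    R = (z / denom * relevance_time[:, None]).sum(axis=0)
    m = n // 2 + 1
    return R[:m] + R[m:]                     # total relevance per frequency bin


# Toy usage: gradient*input of a linear "model" f(x) = w @ x serves as a simple
# stand-in for an LRP explanation of a trained classifier.
rng = np.random.default_rng(0)
n = 128
x = np.sin(2 * np.pi * 5 * np.arange(n) / n)     # pure 5-cycle sinusoid
w = rng.normal(size=n)
relevance_time = w * x                            # gradient*input attribution
relevance_freq = dft_lrp(relevance_time, x)
print(np.argmax(np.abs(relevance_freq)))          # relevance concentrates at bin 5
print(np.allclose(relevance_freq.sum(), relevance_time.sum()))  # conservation holds
```
Summing the returned frequency relevance recovers the total time-domain relevance; this conservation is what makes the inspection layer "virtual": it changes the view of the explanation, not its total mass.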
Related papers
- Multivariate Long-term Time Series Forecasting with Fourier Neural Filter [55.09326865401653]
We introduce FNF as the backbone and DBD as the architecture to provide excellent learning capabilities and optimal learning pathways for spatial-temporal modeling.
We show that FNF unifies local time-domain and global frequency-domain information processing within a single backbone that extends naturally to spatial modeling.
arXiv Detail & Related papers (2025-06-10T18:40:20Z)
- On the Necessity of Multi-Domain Explanation: An Uncertainty Principle Approach for Deep Time Series Models [7.666215650141892]
A prevailing approach to explaining time series models is to generate attributions in the time domain.
In this paper, we demonstrate that in certain cases, XAI methods can generate attributions that highlight fundamentally different features in the time and frequency domains.
This suggests that attributions from both domains should be presented to achieve a more comprehensive interpretation.
arXiv Detail & Related papers (2025-06-03T18:00:28Z)
- When can isotropy help adapt LLMs' next word prediction to numerical domains? [53.98633183204453]
It is shown that the isotropic property of LLM embeddings in contextual embedding space preserves the underlying structure of representations.
Experiments show that different characteristics of numerical data and model architectures have different impacts on isotropy.
arXiv Detail & Related papers (2025-05-22T05:10:34Z)
- STAA: Spatio-Temporal Attention Attribution for Real-Time Interpreting Transformer-based Video Models [7.500941533148728]
Transformer-based models have achieved state-of-the-art performance in various computer vision tasks, including image and video analysis.
Current Explainable AI (XAI) methods can only provide one-dimensional feature importance, offering either a spatial or a temporal explanation.
This paper introduces STAA (Spatio-Temporal Attention Attribution), an XAI method for interpreting video Transformer models.
arXiv Detail & Related papers (2024-11-01T14:40:07Z)
- Towards Generalisable Time Series Understanding Across Domains [10.350643783811174]
We introduce OTiS, an open model for general time series analysis.
We propose a novel pre-training paradigm including a tokeniser with learnable domain-specific signatures.
Our model is pre-trained on a large corpus of 640,187 samples and 11 billion time points spanning 8 distinct domains.
arXiv Detail & Related papers (2024-10-09T17:09:30Z)
- DRFormer: Multi-Scale Transformer Utilizing Diverse Receptive Fields for Long Time-Series Forecasting [3.420673126033772]
We propose a dynamic tokenizer with a dynamic sparse learning algorithm to capture diverse receptive fields and sparse patterns of time series data.
Our proposed model, named DRFormer, is evaluated on various real-world datasets, and experimental results demonstrate its superiority compared to existing methods.
arXiv Detail & Related papers (2024-08-05T07:26:47Z)
- State Sequences Prediction via Fourier Transform for Representation Learning [111.82376793413746]
We propose State Sequences Prediction via Fourier Transform (SPF), a novel method for learning expressive representations efficiently.
We theoretically analyze the existence of structural information in state sequences, which is closely related to policy performance and signal regularity.
Experiments demonstrate that the proposed method outperforms several state-of-the-art algorithms in terms of both sample efficiency and performance.
arXiv Detail & Related papers (2023-10-24T14:47:02Z)
- UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting [59.11817101030137]
This research advocates for a unified model paradigm that transcends domain boundaries.
Learning an effective cross-domain model presents several challenges.
We propose UniTime for effective cross-domain time series learning.
arXiv Detail & Related papers (2023-10-15T06:30:22Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Temporal Lift Pooling for Continuous Sign Language Recognition [6.428695655854854]
We derive temporal lift pooling (TLP) from the Lifting Scheme in signal processing to intelligently downsample features of different temporal hierarchies.
Our TLP is a three-stage procedure that performs signal decomposition, component weighting and information fusion to generate a refined, downsized feature map.
Experiments on two large-scale datasets show TLP outperforms hand-crafted methods and specialized spatial variants by a large margin (1.5%) with similar computational overhead.
arXiv Detail & Related papers (2022-07-18T16:28:00Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- PSL is Dead. Long Live PSL [3.280253526254703]
Property Specification Language (PSL) is a form of temporal logic that has been mainly used in discrete domains.
We show that by merging machine learning techniques with PSL monitors, we can extend PSL to work on continuous domains.
arXiv Detail & Related papers (2022-05-27T17:55:54Z)
- Voice2Series: Reprogramming Acoustic Models for Time Series Classification [65.94154001167608]
Voice2Series is a novel end-to-end approach that reprograms acoustic models for time series classification.
We show that V2S either outperforms or is tied with state-of-the-art methods on 20 tasks, and improves their average accuracy by 1.84%.
arXiv Detail & Related papers (2021-06-17T07:59:15Z)
- Reprogramming Language Models for Molecular Representation Learning [65.00999660425731]
We propose Representation Reprogramming via Dictionary Learning (R2DL) for adversarially reprogramming pretrained language models for molecular learning tasks.
The adversarial program learns a linear transformation between a dense source model input space (language data) and a sparse target model input space (e.g., chemical and biological molecule data) using a k-SVD solver.
R2DL matches the baseline established by state-of-the-art toxicity prediction models trained on domain-specific data and outperforms that baseline in a limited training-data setting.
arXiv Detail & Related papers (2020-12-07T05:50:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.