Generating Sparse Counterfactual Explanations For Multivariate Time
Series
- URL: http://arxiv.org/abs/2206.00931v1
- Date: Thu, 2 Jun 2022 08:47:06 GMT
- Title: Generating Sparse Counterfactual Explanations For Multivariate Time
Series
- Authors: Jana Lang, Martin Giese, Winfried Ilg and Sebastian Otte
- Abstract summary: We propose a generative adversarial network (GAN) architecture that generates SPARse Counterfactual Explanations for multivariate time series.
Our approach provides a custom sparsity layer and regularizes the counterfactual loss function in terms of similarity, sparsity, and smoothness of trajectories.
We evaluate our approach on real-world human motion datasets as well as a synthetic time series interpretability benchmark.
- Score: 0.5161531917413706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since neural networks play an increasingly important role in critical
sectors, explaining network predictions has become a key research topic.
Counterfactual explanations can help to understand why classifier models decide
for particular class assignments and, moreover, how the respective input
samples would have to be modified such that the class prediction changes.
Previous approaches mainly focus on image and tabular data. In this work we
propose SPARCE, a generative adversarial network (GAN) architecture that
generates SPARse Counterfactual Explanations for multivariate time series. Our
approach provides a custom sparsity layer and regularizes the counterfactual
loss function in terms of similarity, sparsity, and smoothness of trajectories.
We evaluate our approach on real-world human motion datasets as well as a
synthetic time series interpretability benchmark. Although we make
significantly sparser modifications than other approaches, we achieve
comparable or better performance on all metrics. Moreover, we demonstrate that
our approach predominantly modifies salient time steps and features, leaving
non-salient inputs untouched.
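The abstract above describes a counterfactual loss regularized in terms of similarity, sparsity, and smoothness of trajectories. As a rough illustration only, here is a minimal PyTorch sketch of such a composite regularizer acting on a perturbation added to a multivariate time series; the specific norms, weights, and the omitted GAN and target-class terms are assumptions, not the formulation published in the paper.

```python
import torch

def counterfactual_regularizers(delta, w_sim=1.0, w_sparse=1.0, w_smooth=1.0):
    """Regularize a counterfactual perturbation `delta` of shape
    (batch, time_steps, features) that is added to the original sample.

    similarity: keep the overall modification small (L2)
    sparsity:   modify as few time steps / features as possible (L1)
    smoothness: penalize abrupt jumps between consecutive time steps

    The GAN's adversarial and target-class terms are deliberately omitted.
    """
    sim = delta.pow(2).mean()                                     # L2 similarity term
    sparse = delta.abs().mean()                                   # L1 sparsity term
    smooth = (delta[:, 1:, :] - delta[:, :-1, :]).pow(2).mean()   # temporal smoothness
    return w_sim * sim + w_sparse * sparse + w_smooth * smooth

# Example: a batch of 8 perturbations over 100 time steps and 6 features.
delta = torch.randn(8, 100, 6, requires_grad=True)
loss = counterfactual_regularizers(delta)
loss.backward()
```

In a sketch like this, the L1 term is what pushes most entries of the perturbation toward exactly zero, which matches the abstract's claim that predominantly salient time steps and features are modified.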
Related papers
- Unsupervised Multi-modal Feature Alignment for Time Series
Representation Learning [20.655943795843037]
We introduce an innovative approach that focuses on aligning and binding time series representations encoded from different modalities.
In contrast to conventional methods that fuse features from multiple modalities, our proposed approach simplifies the neural architecture by retaining a single time series encoder.
Our approach outperforms existing state-of-the-art URL methods across diverse downstream tasks.
arXiv Detail & Related papers (2023-12-09T22:31:20Z)
- Ti-MAE: Self-Supervised Masked Time Series Autoencoders [16.98069693152999]
We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point-level.
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
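As a hedged illustration of the masking-and-reconstruction idea in the Ti-MAE summary above, the toy module below embeds a multivariate series, randomly replaces a fraction of the point embeddings, and reconstructs raw values at the point level; the encoder depth, mask ratio, and mask-token handling are placeholder assumptions rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ToyMaskedTSAutoencoder(nn.Module):
    """Toy masked autoencoder for multivariate time series: embed each time
    point, randomly replace a fraction of the embeddings with a learned mask
    token, and reconstruct the raw values at the masked positions."""

    def __init__(self, n_features, d_model=64, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(n_features, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.decoder = nn.Linear(d_model, n_features)
        self.mask_token = nn.Parameter(torch.zeros(d_model))

    def forward(self, x):                                  # x: (batch, time, features)
        z = self.embed(x)
        mask = torch.rand(z.shape[:2], device=z.device) < self.mask_ratio
        z = torch.where(mask.unsqueeze(-1), self.mask_token, z)
        recon = self.decoder(self.encoder(z))
        # point-level reconstruction loss on the masked positions only
        return ((recon - x) ** 2)[mask].mean()

loss = ToyMaskedTSAutoencoder(n_features=6)(torch.randn(4, 128, 6))
loss.backward()
```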
arXiv Detail & Related papers (2023-01-21T03:20:23Z)
- Self-Supervised Learning via Maximum Entropy Coding [57.56570417545023]
We propose Maximum Entropy Coding (MEC) as a principled objective that explicitly optimizes the structure of the representation.
MEC learns a more generalizable representation than previous methods based on specific pretext tasks.
It achieves state-of-the-art performance consistently on various downstream tasks, including not only ImageNet linear probe, but also semi-supervised classification, object detection, instance segmentation, and object tracking.
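The MEC summary above speaks of an entropy-based objective on the structure of the representation. The sketch below is only a generic log-determinant "coding rate" proxy for representation entropy, in that spirit; it is not MEC's published (Taylor-expanded, two-view) objective.

```python
import torch

def logdet_entropy_proxy(z, eps=0.05):
    """Generic log-determinant 'coding rate' proxy for the entropy of a batch
    of representations z (n samples x d dims): larger values indicate more
    decorrelated, information-rich features."""
    n, d = z.shape
    z = torch.nn.functional.normalize(z, dim=1)
    cov = z.t() @ z                          # (d, d) feature second-moment matrix
    scale = d / (n * eps ** 2)
    _, logabsdet = torch.linalg.slogdet(torch.eye(d) + scale * cov)
    return 0.5 * logabsdet

z = torch.randn(256, 64, requires_grad=True)
entropy = logdet_entropy_proxy(z)
(-entropy).backward()    # maximizing the proxy == minimizing its negative
```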
arXiv Detail & Related papers (2022-10-20T17:58:30Z)
- Invariant Causal Mechanisms through Distribution Matching [86.07327840293894]
In this work we provide a causal perspective and a new algorithm for learning invariant representations.
Empirically we show that this algorithm works well on a diverse set of tasks and in particular we observe state-of-the-art performance on domain generalization.
arXiv Detail & Related papers (2022-06-23T12:06:54Z)
- Efficient Modelling Across Time of Human Actions and Interactions [92.39082696657874]
We argue that current fixed-sized temporal kernels in 3D convolutional neural networks (CNNs) can be improved to better deal with temporal variations in the input.
We study how we can better handle variations between classes of actions by enhancing their feature differences over different layers of the architecture.
The proposed approaches are evaluated on several benchmark action recognition datasets and show competitive results.
arXiv Detail & Related papers (2021-10-05T15:39:11Z)
- Exploring Complementary Strengths of Invariant and Equivariant
Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in the presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
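To make the "simultaneously enforces equivariance and invariance" idea concrete, here is an illustrative sketch with an assumed transformation set (90-degree rotations): one term pulls embeddings of the original and transformed views together, the other trains a head to recover which transformation was applied. The heads, transformation set, and weighting are assumptions, not the paper's design.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

ROTATIONS = [0, 1, 2, 3]   # assumed transformation set: k * 90-degree rotations

def invariance_equivariance_loss(encoder, inv_head, equi_head, images):
    """Invariance: projected embeddings of the original and transformed views
    should agree. Equivariance: a separate head must recover which
    transformation was applied, so that information is not discarded."""
    k = random.choice(ROTATIONS)
    transformed = torch.rot90(images, k, dims=(2, 3))

    z_orig, z_trans = encoder(images), encoder(transformed)

    inv = 1 - F.cosine_similarity(inv_head(z_orig), inv_head(z_trans)).mean()
    labels = torch.full((images.size(0),), k, dtype=torch.long)
    equi = F.cross_entropy(equi_head(z_trans), labels)
    return inv + equi

# Tiny stand-in encoder and heads, just to make the sketch runnable.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
inv_head = nn.Linear(128, 64)
equi_head = nn.Linear(128, len(ROTATIONS))
loss = invariance_equivariance_loss(encoder, inv_head, equi_head,
                                    torch.randn(8, 3, 32, 32))
loss.backward()
```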
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Explainable Multivariate Time Series Classification: A Deep Neural
Network Which Learns To Attend To Important Variables As Well As Informative
Time Intervals [32.30627405832656]
Time series data is prevalent in a wide variety of real-world applications.
A key criterion for understanding such predictive models is elucidating and quantifying the contribution of time-varying input variables to the classification.
We introduce a novel, modular, convolution-based feature extraction and attention mechanism that simultaneously identifies the variables as well as time intervals which determine the classification output.
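A minimal sketch of the general idea in the summary above: attention weights over variables and over time steps, on top of a temporal convolution, so the classification decision can be traced back to specific variables and intervals. The layer sizes and exact attention form are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class VariableTimeAttention(nn.Module):
    """Score each variable and each time step of a multivariate series, then
    classify from the attention-weighted summary, so the weights indicate
    which variables and intervals drove the decision."""

    def __init__(self, n_features, n_classes, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(n_features, hidden, kernel_size=5, padding=2)
        self.time_attn = nn.Linear(hidden, 1)                    # score per time step
        self.var_attn = nn.Parameter(torch.zeros(n_features))    # score per variable
        self.classifier = nn.Linear(n_features, n_classes)

    def forward(self, x):                                        # x: (batch, time, features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        a_time = torch.softmax(self.time_attn(h), dim=1)         # (batch, time, 1)
        a_var = torch.softmax(self.var_attn, dim=0)              # (features,)
        summary = (x * a_time * a_var).sum(dim=1)                # (batch, features)
        return self.classifier(summary), a_time, a_var

logits, a_time, a_var = VariableTimeAttention(n_features=6, n_classes=3)(
    torch.randn(4, 100, 6))
```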
arXiv Detail & Related papers (2020-11-23T19:16:46Z)
- Benchmarking Deep Learning Interpretability in Time Series Predictions [41.13847656750174]
Saliency methods are used extensively to highlight the importance of input features in model predictions.
We set out to extensively compare the performance of various saliency-based interpretability methods across diverse neural architectures.
arXiv Detail & Related papers (2020-10-26T22:07:53Z)
- Predicting Temporal Sets with Deep Neural Networks [50.53727580527024]
We propose an integrated solution based on deep neural networks for temporal sets prediction.
A unique perspective is to learn element relationships by constructing a set-level co-occurrence graph.
We design an attention-based module to adaptively learn the temporal dependency of elements and sets.
arXiv Detail & Related papers (2020-06-20T03:29:02Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
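As a loose sketch of the prototype idea behind PCL, the function below contrasts each L2-normalized embedding against a set of cluster prototypes (e.g. from k-means) with an InfoNCE-style loss; the temperature, clustering step, and shapes are assumptions, and this is not the official formulation.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(embeddings, prototypes, assignments, temperature=0.1):
    """Pull each L2-normalized embedding toward its assigned cluster prototype
    and away from the other prototypes (InfoNCE over prototypes)."""
    z = F.normalize(embeddings, dim=1)          # (batch, dim)
    c = F.normalize(prototypes, dim=1)          # (num_prototypes, dim)
    logits = z @ c.t() / temperature            # (batch, num_prototypes)
    return F.cross_entropy(logits, assignments)

# Toy usage: 16 embeddings, 4 prototypes (e.g. cluster centres from k-means).
z = torch.randn(16, 128, requires_grad=True)
protos = torch.randn(4, 128)
assign = torch.randint(0, 4, (16,))
prototype_contrastive_loss(z, protos, assign).backward()
```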
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
- Conditional Mutual information-based Contrastive Loss for Financial Time
Series Forecasting [12.0855096102517]
We present a representation learning framework for financial time series forecasting.
In this paper, we propose to first learn compact representations from time series data, then use the learned representations to train a simpler model for predicting time series movements.
arXiv Detail & Related papers (2020-02-18T15:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.