Explaining Time Series Predictions with Dynamic Masks
- URL: http://arxiv.org/abs/2106.05303v1
- Date: Wed, 9 Jun 2021 18:01:09 GMT
- Title: Explaining Time Series Predictions with Dynamic Masks
- Authors: Jonathan Crabbé, Mihaela van der Schaar
- Abstract summary: We propose dynamic masks (Dynamask) to explain predictions of a machine learning model.
With synthetic and real-world data, we demonstrate that the dynamic underpinning of Dynamask, together with its parsimony, offers a neat improvement in the identification of feature importance over time.
The modularity of Dynamask makes it ideal as a plug-in to increase the transparency of a wide range of machine learning models in areas such as medicine and finance.
- Score: 91.3755431537592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How can we explain the predictions of a machine learning model? When the data
is structured as a multivariate time series, this question induces additional
difficulties such as the necessity for the explanation to embody the time
dependency and the large number of inputs. To address these challenges, we
propose dynamic masks (Dynamask). This method produces instance-wise importance
scores for each feature at each time step by fitting a perturbation mask to the
input sequence. In order to incorporate the time dependency of the data,
Dynamask studies the effects of dynamic perturbation operators. In order to
tackle the large number of inputs, we propose a scheme to make the feature
selection parsimonious (to select no more features than necessary) and legible
(a notion that we detail by making a parallel with information theory). With
synthetic and real-world data, we demonstrate that the dynamic underpinning of
Dynamask, together with its parsimony, offers a neat improvement in the
identification of feature importance over time. The modularity of Dynamask
makes it ideal as a plug-in to increase the transparency of a wide range of
machine learning models in areas such as medicine and finance, where time
series are abundant.
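To make the idea concrete, below is a minimal, illustrative sketch of fitting a perturbation mask to a single input sequence. It is not the authors' implementation: the black-box `model`, the moving-average perturbation operator, and all hyperparameters are assumptions chosen for brevity, standing in for the dynamic perturbation operators and the parsimony/legibility scheme described in the abstract.

```python
# Illustrative sketch only (not the official Dynamask code). A mask in [0, 1]^{T x D}
# is fitted per instance: masked-out entries are replaced by a dynamic perturbation
# (here a temporal moving average), and a sparsity penalty keeps the selection parsimonious.
import torch

def moving_average_perturbation(x, window=3):
    """Dynamic perturbation operator: local temporal average around each time step."""
    T, _ = x.shape
    out = torch.zeros_like(x)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        out[t] = x[lo:hi].mean(dim=0)
    return out

def fit_dynamic_mask(model, x, steps=500, lr=0.1, sparsity=0.1):
    """Fit instance-wise importance scores for a (T, D) input x and a black-box model."""
    logits = torch.zeros_like(x, requires_grad=True)      # mask parameters, one per (t, d)
    target = model(x).detach()                            # prediction we want to preserve
    perturbed = moving_average_perturbation(x)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        mask = torch.sigmoid(logits)                      # keep mask values in [0, 1]
        x_masked = mask * x + (1.0 - mask) * perturbed    # blend input with its perturbation
        fidelity = (model(x_masked) - target).pow(2).mean()
        loss = fidelity + sparsity * mask.mean()          # parsimony: favour small masks
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(logits).detach()                 # saliency per feature and time step
```

The fitted mask entries can be read as saliency scores: a value near one indicates that replacing that feature at that time step with its perturbed counterpart would noticeably change the model's prediction.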
Related papers
- EMIT- Event-Based Masked Auto Encoding for Irregular Time Series [9.903108445512576]
Irregular time series, where data points are recorded at uneven intervals, are prevalent in healthcare settings.
This variability, which reflects critical fluctuations in patient health, is essential for informed clinical decision-making.
Existing self-supervised learning research on irregular time series often relies on generic pretext tasks like forecasting.
This paper proposes EMIT, a novel pretraining framework based on event-based masking for irregular time series.
arXiv Detail & Related papers (2024-09-25T02:05:32Z)
- ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders [53.3185750528969]
Masked AutoEncoders (MAE) have emerged as a robust self-supervised framework.
We introduce a data-independent method, termed ColorMAE, which generates different binary mask patterns by filtering random noise; an illustrative sketch of this idea appears after this list.
We demonstrate our strategy's superiority in downstream tasks compared to random masking.
arXiv Detail & Related papers (2024-07-17T22:04:00Z)
- TimeGraphs: Graph-based Temporal Reasoning [64.18083371645956]
TimeGraphs is a novel approach that characterizes dynamic interactions as a hierarchical temporal graph.
Our approach models the interactions using a compact graph-based representation, enabling adaptive reasoning across diverse time scales.
We evaluate TimeGraphs on multiple datasets with complex, dynamic agent interactions, including a football simulator, the Resistance game, and the MOMA human activity dataset.
arXiv Detail & Related papers (2024-01-06T06:26:49Z)
- TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferable time series representations based on transformer networks.
The TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To solve the discrepancy issue incurred by newly injected masked embeddings, we design a decoupled autoencoder architecture.
arXiv Detail & Related papers (2023-03-01T08:33:16Z)
- SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling [82.69579113377192]
SimMTM is a simple pre-training framework for Masked Time-series Modeling.
SimMTM recovers masked time points by the weighted aggregation of multiple neighbors outside the manifold.
SimMTM achieves state-of-the-art fine-tuning performance compared to the most advanced time series pre-training methods.
arXiv Detail & Related papers (2023-02-02T04:12:29Z)
- Ti-MAE: Self-Supervised Masked Time Series Autoencoders [16.98069693152999]
We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point level; a minimal sketch of this pretext task appears after this list.
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
arXiv Detail & Related papers (2023-01-21T03:20:23Z)
- Masked Autoencoding for Scalable and Generalizable Decision Making [93.84855114717062]
MaskDP is a simple and scalable self-supervised pretraining method for reinforcement learning and behavioral cloning.
We find that a MaskDP model gains the capability of zero-shot transfer to new BC tasks, such as single and multiple goal reaching.
arXiv Detail & Related papers (2022-11-23T07:04:41Z)
- Time Series Generation with Masked Autoencoder [0.0]
Masked autoencoders with interpolators (InterpoMAE) are scalable self-supervised generators for time series.
InterpoMAE uses an interpolator rather than mask tokens to restore the latent representations of missing patches.
arXiv Detail & Related papers (2022-01-14T08:11:09Z)
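As a loose illustration of the data-independent masking idea described in the ColorMAE entry above, the sketch below filters white noise with a Gaussian kernel and thresholds it to obtain a binary mask. The filter choice, mask ratio, and grid shape are assumptions, not the paper's exact recipe.

```python
# Illustrative only: generate a binary mask from filtered random noise, in the spirit
# of ColorMAE's data-independent masking. Filter and ratio are placeholder choices.
import numpy as np
from scipy.ndimage import gaussian_filter

def colored_noise_mask(shape, mask_ratio=0.75, sigma=2.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.standard_normal(shape)                 # white noise
    colored = gaussian_filter(noise, sigma=sigma)      # low-pass filtering "colors" the noise
    threshold = np.quantile(colored, mask_ratio)
    return colored <= threshold                        # True = position is masked

mask = colored_noise_mask((14, 14))                    # e.g. a 14x14 grid of patches
print(f"masked fraction: {mask.mean():.2f}")           # ~0.75 by construction
```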
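Several of the entries above (TimeMAE, SimMTM, Ti-MAE) share a masked-reconstruction pretext task. The following sketch shows a Ti-MAE-style version at its simplest: a tiny placeholder autoencoder stands in for the transformer used in the paper, and all shapes and hyperparameters are assumed for illustration.

```python
# Illustrative only: point-level masked reconstruction in the spirit of Ti-MAE.
# The tiny MLP autoencoder, shapes, and hyperparameters are placeholders and do
# not reproduce the paper's transformer architecture or training setup.
import torch

T, D, H = 64, 8, 16                                   # time steps, features, hidden size
x = torch.randn(T, D)                                 # one example time series
encoder = torch.nn.Sequential(torch.nn.Linear(D, H), torch.nn.ReLU())
decoder = torch.nn.Linear(H, D)
params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(200):
    mask = torch.rand(T) < 0.5                        # True = time point hidden from the encoder
    if not mask.any():
        continue
    x_in = x.clone()
    x_in[mask] = 0.0                                  # drop masked points from the input
    x_hat = decoder(encoder(x_in))                    # reconstruct the full series
    loss = ((x_hat - x)[mask] ** 2).mean()            # penalize errors only on masked points
    opt.zero_grad()
    loss.backward()
    opt.step()
```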
This list is automatically generated from the titles and abstracts of the papers on this site.