Class-Specific Explainability for Deep Time Series Classifiers
- URL: http://arxiv.org/abs/2210.05411v1
- Date: Tue, 11 Oct 2022 12:37:15 GMT
- Title: Class-Specific Explainability for Deep Time Series Classifiers
- Authors: Ramesh Doddaiah, Prathyush Parvatharaju, Elke Rundensteiner, Thomas
Hartvigsen
- Abstract summary: We study the open problem of class-specific explainability for deep time series classifiers.
We design a novel explainability method, DEMUX, which learns saliency maps for explaining deep multi-class time series classifiers.
Our experimental study demonstrates that DEMUX outperforms nine state-of-the-art alternatives on five popular datasets.
- Score: 6.566615606042994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainability helps users trust deep learning solutions for time series
classification. However, existing explainability methods for multi-class time
series classifiers focus on one class at a time, ignoring relationships between
the classes. Instead, when a classifier is choosing between many classes, an
effective explanation must show what sets the chosen class apart from the rest.
We now formalize this notion, studying the open problem of class-specific
explainability for deep time series classifiers, a challenging and impactful
problem setting. We design a novel explainability method, DEMUX, which learns
saliency maps for explaining deep multi-class time series classifiers by
adaptively ensuring that its explanation spotlights the regions in an input
time series that a model uses specifically for its predicted class. DEMUX adopts
a gradient-based approach composed of three interdependent modules that combine
to generate consistent, class-specific saliency maps that remain faithful to
the classifier's behavior yet are easily understood by end users. Our
experimental study demonstrates that DEMUX outperforms nine state-of-the-art
alternatives on five popular datasets when explaining two types of deep time
series classifiers. Further, through a case study, we demonstrate that DEMUX's
explanations indeed highlight what separates the predicted class from the
others in the eyes of the classifier. Our code is publicly available at
https://github.com/rameshdoddaiah/DEMUX.
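The repository above contains the authors' implementation. As a rough, hypothetical illustration of the class-specific idea only (not DEMUX itself), a saliency map can contrast the predicted class against its closest competitor; the linear model, sizes, and margin-gradient formulation below are all illustrative assumptions:

```python
import numpy as np

# Toy multi-class "time series classifier": logits = W @ x for a
# univariate series x of length T. W, T, and the margin gradient are
# illustrative assumptions, not the DEMUX method.
rng = np.random.default_rng(0)
T, n_classes = 50, 3
W = rng.normal(size=(n_classes, T))    # hypothetical linear weights
x = rng.normal(size=T)                 # hypothetical input series

logits = W @ x
pred = int(np.argmax(logits))
runner_up = int(np.argsort(logits)[-2])

# Class-specific saliency: gradient of the margin between the predicted
# class and its closest competitor w.r.t. the input. For a linear model
# this is just the difference of weight rows; a deep classifier would
# use autograd to get the same quantity.
saliency = np.abs(W[pred] - W[runner_up])
saliency /= saliency.max()             # normalize to [0, 1]

top_steps = np.argsort(saliency)[-5:]  # timesteps that most separate the classes
```

The margin formulation is what makes the map class-specific: a per-class gradient alone would highlight regions shared by several classes, while the difference isolates what sets the predicted class apart.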
Related papers
- Generative Multi-modal Models are Good Class-Incremental Learners [51.5648732517187]
We propose a novel generative multi-modal model (GMM) framework for class-incremental learning.
Our approach directly generates labels for images using an adapted generative model.
Under the Few-shot CIL setting, we have improved by at least 14% accuracy over all the current state-of-the-art methods with significantly less forgetting.
arXiv Detail & Related papers (2024-03-27T09:21:07Z) - Mitigating Word Bias in Zero-shot Prompt-based Classifiers [55.60306377044225]
We show that matching class priors correlates strongly with the oracle upper bound performance.
We also demonstrate large consistent performance gains for prompt settings over a range of NLP tasks.
arXiv Detail & Related papers (2023-09-10T10:57:41Z) - Generalization Bounds for Few-Shot Transfer Learning with Pretrained
Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in the case of class-feature variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z) - Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach for anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
We use uncertainty measures to detect anomalies.
arXiv Detail & Related papers (2022-12-23T00:50:41Z) - Mimic: An adaptive algorithm for multivariate time series classification [11.49627617337276]
Time series data are valuable but are often inscrutable.
Gaining trust in time series classifiers for finance, healthcare, and other critical applications may rely on creating interpretable models.
We propose a novel Mimic algorithm that retains the predictive accuracy of the strongest classifiers while introducing interpretability.
arXiv Detail & Related papers (2021-11-08T04:47:31Z) - Learning Debiased and Disentangled Representations for Semantic
Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z) - Revisiting Deep Local Descriptor for Improved Few-Shot Classification [56.74552164206737]
We show how one can improve the quality of embeddings by leveraging Dense Classification and Attentive Pooling.
We suggest to pool feature maps by applying attentive pooling instead of the widely used global average pooling (GAP) to prepare embeddings for few-shot classification.
arXiv Detail & Related papers (2021-03-30T00:48:28Z) - Learning and Evaluating Representations for Deep One-class
Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z) - Instance-based Counterfactual Explanations for Time Series
Classification [11.215352918313577]
We advance a novel model-agnostic, case-based technique that generates counterfactual explanations for time series classifiers.
We show that Native Guide generates plausible, proximal, sparse and diverse explanations that are better than those produced by key benchmark counterfactual methods.
arXiv Detail & Related papers (2020-09-28T10:52:48Z) - Complexity Measures and Features for Times Series classification [0.0]
We propose a set of characteristics capable of extracting information on the structure of the time series to face time series classification problems.
The experimental results of our proposal show no statistically significant differences from the second and third best models of the state-of-the-art.
arXiv Detail & Related papers (2020-02-27T11:08:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.