Learning to Select Prototypical Parts for Interpretable Sequential Data Modeling
- URL: http://arxiv.org/abs/2212.03396v1
- Date: Wed, 7 Dec 2022 01:42:47 GMT
- Title: Learning to Select Prototypical Parts for Interpretable Sequential Data Modeling
- Authors: Yifei Zhang, Neng Gao, Cunqing Ma
- Abstract summary: We propose a Self-Explaining Selective Model (SESM) that uses a linear combination of prototypical concepts to explain its own predictions.
For better interpretability, we design multiple constraints including diversity, stability, and locality as training objectives.
- Score: 7.376829794171344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prototype-based interpretability methods provide intuitive explanations of
model predictions by comparing samples to a reference set of memorized exemplars
or typical representatives. In the field of sequential data modeling, similarity
calculations for prototypes are usually based on encoded representation vectors.
However, because the encoding functions are highly recursive, there is usually a
non-negligible disparity between the prototype-based explanations and the
original input. In this work, we propose a Self-Explaining Selective Model
(SESM) that uses a linear combination of prototypical concepts to explain its
own predictions. The model employs the idea of case-based reasoning by selecting
sub-sequences of the input that most strongly activate different concepts as
prototypical parts, which users can compare to sub-sequences selected from
different example inputs to understand model decisions. For better
interpretability, we design multiple constraints including diversity, stability,
and locality as training objectives. Extensive experiments in different domains
demonstrate that our method exhibits promising interpretability and competitive
accuracy.
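The selection mechanism can be pictured with a short sketch (a hypothetical illustration, not the authors' code; all names are invented): candidate sub-sequence embeddings are scored against learned concept vectors, the top-activating sub-sequence per concept becomes a prototypical part, and the concept activations feed a linear head, with a simple diversity penalty on the concepts:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SESMSketch(nn.Module):
    """Minimal sketch of a self-explaining selective model (hypothetical names)."""

    def __init__(self, d_model: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Learned concept vectors; each selects one prototypical sub-sequence.
        self.concepts = nn.Parameter(torch.randn(n_concepts, d_model))
        # Linear head: the prediction is a linear combination of concept activations.
        self.head = nn.Linear(n_concepts, n_classes)

    def forward(self, subseq_embs: torch.Tensor):
        # subseq_embs: (n_subseqs, d_model) encoded candidate sub-sequences.
        sims = F.normalize(subseq_embs, dim=-1) @ F.normalize(self.concepts, dim=-1).T
        # For each concept, select the sub-sequence that most activates it.
        activations, selected = sims.max(dim=0)  # (n_concepts,), part indices
        logits = self.head(activations)
        return logits, selected  # `selected` points at the prototypical parts

def diversity_penalty(concepts: torch.Tensor) -> torch.Tensor:
    # Push concept vectors apart so selected parts cover different aspects.
    c = F.normalize(concepts, dim=-1)
    off_diag = c @ c.T - torch.eye(len(c), device=c.device)
    return off_diag.pow(2).sum()
```

The paper's stability and locality objectives would enter as additional regularizers on the selection; this sketch covers only the selective, linearly combined prediction path.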
Related papers
- Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation [7.372346036256517]
Prototypical part learning is emerging as a promising approach for making semantic segmentation interpretable.
We propose a method for interpretable semantic segmentation that leverages multi-scale image representation for prototypical part learning.
Experiments conducted on Pascal VOC, Cityscapes, and ADE20K demonstrate that the proposed method increases model sparsity, improves interpretability over existing prototype-based methods, and narrows the performance gap with the non-interpretable counterpart models.
arXiv Detail & Related papers (2024-09-14T17:52:59Z)
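As a rough sketch of how multi-scale prototypical part scoring can work (hypothetical code, not the paper's implementation), one can compare pixel embeddings from several encoder stages against a prototype bank and fuse the similarity maps at a common resolution:

```python
import torch
import torch.nn.functional as F

def multiscale_prototype_scores(feature_maps, prototypes, out_size):
    """Hypothetical sketch: score pixels against prototypes at multiple scales.

    feature_maps: list of (C, H_s, W_s) tensors from different encoder stages.
    prototypes:   (P, C) tensor of prototype vectors (the paper groups them per
                  scale; a single shared bank is used here for brevity).
    """
    score_maps = []
    for fmap in feature_maps:
        c, h, w = fmap.shape
        flat = F.normalize(fmap.reshape(c, h * w), dim=0)   # (C, H*W) pixel embeddings
        sims = F.normalize(prototypes, dim=1) @ flat        # (P, H*W) cosine similarities
        sims = sims.reshape(1, -1, h, w)                    # (1, P, h, w)
        score_maps.append(F.interpolate(sims, size=out_size,
                                        mode="bilinear", align_corners=False))
    # Average prototype evidence across scales at full resolution.
    return torch.stack(score_maps).mean(dim=0)[0]           # (P, H_out, W_out)
```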
- A prototype-based model for set classification [2.0564549686015594]
A common way to represent a set of vectors is to model them as linear subspaces.
We present a prototype-based approach for learning on the manifold formed from such linear subspaces, the Grassmann manifold.
arXiv Detail & Related papers (2024-08-25T04:29:18Z)
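The Grassmann-manifold view can be made concrete with a small sketch (assumed details, not the paper's code): a set of vectors is reduced to the subspace it spans, and subspaces are compared via the principal angles obtained from an SVD:

```python
import numpy as np

def grassmann_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Geodesic distance between the subspaces spanned by the columns of A and B.

    A, B: (d, k) matrices whose columns span k-dimensional subspaces of R^d,
    e.g. obtained from a set of vectors via QR/SVD of the stacked set.
    """
    Qa, _ = np.linalg.qr(A)  # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)  # orthonormal basis for span(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    angles = np.arccos(np.clip(cosines, -1.0, 1.0))
    return float(np.linalg.norm(angles))

# A set of vectors becomes a point on the Grassmann manifold via its span:
rng = np.random.default_rng(0)
set_a = rng.normal(size=(10, 3))  # 3 vectors in R^10
set_b = rng.normal(size=(10, 3))
print(grassmann_distance(set_a, set_b))
```

Prototype learning then amounts to placing prototype subspaces on the manifold and classifying a set by its nearest prototype under this distance.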
- ProtoSeg: Interpretable Semantic Segmentation with Prototypical Parts [12.959270094693254]
We introduce ProtoSeg, a novel model for interpretable semantic image segmentation.
We adapt the mechanism of prototypical parts to achieve accuracy comparable to baseline methods.
We show that ProtoSeg discovers semantic concepts, in contrast to standard segmentation models.
arXiv Detail & Related papers (2023-01-28T19:14:32Z)
- ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model [18.537838366377915]
ProtoVAE is a variational autoencoder-based framework that learns class-specific prototypes in an end-to-end manner.
It enforces trustworthiness and diversity by regularizing the representation space and introducing an orthonormality constraint.
arXiv Detail & Related papers (2022-10-15T00:42:13Z)
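One common way to encode an orthonormality constraint of this kind is a Frobenius-norm penalty on the prototypes' Gram matrix; the exact regularizer in ProtoVAE may differ, so treat this as an illustrative sketch:

```python
import torch

def orthonormality_penalty(prototypes: torch.Tensor) -> torch.Tensor:
    """||P P^T - I||_F^2 for a (K, D) matrix of prototype vectors.

    Drives prototypes toward unit norm and mutual orthogonality, which
    encourages diversity among them.
    """
    gram = prototypes @ prototypes.T  # (K, K)
    identity = torch.eye(prototypes.shape[0], device=prototypes.device)
    return (gram - identity).pow(2).sum()
```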
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both approaches and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
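To make "locally additive, instance-wise" concrete, here is a hypothetical sketch (invented names) in which an explainer network emits per-class feature weights for each input, so every class score decomposes into per-feature contributions:

```python
import torch
import torch.nn as nn

class AdditiveExplainer(nn.Module):
    """Hypothetical sketch of an additive, instance-wise explainer.

    For each input it produces one weight vector per target class; the class
    score is the sum of per-feature contributions (weights * features), so the
    explanation is read directly off the contribution terms.
    """

    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features * n_classes),
        )
        self.n_classes = n_classes

    def forward(self, x: torch.Tensor):
        # x: (batch, n_features)
        w = self.weight_net(x).view(x.shape[0], self.n_classes, -1)
        contributions = w * x.unsqueeze(1)  # (batch, classes, features)
        scores = contributions.sum(dim=-1)  # additive class scores
        return scores, contributions
```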
- Interpreting Language Models with Contrastive Explanations [99.7035899290924]
Language models must consider various features to predict a token, such as its part of speech, number, tense, or semantics.
Existing explanation methods conflate evidence for all these features into a single explanation, which makes the explanation harder for humans to interpret.
We show that contrastive explanations are quantifiably better than non-contrastive explanations in verifying major grammatical phenomena.
arXiv Detail & Related papers (2022-02-21T18:32:24Z)
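A minimal sketch of a contrastive explanation for a language model (hypothetical interface, consistent with gradient-based contrastive saliency but not necessarily the paper's exact method): attribute the difference between the target token's logit and a foil token's logit to the input tokens:

```python
import torch

def contrastive_saliency(model, input_embeds, target_id, foil_id):
    """Gradient saliency for 'why target rather than foil?' (sketch).

    input_embeds: (1, seq_len, d) embedded input with requires_grad=True.
    model:        callable returning next-token logits of shape (1, vocab).
    """
    logits = model(input_embeds)
    # Contrastive evidence: what pushes the target above the foil.
    contrast = logits[0, target_id] - logits[0, foil_id]
    (grad,) = torch.autograd.grad(contrast, input_embeds)
    return grad.norm(dim=-1)  # (1, seq_len) per-token saliency
```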
- Multivariate Data Explanation by Jumping Emerging Patterns Visualization [78.6363825307044]
We present VAX (multiVariate dAta eXplanation), a new VA method to support the identification and visual interpretation of patterns in multivariate data sets.
Unlike existing similar approaches, VAX uses the concept of Jumping Emerging Patterns to identify and aggregate several diversified patterns, producing explanations through logic combinations of data variables.
arXiv Detail & Related papers (2021-06-21T13:49:44Z)
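A Jumping Emerging Pattern is an itemset with nonzero support in exactly one class and zero support in all others; the brute-force sketch below (hypothetical API, far simpler than VAX's mining procedure) makes the definition concrete:

```python
from itertools import combinations

def jumping_emerging_patterns(transactions, labels, max_size=2):
    """Brute-force sketch: itemsets present in exactly one class.

    transactions: list of sets of items; labels: parallel list of class labels.
    A Jumping Emerging Pattern has nonzero support in one class and zero
    support in every other class.
    """
    classes = set(labels)
    items = sorted(set().union(*transactions))
    jeps = []
    for size in range(1, max_size + 1):
        for pattern in combinations(items, size):
            p = set(pattern)
            support = {c: sum(1 for t, l in zip(transactions, labels)
                              if l == c and p <= t) for c in classes}
            hits = [c for c, s in support.items() if s > 0]
            if len(hits) == 1:
                jeps.append((p, hits[0]))
    return jeps
```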
- On the Lack of Robust Interpretability of Neural Text Classifiers [14.685352584216757]
We assess the robustness of interpretations of neural text classifiers based on pretrained Transformer encoders.
Our robustness tests show surprising deviations from expected behavior, raising questions about the extent of insights that practitioners may draw from interpretations.
arXiv Detail & Related papers (2021-06-08T18:31:02Z)
- Attentional Prototype Inference for Few-Shot Segmentation [128.45753577331422]
We propose attentional prototype inference (API), a probabilistic latent variable framework for few-shot segmentation.
We define a global latent variable to represent the prototype of each object category, which we model as a probabilistic distribution.
We conduct extensive experiments on four benchmarks, where our proposal obtains at least competitive and often better performance than state-of-the-art prototype-based methods.
arXiv Detail & Related papers (2021-05-14T06:58:44Z)
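The probabilistic-prototype idea can be sketched as follows (hypothetical module, not the paper's architecture): infer a Gaussian over the class prototype from pooled support features, draw a reparameterized sample, and score query features against it:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbabilisticPrototype(nn.Module):
    """Sketch: infer a Gaussian over the class prototype from support features."""

    def __init__(self, d: int):
        super().__init__()
        self.mu_head = nn.Linear(d, d)
        self.logvar_head = nn.Linear(d, d)

    def forward(self, support_feats: torch.Tensor, query_feats: torch.Tensor):
        # support_feats: (n_support, d); query_feats: (n_query, d)
        pooled = support_feats.mean(dim=0)  # summarize the support set
        mu, logvar = self.mu_head(pooled), self.logvar_head(pooled)
        # Reparameterized sample of the prototype latent variable.
        proto = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Score queries by cosine similarity to the sampled prototype.
        return F.cosine_similarity(query_feats, proto.unsqueeze(0), dim=-1)
```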
- Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks, which extend beyond similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z)
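A minimal sketch of how example-based explanation and outlier detection can share one mechanism (hypothetical names and threshold): classify by the nearest prototype and flag inputs whose best prototype similarity is low:

```python
import torch
import torch.nn.functional as F

def prototype_predict(embedding, prototypes, proto_labels, outlier_tau=0.5):
    """Sketch: example-based prediction with a similarity-based outlier flag.

    embedding:    (d,) encoded input; prototypes: (P, d) memorized examples;
    proto_labels: (P,) class of each prototype; the threshold is hypothetical.
    """
    sims = F.cosine_similarity(prototypes, embedding.unsqueeze(0), dim=-1)  # (P,)
    best = sims.argmax()
    is_outlier = sims.max() < outlier_tau  # far from every memorized example
    # The nearest prototype doubles as the example-based explanation.
    return proto_labels[best], int(best), bool(is_outlier)
```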
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
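One standard way to base decisions on a posterior approximation other than the variational distribution q is self-normalized importance sampling with q as the proposal; the sketch below (hypothetical signatures) estimates a posterior expectation this way:

```python
import torch

def posterior_expectation(f, z_samples, log_p_joint, log_q):
    """Self-normalized importance sampling estimate of E_{p(z|x)}[f(z)] (sketch).

    z_samples:   (S, d) draws from the variational proposal q(z|x).
    log_p_joint: (S,) log p(x, z) at those draws.
    log_q:       (S,) log q(z|x) at those draws.
    Using these weights instead of plain q-averages is one way to decide with
    a posterior approximation distinct from q itself.
    """
    log_w = log_p_joint - log_q
    weights = torch.softmax(log_w, dim=0)  # self-normalized importance weights
    values = torch.stack([f(z) for z in z_samples])  # (S,) or (S, ...)
    return (weights.view(-1, *[1] * (values.dim() - 1)) * values).sum(dim=0)
```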
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.