ProtoTEx: Explaining Model Decisions with Prototype Tensors
- URL: http://arxiv.org/abs/2204.05426v2
- Date: Mon, 23 May 2022 00:34:37 GMT
- Title: ProtoTEx: Explaining Model Decisions with Prototype Tensors
- Authors: Anubrata Das, Chitrank Gupta, Venelin Kovatchev, Matthew Lease, and Junyi Jessy Li
- Abstract summary: ProtoTEx is a novel white-box NLP classification architecture based on prototype networks.
We describe a novel interleaved training algorithm that effectively handles classes characterized by the absence of indicative features.
On a propaganda detection task, ProtoTEx accuracy matches BART-large and exceeds BERT-large with the added benefit of providing faithful explanations.
- Score: 27.779971257213553
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present ProtoTEx, a novel white-box NLP classification architecture based
on prototype networks. ProtoTEx faithfully explains model decisions based on
prototype tensors that encode latent clusters of training examples. At
inference time, classification decisions are based on the distances between the
input text and the prototype tensors, explained via the training examples most
similar to the most influential prototypes. We also describe a novel
interleaved training algorithm that effectively handles classes characterized
by the absence of indicative features. On a propaganda detection task, ProtoTEx
accuracy matches BART-large and exceeds BERT-large with the added benefit of
providing faithful explanations. A user study also shows that prototype-based
explanations help non-experts to better recognize propaganda in online news.
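The inference procedure the abstract describes, classifying by distance to prototype tensors and explaining via the nearest training examples, can be sketched in a few lines. This is a hypothetical toy simplification (2-D embeddings, max-pooling of prototype similarities into class scores, one prototype per class); the actual ProtoTEx model learns prototype tensors in the embedding space of a trained encoder.

```python
import numpy as np

def prototex_predict(x, prototypes, proto_class, train_embs, train_texts):
    """Classify x by its distances to prototype tensors; explain the
    decision via the training example closest to the most influential
    (most similar) prototype. Toy simplification of ProtoTEx inference."""
    sims = -((prototypes - x) ** 2).sum(axis=1)  # negative squared L2 distance
    # Aggregate prototype similarities into per-class scores (max over
    # the prototypes assigned to each class -- an assumed pooling choice).
    classes = sorted(set(proto_class))
    scores = {c: max(sims[i] for i, pc in enumerate(proto_class) if pc == c)
              for c in classes}
    pred = max(scores, key=scores.get)
    # Explanation: the training example nearest the most influential prototype.
    best_proto = int(np.argmax(sims))
    nearest = int(np.argmin(((train_embs - prototypes[best_proto]) ** 2).sum(axis=1)))
    return pred, train_texts[nearest]

# Toy 2-D example with illustrative labels and texts.
prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])
proto_class = [0, 1]  # prototype index -> class label
train_embs = np.array([[0.1, 0.0], [5.0, 4.9]])
train_texts = ["neutral sentence", "loaded-language example"]
pred, explanation = prototex_predict(np.array([4.5, 5.2]),
                                     prototypes, proto_class,
                                     train_embs, train_texts)
```

Because the class score is tied to concrete prototypes, the returned training example is a faithful witness for the decision rather than a post-hoc rationalization.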
Related papers
- Advancing Interpretability in Text Classification through Prototype Learning [1.9526476410335776]
ProtoLens is a prototype-based model that provides fine-grained, sub-sentence level interpretability for text classification.
ProtoLens uses a Prototype-aware Span Extraction module to identify relevant text spans.
ProtoLens provides interpretable predictions while maintaining competitive accuracy.
arXiv Detail & Related papers (2024-10-23T03:53:46Z)
- Sparse Prototype Network for Explainable Pedestrian Behavior Prediction [60.80524827122901]
We present Sparse Prototype Network (SPN), an explainable method designed to simultaneously predict a pedestrian's future action, trajectory, and pose.
Regularized by mono-semanticity and clustering constraints, the prototypes learn consistent and human-understandable features.
arXiv Detail & Related papers (2024-10-16T03:33:40Z)
- GAProtoNet: A Multi-head Graph Attention-based Prototypical Network for Interpretable Text Classification [1.170190320889319]
We introduce GAProtoNet, a novel white-box Multi-head Graph Attention-based Prototypical Network.
Our approach achieves superior results without sacrificing the accuracy of the original black-box LMs.
Our case study and visualization of prototype clusters also demonstrate its effectiveness in explaining the decisions of black-box models built with LMs.
arXiv Detail & Related papers (2024-09-20T08:15:17Z)
- Enhanced Prototypical Part Network (EPPNet) For Explainable Image Classification Via Prototypes [16.528373143163275]
We introduce the Enhanced Prototypical Part Network (EPPNet) for image classification.
EPPNet achieves strong performance while discovering relevant prototypes that can be used to explain the classification results.
Our evaluations on the CUB-200-2011 dataset show that the EPPNet outperforms state-of-the-art xAI-based methods.
arXiv Detail & Related papers (2024-08-08T17:26:56Z)
- MProto: Multi-Prototype Network with Denoised Optimal Transport for Distantly Supervised Named Entity Recognition [75.87566793111066]
We propose a noise-robust prototype network named MProto for the DS-NER task.
MProto represents each entity type with multiple prototypes to characterize the intra-class variance.
To mitigate the noise from incomplete labeling, we propose a novel denoised optimal transport (DOT) algorithm.
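The multi-prototype idea can be sketched minimally: each entity type owns several prototypes, and a token is scored against a type by the distance to that type's *nearest* prototype, which is how multiple prototypes capture intra-class variance. Names and embeddings here are illustrative, and the denoised optimal-transport assignment is not reproduced.

```python
import numpy as np

def nearest_prototype_type(token_emb, prototypes_by_type):
    """Assign a token embedding to the entity type whose closest
    prototype is nearest overall (toy multi-prototype classifier)."""
    best_type, best_dist = None, float("inf")
    for etype, protos in prototypes_by_type.items():
        # Distance to the closest of this type's prototypes.
        d = np.min(np.linalg.norm(protos - token_emb, axis=1))
        if d < best_dist:
            best_type, best_dist = etype, d
    return best_type

# Toy setup: PER has two prototypes (two surface patterns), ORG has one.
prototypes_by_type = {
    "PER": np.array([[0.0, 1.0], [1.0, 0.0]]),
    "ORG": np.array([[5.0, 5.0]]),
}
t1 = nearest_prototype_type(np.array([0.9, 0.1]), prototypes_by_type)
t2 = nearest_prototype_type(np.array([5.2, 4.8]), prototypes_by_type)
```

A single prototype per type would force both PER patterns onto one centroid; the min-distance rule lets either pattern match without pulling the centroid toward neither.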
arXiv Detail & Related papers (2023-10-12T13:02:34Z)
- Learning Support and Trivial Prototypes for Interpretable Image Classification [19.00622056840535]
Prototypical part network (ProtoPNet) methods have been designed to achieve interpretable classification.
We aim to improve the classification of ProtoPNet with a new method to learn support prototypes that lie near the classification boundary in the feature space.
arXiv Detail & Related papers (2023-01-08T09:27:41Z)
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results over several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
- Prototype Completion for Few-Shot Learning [13.63424509914303]
Few-shot learning aims to recognize novel classes with few examples.
Pre-training based methods effectively tackle the problem by pre-training a feature extractor and then fine-tuning it through nearest-centroid-based meta-learning.
We propose a novel prototype completion based meta-learning framework.
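The nearest-centroid baseline that such methods build on is simple to state: each class prototype is the mean of its support embeddings, and a query is assigned to the closest prototype. A minimal sketch with toy 2-D embeddings (the paper's completion framework, which augments these prototypes, is not reproduced here):

```python
import numpy as np

def nearest_centroid_predict(query, support_embs, support_labels):
    """Few-shot classification by nearest class centroid: the prototype
    of each class is the mean of its support-set embeddings."""
    classes = sorted(set(support_labels))
    centroids = {
        c: np.mean([e for e, l in zip(support_embs, support_labels) if l == c],
                   axis=0)
        for c in classes
    }
    return min(classes, key=lambda c: np.linalg.norm(query - centroids[c]))

# Two classes, two support examples each (toy data).
support_embs = np.array([[0.0, 0.0], [0.0, 2.0], [4.0, 4.0], [4.0, 6.0]])
support_labels = [0, 0, 1, 1]
pred = nearest_centroid_predict(np.array([0.5, 1.2]), support_embs, support_labels)
```

With only a few support examples the centroid is a noisy estimate of the true class mean, which is the gap that prototype-completion approaches target.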
arXiv Detail & Related papers (2021-08-11T03:44:00Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
- Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that prototype-based networks that go beyond similarity kernels deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.