Explaining the Deep Natural Language Processing by Mining Textual
Interpretable Features
- URL: http://arxiv.org/abs/2106.06697v1
- Date: Sat, 12 Jun 2021 06:25:09 GMT
- Title: Explaining the Deep Natural Language Processing by Mining Textual
Interpretable Features
- Authors: Francesco Ventura, Salvatore Greco, Daniele Apiletti, Tania
Cerquitelli
- Abstract summary: T-EBAnO provides prediction-local and class-based model-global explanation strategies tailored to deep natural-language models.
It provides an objective, human-readable, domain-specific assessment of the reasons behind the automatic decision-making process.
- Score: 3.819533618886143
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the high accuracy offered by state-of-the-art deep natural-language
models (e.g. LSTM, BERT), their application in real-life settings is still
widely limited, as they behave like a black-box to the end-user. Hence,
explainability is rapidly becoming a fundamental requirement of
future-generation data-driven systems based on deep-learning approaches.
Several attempts have been made to bridge the gap between accuracy and
interpretability. However, robust and specialized xAI
(Explainable Artificial Intelligence) solutions tailored to deep
natural-language models are still missing. We propose a new framework, named
T-EBAnO, which provides innovative prediction-local and class-based
model-global explanation strategies tailored to black-box deep natural-language
models. Given a deep NLP model and the textual input data, T-EBAnO provides an
objective, human-readable, domain-specific assessment of the reasons behind the
automatic decision-making process. Specifically, the framework extracts sets of
interpretable features by mining the inner knowledge of the model. Then, it
quantifies the influence of each feature during the prediction process by
exploiting the novel normalized Perturbation Influence Relation index at the
local level and the novel Global Absolute Influence and Global Relative
Influence indexes at the global level. The effectiveness and the quality of the
local and global explanations obtained with T-EBAnO are demonstrated on (i) a
sentiment analysis task performed by a fine-tuned BERT model, and (ii) a toxic
comment classification task performed by an LSTM model.
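The local explanation step described in the abstract can be illustrated with a minimal perturbation sketch. The `predict_proba` stub and the normalization below are illustrative assumptions, not the paper's fine-tuned BERT/LSTM models or the exact nPIR definition; the sketch only shows the general idea of removing an interpretable feature and measuring the normalized probability shift.

```python
import math

def predict_proba(tokens):
    # Stand-in for a deep NLP classifier (e.g. a fine-tuned BERT);
    # here a toy scorer whose positive-class probability grows with
    # the count of the token "good".
    score = sum(1 for t in tokens if t == "good")
    return 1 / (1 + math.exp(-(score - 1)))

def perturbation_influence(tokens, feature_tokens):
    """Influence of a feature (a set of tokens) on the prediction,
    measured as the normalized drop in class probability when the
    feature is removed -- in the spirit of T-EBAnO's nPIR index
    (this normalization is illustrative, not the paper's formula)."""
    p_orig = predict_proba(tokens)
    perturbed = [t for t in tokens if t not in feature_tokens]
    p_pert = predict_proba(perturbed)
    # Value in [-1, 1]: positive means the feature supported the
    # predicted class, negative means it opposed it.
    return (p_orig - p_pert) / max(p_orig + p_pert, 1e-12)

text = "the movie was good good and fun".split()
influence = perturbation_influence(text, {"good"})
```

Aggregating such local scores over all inputs of a class is what the global indexes (Global Absolute/Relative Influence) build on.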
Related papers
- Enhancing adversarial robustness in Natural Language Inference using explanations [41.46494686136601]
We cast the spotlight on the underexplored task of Natural Language Inference (NLI).
We validate the usage of natural language explanations as a model-agnostic defence strategy through extensive experimentation.
We research the correlation of widely used language generation metrics with human perception, in order for them to serve as a proxy towards robust NLI models.
arXiv Detail & Related papers (2024-09-11T17:09:49Z)
- SCENE: Evaluating Explainable AI Techniques Using Soft Counterfactuals [0.0]
This paper introduces SCENE (Soft Counterfactual Evaluation for Natural language Explainability), a novel evaluation method.
By focusing on token-based substitutions, SCENE creates contextually appropriate and semantically meaningful Soft Counterfactuals.
SCENE provides valuable insights into the strengths and limitations of various XAI techniques.
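The token-based substitution idea behind soft counterfactuals can be sketched as follows. The `classify` stub and the `SUBSTITUTES` table are hypothetical stand-ins: SCENE derives contextually appropriate candidates from a masked language model, not from a fixed dictionary.

```python
def classify(tokens):
    # Toy sentiment stand-in for the model being explained.
    return "pos" if "great" in tokens else "neg"

# Hypothetical context-appropriate substitutes; a real system would
# propose these with a masked language model.
SUBSTITUTES = {"great": ["decent", "terrible"], "boring": ["slow", "gripping"]}

def soft_counterfactuals(tokens):
    """Token-level substitutions that flip the model's prediction --
    a minimal sketch of the soft-counterfactual idea, not the SCENE
    implementation."""
    original = classify(tokens)
    flips = []
    for i, tok in enumerate(tokens):
        for sub in SUBSTITUTES.get(tok, []):
            candidate = tokens[:i] + [sub] + tokens[i + 1:]
            if classify(candidate) != original:
                flips.append((tok, sub, " ".join(candidate)))
    return flips

flips = soft_counterfactuals("the plot was great".split())
```

Each returned triple records the original token, its substitute, and the counterfactual text whose prediction differs from the original.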
arXiv Detail & Related papers (2024-08-08T16:36:24Z)
- Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement [67.1393112206885]
Large Language Models (LLMs) have shown promise as intelligent agents in interactive decision-making tasks.
We introduce Entropy-Regularized Token-level Policy Optimization (ETPO), an entropy-augmented RL method tailored for optimizing LLMs at the token level.
We assess the effectiveness of ETPO within a simulated environment that models data science code generation as a series of multi-step interactive tasks.
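A schematic token-level objective with an entropy bonus can make the ETPO summary concrete. The loss below is an illustrative sketch of the general entropy-regularized policy-gradient pattern, not the paper's exact update rule, and the entropy term here is computed only from the chosen tokens' probabilities.

```python
import math

def token_level_loss(log_probs, advantages, beta=0.01):
    """Entropy-regularized policy-gradient loss at the token level:
    each generated token is treated as one action. A schematic
    objective in the spirit of ETPO, not the paper's exact method."""
    # Policy-gradient term: push up log-probs of tokens with
    # positive advantage, push down those with negative advantage.
    pg = -sum(lp * a for lp, a in zip(log_probs, advantages))
    # Entropy bonus (approximated from the sampled tokens only)
    # discourages the policy from collapsing onto a single token.
    entropy = -sum(math.exp(lp) * lp for lp in log_probs)
    return pg - beta * entropy

log_probs = [-0.1, -2.3, -0.7]   # per-token log-probabilities
advantages = [1.0, -0.5, 0.2]    # per-token advantage estimates
loss = token_level_loss(log_probs, advantages)
```

Minimizing this loss trades off reward-weighted likelihood against exploration, controlled by `beta`.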
arXiv Detail & Related papers (2024-02-09T07:45:26Z)
- Explaining Language Models' Predictions with High-Impact Concepts [11.47612457613113]
We propose a complete framework for extending concept-based interpretability methods to NLP.
We optimize for features whose existence causes the output predictions to change substantially.
Our method achieves superior results on predictive impact, usability, and faithfulness compared to the baselines.
arXiv Detail & Related papers (2023-05-03T14:48:27Z)
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- A Unified Neural Network Model for Readability Assessment with Feature
Projection and Length-Balanced Loss [17.213602354715956]
We propose a BERT-based model with feature projection and length-balanced loss for readability assessment.
Our model achieves state-of-the-art performances on two English benchmark datasets and one dataset of Chinese textbooks.
arXiv Detail & Related papers (2022-10-19T05:33:27Z)
- Under the Microscope: Interpreting Readability Assessment Models for
Filipino [0.0]
We dissect machine learning-based readability assessment models in Filipino by performing global and local model interpretation.
Results show that using a model trained with top features from global interpretation obtained higher performance than the ones using features selected by Spearman correlation.
arXiv Detail & Related papers (2021-10-01T01:27:10Z)
- Artificial Text Detection via Examining the Topology of Attention Maps [58.46367297712477]
We propose three novel types of interpretable topological features for this task based on Topological Data Analysis (TDA).
We empirically show that the features derived from the BERT model outperform count- and neural-based baselines by up to 10% on three common datasets.
The probing analysis of the features reveals their sensitivity to the surface and syntactic properties.
arXiv Detail & Related papers (2021-09-10T12:13:45Z)
- TextFlint: Unified Multilingual Robustness Evaluation Toolkit for
Natural Language Processing [73.16475763422446]
We propose a multilingual robustness evaluation platform for NLP tasks (TextFlint).
It incorporates universal text transformation, task-specific transformation, adversarial attack, subpopulation, and their combinations to provide comprehensive robustness analysis.
TextFlint generates complete analytical reports as well as targeted augmented data to address the shortcomings of the model's robustness.
arXiv Detail & Related papers (2021-03-21T17:20:38Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed
Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Rethinking Generalization of Neural Models: A Named Entity Recognition
Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.