Extending Logic Explained Networks to Text Classification
- URL: http://arxiv.org/abs/2211.09732v1
- Date: Fri, 4 Nov 2022 16:12:03 GMT
- Title: Extending Logic Explained Networks to Text Classification
- Authors: Rishabh Jain, Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini,
Davide Buffelli, Pietro Lio
- Abstract summary: We propose LENp, improving local explanations by perturbing input words, and we test it on text classification.
Our results show that (i) LENp provides better local explanations than LIME in terms of sensitivity and faithfulness, and (ii) logic explanations are more useful and user-friendly than feature scoring provided by LIME as attested by a human survey.
- Score: 5.289706151390118
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recently, Logic Explained Networks (LENs) have been proposed as
explainable-by-design neural models providing logic explanations for their
predictions. However, these models have only been applied to vision and tabular
data, and they mostly favour the generation of global explanations, while local
ones tend to be noisy and verbose. For these reasons, we propose LENp,
improving local explanations by perturbing input words, and we test it on text
classification. Our results show that (i) LENp provides better local
explanations than LIME in terms of sensitivity and faithfulness, and (ii) logic
explanations are more useful and user-friendly than feature scoring provided by
LIME as attested by a human survey.
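
Below is a minimal sketch of one way to realize the word-perturbation idea behind LENp's local explanations, assuming only a generic text classifier that returns a class probability; the names `explain_by_perturbation` and `predict_proba`, and the toy classifier, are illustrative assumptions, not the authors' implementation. Each word is removed in turn and the drop in predicted probability is taken as its local relevance, the same kind of occlusion-style scoring that the paper compares against LIME's feature scores.

```python
# Minimal sketch (not the authors' implementation) of perturbation-based
# local explanation for a text classifier, in the spirit of LENp:
# each input word is removed in turn and the drop in the predicted
# class probability is used as that word's local relevance.
from typing import Callable, List, Tuple

def explain_by_perturbation(
    predict_proba: Callable[[List[str]], List[float]],  # texts -> P(class) per text
    text: str,
) -> List[Tuple[str, float]]:
    words = text.split()
    base = predict_proba([text])[0]
    scores = []
    for i, w in enumerate(words):
        # Perturb the input by dropping one word and re-scoring the classifier.
        perturbed = " ".join(words[:i] + words[i + 1:])
        scores.append((w, base - predict_proba([perturbed])[0]))
    # Words whose removal lowers the prediction the most are most relevant locally.
    return sorted(scores, key=lambda p: p[1], reverse=True)

if __name__ == "__main__":
    # Toy stand-in classifier (hypothetical): probability of the "positive"
    # class grows with the number of positive cue words it sees.
    cues = {"great", "excellent", "love"}
    toy = lambda texts: [
        min(1.0, 0.2 + 0.3 * sum(w.lower() in cues for w in t.split())) for t in texts
    ]
    print(explain_by_perturbation(toy, "I love this great phone"))
```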
Related papers
- LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452]
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But can they really "reason" over natural language?
This question has been receiving significant research attention, and many reasoning skills, such as commonsense, numerical, and qualitative reasoning, have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and a "chain-of-thought" knowledge distillation fine-tuning technique to assess the performance of the models.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- MaNtLE: Model-agnostic Natural Language Explainer [9.43206883360088]
We introduce MaNtLE, a model-agnostic natural language explainer that analyzes multiple classifier predictions.
MaNtLE uses multi-task training on thousands of synthetic classification tasks to generate faithful explanations.
Simulated user studies indicate that, on average, MaNtLE-generated explanations are at least 11% more faithful compared to LIME and Anchors explanations.
arXiv Detail & Related papers (2023-05-22T12:58:06Z)
- APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning [73.3035118224719]
We propose APOLLO, an adaptively pretrained language model that has improved logical reasoning abilities.
APOLLO performs comparably on ReClor and outperforms baselines on LogiQA.
arXiv Detail & Related papers (2022-12-19T07:40:02Z)
- Discourse-Aware Graph Networks for Textual Logical Reasoning [142.0097357999134]
Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence).
We propose logic structural-constraint modeling to solve logical reasoning QA and introduce discourse-aware graph networks (DAGNs).
The networks first construct logic graphs leveraging in-line discourse connectives and generic logic theories, then learn logic representations by end-to-end evolving the logic relations with an edge-reasoning mechanism and updating the graph features.
arXiv Detail & Related papers (2022-07-04T14:38:49Z)
- The Unreliability of Explanations in Few-Shot In-Context Learning [50.77996380021221]
We focus on two NLP tasks that involve reasoning over text, namely question answering and natural language inference.
We show that explanations judged as good by humans--those that are logically consistent with the input--usually indicate more accurate predictions.
We present a framework for calibrating model predictions based on the reliability of the explanations.
arXiv Detail & Related papers (2022-05-06T17:57:58Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN).
Compared to other approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic [24.868479255640718]
Natural language inference aims to determine the logical relationship between two sentences among the target labels Entailment, Contradiction, and Neutral.
Deep learning models have become a prevailing approach to NLI, but they lack interpretability and explainability.
In this work, we address the explainability for NLI by weakly supervised logical reasoning.
arXiv Detail & Related papers (2021-09-18T13:04:23Z)
- Logic Explained Networks [27.800583434727805]
We show how a mindful design of the networks leads to a family of interpretable deep learning models called Logic Explained Networks (LENs).
LENs only require their inputs to be human-understandable predicates, and they provide explanations in terms of simple First-Order Logic (FOL) formulas (a minimal sketch of this explanation format follows the list below).
LENs may yield better classifications than established white-box models, such as decision trees and Bayesian rule lists.
arXiv Detail & Related papers (2021-08-11T10:55:42Z)
- On the Veracity of Local, Model-agnostic Explanations in Audio Classification: Targeted Investigations with Adversarial Examples [5.744593856232663]
Local explanation methods such as LIME have become popular in MIR.
This paper reports on targeted investigations where we try to get more insight into the actual veracity of LIME's explanations.
arXiv Detail & Related papers (2021-07-19T17:54:10Z)
- Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards [0.0]
An emerging line of research in Explainable NLP is the creation of datasets enriched with human-annotated explanations and rationales.
While human-annotated explanations are used as ground-truth for the inference, there is a lack of systematic assessment of their consistency and rigour.
We propose a systematic annotation methodology, named Explanation Entailment Verification (EEV), to quantify the logical validity of human-annotated explanations.
arXiv Detail & Related papers (2021-05-05T10:59:26Z)
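
As a companion to the Logic Explained Networks entry above, here is a minimal sketch of the kind of explanation format LENs target for text: inputs are binary, human-understandable predicates (here, word-presence concepts) and the explanation is a simple Boolean/FOL-style rule over them. The predicate set and the rule are hypothetical illustrations of the format only, not a rule learned by a LEN or the LEN training procedure.

```python
# Minimal sketch (illustrative only) of a LEN-style logic explanation for text:
# the model's inputs are human-understandable binary predicates and the
# explanation for a class is a simple Boolean/FOL-style formula over them.
from typing import Dict

def concepts(text: str) -> Dict[str, bool]:
    """Map a document to binary, human-readable predicates (hypothetical concept set)."""
    words = set(text.lower().split())
    return {
        "mentions_goal": "goal" in words,
        "mentions_match": "match" in words,
        "mentions_election": "election" in words,
    }

def explanation_sports(c: Dict[str, bool]) -> bool:
    """Example class rule: mentions_goal AND mentions_match AND NOT mentions_election."""
    return c["mentions_goal"] and c["mentions_match"] and not c["mentions_election"]

if __name__ == "__main__":
    doc = "Late goal decides the match"
    c = concepts(doc)
    print(c, "-> sports" if explanation_sports(c) else "-> not sports")
```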
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.