Prediction or Comparison: Toward Interpretable Qualitative Reasoning
- URL: http://arxiv.org/abs/2106.02399v1
- Date: Fri, 4 Jun 2021 10:27:55 GMT
- Title: Prediction or Comparison: Toward Interpretable Qualitative Reasoning
- Authors: Mucheng Ren, Heyan Huang and Yang Gao
- Abstract summary: Current approaches use either semantic parsers to transform natural language inputs into logical expressions or a "black-box" model to solve them in one step.
In this work, we categorize qualitative reasoning tasks into two types: prediction and comparison.
In particular, we adopt neural network modules trained in an end-to-end manner to simulate the two reasoning processes.
- Score: 16.02199526395448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Qualitative relationships illustrate how changing one property (e.g., moving
velocity) affects another (e.g., kinetic energy) and constitute a considerable
portion of textual knowledge. Current approaches use either semantic parsers to
transform natural language inputs into logical expressions or a "black-box"
model to solve them in one step. The former has a limited application range,
while the latter lacks interpretability. In this work, we categorize
qualitative reasoning tasks into two types: prediction and comparison. In
particular, we adopt neural network modules trained in an end-to-end manner to
simulate the two reasoning processes. Experiments on two qualitative reasoning
question answering datasets, QuaRTz and QuaRel, show our methods' effectiveness
and generalization capability, and the intermediate outputs provided by the
modules make the reasoning process interpretable.
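As a loose illustration of the prediction/comparison split described in the abstract, the sketch below separates the two question types into explicit modules and exposes the intermediate signals (question type, relation direction) that make the reasoning inspectable. It is a hand-written, rule-based stand-in invented for this summary; the paper itself trains neural modules end-to-end, and all function names and the QuaRTz-style example are assumptions.

```python
# Hypothetical, rule-based stand-in for the prediction and comparison modules.
# The paper trains neural modules end-to-end; this sketch only illustrates the
# kind of intermediate outputs (question type, relation direction) that make
# the reasoning process inspectable.

def detect_question_type(question: str) -> str:
    """Intermediate output 1: route the question to a reasoning module."""
    comparison_cues = ("which", "compared to", "more than", "less than")
    q = question.lower()
    return "comparison" if any(cue in q for cue in comparison_cues) else "prediction"

def extract_relation(knowledge: str) -> int:
    """Intermediate output 2: direction of the qualitative relation.
    +1 if the two properties move together, -1 if they move oppositely."""
    text = knowledge.lower()
    return -1 if ("decreases" in text or "the less" in text) else +1

def predict(relation: int, change: int) -> str:
    """Prediction module: given a change (+1/-1) in the cause, predict the effect."""
    return "increases" if relation * change > 0 else "decreases"

def compare(relation: int, cause_a: float, cause_b: float) -> str:
    """Comparison module: pick the scenario with the larger effect."""
    return "A" if relation * (cause_a - cause_b) > 0 else "B"

if __name__ == "__main__":
    knowledge = "The faster an object moves, the more kinetic energy it has."
    question = "If a car speeds up, what happens to its kinetic energy?"
    q_type = detect_question_type(question)    # -> 'prediction'
    relation = extract_relation(knowledge)     # -> +1 (properties move together)
    print(q_type, relation, predict(relation, change=+1))  # prediction 1 increases
```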
Related papers
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI make it possible to mitigate the limited interpretability of Transformer-based similarity models by leveraging improved explanation methods.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- Adversarial Attacks on the Interpretation of Neuron Activation Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models.
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
arXiv Detail & Related papers (2023-06-12T19:54:33Z)
- Interpretable multimodal sentiment analysis based on textual modality descriptions by using large-scale language models [1.4213973379473654]
Multimodal sentiment analysis is an important area for understanding users' internal states.
Previous works have attempted to use attention weights or vector distributions to provide interpretability.
This study proposes a novel approach that provides interpretability by converting nonverbal modalities into text descriptions.
arXiv Detail & Related papers (2023-05-07T06:48:06Z)
- Disentangling Reasoning Capabilities from Language Models with Compositional Reasoning Transformers [72.04044221898059]
ReasonFormer is a unified reasoning framework for mirroring the modular and compositional reasoning process of humans.
The representation module (automatic thinking) and reasoning modules (controlled thinking) are disentangled to capture different levels of cognition.
The unified reasoning framework solves multiple tasks with a single model and performs both training and inference in an end-to-end manner.
arXiv Detail & Related papers (2022-10-20T13:39:55Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with the expressed logic (a toy sketch of this check appears after the related-papers list below).
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- OPERA: Operation-Pivoted Discrete Reasoning over Text [33.36388276371693]
OPERA is an operation-pivoted discrete reasoning framework for machine reading comprehension.
It uses lightweight symbolic operations as neural modules to improve both reasoning ability and interpretability.
Experiments on the DROP and RACENum datasets demonstrate OPERA's reasoning ability.
arXiv Detail & Related papers (2022-04-29T15:41:47Z)
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence.
arXiv Detail & Related papers (2021-06-12T17:06:13Z)
- On the Lack of Robust Interpretability of Neural Text Classifiers [14.685352584216757]
We assess the robustness of interpretations of neural text classifiers based on pretrained Transformer encoders.
Both tests show surprising deviations from expected behavior, raising questions about the extent of insights that practitioners may draw from interpretations.
arXiv Detail & Related papers (2021-06-08T18:31:02Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection [21.02924712220406]
We build hierarchical explanations by detecting feature interactions.
Such explanations visualize how words and phrases are combined at different levels of the hierarchy.
Experiments show the effectiveness of the proposed method in providing explanations both faithful to models and interpretable to humans.
arXiv Detail & Related papers (2020-04-04T20:56:37Z)
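As referenced in the Logical Satisfiability of Counterfactuals entry above, the faithfulness check it describes can be sketched in a few lines: build a counterfactual hypothesis by negating the predicate stated in the explanation, then test whether the model's prediction flips accordingly. The toy model, the label handling, and every name below are assumptions made for illustration, not that paper's actual implementation.

```python
# Hypothetical sketch of a counterfactual faithfulness check for NLI explanations.
# A real system would parse logical predicates out of a free-text explanation;
# here the counterfactual hypothesis is supplied directly for illustration.
from typing import Callable

def counterfactual_is_consistent(
    model: Callable[[str, str], str],  # maps (premise, hypothesis) -> NLI label
    premise: str,
    hypothesis: str,
    negated_hypothesis: str,           # counterfactual built from the explanation's predicate
) -> bool:
    """If the stated logic truly drives the prediction, negating the predicate
    should flip entailment to contradiction (and vice versa)."""
    original = model(premise, hypothesis)
    counterfactual = model(premise, negated_hypothesis)
    flipped = {"entailment": "contradiction", "contradiction": "entailment"}
    return counterfactual == flipped.get(original, original)

# Toy usage with a dummy keyword model standing in for a trained NLI classifier.
def dummy_model(premise: str, hypothesis: str) -> str:
    return "contradiction" if " not " in f" {hypothesis} " else "entailment"

print(counterfactual_is_consistent(
    dummy_model,
    premise="A dog is running in the park.",
    hypothesis="An animal is outdoors.",
    negated_hypothesis="An animal is not outdoors.",
))  # True: the dummy model's behaviour matches its stated logic
```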