XTE: Explainable Text Entailment
- URL: http://arxiv.org/abs/2009.12431v1
- Date: Fri, 25 Sep 2020 20:49:07 GMT
- Title: XTE: Explainable Text Entailment
- Authors: Vivian S. Silva, André Freitas, Siegfried Handschuh
- Abstract summary: Entailment is the task of determining whether a piece of text logically follows from another piece of text.
XTE - Explainable Text Entailment - is a novel composite approach for recognizing text entailment.
- Score: 8.036150169408241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text entailment, the task of determining whether a piece of text logically
follows from another piece of text, is a key component in NLP, providing input
for many semantic applications such as question answering, text summarization,
information extraction, and machine translation, among others. Entailment
scenarios can range from a simple syntactic variation to more complex semantic
relationships between pieces of text, but most approaches try a
one-size-fits-all solution that usually favors some scenario to the detriment
of another. Furthermore, for entailments requiring world knowledge, most
systems still work as a "black box", providing a yes/no answer that does not
explain the underlying reasoning process. In this work, we introduce XTE -
Explainable Text Entailment - a novel composite approach for recognizing text
entailment which analyzes the entailment pair to decide whether it must be
resolved syntactically or semantically. Also, if semantic matching is
involved, we make the answer interpretable, using external knowledge bases
composed of structured lexical definitions to generate natural language
justifications that explain the semantic relationship holding between the
pieces of text. Besides outperforming well-established entailment algorithms,
our composite approach takes an important step towards Explainable AI, allowing
interpretation of the inference model and making the semantic reasoning process
explicit and understandable.
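To make the routing idea concrete, here is a minimal sketch of a composite entailment pipeline, assuming a toy lexical-overlap test for the syntactic route and a small dictionary of lexical definitions standing in for the external knowledge bases; it illustrates the idea only and is not the authors' implementation.

```python
# Illustrative sketch of a composite entailment router (not the XTE code).
# Assumption: a lexical-overlap test decides whether the pair can be resolved
# syntactically; otherwise a definition knowledge base (word -> gloss) is used
# to build a natural language justification for the semantic route.
from dataclasses import dataclass

@dataclass
class EntailmentResult:
    label: str          # "entailment" or "unknown"
    route: str          # "syntactic" or "semantic"
    justification: str  # natural language explanation (empty on the syntactic route)

def tokens(text: str) -> set:
    return {w.strip(".,!?;").lower() for w in text.split()}

def decide(text: str, hypothesis: str, definition_kb: dict) -> EntailmentResult:
    t, h = tokens(text), tokens(hypothesis)

    # Syntactic route: the hypothesis vocabulary is fully covered by the text.
    if h <= t:
        return EntailmentResult("entailment", "syntactic", "")

    # Semantic route: link uncovered hypothesis words to text words through
    # their lexical definitions and keep the glosses as the explanation.
    evidence = []
    for word in h - t:
        for text_word in t:
            gloss = definition_kb.get(text_word, "")
            if word in tokens(gloss):
                evidence.append(f"'{text_word}' is defined as '{gloss}', which covers '{word}'.")
                break
    if evidence:
        return EntailmentResult("entailment", "semantic", " ".join(evidence))
    return EntailmentResult("unknown", "semantic", "No definitional link found.")

# Toy usage with a hypothetical mini knowledge base of structured definitions.
kb = {"violin": "a string instrument played with a bow"}
print(decide("She plays the violin.", "She plays an instrument.", kb))
```

The actual system goes well beyond word overlap and dictionary glosses, but the shape of the decision (route first, then justify the semantic case from definitions) is what the abstract describes.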
Related papers
- H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic and semantic (textual) approaches in a two-stage process to address limitations.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
arXiv Detail & Related papers (2024-06-29T21:24:19Z)
- Clash of the Explainers: Argumentation for Context-Appropriate Explanations [6.8285745209093145]
There is no single explanation approach that is best suited to every context.
For AI explainability to be effective, explanations and how they are presented need to be oriented towards the stakeholder receiving the explanation.
We propose a modular reasoning system consisting of a given mental model of the relevant stakeholder, a reasoner component that solves the argumentation problem generated by a multi-explainer component, and an AI model that is to be explained suitably to the stakeholder of interest.
arXiv Detail & Related papers (2023-12-12T09:52:30Z)
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
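A rough sketch of the decomposition step described in the entry above, assuming a hypothetical `complete(prompt)` callable that wraps whatever large language model is available; the prompt wording and output parsing are illustrative, not the paper's protocol.

```python
# Illustrative sketch (not the paper's implementation): ask an LLM to make
# implicitly communicated content explicit as a list of short propositions,
# which can then be embedded or compared instead of the literal text alone.
def decompose(text: str, complete) -> list[str]:
    """`complete` is a hypothetical callable: prompt string in, completion string out."""
    prompt = (
        "List, one per line, short propositions that a reader could reasonably "
        "infer from the following text, including content that is only implied:\n\n"
        f"{text}\n\nPropositions:"
    )
    raw = complete(prompt)
    # Keep non-empty lines and strip any leading list markers the model adds.
    return [line.lstrip("-*0123456789. ").strip()
            for line in raw.splitlines() if line.strip()]

# Example with a stand-in for the model call:
fake_llm = lambda prompt: "- The speaker supports the proposal.\n- The proposal has critics."
print(decompose("Despite the usual complaints, I say we go ahead with it.", fake_llm))
```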
- Syntactic Complexity Identification, Measurement, and Reduction Through Controlled Syntactic Simplification [0.0]
We present a classical syntactic dependency-based approach to split and rephrase compound and complex sentences into sets of simplified sentences.
The paper also introduces an algorithm to identify and measure a sentence's syntactic complexity.
This work was accepted and presented at the International Workshop on Learning with Knowledge Graphs (IWLKG) at the WSDM-2023 conference.
arXiv Detail & Related papers (2023-04-16T13:13:58Z)
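As a minimal, hedged illustration of dependency-based splitting (using spaCy as an assumed parser; the paper's own algorithm and its complexity measure are not reproduced here), a compound sentence can be cut at coordinating conjunctions that join verbal heads:

```python
# Illustrative sketch, not the paper's algorithm: split a compound sentence at
# coordinating conjunctions that join verbal heads, using a spaCy dependency parse.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def split_compound(sentence: str) -> list[str]:
    doc = nlp(sentence)
    cut_points = [tok.i for tok in doc
                  if tok.dep_ == "cc" and tok.head.pos_ in ("VERB", "AUX")]
    if not cut_points:
        return [sentence]
    pieces, start = [], 0
    for i in cut_points:
        pieces.append(doc[start:i].text.strip(" ,;"))
        start = i + 1                      # drop the conjunction itself
    pieces.append(doc[start:].text.strip(" ,;"))
    return [p for p in pieces if p]

print(split_compound("The committee approved the budget and the chair announced the schedule."))
# Rough expectation: ['The committee approved the budget', 'the chair announced the schedule.']
```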
- PropSegmEnt: A Large-Scale Corpus for Proposition-Level Segmentation and Entailment Recognition [63.51569687229681]
We argue for the need to recognize the textual entailment relation of each proposition in a sentence individually.
We propose PropSegmEnt, a corpus of over 45K propositions annotated by expert human raters.
Our dataset structure resembles the tasks of (1) segmenting sentences within a document into the set of propositions, and (2) classifying the entailment relation of each proposition with respect to a different yet topically-aligned document.
arXiv Detail & Related papers (2022-12-21T04:03:33Z)
- Dense Paraphrasing for Textual Enrichment [7.6233489924270765]
We define Dense Paraphrasing (DP) as the process of rewriting a textual expression (lexeme or phrase) such that it reduces ambiguity while also making explicit the underlying semantics that is not (necessarily) expressed in the economy of sentence structure.
We build the first complete DP dataset, provide the scope and design of the annotation task, and present results demonstrating how this DP process can enrich a source text to improve inferencing and QA task performance.
arXiv Detail & Related papers (2022-10-20T19:58:31Z)
- Textual Entailment Recognition with Semantic Features from Empirical Text Representation [60.31047947815282]
A text entails a hypothesis if and only if the truth of the hypothesis follows from the text.
In this paper, we propose a novel approach to identifying the textual entailment relationship between text and hypothesis.
We employ an element-wise Manhattan distance vector-based feature that can identify the semantic entailment relationship between the text-hypothesis pair.
arXiv Detail & Related papers (2022-10-18T10:03:51Z)
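A small sketch of the element-wise Manhattan distance feature mentioned in the entry above; the `embed` function below is a toy stand-in, not the paper's empirical text representation.

```python
# Illustrative sketch: an element-wise Manhattan (L1) distance vector between
# text and hypothesis embeddings, usable as the input feature of an entailment
# classifier. The embedding function is a toy stand-in, not the paper's method.
import numpy as np

def embed(sentence: str, dim: int = 8) -> np.ndarray:
    """Toy stand-in 'embedding' for demonstration purposes only."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.standard_normal(dim)

def manhattan_feature(text: str, hypothesis: str) -> np.ndarray:
    t, h = embed(text), embed(hypothesis)
    return np.abs(t - h)          # element-wise |t_i - h_i|, kept as a vector

feat = manhattan_feature("A man is playing a guitar.", "A person plays an instrument.")
print(feat.shape)                 # the whole vector (not just its sum) feeds the classifier
```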
- Revisiting the Roles of "Text" in Text Games [102.22750109468652]
This paper investigates the roles of text in the face of different reinforcement learning challenges.
We propose a simple scheme to extract relevant contextual information into an approximate state hash.
Such a lightweight plug-in achieves competitive performance with state-of-the-art text agents.
arXiv Detail & Related papers (2022-10-15T21:52:39Z)
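A hedged sketch of what an approximate state hash over game text might look like; the keyword filter below (location and inventory lines) is an assumption for illustration, not the scheme proposed in the paper.

```python
# Illustrative sketch: reduce a verbose text-game observation to an approximate
# state hash by keeping only lines that look state-relevant, so that
# superficially different descriptions of the same state collide.
# The keyword filter is an assumption, not the paper's scheme.
import hashlib

STATE_KEYWORDS = ("you are in", "you are carrying", "inventory", "exits")

def approximate_state_hash(observation: str) -> str:
    relevant = sorted(
        line.strip().lower()
        for line in observation.splitlines()
        if any(k in line.lower() for k in STATE_KEYWORDS)
    )
    return hashlib.sha1("\n".join(relevant).encode("utf-8")).hexdigest()[:12]

obs_a = "You are in the kitchen.\nA soft breeze blows.\nYou are carrying: a key."
obs_b = "A soft breeze blows.\nYou are in the kitchen.\nYou are carrying: a key."
print(approximate_state_hash(obs_a) == approximate_state_hash(obs_b))  # True
```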
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.