Discourse-Aware Graph Networks for Textual Logical Reasoning
- URL: http://arxiv.org/abs/2207.01450v2
- Date: Wed, 19 Apr 2023 03:53:59 GMT
- Title: Discourse-Aware Graph Networks for Textual Logical Reasoning
- Authors: Yinya Huang, Lemao Liu, Kun Xu, Meng Fang, Liang Lin, and Xiaodan
Liang
- Abstract summary: Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence).
We propose logic structural-constraint modeling to solve the logical reasoning QA and introduce discourse-aware graph networks (DAGNs).
The networks first construct logic graphs leveraging in-line discourse connectives and generic logic theories, then learn logic representations by end-to-end evolving the logic relations with an edge-reasoning mechanism and updating the graph features.
- Score: 142.0097357999134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Textual logical reasoning, especially question-answering (QA) tasks with
logical reasoning, requires awareness of particular logical structures. The
passage-level logical relations represent entailment or contradiction between
propositional units (e.g., a concluding sentence). However, such structures are
unexplored as current QA systems focus on entity-based relations. In this work,
we propose logic structural-constraint modeling to solve the logical reasoning
QA and introduce discourse-aware graph networks (DAGNs). The networks first
construct logic graphs leveraging in-line discourse connectives and generic
logic theories, then learn logic representations by evolving the logic
relations end-to-end with an edge-reasoning mechanism and updating the graph
features. This pipeline is applied to a general encoder, whose fundamental
features are joined with the high-level logic features for answer prediction.
Experiments on three textual logical reasoning datasets demonstrate the
reasonableness of the logical structures built in DAGNs and the effectiveness of
the learned logic features. Moreover, zero-shot transfer results show the
features' generality to unseen logical texts.
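The graph-construction step described in the abstract can be illustrated with a minimal sketch: nodes are elementary discourse units (EDUs) obtained by splitting on in-line discourse connectives, and edges are typed by the relation each connective signals. The connective inventory, splitting heuristic, and relation labels below are simplified assumptions for illustration, not the authors' implementation.

```python
# Sketch: build a logic graph whose nodes are elementary discourse units
# (EDUs) and whose typed edges come from in-line discourse connectives.
# The connective-to-relation mapping is a simplified assumption.
import re

CONNECTIVES = {
    "because": "explanation",
    "therefore": "inference",
    "but": "contrast",
    "if": "condition",
}

def build_logic_graph(text):
    # Split on connectives to get candidate EDUs; connectives are kept
    # by the capturing group so they can label the edge they introduce.
    pattern = r"\b(" + "|".join(CONNECTIVES) + r")\b"
    parts = [p.strip(" ,.") for p in re.split(pattern, text.lower())]
    nodes, edges = [], []
    pending = None  # relation label waiting for its target EDU
    for part in parts:
        if part in CONNECTIVES:
            pending = CONNECTIVES[part]
        elif part:
            nodes.append(part)
            if pending is not None and len(nodes) >= 2:
                edges.append((len(nodes) - 2, pending, len(nodes) - 1))
            pending = None
    return nodes, edges

nodes, edges = build_logic_graph(
    "The roads are wet because it rained, therefore driving is risky."
)
# nodes: three EDUs; edges: (0, "explanation", 1) and (1, "inference", 2)
```

In the full model these node and edge assignments would seed a graph network whose edge types are then refined end-to-end rather than fixed.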
Related papers
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parsing framework that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
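The fuzzy-logic continuous relaxation mentioned above can be sketched with differentiable groundings of the logical connectives. Product t-norm semantics are assumed here purely for illustration; the operators actually used in LOGICSEG may differ.

```python
# Sketch of fuzzy-logic continuous relaxation: logical connectives become
# differentiable operations over truth values in [0, 1], so a grounded
# formula can act as a training signal. Product t-norm is assumed.

def f_and(a, b):      # conjunction: product t-norm
    return a * b

def f_or(a, b):       # disjunction: probabilistic sum (dual co-norm)
    return a + b - a * b

def f_not(a):         # negation
    return 1.0 - a

def f_implies(a, b):  # material implication: not(a) or b
    return f_or(f_not(a), b)

# A rule such as "cat => animal" grounded on soft predictions yields a
# differentiable truth value; (1 - truth) can serve as a logic penalty.
p_cat, p_animal = 0.9, 0.8
truth = f_implies(p_cat, p_animal)
loss = 1.0 - truth
```

Because every operator is smooth in its arguments, gradients of the penalty flow back into the network producing `p_cat` and `p_animal`, which is what "logic-induced network training" amounts to in this relaxed setting.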
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Modeling Hierarchical Reasoning Chains by Linking Discourse Units and
Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z) - Query Structure Modeling for Inductive Logical Reasoning Over Knowledge
Graphs [67.043747188954]
We propose a structure-modeled textual encoding framework for inductive logical reasoning over KGs.
It encodes linearized query structures and entities using pre-trained language models to find answers.
We conduct experiments on two inductive logical reasoning datasets and three transductive datasets.
arXiv Detail & Related papers (2023-05-23T01:25:29Z) - Logiformer: A Two-Branch Graph Transformer Network for Interpretable
Logical Reasoning [10.716971124214332]
We propose an end-to-end model Logiformer which utilizes a two-branch graph transformer network for logical reasoning of text.
We introduce different extraction strategies to split the text into two sets of logical units, and construct the logical graph and the syntax graph respectively.
The reasoning process provides the interpretability by employing the logical units, which are consistent with human cognition.
arXiv Detail & Related papers (2022-05-02T08:34:59Z) - Logic-Driven Context Extension and Data Augmentation for Logical
Reasoning of Text [65.24325614642223]
We propose to understand logical symbols and expressions in the text to arrive at the answer.
Based on such logical information, we put forward a context extension framework and a data augmentation algorithm.
Our method achieves the state-of-the-art performance, and both logic-driven context extension framework and data augmentation algorithm can help improve the accuracy.
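Logic-driven augmentation of this kind typically exploits equivalences such as the contrapositive ("if A then B" is equivalent to "if not B then not A"). A minimal sketch follows; the template matching and negation wording are simplified assumptions, not the paper's actual extraction pipeline.

```python
# Sketch: augment a conditional statement with its logically equivalent
# contrapositive ("if A then B" <=> "if not B then not A").
# The regex template below is a simplified assumption for illustration.
import re

def contrapositive(sentence):
    m = re.match(r"if (.+), then (.+?)\.?$", sentence.strip(), re.IGNORECASE)
    if m is None:
        return None  # not a recognizable conditional
    a, b = m.group(1), m.group(2)
    return (f"If it is not the case that {b}, "
            f"then it is not the case that {a}.")

aug = contrapositive("If the alarm rings, then everyone leaves the building.")
```

Pairing each conditional with its contrapositive gives the model label-preserving variants of the same logical content, which is the sense in which such augmentation helps accuracy.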
arXiv Detail & Related papers (2021-05-08T10:09:36Z) - Public Announcement Logic in HOL [0.0]
A shallow semantic embedding for public announcement logic with relativized common knowledge is presented.
This embedding enables the first-time automation of this logic with off-the-shelf theorem provers for classical higher-order logic.
arXiv Detail & Related papers (2020-10-02T06:46:02Z) - VQA-LOL: Visual Question Answering under the Lens of Logic [58.30291671877342]
We investigate whether visual question answering systems trained to answer a question about an image, are able to answer the logical composition of multiple such questions.
We construct an augmentation of the VQA dataset as a benchmark, with questions containing logical compositions and linguistic transformations.
We propose our Lens of Logic (LOL) model which uses question-attention and logic-attention to understand logical connectives in the question, and a novel Fréchet-Compatibility Loss.
arXiv Detail & Related papers (2020-02-19T17:57:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.