Logiformer: A Two-Branch Graph Transformer Network for Interpretable
Logical Reasoning
- URL: http://arxiv.org/abs/2205.00731v1
- Date: Mon, 2 May 2022 08:34:59 GMT
- Title: Logiformer: A Two-Branch Graph Transformer Network for Interpretable
Logical Reasoning
- Authors: Fangzhi Xu, Qika Lin, Jun Liu, Yudai Pan, Lingling Zhang
- Abstract summary: We propose an end-to-end model, Logiformer, which utilizes a two-branch graph transformer network for logical reasoning over text.
We introduce different extraction strategies to split the text into two sets of logical units, and construct the logical graph and the syntax graph respectively.
The reasoning process provides interpretability by employing logical units that are consistent with human cognition.
- Score: 10.716971124214332
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine reading comprehension has attracted wide attention, since it
explores the potential of models for text understanding. To further equip
machines with reasoning capability, the challenging task of logical reasoning
has been proposed. Previous works on logical reasoning have proposed
strategies to extract logical units from different aspects. However, modeling
the long-distance dependencies among logical units remains a challenge. It is
also demanding to uncover the logical structures of the text and to fuse the
discrete logic into the continuous text embedding space. To tackle these
issues, we propose an end-to-end model, Logiformer, which utilizes a
two-branch graph transformer network for logical reasoning over text. Firstly,
we introduce different extraction strategies to split the text into two sets
of logical units, and construct the logical graph and the syntax graph
respectively. The logical graph models the causal relations for the logical
branch, while the syntax graph captures the co-occurrence relations for the
syntax branch. Secondly, to model the long-distance dependencies, the node
sequence from each graph is fed into a fully connected graph transformer
structure. The two adjacency matrices are viewed as attention biases for the
graph transformer layers, which map the discrete logical structures into the
continuous text embedding space. Thirdly, a dynamic gate mechanism and a
question-aware self-attention module are introduced before answer prediction
to update the features. The reasoning process provides interpretability by
employing logical units that are consistent with human cognition. The
experimental results show the superiority of our model, which outperforms
state-of-the-art single models on two logical reasoning benchmarks.
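
To make the two core steps of the abstract concrete, the following is a
minimal sketch (not the authors' released code) of a graph transformer layer
that adds an adjacency matrix as an attention bias, plus a simple dynamic gate
fusing the logical and syntax branches. The tensor shapes, head count,
additive bias form, and gate formulation are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a graph transformer layer
# that injects a discrete adjacency matrix as an additive attention
# bias, plus a simple dynamic gate fusing the two branches. Shapes,
# head count, and the exact bias/gate forms are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasedGraphTransformerLayer(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, n_units, dim) embeddings of the logical units
        # adj:   (batch, n_units, n_units) 0/1 adjacency of one graph
        b, n, _ = nodes.shape
        q, k, v = self.qkv(nodes).chunk(3, dim=-1)
        split = lambda t: t.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        # Attention stays fully connected (handles long-distance
        # dependency); edges present in the graph add a bias, so the
        # discrete structure shapes the continuous attention weights.
        scores = scores + adj.unsqueeze(1)  # broadcast over heads
        ctx = F.softmax(scores, dim=-1) @ v
        return self.out(ctx.transpose(1, 2).reshape(b, n, -1))

class DynamicGate(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, logic_feat: torch.Tensor, syntax_feat: torch.Tensor) -> torch.Tensor:
        # Per-position sigmoid gate interpolating the two branches.
        g = torch.sigmoid(self.gate(torch.cat([logic_feat, syntax_feat], dim=-1)))
        return g * logic_feat + (1.0 - g) * syntax_feat

# Toy usage: two graphs over the same 5 logical units.
layer = BiasedGraphTransformerLayer(dim=64)
fuse = DynamicGate(dim=64)
units = torch.randn(1, 5, 64)
logic_adj = torch.eye(5).unsqueeze(0)   # stand-in adjacency matrices
syntax_adj = torch.ones(1, 5, 5)
fused = fuse(layer(units, logic_adj), layer(units, syntax_adj))
print(fused.shape)  # torch.Size([1, 5, 64])
```

Because attention remains fully connected, every pair of logical units can
interact regardless of distance, while the additive bias lets the discrete
graph structure reweight the continuous attention scores, matching the
abstract's description.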
Related papers
- PathReasoner: Modeling Reasoning Path with Equivalent Extension for Logical Question Answering [27.50008553118866]
We model the logical reasoning task by transforming each logical sample into reasoning paths.
To expand the diversity of the logical samples, we propose an atom extension strategy supported by equivalent logical formulas.
Experiments show that PathReasoner achieves competitive performance on two logical reasoning benchmarks and strong generalization ability.
arXiv Detail & Related papers (2024-05-29T14:14:05Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Discourse-Aware Graph Networks for Textual Logical Reasoning [142.0097357999134]
Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence).
We propose logic structural-constraint modeling to solve logical reasoning QA and introduce discourse-aware graph networks (DAGNs).
The networks first construct logic graphs leveraging in-line discourse connectives and generic logic theories, then learn logic representations by end-to-end evolving the logic relations with an edge-reasoning mechanism and updating the graph features (a toy sketch of this connective-based graph construction appears after this list).
arXiv Detail & Related papers (2022-07-04T14:38:49Z)
- AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension [21.741085513119785]
Recent machine reading comprehension datasets such as ReClor and LogiQA require performing logical reasoning over text.
We present a neural-symbolic approach which, to predict an answer, passes messages over a graph representing logical relations between text units.
arXiv Detail & Related papers (2022-03-16T23:51:01Z)
- Logic-Driven Context Extension and Data Augmentation for Logical Reasoning of Text [65.24325614642223]
We propose to understand logical symbols and expressions in the text to arrive at the answer.
Based on such logical information, we put forward a context extension framework and a data augmentation algorithm.
Our method achieves state-of-the-art performance, and both the logic-driven context extension framework and the data augmentation algorithm help improve accuracy.
arXiv Detail & Related papers (2021-05-08T10:09:36Z)
- LogicalFactChecker: Leveraging Logical Operations for Fact Checking with Graph Module Network [111.24773949467567]
We propose LogicalFactChecker, a neural network approach capable of leveraging logical operations for fact checking.
It achieves state-of-the-art performance on TABFACT, a large-scale benchmark dataset.
arXiv Detail & Related papers (2020-04-28T17:04:19Z)
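
As referenced in the Discourse-Aware Graph Networks (DAGNs) entry above, a
logic graph can be built from in-line discourse connectives. The sketch below
is a toy illustration of that idea only; the connective list, the splitting
rule, and the adjacent-unit edge rule are assumptions, not DAGN's actual
pipeline.

```python
# Toy illustration (not DAGN's implementation): split a passage into
# units at discourse connectives and add an edge, labeled by the
# connective, between each pair of adjacent units it separates.
import re

CONNECTIVES = ["because", "therefore", "however", "thus", "unless"]

def build_logic_graph(passage: str):
    pattern = r"\b(" + "|".join(CONNECTIVES) + r")\b"
    pieces = re.split(pattern, passage, flags=re.IGNORECASE)
    units, edges = [], []
    pending = None  # connective waiting to label the next edge
    for piece in (p.strip() for p in pieces):
        if not piece:
            continue
        if piece.lower() in CONNECTIVES:
            pending = piece.lower()
        else:
            units.append(piece)
            if pending is not None and len(units) >= 2:
                edges.append((len(units) - 2, len(units) - 1, pending))
            pending = None
    return units, edges

units, edges = build_logic_graph(
    "It rained, therefore the game was canceled because the field flooded."
)
print(units)  # ['It rained,', 'the game was canceled', 'the field flooded.']
print(edges)  # [(0, 1, 'therefore'), (1, 2, 'because')]
```

Per the entry's summary, DAGNs then refine such a graph end-to-end with an
edge-reasoning mechanism; this sketch stops at graph construction.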