Logic-Driven Context Extension and Data Augmentation for Logical
Reasoning of Text
- URL: http://arxiv.org/abs/2105.03659v1
- Date: Sat, 8 May 2021 10:09:36 GMT
- Title: Logic-Driven Context Extension and Data Augmentation for Logical
Reasoning of Text
- Authors: Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin
Jiang, Ming Zhou and Nan Duan
- Abstract summary: We propose to understand logical symbols and expressions in text in order to arrive at the answer.
Based on this logical information, we put forward a context extension framework and a data augmentation algorithm.
Our method achieves state-of-the-art performance, and both the logic-driven context extension framework and the data augmentation algorithm help improve accuracy.
- Score: 65.24325614642223
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Logical reasoning of text requires understanding critical logical information
in the text and performing inference over it. Large-scale pre-trained models
for logical reasoning mainly focus on word-level semantics of text while
struggling to capture symbolic logic. In this paper, we propose to understand
logical symbols and expressions in the text to arrive at the answer. Based on
such logical information, we not only put forward a context extension framework
but also propose a data augmentation algorithm. The former extends the context
to cover implicit logical expressions, following logical equivalence laws. The
latter augments the training data with literally similar but logically different
instances to better capture logical information, especially logical negative and
conditional relationships. We conduct experiments on the ReClor dataset. The
results show that our method achieves state-of-the-art performance, and both the
logic-driven context extension framework and the data augmentation algorithm
help improve accuracy. Moreover, our multi-model ensemble system is the first to
surpass human performance on both the EASY and HARD sets of ReClor.
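To make the context extension idea concrete, here is a minimal sketch, not the paper's implementation, of closing a set of implications extracted from text under two logical equivalence laws the abstract alludes to: contraposition and transitivity. The literal/implication encoding, function names, and the example sentence are illustrative assumptions.

```python
# A minimal sketch of logic-driven context extension (illustrative, not the
# paper's implementation). An implication extracted from text is encoded as an
# (antecedent, consequent) pair of literals; a literal is a (symbol, polarity)
# tuple, e.g. ("R", True) for R and ("R", False) for not R.
from itertools import product


def negate(literal):
    symbol, polarity = literal
    return (symbol, not polarity)


def extend_context(implications):
    """Close a set of implications under two logical equivalence laws:
    contraposition (A -> B  entails  not B -> not A) and
    transitivity   (A -> B and B -> C  entail  A -> C)."""
    closure = set(implications)
    changed = True
    while changed:
        changed = False
        # Contraposition: every A -> B yields not B -> not A.
        for antecedent, consequent in list(closure):
            contrapositive = (negate(consequent), negate(antecedent))
            if contrapositive not in closure:
                closure.add(contrapositive)
                changed = True
        # Transitivity: chain A -> B with B -> C to get A -> C.
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and a != d and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure


# "If it rains (R), the game is cancelled (C)."
extracted = {(("R", True), ("C", True))}
for antecedent, consequent in sorted(extend_context(extracted)):
    print(antecedent, "->", consequent)
```

Run on the single extracted rule, the closure adds the contrapositive not C -> not R, which is the kind of implicit logical expression such a framework would append to the context.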
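Similarly, a hedged sketch of the data augmentation idea: starting from a conditional statement, one can derive literally similar but logically different variants (a negated consequent, the converse) alongside a logically equivalent contrapositive, pushing a model to attend to negation and conditional structure rather than surface overlap. The template, rewrite rules, and labels below are assumptions for illustration, not the paper's algorithm.

```python
# A minimal sketch of logic-driven data augmentation (illustrative assumptions
# throughout). From one conditional statement it derives surface-similar
# variants whose logic differs, plus one equivalent contrapositive as contrast.
import re


def augment(statement):
    """Generate variants of a statement of the form 'If X, then Y.'"""
    match = re.match(r"If (.+), then (.+)\.$", statement)
    if not match:
        return []
    x, y = match.groups()
    return [
        # Negating the consequent changes few tokens but flips the logic.
        (f"If {x}, then it is not the case that {y}.", "logically different"),
        # The converse swaps antecedent and consequent; it is NOT equivalent.
        (f"If {y}, then {x}.", "logically different"),
        # The contrapositive IS equivalent despite larger surface changes.
        (f"If it is not the case that {y}, then it is not the case that {x}.",
         "logically equivalent"),
    ]


for text, label in augment("If the budget passes, then taxes rise."):
    print(f"[{label}] {text}")
```

Pairs like the converse, which overlap the original almost token for token yet differ logically, target exactly the negative and conditional relationships the abstract highlights.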
Related papers
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) that deals with context at both the discourse and word levels as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text Generation [102.20036684996248]
We propose MURMUR, a neuro-symbolic modular approach to text generation from semi-structured data with multi-step reasoning.
We conduct experiments on two data-to-text generation tasks, WebNLG and LogicNLG.
arXiv Detail & Related papers (2022-12-16T17:36:23Z)
- Discourse-Aware Graph Networks for Textual Logical Reasoning [142.0097357999134]
Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence).
We propose logic structural-constraint modeling to solve logical reasoning QA and introduce discourse-aware graph networks (DAGNs).
The networks first construct logic graphs leveraging in-line discourse connectives and generic logic theories, then learn logic representations by evolving the logic relations end to end with an edge-reasoning mechanism and by updating the graph features.
arXiv Detail & Related papers (2022-07-04T14:38:49Z)
- PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation [44.78200830757109]
We propose a PLOG (Pretrained Logical Form Generator) framework to improve the generation fidelity.
PLOG is first pretrained on a table-to-logic-form generation task, then finetuned on downstream table-to-text tasks.
PLOG can learn logical inference from table-logic pairs much more reliably than from table-text pairs.
arXiv Detail & Related papers (2022-05-25T11:55:54Z)
- Improving Logical-Level Natural Language Generation with Topic-Conditioned Data Augmentation and Logical Form Generation [18.93964332724296]
We propose topic-conditioned data augmentation (TopicDA) to generate logical forms and textual descriptions directly from tables.
We introduce logical form generation (LG), a dual task of Logic2text that requires generating a valid logical form based on a text description of a table.
We also propose a semi-supervised learning approach to jointly train a Logic2text and an LG model with both labeled and augmented data.
arXiv Detail & Related papers (2021-12-12T13:50:18Z)
- LOGEN: Few-shot Logical Knowledge-Conditioned Text Generation with Self-training [76.90793623822866]
We propose a unified framework for logical knowledge-conditioned text generation in the few-shot setting.
Our approach leverages self-training and samples pseudo logical forms based on content and structure consistency.
arXiv Detail & Related papers (2021-12-02T16:49:41Z)
- Logic-Consistency Text Generation from Semantic Parses [32.543257899910216]
This paper first proposes SNOWBALL, a framework for logic-consistent text generation from semantic parses.
Second, we propose a novel automatic metric, BLEC, for evaluating the logical consistency between the semantic parses and generated texts.
arXiv Detail & Related papers (2021-08-02T01:12:18Z)