MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning
- URL: http://arxiv.org/abs/2203.00357v1
- Date: Tue, 1 Mar 2022 11:13:00 GMT
- Title: MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning
- Authors: Fangkai Jiao, Yangyang Guo, Xuemeng Song, Liqiang Nie
- Abstract summary: We propose MERIt, a MEta-path guided contrastive learning method for logical ReasonIng of text.
Two novel strategies serve as indispensable components of our method.
- Score: 63.50909998372667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Logical reasoning is of vital importance to natural language understanding.
Previous studies either employ graph-based models to incorporate prior
knowledge about logical relations, or introduce symbolic logic into neural
models through data augmentation. These methods, however, depend heavily on
annotated training data, and thus suffer from overfitting and poor
generalization caused by data sparsity. To address these two
problems, in this paper, we propose MERIt, a MEta-path guided contrastive
learning method for logical ReasonIng of text, to perform self-supervised
pre-training on abundant unlabeled text data. Two novel strategies serve as
indispensable components of our method. In particular, a strategy based on
meta-path is devised to discover the logical structure in natural texts,
followed by a counterfactual data augmentation strategy to eliminate the
information shortcut induced by pre-training. The experimental results on two
challenging logical reasoning benchmarks, i.e., ReClor and LogiQA, demonstrate
that our method outperforms SOTA baselines by significant margins.
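To make the contrastive pre-training idea concrete, the sketch below shows a minimal InfoNCE-style loss: an anchor representation is pulled toward a logically consistent (positive) context and pushed away from counterfactual (negative) contexts. This is a generic illustration of contrastive learning, not a reproduction of MERIt's actual meta-path construction or model; all function names and toy embeddings here are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Minimal InfoNCE contrastive loss (illustrative, not MERIt's code).

    The anchor is scored against one positive and several negative
    embeddings; the loss is cross-entropy with the positive as the
    correct class.
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity of the anchor to the positive (index 0) and negatives.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

# Toy embeddings: the positive is closely aligned with the anchor,
# the negatives are random (standing in for counterfactual contexts).
rng = np.random.default_rng(0)
anchor = np.array([1.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0])
negatives = [rng.normal(size=3) for _ in range(4)]
loss = info_nce_loss(anchor, positive, negatives)
print(float(loss))
```

A well-aligned positive yields a lower loss than a misaligned one, which is the signal that drives self-supervised pre-training on unlabeled text.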
Related papers
- Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic [51.967603572656266]
We introduce a consistent and theoretically grounded approach to annotating decompositional entailment.
We find that our new dataset, RDTE, has a substantially higher internal consistency (+9%) than prior decompositional entailment datasets.
We also find that training an RDTE-oriented entailment classifier via knowledge distillation and employing it in an entailment tree reasoning engine significantly improves both accuracy and proof quality.
arXiv Detail & Related papers (2024-02-22T18:55:17Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning [27.224364543134094]
We introduce a novel logic-driven data augmentation approach, AMR-LDA.
AMR-LDA converts the original text into an Abstract Meaning Representation (AMR) graph.
The modified AMR graphs are subsequently converted back into text to create augmented data.
arXiv Detail & Related papers (2023-05-21T23:16:26Z)
- Deep Manifold Learning for Reading Comprehension and Logical Reasoning Tasks with Polytuplet Loss [0.0]
The current trend in developing machine learning models for reading comprehension and logical reasoning tasks is focused on improving the models' abilities to understand and utilize logical rules.
This work focuses on providing a novel loss function and accompanying model architecture that has more interpretable components than some other models.
Our strategy involves emphasizing relative accuracy over absolute accuracy and can theoretically produce the correct answer with incomplete knowledge.
arXiv Detail & Related papers (2023-04-03T14:48:34Z)
- MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text Generation [102.20036684996248]
We propose MURMUR, a neuro-symbolic modular approach to text generation from semi-structured data with multi-step reasoning.
We conduct experiments on two data-to-text generation tasks, WebNLG and LogicNLG.
arXiv Detail & Related papers (2022-12-16T17:36:23Z)
- Evaluating Relaxations of Logic for Neural Networks: A Comprehensive Study [17.998891912502092]
We study the question of how best to relax logical expressions that represent labeled examples and knowledge about a problem.
We present theoretical and empirical criteria for characterizing which relaxation would perform best in various scenarios.
arXiv Detail & Related papers (2021-07-28T21:16:58Z)
- Logic-Driven Context Extension and Data Augmentation for Logical Reasoning of Text [65.24325614642223]
We propose to understand logical symbols and expressions in the text to arrive at the answer.
Based on such logical information, we put forward a context extension framework and a data augmentation algorithm.
Our method achieves the state-of-the-art performance, and both logic-driven context extension framework and data augmentation algorithm can help improve the accuracy.
arXiv Detail & Related papers (2021-05-08T10:09:36Z)
- AR-LSAT: Investigating Analytical Reasoning of Text [57.1542673852013]
We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge understanding and reasoning abilities are required to do well on this task.
arXiv Detail & Related papers (2021-04-14T02:53:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.