Learning Logic Rules for Document-level Relation Extraction
- URL: http://arxiv.org/abs/2111.05407v1
- Date: Tue, 9 Nov 2021 20:32:30 GMT
- Title: Learning Logic Rules for Document-level Relation Extraction
- Authors: Dongyu Ru and Changzhi Sun and Jiangtao Feng and Lin Qiu and Hao Zhou
and Weinan Zhang and Yong Yu and Lei Li
- Abstract summary: We propose LogiRE, a novel probabilistic model for document-level relation extraction by learning logic rules.
LogiRE treats logic rules as latent variables and consists of two modules: a rule generator and a relation extractor.
By introducing logic rules into neural networks, LogiRE can explicitly capture long-range dependencies and offer better interpretability.
- Score: 41.442030707813636
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Document-level relation extraction aims to identify relations between
entities in a whole document. Prior efforts to capture long-range dependencies
have relied heavily on implicitly powerful representations learned through
(graph) neural networks, which makes the model less transparent. To tackle this
challenge, in this paper, we propose LogiRE, a novel probabilistic model for
document-level relation extraction by learning logic rules. LogiRE treats logic
rules as latent variables and consists of two modules: a rule generator and a
relation extractor. The rule generator proposes logic rules that potentially
contribute to the final predictions, and the relation extractor outputs those
final predictions based on the generated rules. These two modules can be
efficiently optimized with the expectation-maximization (EM) algorithm. By
introducing logic rules into neural networks, LogiRE can explicitly capture
long-range dependencies and offer better interpretability. Empirical
results show that LogiRE significantly outperforms several strong baselines in
terms of relation extraction performance (by 1.8 F1 score) and logical consistency (by over 3.3
logic score). Our code is available at https://github.com/rudongyu/LogiRE.
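The abstract describes, but does not spell out here, how the rule generator and the relation extractor interact under EM. As a rough illustration only, the following is a minimal, self-contained Python sketch of that two-module, EM-style loop on a toy document graph. All names (FACTS, TARGET, the 2-chain rule space, the hard prior/weight updates) are assumptions made for this sketch; they do not reproduce LogiRE's actual model, which is available in the linked repository.

```python
from collections import defaultdict
from itertools import product

RELATIONS = ["authored_by", "works_at", "cites"]   # relations observed in the document
TARGET = "org_of_paper"                             # hypothetical target relation

# Toy document-level facts: (head, relation, tail) triples from one document.
FACTS = {("paperA", "authored_by", "alice"),
         ("alice", "works_at", "orgX"),
         ("paperB", "cites", "paperA")}
GOLD = {("paperA", "orgX")}                         # pairs annotated with TARGET
ENTITIES = {e for (h, _, t) in FACTS for e in (h, t)}

def body_holds(body, head, tail):
    """Does the length-2 rule body r1(head, y) & r2(y, tail) ground in FACTS?"""
    r1, r2 = body
    return any((head, r1, y) in FACTS and (y, r2, tail) in FACTS for y in ENTITIES)

# Rule generator: a distribution over candidate rule bodies (all 2-chains here).
BODIES = list(product(RELATIONS, repeat=2))
rule_prior = {b: 1.0 / len(BODIES) for b in BODIES}

# Relation extractor: one weight per rule, used to score entity pairs for TARGET.
rule_weight = defaultdict(float)

for step in range(5):
    # E-step: posterior over latent rules, proportional to the generator's prior
    # times how often each rule's body explains the gold pairs in this document.
    scores = {b: p * (sum(body_holds(b, h, t) for h, t in GOLD) + 1e-6)
              for b, p in rule_prior.items()}
    z = sum(scores.values())
    posterior = {b: s / z for b, s in scores.items()}

    # M-step: move the generator toward the posterior and refresh the extractor's
    # per-rule weights (a crude stand-in for gradient updates on both modules).
    for b in BODIES:
        rule_prior[b] = 0.5 * rule_prior[b] + 0.5 * posterior[b]
        rule_weight[b] = posterior[b]

def extract(head, tail):
    """Score a pair for TARGET by aggregating evidence from grounded rules."""
    return sum(w for b, w in rule_weight.items() if body_holds(b, head, tail))

candidates = [(h, t) for h in ENTITIES for t in ENTITIES if h != t]
print(sorted(((extract(h, t), h, t) for h, t in candidates), reverse=True)[:3])
```

Running this toy loop concentrates the rule distribution on the chain authored_by followed by works_at, and the extractor then scores (paperA, orgX) highest, which is the behavior the latent-rule formulation is meant to produce.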
Related papers
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework, to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates rule quality by incorporating facts from existing KGs (a toy rule-scoring sketch follows this list of related papers).
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks.
Existing methods suffer either from searching a large search space or from ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
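The ChatRule entry above mentions ranking candidate rules by their quality against facts in an existing KG. One standard way to make that concrete is to score a rule by its support and confidence over the KG's triples; the toy Python sketch below illustrates that scoring for a length-2 rule. It is an assumption-laden illustration with made-up entity and relation names, not ChatRule's actual ranking module.

```python
# Toy KG of (head, relation, tail) triples; all names here are invented.
KG = {("alice", "works_at", "orgX"),
      ("bob", "works_at", "orgX"),
      ("orgX", "located_in", "cityY"),
      ("alice", "lives_in", "cityY")}

ENTITIES = {e for (h, _, t) in KG for e in (h, t)}

def rule_quality(body, head_rel):
    """Support = body groundings whose head triple also holds in the KG;
    confidence = support / total number of body groundings."""
    r1, r2 = body
    groundings = [(x, z) for x in ENTITIES for z in ENTITIES for y in ENTITIES
                  if (x, r1, y) in KG and (y, r2, z) in KG]
    if not groundings:
        return 0, 0.0
    support = sum((x, head_rel, z) in KG for x, z in groundings)
    return support, support / len(groundings)

# Candidate rule: lives_in(x, z) <- works_at(x, y) & located_in(y, z)
print(rule_quality(("works_at", "located_in"), "lives_in"))   # -> (1, 0.5)
```

In this toy KG the rule is grounded for alice and bob, but only alice's head triple holds, so the rule gets support 1 and confidence 0.5; a ranking module of this kind would prefer rules with higher confidence over the KG.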