ChatRule: Mining Logical Rules with Large Language Models for Knowledge
Graph Reasoning
- URL: http://arxiv.org/abs/2309.01538v3
- Date: Mon, 22 Jan 2024 02:39:17 GMT
- Title: ChatRule: Mining Logical Rules with Large Language Models for Knowledge
Graph Reasoning
- Authors: Linhao Luo, Jiaxin Ju, Bo Xiong, Yuan-Fang Li, Gholamreza Haffari,
Shirui Pan
- Abstract summary: We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
- Score: 107.61997887260056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Logical rules are essential for uncovering the logical connections between
relations, which could improve reasoning performance and provide interpretable
results on knowledge graphs (KGs). Although there have been many efforts to
mine meaningful logical rules over KGs, existing methods suffer from
computationally intensive searches over the rule space and a lack of
scalability for large-scale KGs. Besides, they often ignore the semantics of
relations, which is crucial for uncovering logical connections. Recently, large
language models (LLMs) have shown impressive performance in the field of
natural language processing and various applications, owing to their emergent
ability and generalizability. In this paper, we propose a novel framework,
ChatRule, unleashing the power of large language models for mining logical
rules over knowledge graphs. Specifically, the framework is initiated with an
LLM-based rule generator, leveraging both the semantic and structural
information of KGs to prompt LLMs to generate logical rules. To refine the
generated rules, a rule ranking module estimates the rule quality by
incorporating facts from existing KGs. Last, the ranked rules can be used to
conduct reasoning over KGs. ChatRule is evaluated on four large-scale KGs,
w.r.t. different rule quality metrics and downstream tasks, showing the
effectiveness and scalability of our method.
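To make the described pipeline concrete, the following is a minimal sketch of the three stages in the abstract (LLM-based rule generation, rule ranking against KG facts, and rule-based reasoning). It is an illustration rather than the authors' implementation: the toy KG, the prompt wording, and the scoring via support and a simple confidence estimate are assumptions, and the LLM call itself is omitted; ChatRule's actual prompts and ranking metrics may differ.
```python
import random
from collections import defaultdict

# Toy KG as a set of (head, relation, tail) triples; a real KG would be
# loaded from a benchmark dataset.
KG = {
    ("alice", "mother_of", "bob"),
    ("bob", "father_of", "carol"),
    ("alice", "grandmother_of", "carol"),
}

def index_by_head(kg):
    """Index outgoing edges per head entity for path enumeration."""
    by_head = defaultdict(list)
    for h, r, t in kg:
        by_head[h].append((r, t))
    return by_head

def sample_closed_paths(kg, target_rel, n_samples=5):
    """Sample length-2 relation paths connecting the same entity pair as a
    target-relation triple; these act as seed rules shown to the LLM."""
    by_head = index_by_head(kg)
    paths = []
    for h, r, t in kg:
        if r != target_rel:
            continue
        for r1, x in by_head.get(h, []):
            for r2, y in by_head.get(x, []):
                if y == t:
                    paths.append([r1, r2])
    random.shuffle(paths)
    return paths[:n_samples]

def build_prompt(target_rel, seed_paths):
    """Verbalize the target relation and sampled paths into an LLM prompt
    (illustrative wording, not the paper's actual prompt)."""
    lines = [
        f"Generate logical rules of the form {target_rel}(X, Y) <- ...",
        "Relation paths observed to co-occur with the target relation:",
    ]
    lines += [" & ".join(path) for path in seed_paths]
    return "\n".join(lines)

def rank_rules(kg, target_rel, rules):
    """Score candidate rule bodies (pairs of relations) against KG facts
    using support and a simple confidence estimate, then sort best-first."""
    by_head = index_by_head(kg)
    scored = []
    for body in rules:
        if len(body) != 2:  # the sketch only handles length-2 rule bodies
            continue
        r1, r2 = body
        support = groundings = 0
        for h, edges in by_head.items():
            for rel, x in edges:
                if rel != r1:
                    continue
                for rel2, t in by_head.get(x, []):
                    if rel2 != r2:
                        continue
                    groundings += 1
                    support += (h, target_rel, t) in kg
        confidence = support / groundings if groundings else 0.0
        scored.append((body, support, confidence))
    return sorted(scored, key=lambda s: (s[2], s[1]), reverse=True)

def apply_rule(kg, target_rel, body):
    """Reasoning step: use a ranked rule to infer target-relation facts
    that are not yet in the KG."""
    by_head = index_by_head(kg)
    r1, r2 = body
    inferred = set()
    for h, edges in by_head.items():
        for rel, x in edges:
            if rel != r1:
                continue
            for rel2, t in by_head.get(x, []):
                if rel2 == r2 and (h, target_rel, t) not in kg:
                    inferred.add((h, target_rel, t))
    return inferred

# Example run. The LLM call is omitted; any chat-completion API could consume
# `prompt` and return rules to be parsed into relation lists.
seeds = sample_closed_paths(KG, "grandmother_of")
prompt = build_prompt("grandmother_of", seeds)
candidate_rules = [["mother_of", "father_of"]]  # e.g. parsed from the LLM reply
ranked = rank_rules(KG, "grandmother_of", candidate_rules)
print(ranked)
print(apply_rule(KG, "grandmother_of", ranked[0][0]))
```
On this three-triple toy KG the single candidate rule grandmother_of(X, Y) <- mother_of(X, Z) & father_of(Z, Y) attains confidence 1.0 and infers no new facts; on a real KG the ranking step is what separates reliable rules from spurious LLM outputs before reasoning.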
Related papers
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning [17.676185326247946]
We propose a prompt-based KG foundation model via in-context learning, namely KG-ICL, to achieve a universal reasoning ability.
To encode prompt graphs with the generalization ability to unseen entities and relations in queries, we first propose a unified tokenizer.
Then, we propose two message passing neural networks to perform prompt encoding and KG reasoning, respectively.
arXiv Detail & Related papers (2024-10-16T06:47:18Z)
- Learning Rules from KGs Guided by Language Models [48.858741745144044]
Rule learning methods can be applied to predict potentially missing facts.
Ranking of rules is especially challenging over highly incomplete or biased KGs.
With the recent rise of Language Models (LMs), several works have claimed that LMs can be used as alternative means for KG completion.
arXiv Detail & Related papers (2024-09-12T09:27:36Z)
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning [104.92384929827776]
Large language models (LLMs) have demonstrated impressive reasoning abilities in complex tasks.
However, they lack up-to-date knowledge and experience hallucinations during reasoning.
Knowledge graphs (KGs) offer a reliable source of knowledge for reasoning.
arXiv Detail & Related papers (2023-10-02T10:14:43Z)
- RulE: Knowledge Graph Reasoning with Rule Embedding [69.31451649090661]
We propose a principled framework called RulE (stands for Rule Embedding) to leverage logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing entities, relations and logical rules in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
arXiv Detail & Related papers (2022-10-24T06:47:13Z)
- Building Rule Hierarchies for Efficient Logical Rule Learning from Knowledge Graphs [20.251630903853016]
We propose new methods for pruning unpromising rules using rule hierarchies.
We show that the application of hierarchical pruning methods (HPMs) is effective in removing unpromising rules.
arXiv Detail & Related papers (2020-06-29T16:33:30Z)
- Towards Learning Instantiated Logical Rules from Knowledge Graphs [20.251630903853016]
We present GPFL, a probabilistic rule learner optimized to mine instantiated first-order logic rules from knowledge graphs.
GPFL utilizes a novel two-stage rule generation mechanism that first generalizes extracted paths into templates that are acyclic abstract rules.
We reveal the presence of overfitting rules, their impact on the predictive performance, and the effectiveness of a simple validation method filtering out overfitting rules.
arXiv Detail & Related papers (2020-03-13T00:32:46Z)