LLM as Prompter: Low-resource Inductive Reasoning on Arbitrary Knowledge Graphs
- URL: http://arxiv.org/abs/2402.11804v3
- Date: Wed, 19 Jun 2024 09:00:53 GMT
- Title: LLM as Prompter: Low-resource Inductive Reasoning on Arbitrary Knowledge Graphs
- Authors: Kai Wang, Yuwei Xu, Zhiyong Wu, Siqiang Luo
- Abstract summary: A critical challenge of Knowledge Graph inductive reasoning is handling low-resource scenarios with scarcity in both textual and structural aspects.
We utilize Large Language Models (LLMs) to generate a graph-structural prompt that enhances pre-trained Graph Neural Networks (GNNs).
On the methodological side, we introduce ProLINK, a novel pretraining and prompting framework designed for low-resource inductive reasoning across arbitrary KGs.
- Score: 20.201820122052897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graph (KG) inductive reasoning, which aims to infer missing facts from new KGs that are not seen during training, has been widely adopted in various applications. One critical challenge of KG inductive reasoning is handling low-resource scenarios with scarcity in both textual and structural aspects. In this paper, we attempt to address this challenge with Large Language Models (LLMs). In particular, we utilize state-of-the-art LLMs to generate a graph-structural prompt to enhance the pre-trained Graph Neural Networks (GNNs), which brings us new methodological insights into KG inductive reasoning methods as well as high generalizability in practice. On the methodological side, we introduce ProLINK, a novel pretraining and prompting framework designed for low-resource inductive reasoning across arbitrary KGs without requiring additional training. On the practical side, we experimentally evaluate our approach on 36 low-resource KG datasets and find that ProLINK outperforms previous methods in three-shot, one-shot, and zero-shot reasoning tasks, exhibiting average performance improvements of 20%, 45%, and 147%, respectively. Furthermore, ProLINK demonstrates strong robustness across various LLM prompts as well as in full-shot scenarios.
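The abstract gives only a high-level pipeline, but the idea can be sketched. Below is a minimal, hypothetical Python sketch of the ProLINK-style flow: an LLM acts as the prompter that drafts a graph-structural prompt for a low-resource relation, and a frozen pre-trained GNN (stubbed here with a trivial scorer) consumes that prompt. Every name (`build_prompt_graph`, `score_with_frozen_gnn`, the fake LLM) is our illustrative assumption, not the authors' API.

```python
# Hypothetical sketch of a ProLINK-style pipeline: an LLM drafts a
# graph-structural prompt for a new relation, and a frozen pre-trained
# GNN (stubbed below) scores candidate triples with that prompt.
from typing import Callable

def build_prompt_graph(target_rel: str,
                       relations: list[str],
                       llm: Callable[[str], str]) -> list[tuple[str, str]]:
    """Ask the LLM which known relations plausibly co-occur with the target
    relation, and turn its answer into prompt edges (rel, target_rel)."""
    question = (
        f"Which of the relations {relations} are likely to share entities "
        f"with the relation '{target_rel}'? Answer with a comma-separated list."
    )
    answer = llm(question)
    related = [r.strip() for r in answer.split(",") if r.strip() in relations]
    return [(r, target_rel) for r in related]

def score_with_frozen_gnn(prompt_edges, support_triples, query):
    """Stand-in for a pre-trained GNN: here we merely count how much
    structural evidence the prompt graph provides for the query relation."""
    head, rel, tail = query
    support = sum(1 for (r, t) in prompt_edges if t == rel)
    return support / (1 + len(support_triples))

# Toy usage on a few-shot KG: the fake LLM plays the prompter role.
fake_llm = lambda q: "born_in, lives_in"
relations = ["born_in", "lives_in", "works_at"]
prompt_edges = build_prompt_graph("citizen_of", relations, fake_llm)
score = score_with_frozen_gnn(prompt_edges,
                              support_triples=[("alice", "citizen_of", "france")],
                              query=("bob", "citizen_of", "japan"))
print(prompt_edges, score)
```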
Related papers
- Graph-Augmented Reasoning: Evolving Step-by-Step Knowledge Graph Retrieval for LLM Reasoning [55.6623318085391]
Recent large language model (LLM) reasoning suffers from limited domain knowledge, susceptibility to hallucinations, and constrained reasoning depth.
This paper presents the first investigation into integrating step-wise knowledge graph retrieval with step-wise reasoning.
We propose KG-RAR, a framework centered on process-oriented knowledge graph construction, a hierarchical retrieval strategy, and a universal post-retrieval processing and reward model.
arXiv Detail & Related papers (2025-03-03T15:20:41Z)
- Paths-over-Graph: Knowledge Graph Empowered Large Language Model Reasoning [19.442426875488675]
We propose Paths-over-Graph (PoG), a novel method that enhances Large Language Models (LLMs) reasoning by integrating knowledge reasoning paths from KGs.
PoG tackles multi-hop and multi-entity questions through a three-phase dynamic multi-hop path exploration.
In experiments, PoG with GPT-3.5-Turbo surpasses ToG with GPT-4 by up to 23.9%.
arXiv Detail & Related papers (2024-10-18T06:57:19Z)
- Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models [83.28737898989694]
Large language models (LLMs) struggle with faithful reasoning due to knowledge gaps and hallucinations.
We introduce graph-constrained reasoning (GCR), a novel framework that bridges structured knowledge in KGs with unstructured reasoning in LLMs.
GCR achieves state-of-the-art performance and exhibits strong zero-shot generalizability to unseen KGs without additional training.
arXiv Detail & Related papers (2024-10-16T22:55:17Z)
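To make the GCR idea above concrete, here is a toy, hypothetical sketch of graph-constrained decoding: generation is restricted so that each step must follow an edge that actually exists in the KG, which is what makes the resulting reasoning path faithful by construction. The function names and the greedy scorer are our assumptions, not GCR's actual implementation.

```python
# Illustrative graph-constrained decoding: the "LLM" may only emit a
# relation/entity pair that exists as an outgoing edge in the KG, so the
# generated reasoning path is faithful by construction.
def constrained_decode(kg: dict, start: str, scorer, max_hops: int = 3):
    """kg maps an entity to its outgoing (relation, entity) edges; scorer
    stands in for the LLM's preference over candidate continuations."""
    path, node = [], start
    for _ in range(max_hops):
        candidates = kg.get(node, [])
        if not candidates:
            break
        rel, nxt = max(candidates, key=lambda edge: scorer(path, edge))
        path.append((node, rel, nxt))
        node = nxt
    return path

# Toy KG and a scorer that prefers edges mentioning "capital".
kg = {"france": [("capital", "paris"), ("borders", "spain")],
      "paris": [("located_in", "france")]}
scorer = lambda path, edge: ("capital" in edge[0]) - len(path)
print(constrained_decode(kg, "france", scorer, max_hops=2))
```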
- GIVE: Structured Reasoning of Large Language Models with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning method that merges parametric and non-parametric memories to improve accurate reasoning with minimal external input.
GIVE guides the LLM agent to select the most pertinent expert data (observe), engage in query-specific divergent thinking (reflect), and then synthesize this information to produce the final output (speak).
arXiv Detail & Related papers (2024-10-11T03:05:06Z)
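GIVE's observe-reflect-speak loop lends itself to a short sketch. The following hypothetical Python rendering (our names, a trivial stand-in LLM) shows the three stages wired together; it is not the paper's code.

```python
# Hypothetical observe-reflect-speak loop in the spirit of GIVE: select
# pertinent expert facts, expand them with query-specific hypotheses,
# then synthesize observations and hypotheses into an answer.
def give_answer(query: str, expert_facts: list[str], llm) -> str:
    # Observe: keep only facts that share vocabulary with the query.
    observed = [f for f in expert_facts
                if set(f.lower().split()) & set(query.lower().split())]
    # Reflect: ask the model for query-specific extrapolations of the facts.
    reflections = llm(f"Given the facts {observed}, list plausible "
                      f"extrapolations relevant to: {query}")
    # Speak: synthesize observations and reflections into a final answer.
    return llm(f"Answer '{query}' using facts {observed} "
               f"and hypotheses {reflections}")

# Toy usage with a trivial stand-in LLM.
echo_llm = lambda prompt: prompt.split(":")[-1].strip()
print(give_answer("What is insulin regulated by?",
                  ["insulin is regulated by glucose levels"], echo_llm))
```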
- Dual Reasoning: A GNN-LLM Collaborative Framework for Knowledge Graph Question Answering [38.31983923708175]
We propose Dual-Reasoning (DualR), a novel framework that integrates an external system based on Graph Neural Networks (GNNs) for explicit reasoning on Knowledge Graphs (KGs).
We show that DualR achieves state-of-the-art performance while maintaining high efficiency and interpretability.
arXiv Detail & Related papers (2024-06-03T09:38:28Z)
- Path-based Explanation for Knowledge Graph Completion [17.541247786437484]
Proper explanations for the results of GNN-based Knowledge Graph Completion models increase model transparency.
Existing practices for explaining KGC tasks rely on instance/subgraph-based approaches.
We propose Power-Link, the first path-based KGC explainer that explores GNN-based models.
arXiv Detail & Related papers (2024-01-04T14:19:37Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters using 20,000 subgraphs with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
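ReasoningLM's subgraph-aware self-attention can be illustrated with a small mask construction: question tokens attend everywhere, while each subgraph node attends only to itself, its graph neighbours, and the question, so attention flow imitates GNN message passing. This is a minimal sketch under our own assumptions, not the paper's exact masking scheme.

```python
# Minimal subgraph-aware attention mask: question tokens get full
# attention; each subgraph node may attend only to itself, its graph
# neighbours, and the question tokens (1 = attention allowed).
def build_attention_mask(n_question: int, edges: list[tuple[int, int]],
                         n_nodes: int) -> list[list[int]]:
    size = n_question + n_nodes
    allow = [[0] * size for _ in range(size)]
    for i in range(n_question):             # question tokens: full attention
        for j in range(size):
            allow[i][j] = 1
    for v in range(n_nodes):                # nodes: self + question tokens
        row = n_question + v
        allow[row][row] = 1
        for j in range(n_question):
            allow[row][j] = 1
    for u, v in edges:                      # nodes: their graph neighbours
        allow[n_question + u][n_question + v] = 1
        allow[n_question + v][n_question + u] = 1
    return allow

# Two question tokens plus a three-node path graph 0-1-2.
for row in build_attention_mask(2, [(0, 1), (1, 2)], 3):
    print(row)
```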
- Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning [104.92384929827776]
Large language models (LLMs) have demonstrated impressive reasoning abilities in complex tasks.
However, they lack up-to-date knowledge and suffer from hallucinations during reasoning.
Knowledge graphs (KGs) offer a reliable source of knowledge for reasoning.
arXiv Detail & Related papers (2023-10-02T10:14:43Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge distillation fine-tuning techniques to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals [67.64770842323966]
Causal explanations of predictions of NLP systems are essential to ensure safety and establish trust.
Existing methods often fall short of explaining model predictions effectively or efficiently.
We propose two approaches for counterfactual (CF) approximation.
arXiv Detail & Related papers (2023-10-01T07:31:04Z)
- Information Extraction in Low-Resource Scenarios: Survey and Perspective [56.5556523013924]
Information Extraction seeks to derive structured information from unstructured texts.
This paper presents a review of neural approaches to low-resource IE from traditional and LLM-based perspectives.
arXiv Detail & Related papers (2022-02-16T13:44:00Z)
- MPLR: a novel model for multi-target learning of logical rules for knowledge graph reasoning [5.499688003232003]
We study the problem of learning logic rules for reasoning on knowledge graphs for completing missing factual triplets.
We propose MPLR, a model that improves on existing approaches by fully exploiting the training data and taking multi-target scenarios into account.
Experimental results empirically demonstrate that our MPLR model outperforms state-of-the-art methods on five benchmark datasets.
arXiv Detail & Related papers (2021-12-12T09:16:00Z)
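As a concrete illustration of what learning logic rules for KG completion buys, the toy sketch below applies a single length-2 Horn rule to derive a missing triplet; the rule and helper function are hypothetical and not MPLR's algorithm.

```python
# Sketch of rule-based KG completion: a length-2 Horn rule body
# r1(x, y) & r2(y, z) implies the head r3(x, z), so chaining the two
# body relations proposes missing factual triplets.
def apply_rule(triples, body, head):
    r1, r2 = body
    derived = set()
    for (x, p, y) in triples:
        if p != r1:
            continue
        for (y2, q, z) in triples:
            if q == r2 and y2 == y:
                derived.add((x, head, z))
    return derived - set(triples)  # keep only genuinely new facts

kg = {("alice", "born_in", "paris"), ("paris", "city_of", "france")}
print(apply_rule(kg, body=("born_in", "city_of"), head="nationality"))
# -> {('alice', 'nationality', 'france')}
```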