Retrieval, Reasoning, Re-ranking: A Context-Enriched Framework for Knowledge Graph Completion
- URL: http://arxiv.org/abs/2411.08165v1
- Date: Tue, 12 Nov 2024 20:15:58 GMT
- Title: Retrieval, Reasoning, Re-ranking: A Context-Enriched Framework for Knowledge Graph Completion
- Authors: Muzhi Li, Cehao Yang, Chengjin Xu, Xuhui Jiang, Yiyan Qi, Jian Guo, Ho-fung Leung, Irwin King
- Abstract summary: Existing embedding-based methods rely solely on triples in the Knowledge Graph.
We propose KGR3, a context-enriched framework for KGC.
Experiments on widely used datasets demonstrate that KGR3 consistently improves various KGC methods.
- Score: 36.664300900246424
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The Knowledge Graph Completion (KGC) task aims to infer the missing entity from an incomplete triple. Existing embedding-based methods rely solely on triples in the KG, which makes them vulnerable to specious relation patterns and long-tail entities. Text-based methods, on the other hand, struggle with the semantic gap between KG triples and natural language. Apart from triples, entity contexts (e.g., labels, descriptions, aliases) also play a significant role in augmenting KGs. To address these limitations, we propose KGR3, a context-enriched framework for KGC. KGR3 is composed of three modules. First, the Retrieval module gathers supporting triples from the KG, collects plausible candidate answers from a base embedding model, and retrieves context for each related entity. Then, the Reasoning module employs a large language model to generate potential answers for each query triple. Finally, the Re-ranking module combines the candidate answers from the two modules above and fine-tunes an LLM to provide the best answer. Extensive experiments on widely used datasets demonstrate that KGR3 consistently improves various KGC methods. Specifically, the best variant of KGR3 achieves absolute Hits@1 improvements of 12.3% and 5.6% on the FB15k-237 and WN18RR datasets.
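The abstract gives no code, but the three-module flow is concrete enough to sketch. Below is a minimal Python outline of Retrieval → Reasoning → Re-ranking; the toy KG, the stand-in candidate scoring, and the injected `llm` callables are all assumptions for illustration, not the authors' implementation.

```python
from typing import Callable

# Toy KG: (head, relation, tail) triples plus entity contexts (labels/descriptions).
KG = [
    ("Q1", "capital_of", "Q2"),
    ("Q3", "capital_of", "Q4"),
]
CONTEXT = {"Q1": "Paris, capital and largest city of France",
           "Q2": "France, country in Western Europe"}

def retrieve(query: tuple, kg: list, top_k: int = 5) -> dict:
    """Retrieval module: supporting triples, embedding-based candidates, entity contexts."""
    head, rel, _ = query
    support = [t for t in kg if t[0] == head or t[1] == rel]    # supporting triples
    candidates = sorted({t[2] for t in kg})[:top_k]             # stand-in for a base embedding model's top-k
    contexts = {e: CONTEXT.get(e, "") for t in support for e in (t[0], t[2])}
    return {"support": support, "candidates": candidates, "contexts": contexts}

def reason(query: tuple, evidence: dict, llm: Callable[[str], list]) -> list:
    """Reasoning module: ask an LLM for plausible answers given the retrieved context."""
    prompt = (f"Query: {query}\nSupport: {evidence['support']}\n"
              f"Contexts: {evidence['contexts']}\nAnswer entities:")
    return llm(prompt)

def rerank(query: tuple, emb_candidates: list, llm_candidates: list,
           llm: Callable[[str], str]) -> str:
    """Re-ranking module: a (fine-tuned) LLM picks the best answer from the union."""
    pool = list(dict.fromkeys(emb_candidates + llm_candidates))  # dedupe, keep order
    return llm(f"Query: {query}\nCandidates: {pool}\nBest answer:")

# Usage with trivial stand-in "LLMs"; real use would call a fine-tuned model.
query = ("Q1", "capital_of", "?")
ev = retrieve(query, KG)
gen = reason(query, ev, llm=lambda p: ["Q2"])
print(rerank(query, ev["candidates"], gen, llm=lambda p: "Q2"))
```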
Related papers
- KGMEL: Knowledge Graph-Enhanced Multimodal Entity Linking [26.524285614676188]
KGMEL is a novel framework that leverages knowledge-graph triples to enhance entity linking.
It operates in three stages: generation, retrieval, and reranking.
Experiments on benchmark datasets demonstrate that KGMEL outperforms existing methods.
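Purely as an illustration of the three stages named above, here is a toy text-only sketch; KGMEL itself is multimodal, and every function body below is an invented placeholder rather than the paper's method.

```python
def generate_candidates(mention: str, triples: list) -> list:
    """Generation: propose candidate entities whose triples mention the surface form."""
    return [h for h, r, t in triples if mention.lower() in t.lower()]

def retrieve_evidence(candidates: list, triples: list) -> dict:
    """Retrieval: collect KG triples about each candidate as linking evidence."""
    return {c: [tr for tr in triples if tr[0] == c] for c in candidates}

def rerank_candidates(mention: str, evidence: dict) -> str:
    """Reranking: score candidates by evidence volume (KGMEL would use a learned model)."""
    return max(evidence, key=lambda c: len(evidence[c]), default=None)

triples = [("Q90", "label", "Paris"), ("Q90", "country", "France"),
           ("Q167646", "label", "Paris Hilton")]
cands = generate_candidates("Paris", triples)
print(rerank_candidates("Paris", retrieve_evidence(cands, triples)))  # -> Q90
```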
arXiv Detail & Related papers (2025-04-21T14:38:44Z)
- KG-CF: Knowledge Graph Completion with Context Filtering under the Guidance of Large Language Models [55.39134076436266]
KG-CF is a framework tailored for ranking-based knowledge graph completion tasks.
KG-CF leverages LLMs' reasoning abilities to filter out irrelevant contexts, achieving superior results on real-world datasets.
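A minimal sketch of the filtering idea, with an injected `is_relevant` predicate standing in for the LLM relevance judgment (the prompt and interface are assumptions):

```python
from typing import Callable

def filter_contexts(query: tuple, contexts: list,
                    is_relevant: Callable[[tuple, str], bool]) -> list:
    """Keep only contexts an LLM judges relevant to the query triple before ranking."""
    return [c for c in contexts if is_relevant(query, c)]

query = ("Einstein", "field_of_work", "?")
contexts = ["Einstein developed the theory of relativity.",
            "Einstein owned a sailboat named Tinef."]
# Stand-in for an LLM call such as "Does this context help answer the query?"
print(filter_contexts(query, contexts, is_relevant=lambda q, c: "theory" in c))
```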
arXiv Detail & Related papers (2025-01-06T01:52:15Z)
- GS-KGC: A Generative Subgraph-based Framework for Knowledge Graph Completion with Large Language Models [7.995716933782121]
We propose a novel completion framework called Generative Subgraph-based KGC (GS-KGC).
This framework primarily includes a subgraph partitioning algorithm designed to generate negatives and neighbors.
Experiments conducted on four common KGC datasets highlight the advantages of the proposed GS-KGC.
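The abstract mentions a subgraph partitioning algorithm that yields negatives and neighbors; a toy version of that extraction step (not the authors' algorithm) might look like:

```python
import random

def neighbors_and_negatives(kg: list, query: tuple, n_neg: int = 2, seed: int = 0):
    """From the subgraph around the query head, collect neighbor triples as context
    and sample entities unconnected to the head as negatives."""
    head, rel, _ = query
    neighbors = [t for t in kg if head in (t[0], t[2])]          # local subgraph
    connected = {e for t in neighbors for e in (t[0], t[2])}
    entities = {e for t in kg for e in (t[0], t[2])}
    pool = sorted(entities - connected)
    negatives = random.Random(seed).sample(pool, min(n_neg, len(pool)))
    return neighbors, negatives

kg = [("A", "r1", "B"), ("A", "r2", "C"), ("D", "r1", "E")]
print(neighbors_and_negatives(kg, ("A", "r1", "?")))
```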
arXiv Detail & Related papers (2024-08-20T13:13:41Z)
- Retrieval-Augmented Language Model for Extreme Multi-Label Knowledge Graph Link Prediction [2.6749568255705656]
Extrapolation in large language models (LLMs) for open-ended inquiry encounters two pivotal issues.
Existing works attempt to tackle the problem by augmenting the input of a smaller language model with information from a knowledge graph.
We propose a new task, the extreme multi-label KG link prediction task, to enable a model to perform extrapolation with multiple responses.
arXiv Detail & Related papers (2024-05-21T10:10:56Z)
- Multi-hop Question Answering over Knowledge Graphs using Large Language Models [1.8130068086063336]
We evaluate the capability of large language models (LLMs) to answer questions over knowledge graphs that involve multiple hops.
We show that, depending on the size and nature of the KG, different approaches are needed to extract and feed the relevant information to an LLM.
arXiv Detail & Related papers (2024-04-30T03:31:03Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both agent and KG in incomplete KGQA (IKGQA).
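The Thinking-Searching-Generating loop can be sketched as follows; the stopping rule, prompts, and the scripted `llm` stand-in are invented for illustration:

```python
from typing import Callable

def generate_on_graph(question: str, kg: list, llm: Callable[[str], str],
                      max_steps: int = 3) -> str:
    """GoG-style loop: think about what is missing, search the KG, and let the
    LLM generate new triples when the KG is incomplete."""
    known = list(kg)
    for _ in range(max_steps):
        # Thinking: ask what information is still needed.
        thought = llm(f"Question: {question}\nKnown: {known}\nWhat do we still need?")
        # Searching: look the thought up in the KG.
        hits = [t for t in kg if thought and thought in str(t)]
        if hits:
            known.extend(hits)
        else:
            # Generating: the LLM fills the gap with a new triple.
            known.append(("generated", "fact", llm(f"Guess a fact about: {thought}")))
        answer = llm(f"Question: {question}\nKnown: {known}\nAnswer or NONE:")
        if answer != "NONE":
            return answer
    return "NONE"

# Usage with a trivial scripted "LLM"; a real agent would call a chat model here.
script = iter(["capital", "Paris"])
print(generate_on_graph("What is the capital of France?",
                        [("France", "capital", "Paris")], llm=lambda p: next(script)))
```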
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models [95.31941227776711]
We propose MPIKGC, which compensates for the deficiency of contextualized knowledge and improves KGC by querying large language models (LLMs).
We conduct an extensive evaluation of our framework based on four description-based KGC models and four datasets, for both link prediction and triplet classification tasks.
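The multi-perspective querying might look like the prompt-building sketch below; the three perspectives are one plausible reading of the title, and `ask_llm` is an injected stand-in, not the paper's interface:

```python
from typing import Callable

def enrich_entity(entity: str, ask_llm: Callable[[str], str]) -> dict:
    """Query an LLM from several perspectives to compensate for missing context,
    then feed the answers to a description-based KGC model as extra text."""
    prompts = {
        "description": f"Describe the entity '{entity}' in one sentence.",
        "keywords": f"List three keywords for '{entity}'.",
        "related": f"Name entities closely related to '{entity}'.",
    }
    return {view: ask_llm(p) for view, p in prompts.items()}

# Usage with a dummy LLM; real use would call a chat-completion API.
print(enrich_entity("Marie Curie", ask_llm=lambda p: f"<LLM answer to: {p[:30]}...>"))
```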
arXiv Detail & Related papers (2024-03-04T12:16:15Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
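A rough sketch of the distillation step: an LLM expands a bare triplet into a descriptive passage, and the pair becomes a training example for a smaller KGC model (prompt wording and data format are assumptions):

```python
from typing import Callable

def contextualize(triplet: tuple, llm: Callable[[str], str]) -> dict:
    """Turn a compact (head, relation, tail) triplet into a context-rich segment
    and pair it with the triplet as a distillation training example."""
    h, r, t = triplet
    passage = llm(f"Write a short descriptive passage stating that {h} {r} {t}.")
    return {"triplet": triplet, "context": passage}

corpus = [("Alan_Turing", "field", "computer_science")]
# Dummy stand-in; a real pipeline would call a large teacher model here.
dataset = [contextualize(t, llm=lambda p: "Alan Turing worked in computer science, "
                                          "founding much of its theory.") for t in corpus]
print(dataset[0])
```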
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- DynaSemble: Dynamic Ensembling of Textual and Structure-Based Models for Knowledge Graph Completion [27.068271260978143]
We consider two popular approaches to Knowledge Graph Completion (KGC): textual and structure-based models.
We propose DynaSemble, a novel method for learning query-dependent ensemble weights.
DynaSemble achieves state-of-the-art results on three standard KGC datasets.
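The core idea, query-dependent weights over per-model candidate scores, can be sketched in a few lines of numpy; the hard-coded logits below stand in for the small weight network learned in the paper:

```python
import numpy as np

def dynasemble_scores(query_logits: np.ndarray, model_scores: list) -> np.ndarray:
    """Blend candidate scores from several KGC models (e.g. one textual, one
    structural) with query-dependent softmax weights."""
    weights = np.exp(query_logits) / np.exp(query_logits).sum()  # softmax over models
    return sum(w * s for w, s in zip(weights, model_scores))

textual    = np.array([0.9, 0.2, 0.1])   # scores for three candidate entities
structural = np.array([0.3, 0.8, 0.1])
# Toy logits; in the paper a small network maps query features to these.
print(dynasemble_scores(np.array([2.0, 0.5]), [textual, structural]))
```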
arXiv Detail & Related papers (2023-11-07T07:53:06Z)
- Separate-and-Aggregate: A Transformer-based Patch Refinement Model for Knowledge Graph Completion [28.79628925695775]
We propose a novel Transformer-based Patch Refinement Model (PatReFormer) for Knowledge Graph completion.
We conduct experiments on four popular KGC benchmarks: WN18RR, FB15k-237, YAGO37, and DB100K.
The experimental results show significant performance improvements over existing KGC methods on standard KGC evaluation metrics.
arXiv Detail & Related papers (2023-07-11T06:27:13Z)
- Graph Reasoning for Question Answering with Triplet Retrieval [33.454090126152714]
We propose a simple yet effective method to retrieve the most relevant triplets from knowledge graphs (KGs).
Our method outperforms the state of the art by up to 4.6% absolute accuracy.
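A minimal retrieve-then-read sketch with triplets: score triplets by word overlap with the question (the paper's retriever is more sophisticated), linearize the top hits, and prepend them to the question for a reader model:

```python
def retrieve_triplets(question: str, kg: list, k: int = 2) -> list:
    """Rank triplets by word overlap with the question and keep the top k."""
    q_words = set(question.lower().replace("?", " ").split())
    def overlap(triple):
        words = set(" ".join(triple).replace("_", " ").lower().split())
        return len(q_words & words)
    return sorted(kg, key=overlap, reverse=True)[:k]

def linearize(triplets: list) -> str:
    """Serialize triplets into plain text for the reader model's input."""
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triplets)

kg = [("Paris", "capital_of", "France"), ("Berlin", "capital_of", "Germany")]
question = "What is the capital of France?"
prompt = linearize(retrieve_triplets(question, kg, k=1)) + " Question: " + question
print(prompt)  # feed this string to any reader/QA model
```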
arXiv Detail & Related papers (2023-05-30T04:46:28Z)
- UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graphs (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z)
- Collaborative Knowledge Graph Fusion by Exploiting the Open Corpus [59.20235923987045]
It is challenging to enrich a Knowledge Graph with newly harvested triples while maintaining the quality of the knowledge representation.
This paper proposes a system to refine a KG using information harvested from an additional corpus.
arXiv Detail & Related papers (2022-06-15T12:16:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.