KC-GenRe: A Knowledge-constrained Generative Re-ranking Method Based on Large Language Models for Knowledge Graph Completion
- URL: http://arxiv.org/abs/2403.17532v1
- Date: Tue, 26 Mar 2024 09:36:59 GMT
- Title: KC-GenRe: A Knowledge-constrained Generative Re-ranking Method Based on Large Language Models for Knowledge Graph Completion
- Authors: Yilin Wang, Minghao Hu, Zhen Huang, Dongsheng Li, Dong Yang, Xicheng Lu
- Abstract summary: We introduce KC-GenRe, a knowledge-constrained generative re-ranking method based on generative large language models.
To overcome the mismatch issue, we formulate the KGC re-ranking task as a candidate identifier sorting generation problem.
To tackle the misordering issue, we develop a knowledge-guided interactive training method that enhances the identification and ranking of candidates.
To address the omission issue, we design a knowledge-augmented constrained inference method that enables contextual prompting and controlled generation.
- Score: 34.81781468398916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of knowledge graph completion (KGC) is to predict missing facts among entities. Previous methods for KGC re-ranking are mostly built on non-generative language models that score the probability of each candidate. Recently, generative large language models (LLMs) have shown outstanding performance on tasks such as information extraction and dialog systems, and using them for KGC re-ranking makes it possible to exploit their extensive pre-trained knowledge and powerful generative capabilities. However, doing so introduces new problems, namely mismatch, misordering and omission. To this end, we introduce KC-GenRe, a knowledge-constrained generative re-ranking method based on LLMs for KGC. To overcome the mismatch issue, we formulate the KGC re-ranking task as a candidate identifier sorting generation problem implemented by generative LLMs. To tackle the misordering issue, we develop a knowledge-guided interactive training method that enhances the identification and ranking of candidates. To address the omission issue, we design a knowledge-augmented constrained inference method that enables contextual prompting and controlled generation, so as to obtain valid rankings. Experimental results show that KC-GenRe achieves state-of-the-art performance on four datasets, with gains of up to 6.7% and 7.7% in the MRR and Hits@1 metrics over previous methods, and of 9.0% and 11.1% over the rankings produced without re-ranking. Extensive analysis demonstrates the effectiveness of each component of KC-GenRe.
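The abstract does not include code, but the three mechanisms above can be made concrete with a small sketch. Everything below (the prompt template, the [A]/[B]/[C] identifier scheme, and the toy scorer standing in for an LLM) is an illustrative assumption rather than the authors' implementation; it only shows how candidate-identifier sorting, knowledge-augmented prompting, and constrained generation fit together.
```python
from typing import Callable, Dict, List

def build_rerank_prompt(query: str, candidates: List[str], context_facts: List[str]) -> str:
    """Formulate re-ranking as candidate-identifier sorting: each candidate gets a short
    identifier ([A], [B], ...) and the model is asked to emit the identifiers in order
    of plausibility for the query triple."""
    ids = [f"[{chr(ord('A') + i)}]" for i in range(len(candidates))]
    lines = [f"Query: {query}"]
    if context_facts:
        lines.append("Known facts: " + "; ".join(context_facts))
    lines.append("Candidates:")
    lines += [f"{cid} {cand}" for cid, cand in zip(ids, candidates)]
    lines.append("Rank the candidate identifiers from most to least plausible:")
    return "\n".join(lines)

def constrained_rank(
    score_identifiers: Callable[[str], Dict[str, float]],
    prompt: str,
    identifiers: List[str],
) -> List[str]:
    """Greedy constrained decoding: at each step the model may only emit an identifier
    it has not produced yet, so the output is always a complete, valid permutation
    (no mismatched tokens, no omitted candidates)."""
    ranking: List[str] = []
    remaining = list(identifiers)
    generated = prompt
    while remaining:
        scores = score_identifiers(generated)   # model's preference for the next identifier
        best = max(remaining, key=lambda cid: scores.get(cid, float("-inf")))
        ranking.append(best)
        remaining.remove(best)
        generated += " " + best
    return ranking

# Toy usage with a stand-in scorer (a real system would query an LLM here).
prompt = build_rerank_prompt(
    "(Marie Curie, award received, ?)",
    ["Nobel Prize in Physics", "Turing Award", "Fields Medal"],
    ["(Marie Curie, field of work, physics)"],
)
toy_scorer = lambda text: {"[A]": 0.9, "[B]": 0.1, "[C]": 0.4}
print(constrained_rank(toy_scorer, prompt, ["[A]", "[B]", "[C]"]))  # ['[A]', '[C]', '[B]']
```
Restricting each decoding step to identifiers that have not yet been emitted is what guarantees a complete and valid ranking, which is the role the knowledge-augmented constrained inference plays in the paper.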
Related papers
- RAKG: Document-level Retrieval Augmented Knowledge Graph Construction [10.013667560362565]
This paper focuses on the task of automatic document-level knowledge graph construction.
It proposes the Document-level Retrieval Augmented Knowledge Graph Construction (RAKG) framework.
arXiv Detail & Related papers (2025-04-14T02:47:23Z)
- Rank-R1: Enhancing Reasoning in LLM-based Document Rerankers via Reinforcement Learning [76.50690734636477]
We introduce Rank-R1, a novel LLM-based reranker that performs reasoning over both the user query and candidate documents before performing the ranking task.
Our experiments on the TREC DL and BRIGHT datasets show that Rank-R1 is highly effective, especially for complex queries.
arXiv Detail & Related papers (2025-03-08T03:14:26Z)
- Graph-Augmented Reasoning: Evolving Step-by-Step Knowledge Graph Retrieval for LLM Reasoning [55.6623318085391]
Recent large language model (LLM) reasoning suffers from limited domain knowledge, susceptibility to hallucinations, and constrained reasoning depth.
This paper presents the first investigation into integrating step-wise knowledge graph retrieval with step-wise reasoning.
We propose KG-RAR, a framework centered on process-oriented knowledge graph construction, a hierarchical retrieval strategy, and a universal post-retrieval processing and reward model.
arXiv Detail & Related papers (2025-03-03T15:20:41Z)
- GFM-RAG: Graph Foundation Model for Retrieval Augmented Generation [84.41557981816077]
We introduce GFM-RAG, a novel graph foundation model (GFM) for retrieval augmented generation.
GFM-RAG is powered by an innovative graph neural network that reasons over graph structure to capture complex query-knowledge relationships.
It achieves state-of-the-art performance while maintaining efficiency and alignment with neural scaling laws.
arXiv Detail & Related papers (2025-02-03T07:04:29Z)
- Fact Finder -- Enhancing Domain Expertise of Large Language Models by Incorporating Knowledge Graphs [2.7386111894524]
We introduce a hybrid system that augments Large Language Models with domain-specific knowledge graphs (KGs).
We evaluate our system on a curated dataset of 69 samples, achieving a precision of 78% in retrieving correct KG nodes.
Our findings indicate that the hybrid system surpasses a standalone LLM in accuracy and completeness.
arXiv Detail & Related papers (2024-08-06T07:45:05Z)
- Training Task Experts through Retrieval Based Distillation [55.46054242512261]
We present Retrieval Based Distillation (ReBase), a method that first retrieves data from rich online sources and then transforms them into domain-specific data.
Our method significantly improves performance by up to 7.8% on SQuAD, 1.37% on MNLI, and 1.94% on BigBench-Hard.
arXiv Detail & Related papers (2024-07-07T18:27:59Z)
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP.
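Gaussian Discriminant Analysis is a closed-form classical classifier, so the training-free baseline summarized here lends itself to a minimal sketch. The snippet below assumes pre-extracted, frozen CLIP image features and covers only the GDA part; how the paper combines it with CLIP's zero-shot classifier is not reproduced.
```python
import numpy as np

def fit_gda(features: np.ndarray, labels: np.ndarray, eps: float = 1e-4):
    """Gaussian Discriminant Analysis with a shared covariance matrix:
    closed-form class means plus a pooled, regularised covariance."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    centered = features - means[np.searchsorted(classes, labels)]
    cov = centered.T @ centered / len(features) + eps * np.eye(features.shape[1])
    precision = np.linalg.inv(cov)
    priors = np.array([(labels == c).mean() for c in classes])
    return classes, means, precision, priors

def gda_predict(features, classes, means, precision, priors):
    """Linear discriminant scores: x^T P mu_k - 0.5 mu_k^T P mu_k + log prior_k."""
    scores = features @ precision @ means.T
    scores = scores - 0.5 * np.einsum("kd,de,ke->k", means, precision, means) + np.log(priors)
    return classes[np.argmax(scores, axis=1)]

# Toy usage with random stand-in "CLIP features" (a real run would use frozen CLIP image embeddings).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 16)), rng.integers(0, 3, size=100)
params = fit_gda(X, y)
print(gda_predict(X, *params)[:10])
```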
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
- Sequencing Matters: A Generate-Retrieve-Generate Model for Building Conversational Agents [9.191944519634111]
This paper describes the work the Georgetown InfoSense group has done to address the challenges presented by TREC iKAT 2023.
Our submitted runs outperform the median runs by a significant margin, exhibiting superior performance in nDCG across various cut numbers and in overall success rate.
Our solution involves the use of Large Language Models (LLMs) for initial answers, answer grounding by BM25, passage quality filtering by logistic regression, and answer generation by LLMs again.
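The four stages named above (LLM draft, BM25 grounding, logistic-regression quality filtering, final LLM generation) map naturally onto a short pipeline sketch; every callable below is a hypothetical placeholder, not the group's actual code.
```python
from typing import Callable, List

def generate_retrieve_generate(
    question: str,
    llm: Callable[[str], str],                 # any text-generation backend
    bm25_search: Callable[[str], List[str]],   # BM25 retrieval over the passage corpus
    passage_quality: Callable[[str], float],   # logistic-regression quality score in [0, 1]
    quality_threshold: float = 0.5,
) -> str:
    # 1) Generate: let the LLM draft an initial answer from its parametric knowledge.
    draft = llm(f"Answer the question briefly: {question}")

    # 2) Retrieve: ground the draft with BM25, using the question plus the draft as the
    #    query, then drop low-quality passages with the learned filter.
    passages = bm25_search(question + " " + draft)
    passages = [p for p in passages if passage_quality(p) >= quality_threshold]

    # 3) Generate again: produce the final answer conditioned on the kept evidence.
    evidence = "\n".join(passages[:5])
    return llm(f"Question: {question}\nEvidence:\n{evidence}\nFinal answer:")
```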
arXiv Detail & Related papers (2023-11-16T02:37:58Z)
- Bring Your Own KG: Self-Supervised Program Synthesis for Zero-Shot KGQA [16.248395545151745]
BYOKG is a universal question-answering (QA) system that can operate on any knowledge graph (KG).
BYOKG draws inspiration from the remarkable ability of humans to comprehend information present in an unseen KG through exploration.
arXiv Detail & Related papers (2023-11-14T02:05:29Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Improving Few-Shot Inductive Learning on Temporal Knowledge Graphs using Confidence-Augmented Reinforcement Learning [24.338098716004485]
Temporal knowledge graph completion (TKGC) aims to predict the missing links among the entities in a temporal knowledge graph (TKG).
Recently, a new task, TKG few-shot out-of-graph (OOG) link prediction, has been proposed.
We propose a TKGC method FITCARL that combines few-shot learning with reinforcement learning to solve this task.
arXiv Detail & Related papers (2023-04-02T20:05:20Z)
- Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE [203.65227947509933]
This report describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard.
SuperGLUE is more challenging than the widely used general language understanding evaluation (GLUE) benchmark, containing eight difficult language understanding tasks.
arXiv Detail & Related papers (2022-12-04T15:36:18Z)
- From Discrimination to Generation: Knowledge Graph Completion with Generative Transformer [41.69537736842654]
We provide an approach, GenKGC, which converts knowledge graph completion into a sequence-to-sequence generation task with a pre-trained language model.
We introduce relation-guided demonstration and entity-aware hierarchical decoding for better representation learning and fast inference.
We also release a new large-scale Chinese knowledge graph dataset, AliopenKG500, for research purposes.
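A minimal sketch of the sequence-to-sequence formulation with relation-guided demonstrations described above might look as follows; the serialization format is an illustrative assumption, and the entity-aware hierarchical decoding step is not reproduced here.
```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def build_genkgc_input(query: Tuple[str, str], demonstrations: List[Triple]) -> str:
    """Serialize a KGC query as a sequence-to-sequence input. Demonstrations sharing the
    query relation ("relation-guided") are prepended as in-context examples; the model is
    expected to generate the missing tail entity."""
    head, relation = query
    demo_lines = [f"{h} | {r} | {t}" for h, r, t in demonstrations if r == relation]
    return "\n".join(demo_lines + [f"{head} | {relation} | "])

# Example: predict the tail for (Albert Einstein, educated at, ?) given same-relation demonstrations.
print(build_genkgc_input(
    ("Albert Einstein", "educated at"),
    [("Marie Curie", "educated at", "University of Paris"),
     ("Alan Turing", "educated at", "Princeton University")],
))
```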
arXiv Detail & Related papers (2022-02-04T12:52:32Z)
- RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering [57.94658176442027]
We present RnG-KBQA, a Rank-and-Generate approach for KBQA.
We achieve new state-of-the-art results on GrailQA and WebQSP datasets.
arXiv Detail & Related papers (2021-09-17T17:58:28Z)