Bring Your Own KG: Self-Supervised Program Synthesis for Zero-Shot KGQA
- URL: http://arxiv.org/abs/2311.07850v2
- Date: Wed, 22 May 2024 02:46:08 GMT
- Title: Bring Your Own KG: Self-Supervised Program Synthesis for Zero-Shot KGQA
- Authors: Dhruv Agarwal, Rajarshi Das, Sopan Khosla, Rashmi Gangadharaiah, et al.
- Abstract summary: BYOKG is a universal question-answering (QA) system that can operate on any knowledge graph (KG).
BYOKG draws inspiration from the remarkable ability of humans to comprehend information present in an unseen KG through exploration.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present BYOKG, a universal question-answering (QA) system that can operate on any knowledge graph (KG), requires no human-annotated training data, and can be ready to use within a day -- attributes that are out-of-scope for current KGQA systems. BYOKG draws inspiration from the remarkable ability of humans to comprehend information present in an unseen KG through exploration -- starting at random nodes, inspecting the labels of adjacent nodes and edges, and combining them with their prior world knowledge. In BYOKG, exploration leverages an LLM-backed symbolic agent that generates a diverse set of query-program exemplars, which are then used to ground a retrieval-augmented reasoning procedure to predict programs for arbitrary questions. BYOKG is effective over both small- and large-scale graphs, showing dramatic gains in QA accuracy over a zero-shot baseline of 27.89 and 58.02 F1 on GrailQA and MetaQA, respectively. On GrailQA, we further show that our unsupervised BYOKG outperforms a supervised in-context learning method, demonstrating the effectiveness of exploration. Lastly, we find that performance of BYOKG reliably improves with continued exploration as well as improvements in the base LLM, notably outperforming a state-of-the-art fine-tuned model by 7.08 F1 on a sub-sampled zero-shot split of GrailQA.
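The abstract describes a two-stage procedure: an exploration phase that random-walks the KG to build query-program exemplars, and a retrieval-augmented reasoning phase that grounds predictions for new questions in those exemplars. The following is a minimal sketch of that idea over a toy graph; BYOKG itself uses an LLM-backed symbolic agent and real query programs, so every name and data structure here is illustrative, not the paper's implementation.

```python
import random

# Toy KG as an adjacency list of (relation, neighbor) pairs. BYOKG operates
# on arbitrary KGs; this hand-built graph only illustrates the exploration idea.
KG = {
    "Inception": [("directed_by", "Christopher Nolan"), ("genre", "Sci-Fi")],
    "Christopher Nolan": [("directed", "Inception"), ("born_in", "London")],
    "Sci-Fi": [("example", "Inception")],
    "London": [("located_in", "England")],
    "England": [],
}

def explore(kg, n_walks=20, max_hops=2, seed=0):
    """Collect exemplars by random walks from random start nodes.

    Each exemplar pairs a relation path (a crude stand-in for a query
    program) with the node the walk ends at; BYOKG instead pairs
    LLM-generated natural-language questions with symbolic programs.
    """
    rng = random.Random(seed)
    exemplars = []
    nodes = list(kg)
    for _ in range(n_walks):
        node = rng.choice(nodes)
        path = []
        for _ in range(max_hops):
            if not kg[node]:  # dead end: no outgoing edges
                break
            rel, nxt = rng.choice(kg[node])
            path.append(rel)
            node = nxt
        if path:
            exemplars.append((tuple(path), node))
    return exemplars

def answer(question_relations, exemplars):
    """Return the endpoint of the exemplar whose relation path best
    overlaps the query -- a toy stand-in for BYOKG's retrieval-augmented
    reasoning step."""
    best = max(exemplars,
               key=lambda ex: len(set(ex[0]) & set(question_relations)))
    return best[1]

exemplars = explore(KG)
ans = answer(["directed_by"], exemplars)
```

The key property the sketch preserves is that no annotated training data is needed: the exemplar set comes entirely from walking the graph itself.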
Related papers
- Evaluating Knowledge Graph Based Retrieval Augmented Generation Methods under Knowledge Incompleteness [25.74411097212245]
Knowledge Graph based Retrieval-Augmented Generation (KG-RAG) is a technique that enhances Large Language Model (LLM) inference in tasks like Question Answering (QA).
Existing benchmarks do not adequately capture the impact of KG incompleteness on KG-RAG performance.
We demonstrate that KG-RAG methods are sensitive to KG incompleteness, highlighting the need for more robust approaches in realistic settings.
arXiv Detail & Related papers (2025-04-07T15:08:03Z) - KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented Generation Framework for Temporal Reasoning [18.96570718233786]
GraphRAG has proven highly effective in enhancing the performance of Large Language Models (LLMs) on tasks that require external knowledge.
This paper presents Knowledge Graph-Based Iterative Retrieval-Augmented Generation (KG-IRAG), a novel framework that integrates KGs with iterative reasoning.
Three new datasets are formed to evaluate KG-IRAG's performance, demonstrating its potential beyond traditional RAG applications.
arXiv Detail & Related papers (2025-03-18T13:11:43Z) - GFM-RAG: Graph Foundation Model for Retrieval Augmented Generation [84.41557981816077]
We introduce GFM-RAG, a novel graph foundation model (GFM) for retrieval augmented generation.
GFM-RAG is powered by an innovative graph neural network that reasons over graph structure to capture complex query-knowledge relationships.
It achieves state-of-the-art performance while maintaining efficiency and alignment with neural scaling laws.
arXiv Detail & Related papers (2025-02-03T07:04:29Z) - Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z) - Graphusion: A RAG Framework for Knowledge Graph Construction with a Global Perspective [13.905336639352404]
This work introduces Graphusion, a zero-shot Knowledge Graph framework from free text.
It contains three steps: in Step 1, we extract a list of seed entities using topic modeling to ensure that the final KG includes the most relevant entities.
In Step 2, we conduct candidate triplet extraction using LLMs; in Step 3, we design the novel fusion module that provides a global view of the extracted knowledge.
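The three Graphusion steps above form a simple pipeline: seed entities, candidate triplets, then fusion into a global view. The sketch below shows only that data flow; the paper's topic modeling and LLM extraction are replaced with trivial stand-ins (word frequency and a fixed candidate table), and all function names are illustrative assumptions, not the paper's API.

```python
from collections import Counter

def seed_entities(corpus, k=3):
    """Step 1 stand-in: pick the k most frequent capitalized tokens
    (the paper uses topic modeling here)."""
    words = [w for doc in corpus for w in doc.split() if w[0].isupper()]
    return [w for w, _ in Counter(words).most_common(k)]

def extract_triplets(corpus, seeds):
    """Step 2 stand-in: keep (head, relation, tail) candidates mentioning
    a seed entity. A real system would prompt an LLM per document."""
    candidates = [("BERT", "is_a", "Transformer"),
                  ("GPT", "is_a", "Transformer"),
                  ("BERT", "is_a", "Transformer")]  # duplicate on purpose
    return [t for t in candidates if t[0] in seeds or t[2] in seeds]

def fuse(triplets):
    """Step 3 stand-in: merge duplicates into a single global view
    (the paper's fusion module also resolves conflicts and discovers
    novel triplets)."""
    return sorted(set(triplets))

corpus = ["BERT is a Transformer model", "GPT is a Transformer model"]
kg = fuse(extract_triplets(corpus, seed_entities(corpus)))
```

The point of the fusion step is that triplets extracted per document are merged with a corpus-wide view, which is what distinguishes this design from purely local, per-document extraction.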
arXiv Detail & Related papers (2024-10-23T06:54:03Z) - Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency [59.6772484292295]
Knowledge graphs (KGs) generated by large language models (LLMs) are increasingly valuable for Retrieval-Augmented Generation (RAG) applications.
Existing KG extraction methods rely on prompt-based approaches, which are inefficient for processing large-scale corpora.
We propose SynthKG, a multi-step, document-level synthesis KG workflow based on LLMs.
We also design a novel graph-based retrieval framework for RAG.
arXiv Detail & Related papers (2024-10-22T00:47:54Z) - Graphusion: Leveraging Large Language Models for Scientific Knowledge Graph Fusion and Construction in NLP Education [14.368011453534596]
We introduce Graphusion, a zero-shot knowledge graph framework from free text.
The core fusion module provides a global view of triplets, incorporating entity merging, conflict resolution, and novel triplet discovery.
Our evaluation demonstrates that Graphusion surpasses supervised baselines by up to 10% in accuracy on link prediction.
arXiv Detail & Related papers (2024-07-15T15:13:49Z) - GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning [21.057810495833063]
We introduce GNN-RAG, a novel method for combining language understanding abilities of LLMs with the reasoning abilities of GNNs in a retrieval-augmented generation (RAG) style.
In our GNN-RAG framework, the GNN acts as a dense subgraph reasoner to extract useful graph information.
Experiments show that GNN-RAG achieves state-of-the-art performance in two widely used KGQA benchmarks.
arXiv Detail & Related papers (2024-05-30T15:14:24Z) - Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both agent and KG in incomplete KGQA (IKGQA).
arXiv Detail & Related papers (2024-04-23T04:47:22Z) - KC-GenRe: A Knowledge-constrained Generative Re-ranking Method Based on Large Language Models for Knowledge Graph Completion [34.81781468398916]
We introduce KC-GenRe, a knowledge-constrained generative re-ranking method based on generative large language models.
To overcome the mismatch issue, we formulate the KGC re-ranking task as a candidate identifier sorting generation problem.
To tackle the misordering issue, we develop a knowledge-guided interactive training method that enhances the identification and ranking of candidates.
To address the omission issue, we design a knowledge-augmented constrained inference method that enables contextual prompting and controlled generation.
arXiv Detail & Related papers (2024-03-26T09:36:59Z) - Gait Recognition in the Wild: A Large-scale Benchmark and NAS-based Baseline [95.88825497452716]
Gait benchmarks empower the research community to train and evaluate high-performance gait recognition systems.
GREW is the first large-scale dataset for gait recognition in the wild.
SPOSGait is the first NAS-based gait recognition model.
arXiv Detail & Related papers (2022-05-05T14:57:39Z) - Identify, Align, and Integrate: Matching Knowledge Graphs to Commonsense Reasoning Tasks [81.03233931066009]
It is critical to select a knowledge graph (KG) that is well-aligned with the given task's objective.
We show an approach to assess how well a candidate KG can correctly identify and accurately fill in gaps of reasoning for a task.
We show this KG-to-task match in 3 phases: knowledge-task identification, knowledge-task alignment, and knowledge-task integration.
arXiv Detail & Related papers (2021-04-20T18:23:45Z) - Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks [53.58077686470096]
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
arXiv Detail & Related papers (2020-04-13T15:43:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.