KG-Agent: An Efficient Autonomous Agent Framework for Complex Reasoning
over Knowledge Graph
- URL: http://arxiv.org/abs/2402.11163v1
- Date: Sat, 17 Feb 2024 02:07:49 GMT
- Title: KG-Agent: An Efficient Autonomous Agent Framework for Complex Reasoning
over Knowledge Graph
- Authors: Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, Yang Song, Chen Zhu, Hengshu
Zhu, Ji-Rong Wen
- Abstract summary: We propose an autonomous LLM-based agent framework, called KG-Agent.
In KG-Agent, we integrate the LLM, multifunctional toolbox, KG-based executor, and knowledge memory.
- To guarantee effectiveness, we leverage a programming language to formulate the multi-hop reasoning process over the KG.
- Score: 134.8631016845467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we aim to improve the reasoning ability of large language
models (LLMs) over knowledge graphs (KGs) to answer complex questions. Inspired
by existing methods that design the interaction strategy between LLMs and KG,
we propose an autonomous LLM-based agent framework, called KG-Agent, which
enables a small LLM to actively make decisions until finishing the reasoning
process over KGs. In KG-Agent, we integrate the LLM, multifunctional toolbox,
KG-based executor, and knowledge memory, and develop an iteration mechanism
that autonomously selects a tool and then updates the memory for reasoning over
the KG. To guarantee effectiveness, we leverage a programming language to formulate
the multi-hop reasoning process over the KG, and synthesize a code-based
instruction dataset to fine-tune the base LLM. Extensive experiments
demonstrate that LLaMA-7B tuned with only 10K samples can outperform
state-of-the-art methods that use larger LLMs or more data, on both in-domain
and out-of-domain datasets. Our code and data will be publicly released.
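
The abstract describes the agent loop (LLM decision, tool call, KG-based execution, memory update) only at a high level. The sketch below is a minimal Python illustration of that loop; the toolbox, memory class, and the stubbed-out LLM decision step are hypothetical stand-ins, not the released KG-Agent code.

```python
# Minimal sketch of a KG-Agent-style iteration loop: an LLM repeatedly selects a
# tool, a KG-based executor runs it, and the result is written to a knowledge
# memory until a final answer is produced. All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class KnowledgeMemory:
    """Keeps the question and the history of tool calls and results."""
    question: str
    history: list = field(default_factory=list)

    def update(self, tool_name, result):
        self.history.append({"tool": tool_name, "result": result})


def toy_toolbox():
    """A multifunctional toolbox; a real system would query a KG endpoint."""
    kg = {("Paris", "capital_of"): "France"}  # tiny in-memory stand-in for a KG
    return {
        "get_tail": lambda e, r: kg.get((e, r)),
        "finish": lambda answer: answer,
    }


def select_action(memory):
    """Placeholder for the fine-tuned LLM's decision step.

    A real agent would prompt the LLM with the question and memory and parse a
    program-like tool call from its output; here a 2-step plan is hard-coded.
    """
    if not memory.history:
        return "get_tail", ("Paris", "capital_of")
    return "finish", (memory.history[-1]["result"],)


def run_agent(question, max_steps=5):
    memory = KnowledgeMemory(question)
    tools = toy_toolbox()
    for _ in range(max_steps):
        name, args = select_action(memory)   # LLM picks the next tool
        result = tools[name](*args)          # KG-based executor runs it
        memory.update(name, result)          # knowledge memory is updated
        if name == "finish":
            return result
    return None


print(run_agent("Which country is Paris the capital of?"))  # -> France
```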
Related papers
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
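
The summary above only names the idea of well-formed chains. One plausible reading, shown in the toy sketch below, is to constrain every generated reasoning step to an edge that actually exists in the KG; the scoring function standing in for the LLM and the toy graph are purely illustrative, not DoG's actual decoding procedure.

```python
# Toy illustration of "well-formed chain" generation: each step of the chain is
# restricted to relations that exist in the KG for the current entity, so the
# resulting chain is faithful to the graph.
KG = {
    "Paris": {"capital_of": "France", "located_in": "Ile-de-France"},
    "France": {"currency": "Euro", "continent": "Europe"},
}


def generate_chain(start_entity, score_step, max_hops=2):
    """Greedily extend a chain, only ever choosing edges present in the KG."""
    chain, entity = [], start_entity
    for _ in range(max_hops):
        candidates = KG.get(entity, {})            # well-formedness constraint
        if not candidates:
            break
        relation = max(candidates, key=lambda r: score_step(entity, r))
        chain.append((entity, relation, candidates[relation]))
        entity = candidates[relation]
    return chain


# A hypothetical stand-in for the LLM's preference over candidate relations.
prefer = lambda e, r: 1.0 if r in ("capital_of", "currency") else 0.0
print(generate_chain("Paris", prefer))
# -> [('Paris', 'capital_of', 'France'), ('France', 'currency', 'Euro')]
```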
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs [72.89652710634051]
Knowledge graphs (KGs) complement Large Language Models (LLMs) by providing reliable, structured, domain-specific, and up-to-date external knowledge.
We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs.
arXiv Detail & Related papers (2024-07-31T06:01:24Z)
- EffiQA: Efficient Question-Answering with Strategic Multi-Model Collaboration on Knowledge Graphs [11.323661062578799]
EffiQA consists of three stages: global planning, efficient KG exploration, and self-reflection.
Empirical evidence on multiple KBQA benchmarks shows EffiQA's effectiveness.
We hope the proposed new framework will pave the way for efficient, knowledge-intensive querying.
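
The three stages are only named above; a minimal control-flow sketch, with placeholder implementations for each stage (none of these functions reflect EffiQA's real prompts or components), might look like this:

```python
# Schematic of a three-stage loop: global planning, efficient KG exploration,
# and self-reflection. Stage bodies are hypothetical stand-ins that only show
# the control flow.
def global_planning(question):
    """An LLM would decompose the question into exploration instructions."""
    return ["find the country", "find that country's currency"]


def explore_kg(plan):
    """A lightweight model or KG service would follow the plan over the graph."""
    kg = {"Paris": {"capital_of": "France"}, "France": {"currency": "Euro"}}
    paths, entity = [], "Paris"
    for _ in plan:
        relation, tail = next(iter(kg.get(entity, {}).items()), (None, None))
        if tail is None:
            break
        paths.append((entity, relation, tail))
        entity = tail
    return paths


def self_reflect(question, paths):
    """The LLM would judge whether the collected evidence answers the question."""
    return bool(paths), paths[-1][2] if paths else None


def pipeline(question, max_rounds=2):
    for _ in range(max_rounds):
        plan = global_planning(question)
        evidence = explore_kg(plan)
        ok, answer = self_reflect(question, evidence)
        if ok:
            return answer      # accepted by self-reflection
    return None                # would trigger re-planning in a real system


print(pipeline("Which currency is used in the country whose capital is Paris?"))
```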
arXiv Detail & Related papers (2024-06-03T11:56:07Z)
- Automated Commit Message Generation with Large Language Models: An Empirical Study and Beyond [24.151927600694066]
Commit Message Generation (CMG) approaches aim to automatically generate commit messages based on given code diffs.
This paper conducts the first comprehensive experiment to investigate how far we have come in applying Large Language Models (LLMs) to generate high-quality commit messages.
arXiv Detail & Related papers (2024-04-23T08:24:43Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats LLM as both Agent and KG in IKGQA.
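
As a rough illustration of the Thinking-Searching-Generating idea on an incomplete KG, the sketch below first searches the graph and falls back to LLM-style generation only when the needed edge is missing; all function bodies are hypothetical stand-ins, not GoG's actual prompts or interfaces.

```python
# Sketch of a Thinking-Searching-Generating loop over an *incomplete* KG:
# search the graph first, and only when the needed edge is missing let the LLM
# act as the "KG" and propose the missing triple.
INCOMPLETE_KG = {("Berlin", "capital_of"): "Germany"}   # missing the currency edge


def think(question, known):
    """An LLM would decide which (entity, relation) is still needed."""
    if not known:
        return ("Berlin", "capital_of")
    return (known[-1][2], "currency")


def search(entity, relation):
    """Look the edge up in the (incomplete) background KG."""
    tail = INCOMPLETE_KG.get((entity, relation))
    return (entity, relation, tail) if tail else None


def generate(entity, relation):
    """Fallback: a stand-in for the LLM proposing the missing triple itself."""
    llm_knowledge = {("Germany", "currency"): "Euro"}
    tail = llm_knowledge.get((entity, relation))
    return (entity, relation, tail) if tail else None


def answer(question, max_steps=3):
    known = []
    for _ in range(max_steps):
        entity, relation = think(question, known)     # Thinking
        triple = search(entity, relation)             # Searching
        if triple is None:
            triple = generate(entity, relation)       # Generating
        if triple is None:
            break
        known.append(triple)
    return known[-1][2] if known else None


print(answer("What currency is used in the country whose capital is Berlin?"))  # -> Euro
```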
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- From human experts to machines: An LLM supported approach to ontology and knowledge graph construction [0.0]
Large Language Models (LLMs) have recently gained popularity for their ability to understand and generate human-like natural language.
This work explores the (semi-)automatic construction of KGs facilitated by open-source LLMs.
arXiv Detail & Related papers (2024-03-13T08:50:15Z)
- Large Language Models Can Better Understand Knowledge Graphs Than We Thought [13.336418752729987]
Representing knowledge graph (KG) embeddings with model parameters becomes increasingly costly.
Current prompting methods often rely on a trial-and-error approach.
We show that unordered linearized triples are more effective for LLMs' understanding of KGs compared to fluent NL text.
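
That finding suggests prompting the LLM with raw, unordered triples rather than fluent verbalized sentences. A minimal linearization example follows; the exact prompt template and triple format used in the paper may differ.

```python
# Linearize KG triples into an order-agnostic, one-triple-per-line prompt block.
triples = [
    ("Alan_Turing", "field_of_work", "Computer_Science"),
    ("Alan_Turing", "educated_at", "Princeton_University"),
    ("Princeton_University", "located_in", "New_Jersey"),
]

linearized = "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)

prompt = (
    "Answer the question using the following knowledge graph triples.\n"
    f"{linearized}\n"
    "Question: Where is the university Alan Turing was educated at located?"
)
print(prompt)
```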
arXiv Detail & Related papers (2024-02-18T10:44:03Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters using 20,000 subgraphs paired with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
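
One way to read "imitating the GNN" is to mask self-attention so that serialized subgraph tokens only see themselves, their graph neighbours, and the question, which resembles GNN message passing inside the Transformer. The NumPy sketch below illustrates such a structure-aware mask; ReasoningLM's actual masking rules may differ.

```python
# Build a subgraph-aware attention mask and apply it to dummy attention logits.
import numpy as np

nodes = ["q", "Paris", "capital_of", "France"]        # question + subgraph tokens
edges = [("Paris", "capital_of"), ("capital_of", "France")]

idx = {n: i for i, n in enumerate(nodes)}
n = len(nodes)
mask = np.eye(n, dtype=bool)                          # every token sees itself
mask[idx["q"], :] = True                              # the question attends to everything
mask[:, idx["q"]] = True                              # and everything attends to the question
for a, b in edges:                                    # graph neighbours see each other
    mask[idx[a], idx[b]] = mask[idx[b], idx[a]] = True

scores = np.random.rand(n, n)                         # dummy attention logits
scores[~mask] = -np.inf                               # structure-aware masking
attn = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
print(np.round(attn, 2))
```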
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
- Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations [53.76682562935373]
We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools.
InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs.
arXiv Detail & Related papers (2023-08-31T07:36:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.