Let's Chat to Find the APIs: Connecting Human, LLM and Knowledge Graph
through AI Chain
- URL: http://arxiv.org/abs/2309.16134v1
- Date: Thu, 28 Sep 2023 03:31:01 GMT
- Title: Let's Chat to Find the APIs: Connecting Human, LLM and Knowledge Graph
through AI Chain
- Authors: Qing Huang, Zhenyu Wan, Zhenchang Xing, Changjing Wang, Jieshan Chen,
Xiwei Xu, Qinghua Lu
- Abstract summary: We propose a knowledge-guided query clarification approach for API recommendation.
We use a large language model (LLM) guided by a knowledge graph (KG) to overcome out-of-vocabulary (OOV) failures.
Our approach is designed as an AI chain that consists of five steps, each handled by a separate LLM call.
- Score: 21.27256145010061
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: API recommendation methods have evolved from literal and semantic keyword
matching to query expansion and query clarification. The latest query
clarification method is knowledge graph (KG)-based, but it suffers from
out-of-vocabulary (OOV) failures and rigid question templates. To address these
limitations, we propose a novel knowledge-guided query clarification approach
for API recommendation that leverages a large language model (LLM) guided by
a KG. We utilize the LLM as a neural knowledge base to overcome OOV failures,
generating fluent and appropriate clarification questions and options. We also
leverage the structured API knowledge and entity relationships stored in the KG
to filter out noise, and transfer the optimal clarification path from KG to the
LLM, increasing the efficiency of the clarification process. Our approach is
designed as an AI chain that consists of five steps, each handled by a separate
LLM call, to improve accuracy, efficiency, and fluency for query clarification
in API recommendation. We verify the usefulness of each unit in our AI chain;
every unit received high scores, close to a perfect 5. Compared to the
baselines, our approach shows a significant improvement in MRR, with a maximum
increase of 63.9% when the query statement is covered by the KG and 37.2% when
it is not. Ablation experiments reveal that the guidance of knowledge in the KG
and the knowledge-guided pathfinding strategy are crucial for our approach's
performance, resulting in 19.0% and 22.2% increases in MAP, respectively. Our
approach demonstrates a way to bridge the gap between KG and LLM, combining the
strengths of both while compensating for their respective weaknesses.
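To make the pipeline concrete, here is a minimal sketch of the five-step AI-chain idea in Python. The step order, prompts, and the `llm`/`kg` interfaces are assumptions for illustration, not the authors' implementation; `llm(prompt)` is assumed to return a string, and `kg` wraps the API knowledge graph.

```python
# A minimal sketch of a KG-guided, five-step clarification AI chain.
# All interfaces here are hypothetical stand-ins for the paper's components.

def ask_user(question: str, options: str) -> str:
    """Human in the loop: show the clarification question and read an answer."""
    print(question)
    print(options)
    return input("> ")

def clarify_and_recommend(query: str, llm, kg, max_rounds: int = 3):
    for _ in range(max_rounds):
        # Step 1 (LLM call): extract API-related entities from the query.
        entities = llm(f"List the API-related entities in: {query}")

        # Step 2: consult the KG for candidate clarification relations and pick
        # the most informative one -- the knowledge-guided clarification path.
        candidates = kg.clarification_relations(entities)
        if not candidates:
            break  # nothing left to clarify
        relation = max(candidates, key=kg.information_gain)

        # Steps 3-4 (LLM calls): generate a fluent question plus options,
        # grounded in KG facts so the wording never falls out of vocabulary.
        question = llm(f"Ask one clarification question about '{relation}' for: {query}")
        options = llm(f"Give three short answer options for: {question}")
        answer = ask_user(question, options)

        # Step 5 (LLM call): fold the user's answer back into the query.
        query = llm(f"Rewrite '{query}' given that {relation} is '{answer}'")

    return kg.rank_apis(query)  # recommend APIs for the clarified query
```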
Related papers
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
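One way to read the "well-formed chain" constraint: every hop of an LLM-proposed reasoning chain must be an actual KG edge, and consecutive hops must connect. A toy check under that reading (not DoG's actual constrained-decoding procedure):

```python
# Reject chains with hallucinated edges or disconnected hops.
def chain_is_well_formed(chain, kg_edges) -> bool:
    """chain: [(head, relation, tail), ...]; kg_edges: set of KG triples."""
    for i, triple in enumerate(chain):
        if triple not in kg_edges:
            return False  # hallucinated edge: not in the KG
        if i > 0 and chain[i - 1][2] != triple[0]:
            return False  # hops must connect tail-to-head
    return True

kg_edges = {("java.util.List", "has_method", "add"),
            ("add", "declared_in", "java.util.List")}
print(chain_is_well_formed([("java.util.List", "has_method", "add")], kg_edges))  # True
```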
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- Paths-over-Graph: Knowledge Graph Empowered Large Language Model Reasoning [19.442426875488675]
We propose Paths-over-Graph (PoG), a novel method that enhances Large Language Model (LLM) reasoning by integrating knowledge reasoning paths from KGs.
PoG tackles multi-hop and multi-entity questions through a three-phase dynamic multi-hop path exploration.
In experiments, PoG with GPT-3.5-Turbo surpasses ToG with GPT-4 by up to 23.9%.
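A simplified sketch of the path-exploration idea: start from the question entities, expand hop by hop, and let the LLM prune to a beam. The real three-phase procedure and prompts differ; `kg.neighbors` and `llm_score` are hypothetical interfaces.

```python
# Beam-style multi-hop path exploration over a KG with LLM pruning.
def explore_paths(seed_entities, kg, llm_score, max_hops=3, beam=5):
    """Each path is a list of (entity, relation_taken_to_reach_it) pairs."""
    paths = [[(e, None)] for e in seed_entities]   # start from question entities
    for _ in range(max_hops):                      # dynamic multi-hop expansion
        expanded = [path + [(tail, rel)]
                    for path in paths
                    for rel, tail in kg.neighbors(path[-1][0])]
        if not expanded:
            break
        # keep only the paths the LLM scores as most relevant to the question
        paths = sorted(expanded, key=llm_score, reverse=True)[:beam]
    return paths
```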
arXiv Detail & Related papers (2024-10-18T06:57:19Z)
- Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs [72.89652710634051]
Knowledge graphs (KGs) complement Large Language Models (LLMs) by providing reliable, structured, domain-specific, and up-to-date external knowledge.
We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs.
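As a rough skeleton, the traversal can be pictured as best-first tree search where each action adds one KG edge and a black-box LLM scores partial states. Everything named here (`llm_value`, `kg.neighbors`, the budget) is an assumption for illustration, not the paper's algorithm.

```python
import heapq

def tree_of_traversals(question, start_entities, kg, llm_value, budget=20):
    """Best-first search over KG traversal actions, scored by a black-box LLM."""
    root = tuple()                                   # no facts gathered yet
    frontier = [(-llm_value(question, root), root)]  # max-heap via negated scores
    best_score, best = float("-inf"), root
    while frontier and budget > 0:
        budget -= 1
        neg_score, facts = heapq.heappop(frontier)
        if -neg_score > best_score:
            best_score, best = -neg_score, facts
        entities = set(start_entities) | {t for (_, _, t) in facts}
        for e in entities:
            for rel, tail in kg.neighbors(e):        # each edge is one action
                child = facts + ((e, rel, tail),)
                heapq.heappush(frontier, (-llm_value(question, child), child))
    return best                # facts to condition the final answer on
```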
arXiv Detail & Related papers (2024-07-31T06:01:24Z)
- Knowledge Graph-Enhanced Large Language Models via Path Selection [58.228392005755026]
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications.
However, LLMs are known to generate factually inaccurate outputs, a.k.a. the hallucination problem.
We propose a principled framework, KELP, with three stages to handle this problem.
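A bare-bones version of the path-selection stage might embed the question and verbalized candidate KG paths, keeping the closest ones as prompt evidence; `embed` below stands in for whatever encoder the framework actually trains.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def select_paths(question, candidate_paths, embed, top_k=3):
    """Keep the KG paths whose verbalization is most similar to the question."""
    q_vec = embed(question)
    verbalized = [" -> ".join(map(str, p)) for p in candidate_paths]  # path -> text
    scored = sorted(zip(candidate_paths, verbalized),
                    key=lambda pv: cosine(q_vec, embed(pv[1])), reverse=True)
    return [p for p, _ in scored[:top_k]]  # injected into the prompt as evidence
```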
arXiv Detail & Related papers (2024-06-19T21:45:20Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both agent and KG in incomplete KGQA (IKGQA).
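The Thinking-Searching-Generating loop can be sketched as follows: search the incomplete KG first, and fall back to the LLM to generate a plausible triple when the KG has no edge. Prompt formats and verification in the real method are richer; `llm` and `kg.neighbors` are hypothetical interfaces.

```python
def generate_on_graph(question, entities, kg, llm, max_steps=4):
    facts = []
    for _ in range(max_steps):
        # Thinking: does the LLM believe the gathered facts suffice?
        if "yes" in llm(f"Can you answer '{question}' from {facts}? yes/no").lower():
            break
        for e in entities:
            found = kg.neighbors(e)                  # Searching: try the KG first
            if found:
                facts += [(e, r, t) for r, t in found]
            else:                                    # Generating: LLM acts as the KG
                # assume the LLM replies in "head|relation|tail" form
                facts.append(tuple(
                    llm(f"Give one factual triple 'head|relation|tail' about {e}").split("|")))
        entities = [t for (_, _, t) in facts]        # expand from newly reached entities
    return llm(f"Answer '{question}' using: {facts}")
```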
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- KG-Agent: An Efficient Autonomous Agent Framework for Complex Reasoning over Knowledge Graph [134.8631016845467]
We propose an autonomous LLM-based agent framework, called KG-Agent.
In KG-Agent, we integrate the LLM, multifunctional toolbox, KG-based executor, and knowledge memory.
To guarantee effectiveness, we leverage a program language to formulate the multi-hop reasoning process over the KG.
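Schematically, such an agent loop could look like the sketch below: the LLM emits one small program statement per step, a KG-backed executor runs it, and the result is appended to memory. The toolbox, instruction format, and stopping convention here are illustrative, not the framework's own.

```python
# Hypothetical toolbox: each tool is a small program primitive over the KG.
TOOLBOX = {
    "get_relations": lambda kg, e: kg.relations(e),
    "get_neighbors": lambda kg, e, r: kg.neighbors(e, r),
}

def kg_agent(question, kg, llm, max_steps=8):
    memory = [f"question: {question}"]
    for _ in range(max_steps):
        # the LLM writes one program step, e.g. "get_neighbors(ent123, has_method)"
        step = llm("Memory:\n" + "\n".join(memory) + "\nNext step (or FINISH[answer]):")
        if step.startswith("FINISH"):
            return step[len("FINISH["):-1]
        name, raw_args = step.split("(", 1)
        args = [a.strip() for a in raw_args.rstrip(")").split(",")]
        result = TOOLBOX[name](kg, *args)            # KG-based executor
        memory.append(f"{step} -> {result}")         # knowledge memory
    return None
```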
arXiv Detail & Related papers (2024-02-17T02:07:49Z)
- Enhancing Large Language Models with Pseudo- and Multisource- Knowledge Graphs for Open-ended Question Answering [23.88063210973303]
We propose a framework that combines Pseudo-Graph Generation and Atomic Knowledge Verification.
Compared to the baseline, this approach yields a minimum improvement of 11.5 in the ROUGE-L score for open-ended questions.
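One way to picture the generate-then-verify split: the LLM proposes a small "pseudo-graph" of triples, and only triples that some source KG can confirm survive into the answering prompt. The `llm` reply format and `kg.contains` below are assumptions, not the paper's exact interfaces.

```python
def pseudo_graph_qa(question, llm, kgs):
    # Pseudo-Graph Generation: triples the LLM *thinks* are relevant, one per line
    raw = llm(f"List relevant facts for '{question}' as 'head | relation | tail' lines:")
    proposed = [tuple(part.strip() for part in line.split("|"))
                for line in raw.splitlines() if line.count("|") == 2]

    # Atomic Knowledge Verification: keep a triple iff some source KG contains it
    verified = [t for t in proposed if any(kg.contains(*t) for kg in kgs)]
    return llm(f"Answer '{question}' using only these verified facts: {verified}")
```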
arXiv Detail & Related papers (2024-02-15T12:20:02Z)
- Evaluating and Enhancing Large Language Models for Conversational Reasoning on Knowledge Graphs [15.480976967871632]
We evaluate the conversational reasoning capabilities of the current state-of-the-art large language model (GPT-4) on knowledge graphs (KGs).
We introduce LLM-ARK, a grounded KG reasoning agent designed to deliver precise and adaptable predictions on KG paths.
LLaMA-2-7B-ARK outperforms the current state-of-the-art model by 5.28 percentage points, reaching 36.39% on the target@1 evaluation metric.
arXiv Detail & Related papers (2023-12-18T15:23:06Z)
- Let's Discover More API Relations: A Large Language Model-based AI Chain for Unsupervised API Relation Inference [19.05884373802318]
We propose utilizing large language models (LLMs) as a neural knowledge base for API relation inference.
This approach leverages the entire Web used to pre-train LLMs as a knowledge base and is insensitive to the context and complexity of input texts.
We achieve an average F1 value of 0.76 across the three datasets, significantly higher than the state-of-the-art method's average F1 value of 0.40.
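At its simplest, using the LLM as a neural knowledge base for relation inference reduces to a constrained classification prompt over an API pair; the label set and prompt below are assumptions (the paper itself decomposes the task into an AI chain of smaller prompts).

```python
# Hypothetical relation labels; the paper's taxonomy may differ.
RELATIONS = ["function similarity", "behavior difference", "logic constraint", "none"]

def infer_api_relation(api_a: str, api_b: str, llm) -> str:
    prompt = (f"Which one relation holds between `{api_a}` and `{api_b}`?\n"
              f"Choose exactly one of: {', '.join(RELATIONS)}.")
    answer = llm(prompt).strip().lower()
    # fall back to "none" if the reply names no known label
    return next((r for r in RELATIONS if r in answer), "none")

# e.g. infer_api_relation("StringBuilder.append", "String.concat", llm)
```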
arXiv Detail & Related papers (2023-11-02T14:25:00Z)
- Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering [16.434098552925427]
We study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task.
We propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements.
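The three stages read naturally as a pipeline: retrieve question-relevant triples, rewrite them into fluent text (the answer-sensitive KG-to-Text step), and answer over that text. A schematic sketch with hypothetical `retriever`, `rewriter`, and `reader` components:

```python
def retrieve_rewrite_answer(question, kg, retriever, rewriter, reader):
    triples = retriever(question, kg)              # Retrieve: relevant KG subgraph
    passage = rewriter(question, triples)          # Rewrite: triples -> fluent text
    return reader(question, passage)               # Answer: read the textualized KG

# e.g. the rewriter could turn ("Paris", "capital_of", "France")
# into "Paris is the capital of France." before the reader sees it.
```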
arXiv Detail & Related papers (2023-09-20T10:42:08Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches the vast entity-pair space to extract diverse, accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
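A toy rendition of the harvesting loop: sample candidate entity pairs from the LM under the relation definition, and keep only pairs the LM scores consistently. BertNet's actual prompt-paraphrase search and consistency scoring are far more careful; `llm_sample` and `llm_score` are placeholder helpers.

```python
def harvest_relation(definition: str, llm_sample, llm_score,
                     n_samples: int = 100, threshold: float = 0.8):
    pairs = set()
    for _ in range(n_samples):
        head, tail = llm_sample(f"Give an entity pair where: {definition}")
        # keep a pair only if the LM consistently affirms it under the definition
        if llm_score(f"Is it true that ({head}, {tail}) satisfies '{definition}'?") >= threshold:
            pairs.add((head, tail))
    return pairs   # harvested edges for this relation

# e.g. definition = "X is a person who is capable of Y"
```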
arXiv Detail & Related papers (2022-06-28T19:46:29Z)