Prompt-Time Symbolic Knowledge Capture with Large Language Models
- URL: http://arxiv.org/abs/2402.00414v1
- Date: Thu, 1 Feb 2024 08:15:28 GMT
- Title: Prompt-Time Symbolic Knowledge Capture with Large Language Models
- Authors: Tolga Çöplü, Arto Bendiken, Andrii Skomorokhov, Eduard Bateiko, Stephen Cobb, Joshua J. Bouw (Haltia, Inc.)
- Abstract summary: Augmenting large language models (LLMs) with user-specific knowledge is crucial for real-world applications, such as personal AI assistants.
This paper investigates utilizing the existing LLM capabilities to enable prompt-driven knowledge capture.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Augmenting large language models (LLMs) with user-specific knowledge is
crucial for real-world applications, such as personal AI assistants. However,
LLMs inherently lack mechanisms for prompt-driven knowledge capture. This paper
investigates utilizing the existing LLM capabilities to enable prompt-driven
knowledge capture, with a particular emphasis on knowledge graphs. We address
this challenge by focusing on prompt-to-triple (P2T) generation. We explore
three methods: zero-shot prompting, few-shot prompting, and fine-tuning, and
then assess their performance via a specialized synthetic dataset. Our code and
datasets are publicly available at https://github.com/HaltiaAI/paper-PTSKC.
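The paper's core task, prompt-to-triple (P2T) generation, maps a free-form user prompt onto (subject, predicate, object) triples for a personal knowledge graph. As a rough illustration of the few-shot prompting variant, the Python sketch below assembles a few-shot prompt and parses the model's completion into triples; the `complete` callable, the example triples, and the line-based output format are illustrative assumptions, not the paper's actual prompt template or dataset.

```python
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]

# Illustrative few-shot examples; the paper's actual prompts and dataset differ.
FEW_SHOT_EXAMPLES = [
    ("My sister Alice lives in Berlin.",
     [("Alice", "siblingOf", "user"), ("Alice", "livesIn", "Berlin")]),
    ("I work at Haltia as an engineer.",
     [("user", "worksAt", "Haltia"), ("user", "hasRole", "engineer")]),
]

def build_p2t_prompt(user_prompt: str) -> str:
    """Assemble a few-shot prompt asking the model to emit one triple per line."""
    lines = ["Extract (subject, predicate, object) triples from the prompt.", ""]
    for text, triples in FEW_SHOT_EXAMPLES:
        lines.append(f"Prompt: {text}")
        lines += [f"Triple: {s} | {p} | {o}" for s, p, o in triples]
        lines.append("")
    lines.append(f"Prompt: {user_prompt}")
    lines.append("Triple:")
    return "\n".join(lines)

def parse_triples(completion: str) -> List[Triple]:
    """Parse 'subject | predicate | object' lines from the model's completion."""
    triples: List[Triple] = []
    for line in completion.splitlines():
        line = line.removeprefix("Triple:").strip()
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            triples.append((parts[0], parts[1], parts[2]))
    return triples

def prompt_to_triples(user_prompt: str, complete: Callable[[str], str]) -> List[Triple]:
    """Run P2T generation against any text-completion backend passed in as `complete`."""
    return parse_triples(complete(build_p2t_prompt(user_prompt)))
```

In this sketch, the zero-shot variant would simply drop FEW_SHOT_EXAMPLES and rely on the instruction alone, while the fine-tuning variant trains the model on prompt/triple pairs so that no in-context examples are needed at inference time.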
Related papers
- Prompt-Time Ontology-Driven Symbolic Knowledge Capture with Large Language Models [0.0]
This paper explores capturing personal information from user prompts using knowledge-graph approaches.
We use a subset of the KNOW ontology, which models personal information, to train the language model on these concepts.
We then evaluate the success of knowledge capture using a specially constructed dataset.
arXiv Detail & Related papers (2024-05-22T21:40:34Z) - Prompting Large Language Models with Knowledge Graphs for Question Answering Involving Long-tail Facts [50.06633829833144]
Large Language Models (LLMs) are effective in performing various NLP tasks, but struggle to handle tasks that require extensive, real-world knowledge.
We propose a benchmark that requires knowledge of long-tail facts for answering the involved questions.
Our experiments show that LLMs alone struggle with answering these questions, especially when the long-tail level is high or rich knowledge is required.
arXiv Detail & Related papers (2024-05-10T15:10:20Z) - Infusing Knowledge into Large Language Models with Contextual Prompts [5.865016596356753]
We propose a simple yet generalisable approach for knowledge infusion by generating prompts from the context in the input text.
Our experiments show the effectiveness of our approach, which we evaluate by probing the fine-tuned LLMs.
arXiv Detail & Related papers (2024-03-03T11:19:26Z) - Learning to Prompt with Text Only Supervision for Vision-Language Models [107.282881515667]
One branch of methods adapts CLIP by learning prompts using visual information.
An alternative approach resorts to training-free methods by generating class descriptions from large language models.
We propose to combine the strengths of both streams by learning prompts using only text data.
arXiv Detail & Related papers (2024-01-04T18:59:49Z) - KnowGPT: Knowledge Graph based Prompting for Large Language Models [28.605161596626875]
We introduce a Knowledge Graph based PrompTing framework, namely KnowGPT, to enhance Large Language Models with domain knowledge.
KnowGPT contains a knowledge extraction module to extract the most informative knowledge from KGs, and a context-aware prompt construction module to automatically convert extracted knowledge into effective prompts.
KnowGPT achieves a 92.6% accuracy on the OpenBookQA leaderboard, comparable to human-level performance.
arXiv Detail & Related papers (2023-12-11T07:56:25Z) - Knowledge Plugins: Enhancing Large Language Models for Domain-Specific
Recommendations [50.81844184210381]
We propose a general paradigm that augments large language models with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
This paradigm relies on a domain knowledge extractor, working in three steps: 1) preparing effective knowledge for the task; 2) selecting the knowledge for each specific sample; and 3) expressing the knowledge in an LLM-understandable way.
arXiv Detail & Related papers (2023-11-16T07:09:38Z) - RET-LLM: Towards a General Read-Write Memory for Large Language Models [53.288356721954514]
RET-LLM is a novel framework that equips large language models with a general write-read memory unit.
Inspired by Davidsonian semantics theory, we extract and save knowledge in the form of triplets.
Our framework exhibits robust performance in handling temporal-based question answering tasks.
arXiv Detail & Related papers (2023-05-23T17:53:38Z) - Augmented Large Language Models with Parametric Knowledge Guiding [72.71468058502228]
Large Language Models (LLMs) have significantly advanced natural language processing (NLP) with their impressive language understanding and generation capabilities.
However, their performance may be suboptimal for domain-specific tasks that require specialized knowledge, due to limited exposure to the relevant data.
We propose the novel Parametric Knowledge Guiding (PKG) framework, which equips LLMs with a knowledge-guiding module to access relevant knowledge.
arXiv Detail & Related papers (2023-05-08T15:05:16Z) - Self-Prompting Large Language Models for Zero-Shot Open-Domain QA [67.08732962244301]
Open-Domain Question Answering (ODQA) aims to answer questions without explicitly providing background documents.
This task becomes notably challenging in a zero-shot setting where no data is available to train tailored retrieval-reader models.
We propose a Self-Prompting framework to explicitly utilize the massive knowledge encoded in the parameters of Large Language Models.
arXiv Detail & Related papers (2022-12-16T18:23:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.