Large Language Models as Source Planner for Personalized
Knowledge-grounded Dialogue
- URL: http://arxiv.org/abs/2310.08840v1
- Date: Fri, 13 Oct 2023 03:38:38 GMT
- Title: Large Language Models as Source Planner for Personalized
Knowledge-grounded Dialogue
- Authors: Hongru Wang, Minda Hu, Yang Deng, Rui Wang, Fei Mi, Weichao Wang,
Yasheng Wang, Wai-Chung Kwan, Irwin King, Kam-Fai Wong
- Abstract summary: SAFARI is a novel framework that leverages LLMs to plan, understand, and incorporate multiple knowledge sources under both supervised and unsupervised settings.
We construct a personalized knowledge-grounded dialogue dataset, Knowledge Behind Persona (KBP).
Experimental results on the KBP dataset demonstrate that the SAFARI framework can effectively produce persona-consistent and knowledge-enhanced responses.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Open-domain dialogue systems usually require different sources of knowledge
to generate more informative and evidential responses. However, existing
knowledge-grounded dialogue systems either focus on a single knowledge source
or overlook the dependency between multiple sources of knowledge, which may
result in generating inconsistent or even paradoxical responses. To incorporate
multiple knowledge sources and dependencies between them, we propose SAFARI, a
novel framework that leverages the exceptional capabilities of large language
models (LLMs) in planning, understanding, and incorporating knowledge sources
under both supervised and unsupervised settings. Specifically, SAFARI decouples
knowledge grounding over multiple sources from response generation, which
allows easy extension to various knowledge sources, including the option of
using no source at all. To study the problem, we construct a personalized
knowledge-grounded dialogue dataset, Knowledge Behind Persona (KBP), which is
the first to consider the dependency
between persona and implicit knowledge. Experimental results on the KBP dataset
demonstrate that the SAFARI framework can effectively produce
persona-consistent and knowledge-enhanced responses.
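Read literally, the decoupled design amounts to three steps per turn: decide which knowledge sources (if any) the reply needs, retrieve from each selected source, then generate a response conditioned on what was retrieved. The sketch below illustrates that control flow only; the function names, prompts, and toy corpora are hypothetical stand-ins, not the paper's actual implementation.

```python
# Minimal sketch of a SAFARI-style decoupled pipeline. All names, prompts,
# and the toy corpora below are hypothetical illustrations.
from typing import Callable

LLM = Callable[[str], str]  # any text-in, text-out model

def plan_sources(llm: LLM, dialogue: str) -> list[str]:
    """Step 1: ask the LLM which sources (if any) the reply needs."""
    decision = llm(
        "Given the dialogue below, list the knowledge sources needed to "
        "reply (choose from: persona, knowledge, NULL), noting that "
        "knowledge depends on the selected persona.\n" + dialogue
    )
    return [] if "NULL" in decision else decision.split()

def retrieve(source: str, dialogue: str) -> str:
    """Step 2: fetch grounding text from the chosen source (stub retriever)."""
    corpora = {
        "persona": "I am a vegetarian.",
        "knowledge": "Vegetarians do not eat meat.",
    }
    return corpora.get(source, "")

def generate(llm: LLM, dialogue: str, grounding: list[str]) -> str:
    """Step 3: condition response generation on the retrieved grounding."""
    prompt = ("Dialogue:\n" + dialogue
              + "\nGrounding:\n" + "\n".join(grounding)
              + "\nResponse:")
    return llm(prompt)

def safari_turn(llm: LLM, dialogue: str) -> str:
    sources = plan_sources(llm, dialogue)  # may be empty: no source used
    grounding = [retrieve(s, dialogue) for s in sources]
    return generate(llm, dialogue, grounding)
```

Under the abstract's two settings, the planning step would presumably be either fine-tuned to emit the source list (supervised) or prompted directly (unsupervised).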
Related papers
- A Knowledge Plug-and-Play Test Bed for Open-domain Dialogue Generation [51.31429493814664]
We present a benchmark named multi-source Wizard of Wikipedia for evaluating multi-source dialogue knowledge selection and response generation.
We propose a new challenge, dialogue knowledge plug-and-play, which aims to test an already trained dialogue model on using new support knowledge from previously unseen sources.
arXiv Detail & Related papers (2024-03-06T06:54:02Z)
- Semi-Structured Chain-of-Thought: Integrating Multiple Sources of Knowledge for Improved Language Model Reasoning [10.839645156881573]
We introduce a novel semi-structured prompting approach that seamlessly integrates the model's parametric memory with unstructured knowledge from text documents and structured knowledge from knowledge graphs.
Experimental results on open-domain multi-hop question answering datasets demonstrate that our prompting method significantly surpasses existing techniques.
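The summary suggests a prompt that mixes three evidence channels: parametric memory, free text, and knowledge-graph triples. As a rough illustration (the question, template, and triples below are hypothetical, not the paper's actual prompt), such a prompt might look like:

```python
# Hypothetical semi-structured prompt combining text passages and KG triples;
# the paper's actual template, data, and triple linearization may differ.
question = "In which country was the director of Inception born?"
passages = ["Inception is a 2010 film directed by Christopher Nolan."]
triples = [("Christopher Nolan", "place_of_birth", "London")]

prompt = (
    "Answer the question step by step, citing the evidence used for each hop.\n"
    f"Question: {question}\n"
    "Text evidence:\n"
    + "\n".join(f"- {p}" for p in passages)
    + "\nGraph evidence:\n"
    + "\n".join(f"- ({s}, {r}, {o})" for s, r, o in triples)
    + "\nReasoning:"
)
print(prompt)
# Hops covered by neither evidence list are left to the model's
# parametric memory, per the integration the summary describes.
```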
arXiv Detail & Related papers (2023-11-14T19:53:53Z) - DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain
Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z)
- Joint Reasoning on Hybrid-knowledge sources for Task-Oriented Dialog [12.081212540168055]
We present a modified version of the MultiWOZ-based dataset prepared by SeKnow to demonstrate how current methods suffer a significant degradation in performance.
In line with recent work exploiting pre-trained language models, we fine-tune a BART-based model using prompts for the task of querying knowledge sources.
We demonstrate that our model is robust to perturbations to knowledge modality (source of information) and that it can fuse information from structured as well as unstructured knowledge to generate responses.
arXiv Detail & Related papers (2022-10-13T18:49:59Z)
- Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model [63.461030694700014]
We propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD).
The proposed DKMD consists of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation.
Experiments on a public dataset verify the superiority of the proposed DKMD over state-of-the-art competitors.
arXiv Detail & Related papers (2022-07-16T13:02:54Z)
- Knowledge-Grounded Dialogue Generation with a Unified Knowledge Representation [78.85622982191522]
Existing systems perform poorly on unseen topics due to the limited topics covered in the training data.
We present PLUG, a language model that homogenizes different knowledge sources to a unified knowledge representation.
It achieves performance comparable to state-of-the-art methods under a fully supervised setting.
arXiv Detail & Related papers (2021-12-15T07:11:02Z)