Dynamically Retrieving Knowledge via Query Generation for informative dialogue response
- URL: http://arxiv.org/abs/2208.00128v1
- Date: Sat, 30 Jul 2022 03:05:43 GMT
- Title: Dynamically Retrieving Knowledge via Query Generation for informative dialogue response
- Authors: Zhongtian Hu, Yangqi Chen, Yushuang Liu and Lifang Wang
- Abstract summary: We design a knowledge-driven dialogue system named DRKQG (Dynamically Retrieving Knowledge via Query Generation for informative dialogue response).
First, a time-aware mechanism is used to capture context information and generate a query for retrieving knowledge.
Then, we integrate a copy mechanism with Transformers, which allows the response generation module to produce responses derived from the context and the retrieved knowledge.
- Score: 7.196104022425989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge-driven dialogue generation has recently made remarkable
breakthroughs. Compared with general dialogue systems, knowledge-driven
dialogue systems can generate more informative and knowledgeable responses
from pre-provided knowledge. In practical applications, however, the dialogue
system cannot be provided with the corresponding knowledge in advance. To
address this problem, we design a knowledge-driven dialogue system named
DRKQG (Dynamically Retrieving Knowledge via Query Generation for informative
dialogue response). The system consists of two modules: a query generation
module and a dialogue generation module. First, a time-aware mechanism
captures context information and generates a query for retrieving knowledge.
Then, we integrate a copy mechanism with Transformers, which allows the
response generation module to produce responses derived from both the context
and the retrieved knowledge. Experimental results from LIC2022, the Language
and Intelligence Technology Competition, show that our system outperforms the
baseline model by a large margin on automatic evaluation metrics, while human
evaluation by the Baidu linguistics team shows that our system achieves
impressive results on the Factually Correct and Knowledgeable criteria.
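To make the two-module design concrete, here is a minimal illustrative sketch of such a pipeline. The time-aware encoder, retriever, and copy-mechanism generator are replaced by stand-in logic; all names and interfaces are hypothetical, not the authors' implementation.

```python
# Illustrative sketch of a DRKQG-style two-module pipeline.
# The retrieval backend and generation models here are stand-ins.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str
    text: str

def generate_query(context: list[Turn]) -> str:
    """Query generation module (stand-in): in the paper a time-aware
    encoder reads the dialogue context; here we simply weight recent
    turns more heavily and keep their content words."""
    words: list[str] = []
    for recency, turn in enumerate(reversed(context)):
        if recency < 2:  # crude "time-aware" bias toward recent turns
            words.extend(w for w in turn.text.lower().split() if len(w) > 3)
    return " ".join(dict.fromkeys(words))  # dedupe, keep order

def retrieve(query: str, knowledge_base: list[str]) -> str:
    """Return the knowledge snippet with the most query-word overlap."""
    q = set(query.split())
    return max(knowledge_base, key=lambda k: len(q & set(k.lower().split())))

def generate_response(context: list[Turn], knowledge: str) -> str:
    """Dialogue generation module (stand-in): a real system would run a
    Transformer with a copy mechanism over context + knowledge."""
    return f"Based on what I found: {knowledge}"

context = [Turn("user", "Tell me about the Eiffel Tower height")]
kb = ["The Eiffel Tower is 330 metres tall.", "Paris is the capital of France."]
query = generate_query(context)
print(generate_response(context, retrieve(query, kb)))
```

A real system would replace each stub with a trained component; the point is the data flow: context → query → retrieved knowledge → response.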
Related papers
- Improving Factual Consistency for Knowledge-Grounded Dialogue Systems via Knowledge Enhancement and Alignment [77.56326872997407]
Knowledge-grounded dialogue systems based on pretrained language models (PLMs) are prone to generating responses that are factually inconsistent with the provided knowledge source.
Inspired by previous work which identified that feed-forward networks (FFNs) within Transformers are responsible for factual knowledge expressions, we investigate two methods to efficiently improve the factual expression capability.
arXiv Detail & Related papers (2023-10-12T14:44:05Z)
- PK-Chat: Pointer Network Guided Knowledge Driven Generative Dialogue Model [79.64376762489164]
PK-Chat is a pointer-network-guided generative dialogue model that incorporates a unified pretrained language model and a pointer network over knowledge graphs.
The words PK-Chat generates are derived both from predictions over a word list and from direct predictions over the external knowledge graph.
Based on PK-Chat, a dialogue system is built for academic scenarios in the geosciences.
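A pointer network of this kind typically mixes a vocabulary distribution with a copy distribution over knowledge-graph tokens. The sketch below shows that mixing step only; it is a hypothetical illustration, not PK-Chat's code.

```python
# Hypothetical pointer-style mixing: blend a vocabulary prediction with a
# copy distribution over knowledge-graph tokens via a gate p_gen.
import numpy as np

def mixed_distribution(p_vocab: np.ndarray,      # softmax over vocab ids
                       copy_scores: np.ndarray,  # attention over KG tokens
                       kg_token_ids: np.ndarray, # vocab id of each KG token
                       p_gen: float) -> np.ndarray:
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * copy mass on w."""
    p_copy = np.zeros_like(p_vocab)
    np.add.at(p_copy, kg_token_ids, copy_scores)  # scatter-add attention mass
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy

vocab_size = 10
p_vocab = np.full(vocab_size, 1.0 / vocab_size)
copy_scores = np.array([0.7, 0.3])   # attention over two KG tokens
kg_token_ids = np.array([4, 7])      # those tokens' vocab ids
print(mixed_distribution(p_vocab, copy_scores, kg_token_ids, p_gen=0.6))
```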
arXiv Detail & Related papers (2023-04-02T18:23:13Z)
- Search-Engine-augmented Dialogue Response Generation with Cheaply Supervised Query Production [98.98161995555485]
We propose a dialogue model that can access the vast and dynamic information from any search engine for response generation.
As the core module, a query producer is used to generate queries from a dialogue context to interact with a search engine.
Experiments show that our query producer can achieve R@1 and R@5 rates of 62.4% and 74.8% for retrieving gold knowledge.
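For reference, an R@k figure like those above is computed by checking whether the gold knowledge appears among the top-k retrieved results per query; a toy sketch with hypothetical data:

```python
# Compute recall@k: fraction of queries whose gold item is in the top-k.
def recall_at_k(ranked_results: list[list[str]], gold: list[str], k: int) -> float:
    hits = sum(1 for results, g in zip(ranked_results, gold) if g in results[:k])
    return hits / len(gold)

ranked = [["doc_a", "doc_b", "doc_c"], ["doc_d", "doc_e", "doc_f"]]
gold = ["doc_b", "doc_f"]
print(recall_at_k(ranked, gold, k=1))  # 0.0
print(recall_at_k(ranked, gold, k=3))  # 1.0
```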
arXiv Detail & Related papers (2023-02-16T01:58:10Z)
- Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue [54.98184262897166]
We investigate how the order of the knowledge set can influence autoregressive dialogue systems' responses.
We propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input.
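One plausible instantiation of such a position-embedding modification is to restart position ids at the beginning of every knowledge snippet, so each snippet occupies the same positions regardless of where it sits in the concatenated input; a hypothetical sketch:

```python
# Restart position ids per knowledge snippet so the encoder sees each
# snippet at the same positions regardless of snippet order.
def order_invariant_position_ids(snippet_lengths: list[int]) -> list[int]:
    """E.g. lengths [3, 2] -> [0, 1, 2, 0, 1] instead of [0, 1, 2, 3, 4]."""
    ids: list[int] = []
    for length in snippet_lengths:
        ids.extend(range(length))
    return ids

print(order_invariant_position_ids([3, 2]))  # [0, 1, 2, 0, 1]
```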
arXiv Detail & Related papers (2023-02-12T10:13:00Z)
- KPT: Keyword-guided Pre-training for Grounded Dialog Generation [82.68787152707455]
We propose KPT (Keyword-guided Pre-Training), a novel self-supervised pre-training method for grounded dialog generation.
Specifically, we use a pre-trained language model to extract the most uncertain tokens in the dialog as keywords.
We conduct extensive experiments on various few-shot knowledge-grounded generation tasks, including grounding on dialog acts, knowledge graphs, persona descriptions, and Wikipedia passages.
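The keyword-extraction step can be pictured as ranking tokens by surprisal under a language model and keeping the most uncertain ones. The sketch below uses toy, precomputed log-probabilities; a real system would read them off the pre-trained LM.

```python
# Treat the tokens assigned the lowest probability (highest surprisal)
# by a language model as the dialog's keywords.
import math

def extract_keywords(tokens: list[str], log_probs: list[float], top_k: int) -> list[str]:
    """Rank tokens by surprisal (-log p) and keep the top_k most uncertain."""
    ranked = sorted(zip(tokens, log_probs), key=lambda t: t[1])  # low log-prob first
    return [tok for tok, _ in ranked[:top_k]]

tokens = ["i", "visited", "the", "louvre", "yesterday"]
log_probs = [math.log(p) for p in [0.30, 0.10, 0.40, 0.01, 0.08]]
print(extract_keywords(tokens, log_probs, top_k=2))  # ['louvre', 'yesterday']
```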
arXiv Detail & Related papers (2022-12-04T04:05:01Z)
- Reason first, then respond: Modular Generation for Knowledge-infused Dialogue [43.64093692715295]
Large language models can produce fluent dialogue but often hallucinate factual inaccuracies.
We propose a modular model, Knowledge to Response, for incorporating knowledge into conversational agents.
In detailed experiments, we find that such a model hallucinates less in knowledge-grounded dialogue tasks.
arXiv Detail & Related papers (2021-11-09T15:29:43Z)
- Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters [52.725200145600624]
We propose KnowExpert to bypass the retrieval process by injecting prior knowledge into the pre-trained language models with lightweight adapters.
Experimental results show that KnowExpert performs comparably with the retrieval-based baselines.
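A lightweight adapter of the kind this description suggests is a small bottleneck network with a residual connection, trained while the host model stays frozen; a minimal PyTorch sketch with illustrative dimensions, not KnowExpert's actual code:

```python
# Bottleneck adapter: down-project, nonlinearity, up-project, residual.
# Only these few parameters are trained; the host model stays frozen.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # project down
        self.up = nn.Linear(bottleneck, hidden_size)    # project back up
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen model's representation intact.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

x = torch.randn(2, 10, 768)  # (batch, seq_len, hidden)
print(Adapter()(x).shape)    # torch.Size([2, 10, 768])
```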
arXiv Detail & Related papers (2021-05-13T12:33:23Z)
- Prediction, Selection, and Generation: Exploration of Knowledge-Driven Conversation System [24.537862151735006]
In open-domain conversational systems, it is important but challenging to leverage background knowledge.
We combine knowledge bases and a pre-trained model to propose a knowledge-driven conversation system.
We study the performance factors that may affect the generation of knowledge-driven dialogue.
arXiv Detail & Related papers (2021-04-23T07:59:55Z)
- BERT-CoQAC: BERT-based Conversational Question Answering in Context [10.811729691130349]
We introduce a framework based on a publicly available pre-trained language model, BERT, for incorporating history turns into the system.
Experimental results reveal that our framework is comparable in performance with state-of-the-art models on the QuAC leaderboard.
arXiv Detail & Related papers (2021-04-23T03:05:17Z)
- Learning to Retrieve Entity-Aware Knowledge and Generate Responses with Copy Mechanism for Task-Oriented Dialogue Systems [43.57597820119909]
We address task-oriented conversational modeling with unstructured knowledge access, track 1 of the 9th Dialogue System Technology Challenge (DSTC 9).
The challenge is divided into three subtasks: (1) knowledge-seeking turn detection, (2) knowledge selection, and (3) knowledge-grounded response generation.
We use pre-trained language models, ELECTRA and RoBERTa, as our base encoders for the different subtasks.
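The three subtasks compose into a simple pipeline; the stub functions below stand in for the ELECTRA/RoBERTa models and are purely hypothetical:

```python
# Three-stage DSTC9-style pipeline with stub components.
def needs_knowledge(turn: str) -> bool:
    """Subtask 1: knowledge-seeking turn detection (stub heuristic)."""
    return turn.rstrip().endswith("?")

def select_knowledge(turn: str, snippets: list[str]) -> str:
    """Subtask 2: knowledge selection by naive word overlap (stub)."""
    words = set(turn.lower().split())
    return max(snippets, key=lambda s: len(words & set(s.lower().split())))

def generate(turn: str, knowledge: str | None) -> str:
    """Subtask 3: knowledge-grounded response generation (stub)."""
    return f"According to our info: {knowledge}" if knowledge else "Sure, noted."

snippets = ["Check-in starts at 3 pm.", "Breakfast is served until 10 am."]
turn = "Until when is breakfast served?"
k = select_knowledge(turn, snippets) if needs_knowledge(turn) else None
print(generate(turn, k))
```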
arXiv Detail & Related papers (2020-12-22T11:36:37Z)