HyKnow: End-to-End Task-Oriented Dialog Modeling with Hybrid Knowledge
Management
- URL: http://arxiv.org/abs/2105.06041v1
- Date: Thu, 13 May 2021 01:58:39 GMT
- Title: HyKnow: End-to-End Task-Oriented Dialog Modeling with Hybrid Knowledge
Management
- Authors: Silin Gao, Ryuichi Takanobu, Wei Peng, Qun Liu, Minlie Huang
- Abstract summary: We propose a TOD system with hybrid knowledge management, HyKnow.
It extends the belief state to manage both structured and unstructured knowledge.
It is the first end-to-end model that jointly optimizes dialog modeling grounded on these two kinds of knowledge.
- Score: 58.82499963373537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task-oriented dialog (TOD) systems typically manage structured knowledge
(e.g. ontologies and databases) to guide the goal-oriented conversations.
However, they fall short of handling dialog turns grounded on unstructured
knowledge (e.g. reviews and documents). In this paper, we formulate a task of
modeling TOD grounded on both structured and unstructured knowledge. To address
this task, we propose a TOD system with hybrid knowledge management, HyKnow. It
extends the belief state to manage both structured and unstructured knowledge,
and is the first end-to-end model that jointly optimizes dialog modeling
grounded on these two kinds of knowledge. We conduct experiments on a modified
version of the MultiWOZ 2.1 dataset, where dialogs are grounded on hybrid
knowledge. Experimental results show that HyKnow has strong end-to-end
performance compared to existing TOD systems. It also outperforms the pipeline
knowledge management schemes, with higher unstructured knowledge retrieval
accuracy.
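As a rough illustration of the hybrid knowledge management described above, the sketch below keeps a belief state with a structured slot-value part (used to query a database) and an unstructured topic part (used to retrieve documents). The names, data, and overlap-based retrieval are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch (assumed names/data) of a belief state extended to cover both
# structured slot-value constraints and an unstructured "topic" for document
# retrieval, in the spirit of HyKnow's hybrid knowledge management.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HybridBeliefState:
    slots: dict = field(default_factory=dict)   # structured part, e.g. {"hotel-area": "east"}
    topic: Optional[str] = None                 # unstructured part, e.g. "hotel parking"

def query_database(db: list, state: HybridBeliefState) -> list:
    """Structured branch: match database rows against the slot-value beliefs."""
    return [row for row in db if all(row.get(s) == v for s, v in state.slots.items())]

def retrieve_documents(docs: list, state: HybridBeliefState, k: int = 1) -> list:
    """Unstructured branch: rank documents by naive word overlap with the topic."""
    if not state.topic:
        return []
    topic_words = set(state.topic.lower().split())
    ranked = sorted(docs, key=lambda d: len(topic_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

db = [{"hotel-area": "east", "name": "Express Inn"}]
docs = ["Express Inn offers free parking for guests.", "The museum opens at 9am."]
state = HybridBeliefState(slots={"hotel-area": "east"}, topic="hotel parking")
print(query_database(db, state))        # structured turn -> database result
print(retrieve_documents(docs, state))  # unstructured turn -> retrieved document
```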
Related papers
- Knowledge-Retrieval Task-Oriented Dialog Systems with Semi-Supervision [22.249113574918034]
Most existing task-oriented dialog (TOD) systems track dialog states in terms of slots and values and use them to query a database to get relevant knowledge to generate responses.
In real-life applications, user utterances are noisier, which makes it more difficult to accurately track dialog states and retrieve the relevant knowledge.
We therefore propose a retrieval-based method to enhance knowledge selection in TOD systems, which outperforms the traditional database query method on real-life dialogs.
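The toy contrast below shows why retrieval-style selection can be more robust to noisy wording than an exact database query; the bag-of-words cosine merely stands in for the learned retriever, and all names and data are made up for illustration.
```python
# Hypothetical illustration: exact DB queries break on noisy values, while a
# similarity-based retriever (here, bag-of-words cosine as a stand-in for a
# learned dense retriever) can still select the relevant knowledge entry.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def db_query(entries, slot, value):
    """Exact-match query: brittle when the tracked value is noisy ('center' vs 'centre')."""
    return [e for e in entries if e.get(slot) == value]

def retrieve(entries, utterance, k=1):
    """Retrieval-based selection: rank entries by similarity to the raw utterance."""
    q = Counter(utterance.lower().split())
    return sorted(entries, key=lambda e: cosine(q, Counter(str(e).lower().split())), reverse=True)[:k]

entries = [{"name": "City Centre Hotel", "area": "centre"}, {"name": "Riverside B&B", "area": "north"}]
print(db_query(entries, "area", "center"))           # [] -- misses due to the spelling mismatch
print(retrieve(entries, "a hotel near the centre"))  # still finds the relevant entry
```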
arXiv Detail & Related papers (2023-05-22T16:29:20Z) - KPT: Keyword-guided Pre-training for Grounded Dialog Generation [82.68787152707455]
We propose KPT (Keyword-guided Pre-Training), a novel self-supervised pre-training method for grounded dialog generation.
Specifically, we use a pre-trained language model to extract the most uncertain tokens in the dialog as keywords.
We conduct extensive experiments on various few-shot knowledge-grounded generation tasks, including grounding on dialog acts, knowledge graphs, persona descriptions, and Wikipedia passages.
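One plausible reading of "extract the most uncertain tokens as keywords" is to mask each token in turn and treat a low masked-LM probability for the original token as high uncertainty. The sketch below does exactly that with a generic BERT model; it is an assumption about the mechanism, not necessarily KPT's exact scoring.
```python
# Illustrative (not KPT's exact method): score each token's uncertainty as
# 1 - P(original token | masked context) under a masked language model, and
# keep the most uncertain tokens as keywords.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def uncertain_keywords(text: str, top_k: int = 3):
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    scores = []
    for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits
        p_orig = torch.softmax(logits[0, i], dim=-1)[input_ids[i]].item()
        token = tokenizer.convert_ids_to_tokens(int(input_ids[i]))
        scores.append((token, 1.0 - p_orig))  # higher score = more uncertain
    return sorted(scores, key=lambda x: x[1], reverse=True)[:top_k]

print(uncertain_keywords("i would like to book a table at an italian restaurant"))
```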
arXiv Detail & Related papers (2022-12-04T04:05:01Z) - OPERA: Harmonizing Task-Oriented Dialogs and Information Seeking
Experience [87.0233567695073]
Existing studies in conversational AI mostly treat task-oriented dialog (TOD) and question answering (QA) as separate tasks.
We propose a new task, Open-Book TOD (OB-TOD), which combines TOD with QA and expands the external knowledge sources.
We propose a unified model OPERA which can appropriately access explicit and implicit external knowledge to tackle the defined task.
arXiv Detail & Related papers (2022-06-24T18:21:26Z) - Open-domain Dialogue Generation Grounded with Dynamic Multi-form
Knowledge Fusion [9.45662259790057]
This paper presents a new dialogue generation model, the Dynamic Multi-form Knowledge Fusion based Open-domain Chatting Machine (DMKCM).
DMKCM applies an indexed text (a virtual Knowledge Base) to locate relevant documents as the 1st hop, and then expands the content of the dialogue and its 1st hop using a commonsense knowledge graph to get apposite triples as the 2nd hop.
Experimental results indicate the effectiveness of our method in terms of dialogue coherence and informativeness.
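The two-hop access pattern described above can be pictured with a toy inverted index for the 1st hop and a toy commonsense graph for the 2nd hop; the data structures and scoring below are simplifications, not DMKCM's actual components.
```python
# Toy sketch (assumed data): hop 1 retrieves documents from an indexed text
# collection (the "virtual Knowledge Base"); hop 2 expands the dialog plus the
# hop-1 content with commonsense triples whose head entity appears in them.
def first_hop(index, query):
    """Look up candidate documents from an inverted index by query words."""
    docs = []
    for word in query.lower().split():
        docs.extend(index.get(word, []))
    return list(dict.fromkeys(docs))  # de-duplicate, keep order

def second_hop(kg, texts):
    """Collect commonsense triples triggered by entities in the dialog or hop-1 docs."""
    triples = []
    for text in texts:
        for word in text.lower().split():
            triples.extend(kg.get(word, []))
    return list(dict.fromkeys(triples))

index = {"pizza": ["Napoli Pizzeria serves wood-fired pizza until 11pm."]}
kg = {"pizza": [("pizza", "MadeOf", "dough"), ("pizza", "AtLocation", "restaurant")]}

query = "where can i get pizza tonight"
hop1 = first_hop(index, query)
hop2 = second_hop(kg, [query] + hop1)
print(hop1)  # 1st hop: relevant documents
print(hop2)  # 2nd hop: apposite commonsense triples
```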
arXiv Detail & Related papers (2022-04-24T10:32:48Z) - TegTok: Augmenting Text Generation via Task-specific and Open-world
Knowledge [83.55215993730326]
We propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages respectively.
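A small sketch of the dual-source idea: one knowledge entry is selected from a task-specific source and one from an open-world source, with one concatenated to the encoder input and the other exposed at decoding time. Word overlap stands in for dense retrieval, and the serialization tags are invented for illustration.
```python
# Hypothetical sketch: select knowledge from a task-specific and an open-world
# source (word overlap as a stand-in for dense retrieval), then inject one at
# the input-encoding stage and the other at the output-decoding stage.
def select(entries, query):
    """Pick the entry with the largest word overlap with the query."""
    q = set(query.lower().split())
    return max(entries, key=lambda e: len(q & set(e.lower().split())))

task_knowledge = ["Train TR1234 departs Cambridge at 09:00.", "Hotel Alpha has free wifi."]
open_knowledge = ["Cambridge is a city in the east of England.", "Wifi is a wireless networking standard."]

query = "when does the train from cambridge leave"
k_task = select(task_knowledge, query)
k_open = select(open_knowledge, query)

encoder_input = f"{query} [KNOWLEDGE] {k_task}"   # injected at the encoding stage
decoder_context = f"[BACKGROUND] {k_open}"        # made available at the decoding stage
print(encoder_input)
print(decoder_context)
```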
arXiv Detail & Related papers (2022-03-16T10:37:59Z) - Knowledge-Grounded Dialogue Generation with a Unified Knowledge
Representation [78.85622982191522]
Existing systems perform poorly on unseen topics due to the limited topic coverage of the training data.
We present PLUG, a language model that homogenizes different knowledge sources to a unified knowledge representation.
It can achieve comparable performance with state-of-the-art methods under a fully-supervised setting.
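One way to picture "homogenizing different knowledge sources" is to linearize triples, table rows, and documents into a single flat text format before feeding them to one language model; the serialization below is an assumption for illustration, not necessarily PLUG's format.
```python
# Illustrative homogenization of heterogeneous knowledge into one flat textual
# representation that a single language model can consume.
def linearize_triple(triple):
    head, relation, tail = triple
    return f"{head} {relation} {tail}"

def linearize_table_row(row):
    return " ; ".join(f"{col} : {val}" for col, val in row.items())

def linearize_document(text, max_words=30):
    return " ".join(text.split()[:max_words])

unified = [
    linearize_triple(("Eiffel Tower", "located in", "Paris")),
    linearize_table_row({"name": "Hotel Alpha", "area": "centre", "parking": "yes"}),
    linearize_document("The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris."),
]
print("\n".join(unified))  # all sources now share the same textual format
```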
arXiv Detail & Related papers (2021-12-15T07:11:02Z) - End-to-End Task-Oriented Dialog Modeling with Semi-Structured Knowledge
Management [40.99595530656065]
Current task-oriented dialog (TOD) systems mostly manage structured knowledge.
They fall short of handling dialogs which also involve unstructured knowledge.
We propose a TOD system with semi-structured knowledge management, SeKnow, which extends the belief state to manage knowledge with both structured and unstructured contents.
arXiv Detail & Related papers (2021-06-22T14:07:22Z) - Unstructured Knowledge Access in Task-oriented Dialog Modeling using
Language Inference, Knowledge Retrieval and Knowledge-Integrative Response
Generation [44.184890645068485]
Dialog systems enriched with external knowledge can handle user queries that are outside the scope of the supporting databases/APIs.
We propose three subsystems, KDEAK, KnowleDgEFactor, and Ens-GPT, which form the pipeline for a task-oriented dialog system.
Experimental results demonstrate that the proposed pipeline system outperforms the baseline and generates high-quality responses.
arXiv Detail & Related papers (2021-01-15T11:24:32Z) - Learning to Retrieve Entity-Aware Knowledge and Generate Responses with
Copy Mechanism for Task-Oriented Dialogue Systems [43.57597820119909]
This work addresses task-oriented conversational modeling with unstructured knowledge access, track 1 of the 9th Dialogue System Technology Challenge (DSTC 9).
The challenge is separated into three subtasks: (1) knowledge-seeking turn detection, (2) knowledge selection, and (3) knowledge-grounded response generation.
We use the pre-trained language models ELECTRA and RoBERTa as base encoders for the different subtasks.
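The three subtasks can be chained as a simple pipeline; the heuristics below are placeholders for the ELECTRA/RoBERTa classifiers and the response generator, with hypothetical data.
```python
# Schematic pipeline for the three DSTC9 Track 1 subtasks: (1) knowledge-seeking
# turn detection, (2) knowledge selection, (3) knowledge-grounded response
# generation. All heuristics and data are stand-ins for the trained models.
def detect_knowledge_seeking(utterance):
    """Subtask 1 stub: does this turn need unstructured knowledge?"""
    return any(w in utterance.lower() for w in ("policy", "allowed", "can i", "do they"))

def select_knowledge(snippets, utterance):
    """Subtask 2 stub: rank snippets by word overlap with the utterance."""
    q = set(utterance.lower().split())
    return max(snippets, key=lambda s: len(q & set(s.lower().split())))

def generate_response(snippet):
    """Subtask 3 stub: ground the reply in the selected snippet."""
    return f"According to the venue: {snippet}"

snippets = ["Pets are allowed in all rooms for a small fee.", "Breakfast is served from 7am to 10am."]
utterance = "Are pets allowed in the rooms?"
if detect_knowledge_seeking(utterance):
    print(generate_response(select_knowledge(snippets, utterance)))
```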
arXiv Detail & Related papers (2020-12-22T11:36:37Z)