Converse, Focus and Guess -- Towards Multi-Document Driven Dialogue
- URL: http://arxiv.org/abs/2102.02435v1
- Date: Thu, 4 Feb 2021 06:36:11 GMT
- Title: Converse, Focus and Guess -- Towards Multi-Document Driven Dialogue
- Authors: Han Liu, Caixia Yuan, Xiaojie Wang, Yushu Yang, Huixing Jiang,
Zhongyuan Wang
- Abstract summary: We propose a novel task, Multi-Document Driven Dialogue (MD3), in which an agent guesses the target document that the user is interested in by leading a dialogue.
GuessMovie contains 16,881 documents, each describing a movie, with 13,434 associated dialogues.
Our method significantly outperforms several strong baseline methods and comes close to human performance.
- Score: 53.380996227212165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel task, Multi-Document Driven Dialogue (MD3), in which
an agent guesses the target document that the user is interested in by leading a
dialogue. To benchmark progress, we introduce a new dataset, GuessMovie, which
contains 16,881 documents, each describing a movie, with 13,434 associated
dialogues. Further, we propose the MD3 model. Keeping the goal of guessing the
target document in mind, it converses with the user conditioned on both document
engagement and user feedback. To incorporate large-scale external documents into
the dialogue, it pretrains a document representation that is sensitive to the
attributes used to describe an object. It then tracks the dialogue state by
detecting the evolution of document belief and attribute belief, and finally
optimizes the dialogue policy on the principles of decreasing entropy and
increasing reward, aiming to guess the user's target in a minimal number of
turns. Experiments show that our method significantly outperforms several strong
baseline methods and comes close to human performance.
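The entropy-decreasing policy described in the abstract (ask questions that shrink uncertainty over candidate documents, then guess) can be illustrated with a minimal greedy loop. This is an illustrative sketch only, not the paper's actual model: the helper names (`entropy`, `update_belief`, `pick_question`), the attribute schema, and the exact Bayesian filtering are invented for the example, whereas the real MD3 model learns neural document representations and a reward-driven dialogue policy.

```python
import math

def entropy(belief):
    """Shannon entropy of a belief distribution over candidate documents."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def update_belief(belief, docs, attribute, answer):
    """Bayesian update: keep probability mass on documents consistent with the answer."""
    new = {d: p for d, p in belief.items() if docs[d].get(attribute) == answer}
    total = sum(new.values())
    return {d: p / total for d, p in new.items()} if total else belief

def pick_question(belief, docs, attributes):
    """Choose the attribute whose answer minimizes the expected posterior entropy."""
    best, best_h = None, float("inf")
    for attr in attributes:
        # Marginal probability of each possible answer under the current belief.
        answers = {}
        for d, p in belief.items():
            ans = docs[d].get(attr)
            answers[ans] = answers.get(ans, 0.0) + p
        # Expected entropy after observing the answer to this question.
        exp_h = sum(
            p_ans * entropy(update_belief(belief, docs, attr, ans))
            for ans, p_ans in answers.items()
        )
        if exp_h < best_h:
            best, best_h = attr, exp_h
    return best
```

Starting from a uniform belief over the documents, repeatedly calling `pick_question` and `update_belief` drives the entropy toward zero, at which point the agent guesses the remaining high-probability document.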
Related papers
- Multi-Document Grounded Multi-Turn Synthetic Dialog Generation [22.7158929225259]
We introduce a technique for multi-document grounded multi-turn synthetic dialog generation that incorporates three main ideas.
We control the overall dialog flow using taxonomy-driven user queries that are generated with Chain-of-Thought prompting.
We support the generation of multi-document grounded dialogs by mimicking real-world use of retrievers to update the grounding documents after every user-turn in the dialog.
arXiv Detail & Related papers (2024-09-17T19:02:39Z)
- Generate rather than Retrieve: Large Language Models are Strong Context Generators [74.87021992611672]
We present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators.
We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer.
arXiv Detail & Related papers (2022-09-21T01:30:59Z)
- Manual-Guided Dialogue for Flexible Conversational Agents [84.46598430403886]
How to build and use dialogue data efficiently, and how to deploy models across different domains at scale, are critical issues in building a task-oriented dialogue system.
We propose a novel manual-guided dialogue scheme, where the agent learns the tasks from both dialogue and manuals.
Our proposed scheme reduces the dependence of dialogue models on fine-grained domain ontology, and makes them more flexible to adapt to various domains.
arXiv Detail & Related papers (2022-08-16T08:21:12Z)
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- Dialog Inpainting: Turning Documents into Dialogs [12.131506050808207]
We produce two datasets totalling 19 million diverse information-seeking dialogs.
Human raters judge the answer adequacy and conversationality of WikiDialog to be as good or better than existing manually-collected datasets.
arXiv Detail & Related papers (2022-05-18T16:58:50Z)
- DG2: Data Augmentation Through Document Grounded Dialogue Generation [41.81030088619399]
We propose an automatic data augmentation technique grounded on documents through a generative dialogue model.
When supplementing the original dataset, our method achieves significant improvement over traditional data augmentation methods.
arXiv Detail & Related papers (2021-12-15T18:50:14Z)
- User Response and Sentiment Prediction for Automatic Dialogue Evaluation [69.11124655437902]
We propose to use the sentiment of the next user utterance for turn or dialog level evaluation.
Experiments show our model outperforming existing automatic evaluation metrics on both written and spoken open-domain dialogue datasets.
arXiv Detail & Related papers (2021-11-16T22:19:17Z)
- A Compare Aggregate Transformer for Understanding Document-grounded Dialogue [27.04964963480175]
We propose a Compare Aggregate Transformer (CAT) to jointly denoise the dialogue context and aggregate the document information for response generation.
Experimental results on the CMUDoG dataset show that the proposed CAT model outperforms the state-of-the-art approach and strong baselines.
arXiv Detail & Related papers (2020-10-01T03:44:44Z)
- Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually with reasoning over dialogue turns with the help of the back-end data.
Empirical results demonstrate that our method significantly outperforms state-of-the-art methods, by 38.6% in joint belief accuracy on MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.