Multi-Domain Dialogue Acts and Response Co-Generation
- URL: http://arxiv.org/abs/2004.12363v1
- Date: Sun, 26 Apr 2020 12:21:17 GMT
- Title: Multi-Domain Dialogue Acts and Response Co-Generation
- Authors: Kai Wang, Junfeng Tian, Rui Wang, Xiaojun Quan, Jianxing Yu
- Abstract summary: We propose a neural co-generation model that generates dialogue acts and responses concurrently.
Our model achieves very favorable improvement over several state-of-the-art models in both automatic and human evaluations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating fluent and informative responses is of critical importance for
task-oriented dialogue systems. Existing pipeline approaches generally predict
multiple dialogue acts first and use them to assist response generation. There
are at least two shortcomings with such approaches. First, the inherent
structures of multi-domain dialogue acts are neglected. Second, the semantic
associations between acts and responses are not taken into account for response
generation. To address these issues, we propose a neural co-generation model
that generates dialogue acts and responses concurrently. Unlike those pipeline
approaches, our act generation module preserves the semantic structures of
multi-domain dialogue acts and our response generation module dynamically
attends to different acts as needed. We train the two modules jointly using an
uncertainty loss to adjust their task weights adaptively. Extensive experiments
are conducted on the large-scale MultiWOZ dataset and the results show that our
model achieves very favorable improvement over several state-of-the-art models
in both automatic and human evaluations.
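The abstract mentions training the act-generation and response-generation modules jointly with an uncertainty loss that adjusts the two task weights adaptively. A common formulation of such a loss scales each task loss by a learned log-variance term and adds that term as a regularizer; the sketch below illustrates this idea in plain Python. The function name and the specific formulation are illustrative assumptions, not the paper's exact implementation:

```python
import math

def uncertainty_weighted_loss(losses, log_vars):
    """Combine per-task losses with homoscedastic-uncertainty weights:
    each task loss L_i is scaled by exp(-s_i) and regularized by s_i,
    where s_i = log(sigma_i^2) is a learned parameter.  In a model like
    the one described, this could balance the act-generation and
    response-generation losses; this is a sketch, not the paper's code."""
    total = 0.0
    for loss, s in zip(losses, log_vars):
        total += math.exp(-s) * loss + s
    return total

# Example: act-generation loss 2.0, response-generation loss 1.0,
# with both log-variance parameters (learned in practice) set to 0.0.
combined = uncertainty_weighted_loss([2.0, 1.0], [0.0, 0.0])  # == 3.0
```

As a task's log-variance grows, its loss is down-weighted while the additive term penalizes unbounded growth, so the weighting settles adaptively during training.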
Related papers
- JoTR: A Joint Transformer and Reinforcement Learning Framework for Dialog Policy Learning [53.83063435640911]
Dialogue policy learning (DPL) is a crucial component of dialogue modelling.
We introduce a novel framework, JoTR, to generate flexible dialogue actions.
Unlike traditional methods, JoTR formulates a word-level policy that allows for a more dynamic and adaptable dialogue action generation.
arXiv Detail & Related papers (2023-09-01T03:19:53Z)
- Pre-training Multi-party Dialogue Models with Latent Discourse Inference [85.9683181507206]
We pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying.
To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model.
arXiv Detail & Related papers (2023-05-24T14:06:27Z)
- EM Pre-training for Multi-party Dialogue Response Generation [86.25289241604199]
In multi-party dialogues, the addressee of a response utterance should be specified before it is generated.
We propose an Expectation-Maximization (EM) approach that iteratively performs the expectation steps to generate addressee labels.
arXiv Detail & Related papers (2023-05-21T09:22:41Z)
- DialogUSR: Complex Dialogue Utterance Splitting and Reformulation for Multiple Intent Detection [27.787807111516706]
Instead of training a dedicated multi-intent detection model, we propose DialogUSR.
DialogUSR splits a multi-intent user query into several single-intent sub-queries.
It then recovers all the coreferred and omitted information in the sub-queries.
arXiv Detail & Related papers (2022-10-20T13:56:35Z)
- Manual-Guided Dialogue for Flexible Conversational Agents [84.46598430403886]
Building and using dialogue data efficiently, and deploying models across different domains at scale, are critical issues in building a task-oriented dialogue system.
We propose a novel manual-guided dialogue scheme, where the agent learns the tasks from both dialogue and manuals.
Our proposed scheme reduces the dependence of dialogue models on fine-grained domain ontology, and makes them more flexible to adapt to various domains.
arXiv Detail & Related papers (2022-08-16T08:21:12Z)
- Transferable Dialogue Systems and User Simulators [17.106518400787156]
One of the difficulties in training dialogue systems is the lack of training data.
We explore the possibility of creating dialogue data through the interaction between a dialogue system and a user simulator.
We develop a modelling framework that can incorporate new dialogue scenarios through self-play between the two agents.
arXiv Detail & Related papers (2021-07-25T22:59:09Z)
- Retrieve & Memorize: Dialog Policy Learning with Multi-Action Memory [13.469140432108151]
We propose a retrieve-and-memorize framework to enhance the learning of system actions.
We use a memory-augmented multi-decoder network to generate the system actions conditioned on the candidate actions.
Our method achieves competitive performance among several state-of-the-art models in the context-to-response generation task.
arXiv Detail & Related papers (2021-06-04T07:53:56Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Controlling Dialogue Generation with Semantic Exemplars [55.460082747572734]
We present an Exemplar-based Dialogue Generation model, EDGE, that uses the semantic frames present in exemplar responses to guide generation.
We show that controlling dialogue generation based on the semantic frames of exemplars, rather than words in the exemplar itself, improves the coherence of generated responses.
arXiv Detail & Related papers (2020-08-20T17:02:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.