Attention over Parameters for Dialogue Systems
- URL: http://arxiv.org/abs/2001.01871v2
- Date: Wed, 4 Mar 2020 03:02:14 GMT
- Title: Attention over Parameters for Dialogue Systems
- Authors: Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, Jamin Shin, Pascale
Fung
- Abstract summary: We learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP).
The experimental results show that this approach achieves competitive performance on a combined dataset of MultiWOZ, In-Car Assistant, and Persona-Chat.
- Score: 69.48852519856331
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dialogue systems require a great deal of different but complementary
expertise to assist, inform, and entertain humans. For example, different
domains (e.g., restaurant reservation, train ticket booking) of goal-oriented
dialogue systems can be viewed as different skills, as can the ordinary
chatting abilities of chit-chat dialogue systems. In this paper, we propose to
learn a dialogue system that independently parameterizes different dialogue
skills, and learns to select and combine each of them through Attention over
Parameters (AoP). The experimental results show that this approach achieves
competitive performance on a combined dataset of MultiWOZ, In-Car Assistant,
and Persona-Chat. Finally, we demonstrate that each dialogue skill is
effectively learned and can be combined with other skills to produce selective
responses.
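As an illustration of the AoP idea in the abstract, the sketch below keeps one independent parameter set per dialogue skill and mixes those parameters with attention weights computed from an encoded dialogue history before decoding. This is a minimal PyTorch sketch, not the paper's exact architecture: the single linear output layer per skill, the mean-pooled attention query, and all layer sizes are simplifying assumptions.

```python
# Minimal sketch of "Attention over Parameters" (AoP): one parameter set per
# dialogue skill, mixed by attention weights computed from the dialogue
# encoding, then used to decode. Sizes and layer choices are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionOverParameters(nn.Module):
    def __init__(self, d_model: int, vocab_size: int, num_skills: int):
        super().__init__()
        # One independent parameter set (here: one output projection) per skill.
        self.skill_weight = nn.Parameter(torch.randn(num_skills, vocab_size, d_model) * 0.02)
        self.skill_bias = nn.Parameter(torch.zeros(num_skills, vocab_size))
        # Learned skill embeddings used as attention keys.
        self.skill_keys = nn.Parameter(torch.randn(num_skills, d_model) * 0.02)

    def forward(self, enc: torch.Tensor) -> torch.Tensor:
        """enc: encoded dialogue history, shape (batch, seq_len, d_model)."""
        # Attention query: mean-pooled dialogue encoding (a simplification).
        query = enc.mean(dim=1)                      # (batch, d_model)
        scores = query @ self.skill_keys.t()         # (batch, num_skills)
        alpha = F.softmax(scores, dim=-1)            # attention over skills

        # Mix the skill-specific parameters with the attention weights.
        w = torch.einsum("bs,svd->bvd", alpha, self.skill_weight)  # (batch, vocab, d_model)
        b = alpha @ self.skill_bias                                 # (batch, vocab)

        # Decode every position with the per-example mixed parameters.
        logits = torch.einsum("btd,bvd->btv", enc, w) + b.unsqueeze(1)
        return logits


if __name__ == "__main__":
    model = AttentionOverParameters(d_model=64, vocab_size=100, num_skills=3)
    enc = torch.randn(2, 10, 64)   # stand-in for a dialogue encoder output
    print(model(enc).shape)        # torch.Size([2, 10, 100])
```

The point of mixing parameters (rather than averaging the outputs of every skill-specific decoder) is that only one decoding pass is needed per example, while the attention weights still make the contribution of each skill interpretable.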
Related papers
- An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems [18.829793635104608]
We introduce a general framework for ASR in dialog systems.
We show that leveraging our new framework compared to traditional training leads to relative WER reductions of close to 10% in real-world dialog systems.
arXiv Detail & Related papers (2024-09-16T17:59:50Z)
- Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models [52.24756457516834]
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs)
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
arXiv Detail & Related papers (2023-09-22T15:41:34Z)
- A Benchmark for Understanding and Generating Dialogue between Characters in Stories [75.29466820496913]
We present the first study to explore whether machines can understand and generate dialogue in stories.
We propose two new tasks including Masked Dialogue Generation and Dialogue Speaker Recognition.
We show the difficulty of the proposed tasks by testing existing models with automatic and manual evaluation on DialStory.
arXiv Detail & Related papers (2022-09-18T10:19:04Z)
- KETOD: Knowledge-Enriched Task-Oriented Dialogue [77.59814785157877]
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains.
We investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model.
arXiv Detail & Related papers (2022-05-11T16:01:03Z)
- Task-oriented Dialogue Systems: performance vs. quality-optima, a review [0.0]
State-of-the-art task-oriented dialogue systems are not yet reaching their full potential.
Other conversational quality attributes that may indicate whether a dialogue succeeds or fails are often ignored.
This paper explores the literature on evaluative frameworks of dialogue systems and the role of conversational quality attributes in dialogue systems.
arXiv Detail & Related papers (2021-12-21T13:16:24Z)
- A Review of Dialogue Systems: From Trained Monkeys to Stochastic Parrots [0.0]
We aim to deploy artificial intelligence to build automated dialogue agents that can converse with humans.
We present a broad overview of methods developed to build dialogue systems over the years.
arXiv Detail & Related papers (2021-11-02T08:07:55Z)
- UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues [59.499965460525694]
We propose a unified dialogue system (UniDS) with the two aforementioned skills.
We design a unified dialogue data schema, compatible for both chit-chat and task-oriented dialogues.
We train UniDS with mixed dialogue data from a pretrained chit-chat dialogue model.
arXiv Detail & Related papers (2021-10-15T11:56:47Z)
- Dialog as a Vehicle for Lifelong Learning [24.420113907842147]
We present the problem of designing dialog systems that enable lifelong learning.
We include examples of prior work in this direction, and discuss challenges that remain to be addressed.
arXiv Detail & Related papers (2020-06-26T03:08:33Z)
- Masking Orchestration: Multi-task Pretraining for Multi-role Dialogue Representation Learning [50.5572111079898]
Multi-role dialogue understanding comprises a wide range of diverse tasks such as question answering, act classification, dialogue summarization etc.
While dialogue corpora are abundantly available, labeled data, for specific learning tasks, can be highly scarce and expensive.
In this work, we investigate dialogue context representation learning with various types of unsupervised pretraining tasks.
arXiv Detail & Related papers (2020-02-27T04:36:52Z)