UniDU: Towards A Unified Generative Dialogue Understanding Framework
- URL: http://arxiv.org/abs/2204.04637v1
- Date: Sun, 10 Apr 2022 09:32:34 GMT
- Title: UniDU: Towards A Unified Generative Dialogue Understanding Framework
- Authors: Zhi Chen, Lu Chen, Bei Chen, Libo Qin, Yuncong Liu, Su Zhu, Jian-Guang Lou, Kai Yu
- Abstract summary: We investigate a unified generative dialogue understanding framework, namely UniDU, to achieve information exchange among DU tasks.
We conduct experiments on ten dialogue understanding datasets, which span five fundamental tasks.
The proposed UniDU framework outperforms well-designed task-specific methods on all five tasks.
- Score: 62.8474841241855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of pre-trained language models, remarkable success has been achieved in dialogue understanding (DU). However, current DU approaches employ an individual model for each DU task independently, without considering the knowledge shared across different DU tasks. In this paper, we investigate a unified generative dialogue understanding framework, namely UniDU, to achieve information exchange among DU tasks. Specifically, we reformulate the DU tasks into a unified generative paradigm. In addition, to account for the different training data available for each task, we introduce a model-agnostic training strategy to optimize the unified model in a balanced manner. We conduct experiments on ten dialogue understanding datasets spanning five fundamental tasks: dialogue summary, dialogue completion, slot filling, intent detection, and dialogue state tracking. The proposed UniDU framework outperforms well-designed task-specific methods on all five tasks. We further conduct comprehensive analysis experiments to study the influencing factors. The experimental results also show that the proposed method achieves promising performance on unseen dialogue domains.
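To make the unified generative reformulation concrete, below is a minimal sketch of how heterogeneous DU examples might be cast into shared (source, target) text pairs for a single seq2seq model. It assumes a T5-style encoder-decoder backbone; the prompt templates, task names, and fields are illustrative assumptions, not the exact formulation used by UniDU.

```python
# Illustrative sketch: casting different dialogue-understanding (DU) tasks
# into one text-to-text format so a single seq2seq model can be trained on
# all of them. Templates are assumptions, not UniDU's exact prompts.

def to_text_pair(task: str, dialogue: str, label) -> tuple[str, str]:
    """Map a task-specific example to a (source, target) text pair."""
    if task == "intent":
        source = f"task: intent detection. dialogue: {dialogue} What is the user's intent?"
        target = label  # e.g. "book_flight"
    elif task == "dst":
        source = f"task: dialogue state tracking. dialogue: {dialogue} List the slot-value pairs."
        target = "; ".join(f"{slot} = {value}" for slot, value in label.items())
    elif task == "summary":
        source = f"task: dialogue summary. dialogue: {dialogue} Summarize the conversation."
        target = label  # reference summary text
    else:
        raise ValueError(f"unknown DU task: {task}")
    return source, target


examples = [
    to_text_pair("intent", "user: I need a flight to Boston on Friday.", "book_flight"),
    to_text_pair("dst", "user: a cheap italian place in the centre, please.",
                 {"food": "italian", "price range": "cheap", "area": "centre"}),
]
for src, tgt in examples:
    print(src, "->", tgt)
```

Once every task shares this text-to-text interface, a single cross-entropy objective over target tokens trains the whole model, and a task-sampling schedule (the role played by the model-agnostic training strategy above) can balance datasets of very different sizes.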
Related papers
- Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models [52.24756457516834]
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs).
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
arXiv Detail & Related papers (2023-09-22T15:41:34Z)
- Dialogue Agents 101: A Beginner's Guide to Critical Ingredients for Designing Effective Conversational Systems [29.394466123216258]
This study provides a comprehensive overview of the primary characteristics of dialogue agents, their corresponding open-domain datasets, and the methods used to benchmark these datasets.
We propose UNIT, a UNified dIalogue dataseT constructed from conversations of existing datasets for different dialogue tasks capturing the nuances for each of them.
arXiv Detail & Related papers (2023-07-14T10:05:47Z)
- Collaborative Reasoning on Multi-Modal Semantic Graphs for Video-Grounded Dialogue Generation [53.87485260058957]
We study video-grounded dialogue generation, where a response is generated based on the dialogue context and the associated video.
A primary challenge of this task lies in the difficulty of integrating video data into pre-trained language models (PLMs).
We propose a multi-agent reinforcement learning method to collaboratively perform reasoning on different modalities.
arXiv Detail & Related papers (2022-10-22T14:45:29Z)
- Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning [27.92734269206744]
InstructDial is an instruction tuning framework for dialogue.
It consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets.
Our analysis reveals that InstructDial enables good zero-shot performance on unseen datasets and tasks such as dialogue evaluation and intent detection, and even better performance in a few-shot setting.
arXiv Detail & Related papers (2022-05-25T11:37:06Z)
- DialogZoo: Large-Scale Dialog-Oriented Task Learning [52.18193690394549]
We aim to build a unified foundation model that can solve a massive range of diverse dialogue tasks.
To achieve this goal, we first collect a large-scale well-labeled dialogue dataset from 73 publicly available datasets.
arXiv Detail & Related papers (2022-05-25T11:17:16Z)
- Back to the Future: Bidirectional Information Decoupling Network for Multi-turn Dialogue Modeling [80.51094098799736]
We propose Bidirectional Information Decoupling Network (BiDeN) as a universal dialogue encoder.
BiDeN explicitly incorporates both the past and future contexts and can be generalized to a wide range of dialogue-related tasks.
Experimental results on datasets of different downstream tasks demonstrate the universality and effectiveness of our BiDeN.
arXiv Detail & Related papers (2022-04-18T03:51:46Z)
- Utterance Rewriting with Contrastive Learning in Multi-turn Dialogue [22.103162555263143]
We introduce contrastive learning and multi-task learning to jointly model the problem.
Our proposed model achieves state-of-the-art performance on several public datasets.
arXiv Detail & Related papers (2022-03-22T10:13:27Z)
- Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System [26.837972034630003]
PPTOD is a unified plug-and-play model for task-oriented dialogue.
We extensively test our model on three benchmark TOD tasks, including end-to-end dialogue modelling, dialogue state tracking, and intent classification.
arXiv Detail & Related papers (2021-09-29T22:02:18Z)
- DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue [17.729711165119472]
We introduce DialoGLUE (Dialogue Language Understanding Evaluation), a public benchmark consisting of 7 task-oriented dialogue datasets covering 4 distinct natural language understanding tasks.
We release several strong baseline models, demonstrating performance improvements over a vanilla BERT architecture and state-of-the-art results on 5 out of 7 tasks.
Through the DialoGLUE benchmark, the baseline methods, and our evaluation scripts, we hope to facilitate progress towards the goal of developing more general task-oriented dialogue models.
arXiv Detail & Related papers (2020-09-28T18:36:23Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)