STRUDEL: Structured Dialogue Summarization for Dialogue Comprehension
- URL: http://arxiv.org/abs/2212.12652v1
- Date: Sat, 24 Dec 2022 04:39:54 GMT
- Title: STRUDEL: Structured Dialogue Summarization for Dialogue Comprehension
- Authors: Borui Wang, Chengcheng Feng, Arjun Nair, Madelyn Mao, Jai Desai, Asli
Celikyilmaz, Haoran Li, Yashar Mehdad, Dragomir Radev
- Abstract summary: Abstractive dialogue summarization has long been viewed as an important standalone task in natural language processing.
We propose a novel type of dialogue summarization task - STRUctured DiaLoguE Summarization.
We show that our STRUDEL dialogue comprehension model can significantly improve the dialogue comprehension performance of transformer encoder language models.
- Score: 42.57581945778631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abstractive dialogue summarization has long been viewed as an important
standalone task in natural language processing, but no previous work has
explored the possibility of whether abstractive dialogue summarization can also
be used as a means to boost an NLP system's performance on other important
dialogue comprehension tasks. In this paper, we propose a novel type of
dialogue summarization task - STRUctured DiaLoguE Summarization - that can help
pre-trained language models to better understand dialogues and improve their
performance on important dialogue comprehension tasks. We further collect human
annotations of STRUDEL summaries over 400 dialogues and introduce a new STRUDEL
dialogue comprehension modeling framework that integrates STRUDEL into a
graph-neural-network-based dialogue reasoning module over transformer encoder
language models to improve their dialogue comprehension abilities. In our
empirical experiments on two important downstream dialogue comprehension tasks
- dialogue question answering and dialogue response prediction - we show that
our STRUDEL dialogue comprehension model can significantly improve the dialogue
comprehension performance of transformer encoder language models.
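The reasoning module described above, a graph neural network over entity nodes built from structured summaries, operating on transformer encoder embeddings, can be sketched as a single mean-aggregation message-passing step. The function and parameter names below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def gnn_reasoning_step(node_embs, adj, W_self, W_neigh):
    """One message-passing step over a structured-summary graph.

    node_embs: (N, d) entity-node embeddings (e.g. pooled transformer
               encoder outputs for entities mentioned in the dialogue)
    adj:       (N, N) 0/1 adjacency built from structured-summary relations
    W_self, W_neigh: (d, d) learned projections (random here for illustration)
    """
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                  # avoid division by zero for isolated nodes
    neigh_mean = (adj @ node_embs) / deg  # mean-aggregate neighbour messages
    out = node_embs @ W_self + neigh_mean @ W_neigh
    return np.maximum(out, 0.0)           # ReLU nonlinearity

# toy example: 3 entity nodes, 4-dim embeddings, chain graph 0-1-2
rng = np.random.default_rng(0)
nodes = rng.normal(size=(3, 4))
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
W_s, W_n = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
updated = gnn_reasoning_step(nodes, adj, W_s, W_n)
print(updated.shape)  # (3, 4)
```

In the full model the updated node representations would be fed back into the downstream comprehension heads (question answering, response prediction) alongside the encoder outputs.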
Related papers
- Self-Explanation Prompting Improves Dialogue Understanding in Large
Language Models [52.24756457516834]
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs).
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
arXiv Detail & Related papers (2023-09-22T15:41:34Z)
- Act-Aware Slot-Value Predicting in Multi-Domain Dialogue State Tracking [5.816391291790977]
Dialogue state tracking (DST) aims to track human-machine interactions and generate state representations for managing the dialogue.
Recent advances in machine reading comprehension predict both categorical and non-categorical types of slots for dialogue state tracking.
We formulate and incorporate dialogue acts, and leverage these advances to predict both slot types.
arXiv Detail & Related papers (2022-08-04T05:18:30Z)
- Post-Training Dialogue Summarization using Pseudo-Paraphrasing [12.083992819138716]
We propose to post-train pretrained language models (PLMs) to rephrase from dialogue to narratives.
Comprehensive experiments show that our approach significantly improves vanilla PLMs on dialogue summarization.
arXiv Detail & Related papers (2022-04-28T13:42:19Z)
- DSBERT: Unsupervised Dialogue Structure Learning with BERT [4.171523157658394]
We propose DSBERT (Dialogue Structure BERT), a BERT-based unsupervised dialogue structure learning algorithm.
Unlike the previous SOTA models VRNN and SVRNN, we combine BERT with an autoencoder, which can effectively incorporate context information.
Experimental results show that DSBERT generates a dialogue structure closer to the real structure, distinguishes sentences with different semantics, and maps them to different hidden states.
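The idea of mapping utterances with different semantics to different discrete hidden states can be sketched as a nearest-centroid assignment over utterance embeddings. This is an illustrative toy stand-in, not DSBERT's actual training procedure; the encoder and the autoencoder training loop are omitted:

```python
import numpy as np

def assign_states(utt_embs, centroids):
    """Map utterance embeddings to discrete dialogue states.

    utt_embs:  (N, d) utterance embeddings (e.g. from a BERT encoder)
    centroids: (K, d) learned state centroids
    Returns the index of the nearest centroid for each utterance.
    """
    # squared Euclidean distance from every utterance to every centroid
    dists = ((utt_embs[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# toy example: two clear semantic clusters in 2-D
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
utts = np.array([[0.1, 0.0], [9.9, 10.1], [0.0, 0.2]])
states = assign_states(utts, centroids)
print(states)  # [0 1 0]
```

In the real model the centroids would be learned jointly with the encoder rather than fixed in advance.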
arXiv Detail & Related papers (2021-11-09T03:31:18Z)
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305]
Training machines to understand natural language and interact with humans is an elusive and essential task of artificial intelligence.
This paper reviews the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.
In addition, we categorize dialogue-related pre-training techniques which are employed to enhance PrLMs in dialogue scenarios.
arXiv Detail & Related papers (2021-10-11T03:52:37Z)
- "How Robust r u?": Evaluating Task-Oriented Dialogue Systems on Spoken Conversations [87.95711406978157]
This work presents a new benchmark on spoken task-oriented conversations.
We study multi-domain dialogue state tracking and knowledge-grounded dialogue modeling.
Our data set enables speech-based benchmarking of task-oriented dialogue systems.
arXiv Detail & Related papers (2021-09-28T04:51:04Z)
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305]
We review the previous methods from the perspective of dialogue modeling.
We discuss three typical patterns of dialogue modeling that are widely-used in dialogue comprehension tasks.
arXiv Detail & Related papers (2021-03-04T15:50:17Z)
- TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue [113.45485470103762]
In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling.
To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling.
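The serialization step described above, flattening a multi-turn dialogue with per-speaker role tokens before masked language modeling, can be sketched as follows. The `[USR]`/`[SYS]` token names follow the paper's scheme, but this helper and its whitespace tokenization are simplified illustrations, not the paper's implementation:

```python
def serialize_dialogue(turns):
    """Flatten a multi-turn dialogue into one token sequence with role tokens.

    turns: list of (speaker, utterance) pairs; speaker is "user" or "system".
    Each utterance is prefixed with a special role token so the masked LM
    can model who is speaking.
    """
    pieces = []
    for speaker, utterance in turns:
        role = "[USR]" if speaker == "user" else "[SYS]"
        pieces.append(role)
        # simplified whitespace tokenization for illustration only
        pieces.extend(utterance.lower().split())
    return ["[CLS]"] + pieces + ["[SEP]"]

toks = serialize_dialogue([("user", "Book a table"),
                           ("system", "For how many?")])
print(toks)
```

A real pipeline would register the role tokens as special tokens in the model's tokenizer and apply subword tokenization instead of `split()`.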
arXiv Detail & Related papers (2020-04-15T04:09:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.