Coreference-Aware Dialogue Summarization
- URL: http://arxiv.org/abs/2106.08556v1
- Date: Wed, 16 Jun 2021 05:18:50 GMT
- Title: Coreference-Aware Dialogue Summarization
- Authors: Zhengyuan Liu, Ke Shi, Nancy F. Chen
- Abstract summary: We investigate approaches to explicitly incorporate coreference information in neural abstractive dialogue summarization models.
Experimental results show that the proposed approaches achieve state-of-the-art performance.
Evaluation results on factual correctness suggest such coreference-aware models are better at tracing the information flow among interlocutors.
- Score: 24.986030179701405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Summarizing conversations via neural approaches has been gaining research
traction lately, yet it is still challenging to obtain practical solutions.
Examples of such challenges include unstructured information exchange in
dialogues, informal interactions between speakers, and dynamic role changes of
speakers as the dialogue evolves. Many such challenges result in complex
coreference links. Therefore, in this work, we investigate different approaches
to explicitly incorporate coreference information in neural abstractive
dialogue summarization models to tackle the aforementioned challenges.
Experimental results show that the proposed approaches achieve state-of-the-art
performance, implying that coreference information is useful for
dialogue summarization. Evaluation results on factual correctness suggest such
coreference-aware models are better at tracing the information flow among
interlocutors and associating accurate status/actions with the corresponding
interlocutors and person mentions.
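One simple way to "explicitly incorporate coreference information" is to tag coreferent mentions in the dialogue input with shared cluster IDs before passing the text to a summarizer. The sketch below is a minimal illustration of that idea, not the authors' implementation; the tag format (`[C0]`, `[C1]`, ...) and the character-offset mention representation are assumptions for the example.

```python
def annotate_coreference(utterances, clusters):
    """Tag coreferent mentions with a shared cluster id so a
    summarizer sees explicit links, e.g. 'She' -> 'She [C0]'.

    utterances: list of strings, one per dialogue turn.
    clusters: list of mention lists; each mention is a tuple
              (turn_index, start_char, end_char), end exclusive.
    """
    # Collect tag positions per turn, then apply them right-to-left
    # so earlier character offsets stay valid after each insertion.
    tags = {}  # turn_index -> list of (end_char, cluster_id)
    for cid, mentions in enumerate(clusters):
        for turn, _start, end in mentions:
            tags.setdefault(turn, []).append((end, cid))

    annotated = []
    for i, utt in enumerate(utterances):
        for end, cid in sorted(tags.get(i, []), reverse=True):
            utt = utt[:end] + f" [C{cid}]" + utt[end:]
        annotated.append(utt)
    return annotated
```

For a dialogue like `["Amanda: I baked cookies.", "Jerry: She always bakes them."]` with one cluster linking "Amanda" and "She", this yields `"Amanda [C0]: ..."` and `"Jerry: She [C0] ..."`, making the link between the name and the pronoun visible to the model as plain tokens.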
Related papers
- Evaluating Robustness of Dialogue Summarization Models in the Presence of Naturally Occurring Variations [13.749495524988774]
We systematically investigate the impact of real-life variations on state-of-the-art dialogue summarization models.
We introduce two types of perturbations: utterance-level perturbations that modify individual utterances with errors and language variations, and dialogue-level perturbations that add non-informative exchanges.
We find that both fine-tuned and instruction-tuned models are affected by input variations, with the latter being more susceptible.
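The two perturbation types described above can be sketched as simple input transforms. This is an illustrative sketch of the general idea, not the paper's perturbation suite; the specific filler words and chitchat turns are invented for the example.

```python
import random

FILLERS = ["um,", "uh,", "you know,"]

def add_filler(utterance, rng):
    """Utterance-level perturbation: inject a disfluency filler
    at a random word position, simulating spoken-language noise."""
    words = utterance.split()
    pos = rng.randrange(len(words) + 1)
    words.insert(pos, rng.choice(FILLERS))
    return " ".join(words)

def add_chitchat(dialogue, rng):
    """Dialogue-level perturbation: append a non-informative
    exchange that should not change the reference summary."""
    return dialogue + ["A: Sorry, go on.", "B: No problem."]
```

Running a summarizer on both the original and perturbed dialogues and comparing the outputs is the basic recipe for this kind of robustness evaluation.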
arXiv Detail & Related papers (2023-11-15T05:11:43Z) - Pre-training Multi-party Dialogue Models with Latent Discourse Inference [85.9683181507206]
We pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying.
To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model.
arXiv Detail & Related papers (2023-05-24T14:06:27Z) - "How Robust r u?": Evaluating Task-Oriented Dialogue Systems on Spoken
Conversations [87.95711406978157]
This work presents a new benchmark on spoken task-oriented conversations.
We study multi-domain dialogue state tracking and knowledge-grounded dialogue modeling.
Our data set enables speech-based benchmarking of task-oriented dialogue systems.
arXiv Detail & Related papers (2021-09-28T04:51:04Z) - Speaker-Oriented Latent Structures for Dialogue-Based Relation
Extraction [10.381257436462116]
We introduce SOLS, a novel model which can explicitly induce speaker-oriented latent structures for better DiaRE.
Specifically, we learn latent structures to capture the relationships among tokens beyond the utterance boundaries.
During the learning process, our speaker-specific regularization method progressively highlights speaker-related key clues and erases the irrelevant ones.
arXiv Detail & Related papers (2021-09-11T04:24:51Z) - Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization [41.75442239197745]
This work proposes two topic-aware contrastive learning objectives, namely coherence detection and sub-summary generation objectives.
Experiments on benchmark datasets demonstrate that the proposed simple method significantly outperforms strong baselines.
arXiv Detail & Related papers (2021-09-10T17:03:25Z) - Self- and Pseudo-self-supervised Prediction of Speaker and Key-utterance for Multi-party Dialogue Reading Comprehension [46.69961067676279]
Multi-party dialogue machine reading comprehension (MRC) poses a significant challenge since it involves multiple speakers in one dialogue.
Previous models focus on how to incorporate speaker information flows using complex graph-based modules.
In this paper, we design two labour-free self- and pseudo-self-supervised prediction tasks on speaker and key-utterance to implicitly model the speaker information flows.
arXiv Detail & Related papers (2021-09-08T16:51:41Z) - I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling [104.09033240889106]
We introduce the DialoguE COntradiction DEtection task (DECODE) and a new conversational dataset containing both human-human and human-bot contradictory dialogues.
We then compare a structured utterance-based approach of using pre-trained Transformer models for contradiction detection with the typical unstructured approach.
arXiv Detail & Related papers (2020-12-24T18:47:49Z) - Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
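Self-supervised tasks like the incoherence detection mentioned above can be trained on labels derived from the dialogue itself, with no annotation. The sketch below shows one common way to construct such training pairs, assumed here for illustration rather than taken from the paper: the original context is a positive example, and swapping two utterances yields a negative one.

```python
import random

def make_incoherence_example(context, rng):
    """Build one incoherence-detection training pair from an
    unlabeled dialogue: (original context, label 1) is coherent,
    and a copy with two random utterances swapped gets label 0."""
    i, j = rng.sample(range(len(context)), 2)  # two distinct turns
    perturbed = list(context)
    perturbed[i], perturbed[j] = perturbed[j], perturbed[i]
    return (context, 1), (perturbed, 0)
```

A classifier head on top of the dialogue encoder is then trained to separate the two, alongside the main response-selection objective in a multi-task setup.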
arXiv Detail & Related papers (2020-09-14T08:44:46Z) - Fact-based Dialogue Generation with Convergent and Divergent Decoding [2.28438857884398]
This paper proposes an end-to-end fact-based dialogue system augmented with convergent and divergent thinking capabilities.
Our model incorporates a novel convergent and divergent decoding scheme that can generate informative and diverse responses.
arXiv Detail & Related papers (2020-05-06T23:49:35Z) - Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.