He Said, She Said: Style Transfer for Shifting the Perspective of
Dialogues
- URL: http://arxiv.org/abs/2210.15462v1
- Date: Thu, 27 Oct 2022 14:16:07 GMT
- Title: He Said, She Said: Style Transfer for Shifting the Perspective of
Dialogues
- Authors: Amanda Bertsch, Graham Neubig, Matthew R. Gormley
- Abstract summary: We define a new style transfer task: perspective shift, which reframes a dialogue from informal first person to a formal third person rephrasing of the text.
As a sample application, we demonstrate that applying perspective shifting to a dialogue summarization dataset (SAMSum) substantially improves the zero-shot performance of extractive news summarization models.
- Score: 75.58367095888914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we define a new style transfer task: perspective shift, which
reframes a dialogue from informal first person to a formal third person
rephrasing of the text. This task requires challenging coreference resolution,
emotion attribution, and interpretation of informal text. We explore several
baseline approaches and discuss further directions on this task when applied to
short dialogues. As a sample application, we demonstrate that applying
perspective shifting to a dialogue summarization dataset (SAMSum) substantially
improves the zero-shot performance of extractive news summarization models on
this data. Additionally, supervised extractive models perform better when
trained on perspective shifted data than on the original dialogues. We release
our code publicly.
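To make the perspective-shift task concrete, here is a toy Python sketch of the input/output shape on a SAMSum-style exchange. It uses a naive reporting-verb rule and is purely illustrative (not the authors' released code); it deliberately skips the hard parts named above, such as coreference resolution and pronoun rewriting.

    import re

    # Naive reporting-verb baseline, for illustration only: turn each
    # "Speaker: utterance" line into a third-person reported line.
    SPEAKER_LINE = re.compile(r"^(?P<speaker>[^:]+):\s*(?P<utterance>.+)$")

    def naive_perspective_shift(dialogue: str) -> str:
        """Rewrite 'Amanda: see you at 5!' as 'Amanda said, "see you at 5!"'."""
        shifted = []
        for line in dialogue.strip().splitlines():
            match = SPEAKER_LINE.match(line)
            if match:
                shifted.append(f'{match["speaker"]} said, "{match["utterance"]}"')
            else:
                shifted.append(line)  # keep non-utterance lines unchanged
        return " ".join(shifted)

    dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure!"
    print(naive_perspective_shift(dialogue))
    # Amanda said, "I baked cookies. Do you want some?" Jerry said, "Sure!"

A real perspective shifter must also rewrite first-person pronouns (e.g., "I baked cookies" becoming "she baked cookies"), which is exactly where the coreference and informal-text challenges arise.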
Related papers
- Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer in
Prompt Tuning [47.336815771549524]
Skeleton-Assisted Prompt Transfer improves prompt transfer from dialogue state tracking to dialogue summarization.
We propose a novel approach with perturbation-based probes requiring neither annotation effort nor domain knowledge.
In-depth analyses demonstrate the effectiveness of our method in facilitating cross-task knowledge transfer in few-shot dialogue summarization.
arXiv Detail & Related papers (2023-05-20T03:32:48Z)
- DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization [127.714919036388]
DIONYSUS is a pre-trained encoder-decoder model for summarizing dialogues in any new domain.
Our experiments show that DIONYSUS outperforms existing methods on six datasets.
arXiv Detail & Related papers (2022-12-20T06:21:21Z)
- Post-Training Dialogue Summarization using Pseudo-Paraphrasing [12.083992819138716]
We propose to post-train pretrained language models (PLMs) to rephrase dialogues into narratives.
Comprehensive experiments show that our approach significantly improves vanilla PLMs on dialogue summarization.
arXiv Detail & Related papers (2022-04-28T13:42:19Z)
- Precognition in Task-oriented Dialogue Understanding: Posterior Regularization by Future Context [8.59600111891194]
We propose to jointly model historical and future information through the posterior regularization method.
We optimize the KL distance between the two resulting distributions to regularize our model during training.
Experiments on two dialogue datasets validate the effectiveness of our proposed method.
arXiv Detail & Related papers (2022-03-07T09:58:50Z)
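As a concrete reading of the KL-regularization idea above, the following hedged sketch (hypothetical, not the paper's code; the toy logits and the 0.1 weight are assumptions) pulls a history-only prediction toward one made with future turns visible:

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    batch, num_labels = 4, 8

    # Student sees dialogue history only; the "teacher" also sees future turns.
    history_logits = torch.randn(batch, num_labels, requires_grad=True)
    future_logits = torch.randn(batch, num_labels)

    # F.kl_div expects log-probabilities as input and probabilities as target.
    kl_loss = F.kl_div(
        F.log_softmax(history_logits, dim=-1),
        F.softmax(future_logits, dim=-1),
        reduction="batchmean",
    )
    task_loss = torch.tensor(0.0)     # stand-in for the usual supervised loss
    loss = task_loss + 0.1 * kl_loss  # hypothetical regularization weight
    loss.backward()
    print(f"KL regularizer: {kl_loss.item():.4f}")

Since future turns are unavailable at inference time, the future-aware distribution acts purely as a training-time regularizer.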
- Modeling Coreference Relations in Visual Dialog [18.926582410644375]
The occurrence of coreference relations in dialog makes it a more challenging task than visual question answering.
We propose two soft constraints that improve the model's ability to resolve coreferences in dialog in an unsupervised way.
arXiv Detail & Related papers (2022-03-06T15:22:24Z)
- Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization [41.75442239197745]
This work proposes two topic-aware contrastive learning objectives: coherence detection and sub-summary generation.
Experiments on benchmark datasets demonstrate that the proposed simple method significantly outperforms strong baselines.
arXiv Detail & Related papers (2021-09-10T17:03:25Z)
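One plausible instantiation of the contrastive objectives above is a standard InfoNCE loss (an assumption about the general recipe, not the paper's implementation), where a topic segment and a coherent view of it form a positive pair and the other in-batch pairs serve as negatives:

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    anchor = F.normalize(torch.randn(4, 64), dim=-1)    # topic-segment encodings
    positive = F.normalize(torch.randn(4, 64), dim=-1)  # coherent (in-order) views
    temperature = 0.1

    logits = anchor @ positive.t() / temperature  # pairwise cosine similarities
    labels = torch.arange(4)                      # i-th anchor matches i-th positive
    loss = F.cross_entropy(logits, labels)        # InfoNCE over in-batch negatives
    print(f"contrastive loss: {loss.item():.4f}")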
- Dialogue Summarization with Supporting Utterance Flow Modeling and Fact Regularization [58.965859508695225]
We propose an end-to-end neural model for dialogue summarization with two novel modules.
The supporting utterance flow modeling module helps generate a coherent summary by smoothly shifting focus from earlier utterances to later ones.
The fact regularization encourages the generated summary to be factually consistent with the ground-truth summary during model training.
arXiv Detail & Related papers (2021-08-03T03:09:25Z)
- Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension [49.92173751203827]
In multi-turn dialog, utterances do not always take the full form of sentences.
We propose to improve response generation by examining the model's ability to answer a reading comprehension question.
arXiv Detail & Related papers (2020-12-14T10:58:01Z)
- Modeling Long Context for Task-Oriented Dialogue State Generation [51.044300192906995]
We propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model.
Our approach addresses the problem that baseline performance drops significantly when the input dialogue context is long.
In our experiments, our proposed model achieves a 7.03% relative improvement over the baseline, establishing a new state-of-the-art joint goal accuracy of 52.04% on the MultiWOZ 2.0 dataset.
arXiv Detail & Related papers (2020-04-29T11:02:25Z)
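A minimal sketch of the utterance-tagging idea above; the <usr>/<sys> tags and the helper function are hypothetical, not the paper's exact scheme:

    # Mark each turn with a speaker tag so the encoder can track who said
    # what across a long dialogue context.
    def tag_dialogue(turns):
        """turns: list of (speaker, utterance) pairs, speaker in {'user', 'system'}."""
        tagged = []
        for speaker, utterance in turns:
            tag = "<usr>" if speaker == "user" else "<sys>"
            tagged.append(f"{tag} {utterance}")
        return " ".join(tagged)

    history = [
        ("user", "I need a cheap hotel in the north."),
        ("system", "Worth House is a cheap 4-star guesthouse in the north."),
        ("user", "Book it for two nights, please."),
    ]
    print(tag_dialogue(history))

The tagged sequence would then feed the state-generation model, trained jointly with the bidirectional language-model objective described above.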