Controllable Neural Dialogue Summarization with Personal Named Entity
Planning
- URL: http://arxiv.org/abs/2109.13070v1
- Date: Mon, 27 Sep 2021 14:19:32 GMT
- Title: Controllable Neural Dialogue Summarization with Personal Named Entity
Planning
- Authors: Zhengyuan Liu, Nancy F. Chen
- Abstract summary: We propose a controllable neural generation framework that can guide dialogue summarization with personal named entity planning.
The conditional sequences are modulated to decide what types of information or what perspective to focus on when forming summaries.
- Score: 25.805553277418813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a controllable neural generation framework that can
flexibly guide dialogue summarization with personal named entity planning. The
conditional sequences are modulated to decide what types of information or what
perspective to focus on when forming summaries to tackle the under-constrained
problem in summarization tasks. This framework supports two types of use cases:
(1) Comprehensive Perspective, which is a general-purpose case with no
user-preference specified, considering summary points from all conversational
interlocutors and all mentioned persons; (2) Focus Perspective, positioning the
summary based on a user-specified personal named entity, which could be one of
the interlocutors or one of the persons mentioned in the conversation. During
training, we exploit occurrence planning of personal named entities and
coreference information to improve temporal coherence and to minimize
hallucination in neural generation. Experimental results show that our proposed
framework generates fluent and factually consistent summaries under various
planning controls using both objective metrics and human evaluations.
Related papers
- Personalized Topic Selection Model for Topic-Grounded Dialogue [24.74527189182273]
Current models tend to predict topics that are uninteresting to the user and contextually irrelevant.
We propose a Personalized topic sElection model for Topic-grounded Dialogue, named PETD.
Our proposed method can generate engaging and diverse responses, outperforming state-of-the-art baselines.
arXiv Detail & Related papers (2024-06-04T06:09:49Z)
- SWING: Balancing Coverage and Faithfulness for Dialogue Summarization [67.76393867114923]
We propose to utilize natural language inference (NLI) models to improve coverage while avoiding factual inconsistencies.
We use NLI to compute fine-grained training signals that encourage the model to generate content from the reference summaries that has not yet been covered.
Experiments on the DialogSum and SAMSum datasets confirm the effectiveness of the proposed approach.
arXiv Detail & Related papers (2023-01-25T09:33:11Z)
- Human-in-the-loop Abstractive Dialogue Summarization [61.4108097664697]
We propose to incorporate different levels of human feedback into the training process.
This will enable us to guide the models to capture the behaviors humans care about for summaries.
arXiv Detail & Related papers (2022-12-19T19:11:27Z)
- Improving Personality Consistency in Conversation by Persona Extending [22.124187337032946]
We propose a novel retrieval-to-prediction paradigm consisting of two subcomponents, namely, a Persona Retrieval Model (PRM) and a Posterior-scored Transformer (PS-Transformer).
Our proposed model yields considerable improvements in both automatic metrics and human evaluations.
arXiv Detail & Related papers (2022-08-23T09:00:58Z)
- Aspect-Controllable Opinion Summarization [58.5308638148329]
We propose an approach that allows the generation of customized summaries based on aspect queries.
Using a review corpus, we create a synthetic training dataset of (review, summary) pairs enriched with aspect controllers.
We fine-tune a pretrained model using our synthetic dataset and generate aspect-specific summaries by modifying the aspect controllers.
arXiv Detail & Related papers (2021-09-07T16:09:17Z)
- Controllable Abstractive Dialogue Summarization with Sketch Supervision [56.59357883827276]
Our model achieves state-of-the-art performance on SAMSum, the largest dialogue summarization corpus, with a ROUGE-L score of up to 50.79.
arXiv Detail & Related papers (2021-05-28T19:05:36Z)
- Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z)
- Probing Task-Oriented Dialogue Representation from Language Models [106.02947285212132]
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks.
We fine-tune a feed-forward layer as the classifier probe on top of a fixed pre-trained language model with annotated labels in a supervised way.
arXiv Detail & Related papers (2020-10-26T21:34:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.