TODSum: Task-Oriented Dialogue Summarization with State Tracking
- URL: http://arxiv.org/abs/2110.12680v1
- Date: Mon, 25 Oct 2021 06:53:11 GMT
- Title: TODSum: Task-Oriented Dialogue Summarization with State Tracking
- Authors: Lulu Zhao, Fujia Zheng, Keqing He, Weihao Zeng, Yuejie Lei, Huixing
Jiang, Wei Wu, Weiran Xu, Jun Guo, Fanyu Meng
- Abstract summary: We introduce a large-scale public Task-Oriented Dialogue Summarization dataset, TODSum.
Compared to existing datasets, TODSum exhibits severe information-scattering issues and requires strict factual consistency.
We propose a state-aware structured dialogue summarization model to integrate dialogue state information and dialogue history.
- Score: 16.87549093925514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous dialogue summarization datasets mainly focus on open-domain chitchat
dialogues, while summarization datasets for the widely used task-oriented
dialogues have not been explored yet. Automatically summarizing such
task-oriented dialogues can help a business collect and review user needs to
improve its service. Besides, previous work pays more attention to generating
summaries with high ROUGE scores, but it hardly captures the structured
information of dialogues and ignores the factuality of summaries. In this paper,
we introduce a large-scale public Task-Oriented Dialogue Summarization dataset,
TODSum, which aims to summarize the key points of an agent completing certain
tasks with a user. Compared to existing datasets, TODSum exhibits severe
information-scattering issues and requires strict factual consistency, which
makes it hard to apply recent dialogue summarization models directly.
Therefore, we introduce additional dialogue state knowledge for TODSum to
enhance the faithfulness of generated summaries. We hope that a better
understanding of conversational content will help summarization models generate
concise and coherent summaries. Meanwhile, we establish a comprehensive
benchmark for TODSum and propose a state-aware structured dialogue summarization
model that integrates dialogue state information and dialogue history.
Extensive experiments and qualitative analysis demonstrate the effectiveness of
dialogue structure guidance. Finally, we discuss the current issues of TODSum
and potential directions for future work.
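To make the idea of integrating dialogue state with dialogue history concrete, the sketch below flattens dialogue-state slot-value pairs into a textual prefix and feeds it, together with the dialogue history, to an off-the-shelf seq2seq summarizer. This is only a minimal illustration of the kind of information the state contributes, not the TODSum authors' state-aware model; the serialization format, the example slots, and the choice of facebook/bart-large-cnn are assumptions made here for demonstration.

```python
# Minimal sketch (not the paper's model): prepend a flattened dialogue state
# to the dialogue history and summarize with an off-the-shelf seq2seq model.
# The slot names, the serialization format, and facebook/bart-large-cnn are
# illustrative assumptions.
from transformers import BartForConditionalGeneration, BartTokenizer


def serialize_state(state):
    """Flatten dialogue-state slot-value pairs into a short text prefix."""
    return " | ".join(f"{slot} = {value}" for slot, value in state.items())


def build_input(state, turns):
    """Combine the serialized state with the dialogue history."""
    return "state: " + serialize_state(state) + " dialogue: " + " ".join(turns)


if __name__ == "__main__":
    # Hypothetical restaurant-booking dialogue state and history.
    state = {
        "restaurant-food": "italian",
        "restaurant-area": "centre",
        "restaurant-book people": "2",
    }
    turns = [
        "user: I need an Italian restaurant in the centre.",
        "agent: Pizza Express fits your request. Shall I book it?",
        "user: Yes, a table for two, please.",
    ]

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

    inputs = tokenizer(build_input(state, turns), return_tensors="pt", truncation=True)
    summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
    print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

The plain-text prefix only shows what the dialogue state adds as input; a state-aware model could instead encode the state separately and attend over it, which is closer in spirit to the structured guidance the abstract describes.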
Related papers
- Increasing faithfulness in human-human dialog summarization with Spoken Language Understanding tasks [0.0]
We explore how incorporating task-related information can enhance the summarization process.
Results show that integrating models with task-related information improves summary accuracy, even with varying word error rates.
arXiv Detail & Related papers (2024-09-16T08:15:35Z)
- SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation [55.82577086422923]
We provide a feasible definition of dialogue segmentation points with the help of document-grounded dialogues.
We release a large-scale supervised dataset called SuperDialseg, containing 9,478 dialogues.
We also provide a benchmark including 18 models across five categories for the dialogue segmentation task.
arXiv Detail & Related papers (2023-05-15T06:08:01Z)
- FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks.
arXiv Detail & Related papers (2023-03-31T23:58:28Z)
- DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization [127.714919036388]
DIONYSUS is a pre-trained encoder-decoder model for summarizing dialogues in any new domain.
Our experiments show that DIONYSUS outperforms existing methods on six datasets.
arXiv Detail & Related papers (2022-12-20T06:21:21Z)
- Enhancing Semantic Understanding with Self-supervised Methods for Abstractive Dialogue Summarization [4.226093500082746]
We introduce self-supervised methods to compensate for shortcomings in training a dialogue summarization model.
Our principle is to detect incoherent information flows using pretext dialogue text to enhance BERT's ability to contextualize the dialogue text representations (a sketch of such a pretext task appears after this list).
arXiv Detail & Related papers (2022-09-01T07:51:46Z)
- HybriDialogue: An Information-Seeking Dialogue Dataset Grounded on Tabular and Textual Data [87.67278915655712]
We present a new dialogue dataset, HybriDialogue, which consists of crowdsourced natural conversations grounded on both Wikipedia text and tables.
The conversations are created through the decomposition of complex multihop questions into simple, realistic multiturn dialogue interactions.
arXiv Detail & Related papers (2022-04-28T00:52:16Z)
- Structure Extraction in Task-Oriented Dialogues with Slot Clustering [94.27806592467537]
In task-oriented dialogues, dialogue structure has often been considered as transition graphs among dialogue states.
We propose a simple yet effective approach for structure extraction in task-oriented dialogues.
arXiv Detail & Related papers (2022-02-28T20:18:12Z)
- Dialogue Inspectional Summarization with Factual Inconsistency Awareness [34.97845384948336]
We investigate the factual inconsistency problem for Dialogue Inspectional Summarization (DIS) under non-pretraining and pretraining settings.
An innovative end-to-end dialogue summary generation framework is proposed with two auxiliary tasks.
Comprehensive experiments demonstrate that the proposed model can generate a more readable summary with accurate coverage of factual aspects.
arXiv Detail & Related papers (2021-11-05T06:26:22Z)
- CSDS: A Fine-grained Chinese Dataset for Customer Service Dialogue Summarization [44.21084429627218]
We introduce a novel Chinese dataset for Customer Service Dialogue Summarization (CSDS).
CSDS improves the abstractive summaries in two aspects: (1) In addition to the overall summary for the whole dialogue, role-oriented summaries are also provided to acquire different speakers' viewpoints.
We compare various summarization methods on CSDS, and experiment results show that existing methods are prone to generate redundant and incoherent summaries.
arXiv Detail & Related papers (2021-08-30T11:56:58Z)
- Topic-Oriented Spoken Dialogue Summarization for Customer Service with Saliency-Aware Topic Modeling [61.67321200994117]
In a customer service system, dialogue summarization can boost service efficiency by creating summaries for long spoken dialogues.
In this work, we focus on topic-oriented dialogue summarization, which generates highly abstractive summaries.
We propose a novel topic-augmented two-stage dialogue summarizer (TDS) jointly with a saliency-aware neural topic model (SATM) for topic-oriented summarization of customer service dialogues.
arXiv Detail & Related papers (2020-12-14T07:50:25Z)
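As referenced in the self-supervised dialogue summarization entry above, a common way to realize an incoherence-detection pretext task is to corrupt the utterance order of a dialogue and train a classifier to distinguish original from corrupted sequences. The sketch below only builds such pretext examples; the corruption scheme and the example dialogue are assumptions made here for illustration, since the abstract does not spell out the exact procedure.

```python
# Minimal sketch, under assumptions: build pretext examples for detecting
# incoherent information flow by swapping two utterances in a dialogue.
# A BERT-style classifier would then be trained to predict the label.
import random


def make_pretext_example(turns, corrupt_prob=0.5, rng=random):
    """Return (utterances, label): label 1 = original order, 0 = corrupted."""
    if len(turns) > 1 and rng.random() < corrupt_prob:
        corrupted = list(turns)
        i, j = rng.sample(range(len(corrupted)), 2)
        corrupted[i], corrupted[j] = corrupted[j], corrupted[i]  # swap two utterances
        return corrupted, 0
    return list(turns), 1


if __name__ == "__main__":
    dialogue = [
        "user: I want to book a table for two tonight.",
        "agent: Which cuisine do you prefer?",
        "user: Italian, somewhere in the city centre.",
        "agent: Booked Pizza Express for 19:00.",
    ]
    random.seed(0)
    for _ in range(3):
        utterances, label = make_pretext_example(dialogue)
        print(label, utterances)
```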