Long Dialog Summarization: An Analysis
- URL: http://arxiv.org/abs/2402.16986v1
- Date: Mon, 26 Feb 2024 19:35:45 GMT
- Title: Long Dialog Summarization: An Analysis
- Authors: Ankan Mullick, Ayan Kumar Bhowmick, Raghav R, Ravi Kokku, Prasenjit
Dey, Pawan Goyal, Niloy Ganguly
- Abstract summary: This work emphasizes the significance of creating coherent and contextually rich summaries for effective communication in various applications.
We explore current state-of-the-art approaches to long dialog summarization across different domains; benchmark metric-based evaluations show that no single model performs well across areas and distinct summarization tasks.
- Score: 28.223798877781054
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Dialog summarization has become increasingly important in managing and
comprehending large-scale conversations across various domains. This task
presents unique challenges in capturing the key points, context, and nuances of
multi-turn long conversations for summarization. It is worth noting that
summarization techniques may vary with the specific requirements: in a
shopping-chatbot scenario, the dialog summary helps to learn user preferences,
whereas in a customer call center, the summary may capture the problem
attributes that a user specified and the final resolution provided.
This work emphasizes the significance of creating coherent and contextually
rich summaries for effective communication in various applications. We explore
current state-of-the-art approaches for long dialog summarization in different
domains, and benchmark metric-based evaluations show that no single model
performs well across areas and distinct summarization tasks.
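The benchmark evaluations referenced above typically rely on lexical-overlap metrics such as ROUGE. As a minimal illustration only (not the paper's actual evaluation code), ROUGE-1 F1, the unigram-overlap score commonly reported for dialog summarization, can be sketched as follows; the reference and candidate summaries are invented examples:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a reference and a candidate summary."""
    ref_tokens = reference.lower().split()
    cand_tokens = candidate.lower().split()
    if not ref_tokens or not cand_tokens:
        return 0.0
    # Multiset intersection counts how many unigrams the two summaries share.
    overlap = sum((Counter(ref_tokens) & Counter(cand_tokens)).values())
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical comparison of two model outputs against one human reference.
reference = "user asked for a refund and the agent issued it"
candidates = {
    "model_a": "the agent issued a refund after the user asked",
    "model_b": "conversation about an order",
}
scores = {name: rouge1_f1(reference, c) for name, c in candidates.items()}
```

Published results normally use a full implementation (with stemming and ROUGE-2/ROUGE-L variants) rather than this simplified unigram version.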
Related papers
- Increasing faithfulness in human-human dialog summarization with Spoken Language Understanding tasks
We propose an exploration of how incorporating task-related information can enhance the summarization process.
Results show that integrating models with task-related information improves summary accuracy, even with varying word error rates.
arXiv Detail & Related papers (2024-09-16T08:15:35Z)
- Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs)
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
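The "Self-Explanation" strategy described above asks the model to analyze each utterance before performing the downstream task. A hypothetical sketch of such a prompt builder (the function name, dialogue, and wording are illustrative assumptions, not the paper's actual template) might look like:

```python
def build_self_explanation_prompt(dialogue: list[str], task_instruction: str) -> str:
    """Hypothetical prompt in the spirit of Self-Explanation prompting:
    the model is asked to explain every utterance before doing the task."""
    lines = ["Dialogue:"]
    # Number each utterance so the explanations can refer back to them.
    lines += [f"  {i + 1}. {utt}" for i, utt in enumerate(dialogue)]
    lines.append("First, explain the intent of each utterance in one sentence.")
    lines.append(f"Then, {task_instruction}")
    return "\n".join(lines)

prompt = build_self_explanation_prompt(
    ["Hi, my order arrived damaged.", "Sorry to hear that! I can send a replacement."],
    "summarize the dialogue in one sentence.",
)
```

Because the approach is task-agnostic, only `task_instruction` changes between summarization, act classification, or other dialogue-centric tasks.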
arXiv Detail & Related papers (2023-09-22T15:41:34Z)
- Dialogue Agents 101: A Beginner's Guide to Critical Ingredients for Designing Effective Conversational Systems
This study provides a comprehensive overview of the primary characteristics of a dialogue agent, their corresponding open-domain datasets, and the methods used to benchmark these datasets.
We propose UNIT, a UNified dIalogue dataseT constructed from conversations of existing datasets for different dialogue tasks capturing the nuances for each of them.
arXiv Detail & Related papers (2023-07-14T10:05:47Z)
- DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization
DIONYSUS is a pre-trained encoder-decoder model for summarizing dialogues in any new domain.
Our experiments show that DIONYSUS outperforms existing methods on six datasets.
arXiv Detail & Related papers (2022-12-20T06:21:21Z)
- A Focused Study on Sequence Length for Dialogue Summarization
First, we analyze the length differences between existing models' outputs and the corresponding human references.
Second, we identify salient features for summary-length prediction by comparing different model settings.
Third, we experiment with a length-aware summarizer and show notable improvements over existing models when summary length is well incorporated.
arXiv Detail & Related papers (2022-09-24T02:49:48Z)
- TODSum: Task-Oriented Dialogue Summarization with State Tracking
We introduce a large-scale public Task-Oriented Dialogue Summarization dataset, TODSum.
Compared to existing work, TODSum suffers from severe scattered information issues and requires strict factual consistency.
We propose a state-aware structured dialogue summarization model to integrate dialogue state information and dialogue history.
arXiv Detail & Related papers (2021-10-25T06:53:11Z)
- Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization
This work proposes two topic-aware contrastive learning objectives, namely coherence detection and sub-summary generation objectives.
Experiments on benchmark datasets demonstrate that the proposed simple method significantly outperforms strong baselines.
arXiv Detail & Related papers (2021-09-10T17:03:25Z)
- CSDS: A Fine-grained Chinese Dataset for Customer Service Dialogue Summarization
We introduce a novel Chinese dataset for Customer Service Dialogue Summarization (CSDS).
CSDS improves the abstractive summaries in two aspects: (1) In addition to the overall summary for the whole dialogue, role-oriented summaries are also provided to acquire different speakers' viewpoints.
We compare various summarization methods on CSDS, and experiment results show that existing methods are prone to generate redundant and incoherent summaries.
arXiv Detail & Related papers (2021-08-30T11:56:58Z)
- Topic-Oriented Spoken Dialogue Summarization for Customer Service with Saliency-Aware Topic Modeling
In a customer service system, dialogue summarization can boost service efficiency by creating summaries for long spoken dialogues.
In this work, we focus on topic-oriented dialogue summarization, which generates highly abstractive summaries.
We propose a novel topic-augmented two-stage dialogue summarizer (TDS), jointly with a saliency-aware neural topic model (SATM), for topic-oriented summarization of customer service dialogues.
arXiv Detail & Related papers (2020-12-14T07:50:25Z)
- Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization
Text summarization is one of the most challenging and interesting problems in NLP.
This work proposes a multi-view sequence-to-sequence model by first extracting conversational structures of unstructured daily chats from different views to represent conversations.
Experiments on a large-scale dialogue summarization corpus demonstrated that our methods significantly outperformed previous state-of-the-art models via both automatic evaluations and human judgment.
arXiv Detail & Related papers (2020-10-04T20:12:44Z)
- Masking Orchestration: Multi-task Pretraining for Multi-role Dialogue Representation Learning
Multi-role dialogue understanding comprises a wide range of diverse tasks such as question answering, act classification, dialogue summarization etc.
While dialogue corpora are abundantly available, labeled data, for specific learning tasks, can be highly scarce and expensive.
In this work, we investigate dialogue context representation learning with various types of unsupervised pretraining tasks.
arXiv Detail & Related papers (2020-02-27T04:36:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.