Dynamic Sliding Window for Meeting Summarization
- URL: http://arxiv.org/abs/2108.13629v1
- Date: Tue, 31 Aug 2021 05:39:48 GMT
- Title: Dynamic Sliding Window for Meeting Summarization
- Authors: Zhengyuan Liu, Nancy F. Chen
- Abstract summary: We analyze the linguistic characteristics of meeting transcripts on a representative corpus, and find that the sentences comprising the summary correlate with the meeting agenda.
We propose a dynamic sliding window strategy for meeting summarization.
- Score: 25.805553277418813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, abstractive spoken language summarization has attracted
growing research interest, and neural sequence-to-sequence approaches have
brought significant performance improvements. However, summarizing long meeting
transcripts remains challenging. Because both the source contents and the
target summaries are long, neural models are prone to be distracted by the
context and produce summaries of degraded quality. Moreover, pre-trained
language models with input length limitations cannot be readily applied to
long sequences. In this work, we first analyze the linguistic characteristics
of meeting transcripts on a representative corpus and find that the sentences
comprising the summary correlate with the meeting agenda. Based on this
observation, we propose a dynamic sliding window strategy for meeting
summarization. Experimental results show that performance benefits from the
proposed method, and its outputs achieve higher factual consistency than those
of the base model.
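The core idea of a sliding window strategy is to split a transcript that exceeds a model's input limit into overlapping windows, summarize each window, and concatenate the pieces. The sketch below is a minimal illustration of that idea, not the paper's exact method: the fixed `window_size` and `stride` are assumptions, and the `summarize` callable is a hypothetical stand-in for a neural summarizer; the paper's dynamic variant would instead place window boundaries at predicted agenda shifts.

```python
# Illustrative sliding-window split for long meeting transcripts.
# window_size/stride values are arbitrary assumptions for the sketch;
# a dynamic variant would align boundaries with agenda/topic shifts.

def sliding_windows(utterances, window_size=4, stride=2):
    """Return overlapping windows of consecutive utterances."""
    windows = []
    for start in range(0, len(utterances), stride):
        window = utterances[start:start + window_size]
        if not window:
            break
        windows.append(window)
        # Stop once a window reaches the end of the transcript.
        if start + window_size >= len(utterances):
            break
    return windows


def summarize_meeting(utterances, summarize, window_size=4, stride=2):
    """Summarize each window independently and join the partial summaries.

    `summarize` is a hypothetical callable mapping a list of utterances
    to a summary string (e.g. a seq2seq model behind a wrapper).
    """
    parts = [summarize(w)
             for w in sliding_windows(utterances, window_size, stride)]
    return " ".join(parts)
```

For example, a six-utterance transcript with `window_size=4, stride=2` yields two overlapping windows, each short enough to fit a length-limited pre-trained model.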
Related papers
- Leveraging Discourse Structure for Extractive Meeting Summarization [26.76383031532945]
We introduce an extractive summarization system for meetings that leverages discourse structure to better identify salient information.
We train a GNN-based node classification model to select the most important utterances, which are then combined to create an extractive summary.
Experimental results on AMI and ICSI demonstrate that our approach surpasses existing text-based and graph-based extractive summarization systems.
arXiv Detail & Related papers (2024-05-17T19:06:20Z) - Using Contextual Information for Sentence-level Morpheme Segmentation [0.0]
We redefine morpheme segmentation as a sequence-to-sequence problem, treating the entire sentence as input rather than isolating individual words.
Our findings reveal that the multilingual model consistently exhibits superior performance compared to monolingual counterparts.
arXiv Detail & Related papers (2024-03-15T20:12:32Z) - FENICE: Factuality Evaluation of summarization based on Natural language Inference and Claim Extraction [85.26780391682894]
We propose Factuality Evaluation of summarization based on Natural language Inference and Claim Extraction (FENICE)
FENICE leverages an NLI-based alignment between information in the source document and a set of atomic facts, referred to as claims, extracted from the summary.
Our metric sets a new state of the art on AGGREFACT, the de-facto benchmark for factuality evaluation.
arXiv Detail & Related papers (2024-03-04T17:57:18Z) - Self-Convinced Prompting: Few-Shot Question Answering with Repeated Introspection [13.608076739368949]
We introduce a novel framework that harnesses the potential of large-scale pre-trained language models.
Our framework processes the output of a typical few-shot chain-of-thought prompt, assesses the correctness of the response, scrutinizes the answer, and ultimately produces a new solution.
arXiv Detail & Related papers (2023-10-08T06:36:26Z) - Generating Multiple-Length Summaries via Reinforcement Learning for Unsupervised Sentence Summarization [44.835811239393244]
Sentence summarization shortens given texts while maintaining core contents of the texts.
Unsupervised approaches have been studied to summarize texts without human-written summaries.
We devise an abstractive model based on reinforcement learning without ground-truth summaries.
arXiv Detail & Related papers (2022-12-21T08:34:28Z) - A Focused Study on Sequence Length for Dialogue Summarization [68.73335643440957]
First, we analyze the length differences between existing models' outputs and the corresponding human references.
Second, we identify salient features for summary length prediction by comparing different model settings.
Third, we experiment with a length-aware summarizer and show notable improvement on existing models if summary length can be well incorporated.
arXiv Detail & Related papers (2022-09-24T02:49:48Z) - SNaC: Coherence Error Detection for Narrative Summarization [73.48220043216087]
We introduce SNaC, a narrative coherence evaluation framework rooted in fine-grained annotations for long summaries.
We develop a taxonomy of coherence errors in generated narrative summaries and collect span-level annotations for 6.6k sentences across 150 book and movie screenplay summaries.
Our work provides the first characterization of coherence errors generated by state-of-the-art summarization models and a protocol for eliciting coherence judgments from crowd annotators.
arXiv Detail & Related papers (2022-05-19T16:01:47Z) - Dialogue Summarization with Supporting Utterance Flow Modeling and Fact Regularization [58.965859508695225]
We propose an end-to-end neural model for dialogue summarization with two novel modules.
The supporting utterance flow modeling helps to generate a coherent summary by smoothly shifting the focus from the former utterances to the later ones.
The fact regularization encourages the generated summary to be factually consistent with the ground-truth summary during model training.
arXiv Detail & Related papers (2021-08-03T03:09:25Z) - Controllable Abstractive Dialogue Summarization with Sketch Supervision [56.59357883827276]
Our model achieves state-of-the-art performance on the largest dialogue summarization corpus, SAMSum, with a ROUGE-L score as high as 50.79.
arXiv Detail & Related papers (2021-05-28T19:05:36Z) - A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining [52.11221075687124]
We propose a novel abstractive summary network that adapts to the meeting scenario.
We design a hierarchical structure to accommodate long meeting transcripts and a role vector to depict the difference among speakers.
Our model outperforms previous approaches in both automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-04-04T21:00:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.