MeetSum: Transforming Meeting Transcript Summarization using
Transformers!
- URL: http://arxiv.org/abs/2108.06310v1
- Date: Fri, 13 Aug 2021 16:34:09 GMT
- Title: MeetSum: Transforming Meeting Transcript Summarization using
Transformers!
- Authors: Nima Sadri, Bohan Zhang, Bihan Liu
- Abstract summary: We utilize a Transformer-based Pointer Generator Network to generate abstract summaries for meeting transcripts.
This model uses two LSTMs as an encoder and a decoder, a Pointer network that copies words from the input text, and a Generator network that produces out-of-vocabulary words.
We show that training the model on a news summarization dataset and evaluating it zero-shot on the meeting dataset yields better results than training it on the AMI meeting dataset.
- Score: 2.1915057426589746
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Creating abstractive summaries from meeting transcripts has proven to be
challenging due to the limited amount of labeled data available for training
neural network models. Moreover, Transformer-based architectures have proven to
beat state-of-the-art models in summarizing news data. In this paper, we
utilize a Transformer-based Pointer Generator Network to generate abstract
summaries for meeting transcripts. This model uses two LSTMs as an encoder and a decoder, a Pointer network that copies words from the input text, and a
Generator network to produce out-of-vocabulary words (hence making the summary
abstractive). Moreover, a coverage mechanism is used to avoid repetition of
words in the generated summary. First, we show that training the model on a news summarization dataset and evaluating it zero-shot on the meeting dataset yields better results than training it directly on the AMI meeting dataset. Second, we show that training this model first on out-of-domain data, such as the CNN-Dailymail dataset, and then fine-tuning it on the AMI meeting dataset improves its performance significantly.
We test our model on a test set from the AMI dataset and report the ROUGE-2 score of the generated summaries to compare with previous literature. We also report the Factual score of our summaries, since it is a better benchmark for abstractive summaries than ROUGE-2, which is limited to measuring word overlap. We show that our improved model outperforms previous models by at least 5 ROUGE-2 points, which is a substantial improvement. Also, a qualitative analysis of the summaries generated by our model shows that they are human-readable and indeed capture most of the important information from the transcripts.
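The architecture described in the abstract follows the standard pointer-generator formulation: at each decoding step the output distribution is a mixture of a generator softmax over a fixed vocabulary and a copy distribution given by the attention over transcript tokens, with a coverage penalty discouraging repeated attention. The snippet below is a minimal NumPy sketch of that mixture and coverage term under the usual formulation; it is not the authors' code, and all names and shapes are illustrative.
```python
import numpy as np

def pointer_generator_step(p_vocab, attention, p_gen, source_ids, vocab_size):
    """One decoding step of a pointer-generator mixture (illustrative only).

    p_vocab    : (vocab_size,) generator softmax over the fixed vocabulary.
    attention  : (src_len,)    attention weights over the source tokens (the pointer).
    p_gen      : soft switch in [0, 1] between generating and copying.
    source_ids : (src_len,)    ids of the source tokens in an extended vocabulary,
                 so out-of-vocabulary source words can still be copied.
    """
    p_vocab = np.asarray(p_vocab, dtype=float)
    attention = np.asarray(attention, dtype=float)
    source_ids = np.asarray(source_ids, dtype=int)

    extended_size = max(vocab_size, int(source_ids.max()) + 1)
    p_final = np.zeros(extended_size)
    p_final[:vocab_size] = p_gen * p_vocab
    # Copy path: scatter-add attention mass onto the ids of the source tokens.
    np.add.at(p_final, source_ids, (1.0 - p_gen) * attention)
    return p_final

def coverage_loss(attention_history):
    """Coverage penalty: sum over steps t of sum_i min(a_t[i], c_t[i]), where
    c_t accumulates the attention of all previous steps. Adding this term to
    the training loss discourages attending to, and hence repeating, the same
    source positions."""
    coverage = np.zeros_like(np.asarray(attention_history[0], dtype=float))
    loss = 0.0
    for a in attention_history:
        a = np.asarray(a, dtype=float)
        loss += float(np.minimum(a, coverage).sum())
        coverage = coverage + a
    return loss
```
In this mixture, a word can receive probability from either path, which is what lets the decoder copy rare transcript words verbatim while still generating vocabulary words that never appear in the source.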
Related papers
- Contrastive Transformer Learning with Proximity Data Generation for Text-Based Person Search [60.626459715780605]
Given a descriptive text query, text-based person search aims to retrieve the best-matched target person from an image gallery.
Such a cross-modal retrieval task is quite challenging due to the significant modality gap, fine-grained differences, and the insufficiency of annotated data.
In this paper, we propose a simple yet effective dual Transformer model for text-based person search.
arXiv Detail & Related papers (2023-11-15T16:26:49Z)
- Annotating and Detecting Fine-grained Factual Errors for Dialogue Summarization [34.85353544844499]
We present the first dataset with fine-grained factual error annotations named DIASUMFACT.
We define fine-grained factual error detection as a sentence-level multi-label classification problem.
We propose an unsupervised model ENDERANKER via candidate ranking using pretrained encoder-decoder models.
arXiv Detail & Related papers (2023-05-26T00:18:33Z)
- Long Document Summarization with Top-down and Bottom-up Inference [113.29319668246407]
We propose a principled inference framework to improve summarization models on two aspects.
Our framework assumes a hierarchical latent structure of a document where the top-level captures the long range dependency.
We demonstrate the effectiveness of the proposed framework on a diverse set of summarization datasets.
arXiv Detail & Related papers (2022-03-15T01:24:51Z)
- HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization [57.798070356553936]
HETFORMER is a Transformer-based pre-trained model with multi-granularity sparse attentions for extractive summarization.
Experiments on both single- and multi-document summarization tasks show that HETFORMER achieves state-of-the-art performance in Rouge F1.
arXiv Detail & Related papers (2021-10-12T22:42:31Z)
- Improving Zero and Few-Shot Abstractive Summarization with Intermediate Fine-tuning and Data Augmentation [101.26235068460551]
Models pretrained with self-supervised objectives on large text corpora achieve state-of-the-art performance on English text summarization tasks.
Models are typically fine-tuned on hundreds of thousands of data points, an infeasible requirement when applying summarization to new, niche domains.
We introduce a novel and generalizable method, called WikiTransfer, for fine-tuning pretrained models for summarization in an unsupervised, dataset-specific manner.
arXiv Detail & Related papers (2020-10-24T08:36:49Z)
- Leverage Unlabeled Data for Abstractive Speech Summarization with Self-Supervised Learning and Back-Summarization [6.465251961564605]
Supervised approaches for Neural Abstractive Summarization require large annotated corpora that are costly to build.
We present a French meeting summarization task where reports are predicted based on the automatic transcription of the meeting audio recordings.
We report large improvements compared to the previous baseline for both approaches on two evaluation sets.
arXiv Detail & Related papers (2020-07-30T08:22:47Z)
- Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward [42.925345819778656]
We present ASGARD, a novel framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD.
We propose the use of dual encoders (a sequential document encoder and a graph-structured encoder) to maintain the global context and local characteristics of entities.
Results show that our models produce significantly higher ROUGE scores than a variant without a knowledge graph as input on both the New York Times and CNN/Daily Mail datasets.
arXiv Detail & Related papers (2020-05-03T18:23:06Z)
- A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining [52.11221075687124]
We propose a novel abstractive summary network that adapts to the meeting scenario.
We design a hierarchical structure to accommodate long meeting transcripts and a role vector to depict the difference among speakers.
Our model outperforms previous approaches in both automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-04-04T21:00:41Z)
- Pre-training for Abstractive Document Summarization by Reinstating Source Text [105.77348528847337]
This paper presents three pre-training objectives which allow us to pre-train a Seq2Seq based abstractive summarization model on unlabeled text.
Experiments on two benchmark summarization datasets show that all three objectives can improve performance upon baselines.
arXiv Detail & Related papers (2020-04-04T05:06:26Z)
- Abstractive Text Summarization based on Language Model Conditioning and Locality Modeling [4.525267347429154]
We train a Transformer-based neural model on the BERT language model.
In addition, we propose a new method of BERT-windowing, which allows chunk-wise processing of texts longer than the BERT window size (a generic sliding-window sketch follows this list).
The results of our models are compared to a baseline and the state-of-the-art models on the CNN/Daily Mail dataset.
arXiv Detail & Related papers (2020-03-29T14:00:17Z)
- Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis [1.148539813252112]
We explore using domain transfer and data synthesis to improve the performance of recent abstractive summarization methods.
We show that tuning a state-of-the-art model trained on newspaper data can boost performance on student reflection data.
We propose a template-based model to synthesize new data, which when incorporated into training further increased ROUGE scores.
arXiv Detail & Related papers (2020-02-09T17:49:08Z)
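The BERT-windowing entry above describes chunk-wise processing of inputs longer than a fixed encoder window. Purely as a generic illustration of that idea (not the cited paper's actual method), the sketch below splits a long token-id sequence into overlapping windows that each fit a fixed-size encoder; the window size, stride, and function name are assumptions.
```python
def split_into_windows(token_ids, window_size=512, stride=256):
    """Split a long token-id sequence into overlapping fixed-size windows.

    Illustrative only: window_size and stride are hypothetical defaults, and a
    real implementation would also handle special tokens and re-merge the
    per-window outputs (e.g. by averaging overlapping positions).
    """
    windows = []
    start = 0
    while start < len(token_ids):
        windows.append(token_ids[start:start + window_size])
        if start + window_size >= len(token_ids):
            break
        start += stride
    return windows

# Example: a 1200-token transcript becomes overlapping 512-token chunks.
chunks = split_into_windows(list(range(1200)))
print([len(c) for c in chunks])  # [512, 512, 512, 432]
```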
This list is automatically generated from the titles and abstracts of the papers in this site.