Meeting Summarization with Pre-training and Clustering Methods
- URL: http://arxiv.org/abs/2111.08210v1
- Date: Tue, 16 Nov 2021 03:14:40 GMT
- Title: Meeting Summarization with Pre-training and Clustering Methods
- Authors: Andras Huebner, Wei Ji, Xiang Xiao
- Abstract summary: We use HMNet\cite{hmnet}, a hierarchical network that employs both a word-level transformer and a turn-level transformer, as the baseline.
We extend the locate-then-summarize approach of QMSum\cite{qmsum} with an intermediate clustering step.
We compare the performance of our baseline models with BART, a state-of-the-art language model that is effective for summarization.
- Score: 6.47783315109491
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic meeting summarization is becoming increasingly popular these days.
The ability to automatically summarize meetings and to extract key information
could greatly increase the efficiency of our work and life. In this paper, we
experiment with different approaches to improve the performance of query-based
meeting summarization. We started with HMNet\cite{hmnet}, a hierarchical
network that employs both a word-level transformer and a turn-level
transformer, as the baseline. We explore the effectiveness of pre-training the
model with a large news-summarization dataset. We investigate adding the
embeddings of queries as a part of the input vectors for query-based
summarization. Furthermore, we experiment with extending the
locate-then-summarize approach of QMSum\cite{qmsum} with an intermediate
clustering step. Lastly, we compare the performance of our baseline models with
BART, a state-of-the-art language model that is effective for summarization. We
achieved improved performance by adding query embeddings to the input of the
model, by using BART as an alternative language model, and by using clustering
methods to extract key information at utterance level before feeding the text
into summarization models.
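The clustering step described above, extracting key information at the utterance level before feeding text into a summarization model, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes bag-of-words vectors in place of learned embeddings, a simple k-means, and illustrative function names; a real pipeline would pass the selected utterances to a summarizer such as BART.

```python
import math
import random
from collections import Counter

def embed(utterance, vocab):
    """Bag-of-words vector over a fixed vocabulary (stand-in for learned embeddings)."""
    counts = Counter(utterance.lower().split())
    return [counts[w] for w in vocab]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest center, then recompute centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist(p, centers[c])) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:  # keep the old center if a cluster empties out
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centers

def cluster_and_select(utterances, k=2):
    """Cluster utterances, then keep the one closest to each centroid
    as a condensed input for a downstream summarization model."""
    vocab = sorted({w for u in utterances for w in u.lower().split()})
    points = [embed(u, vocab) for u in utterances]
    labels, centers = kmeans(points, k)
    selected = []
    for c in range(k):
        idxs = [i for i, l in enumerate(labels) if l == c]
        if idxs:
            best = min(idxs, key=lambda i: dist(points[i], centers[c]))
            selected.append(utterances[best])
    return selected
```

The design choice here mirrors the abstract: rather than summarizing the full transcript, clustering first groups related utterances so that one representative per cluster reaches the summarizer, shrinking the input while keeping coverage of distinct topics.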
Related papers
- Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation [65.16137964758612]
We explore the use of long-context capabilities in large language models to create synthetic reading comprehension data from entire books.
Our objective is to test the capabilities of LLMs to analyze, understand, and reason over problems that require a detailed comprehension of long spans of text.
arXiv Detail & Related papers (2024-05-31T20:15:10Z) - JADS: A Framework for Self-supervised Joint Aspect Discovery and Summarization [3.992091862806936]
Our solution integrates topic discovery and summarization into a single step.
Given text data, our Joint Aspect Discovery and Summarization algorithm (JADS) discovers aspects from the input.
Our proposed method achieves higher semantic alignment with ground truth and is factual.
arXiv Detail & Related papers (2024-05-28T23:01:57Z) - Information-Theoretic Distillation for Reference-less Summarization [67.51150817011617]
We present a novel framework to distill a powerful summarizer based on the information-theoretic objective for summarization.
We start off from Pythia-2.8B as the teacher model, which is not yet capable of summarization.
We arrive at a compact but powerful summarizer with only 568M parameters that performs competitively against ChatGPT.
arXiv Detail & Related papers (2024-03-20T17:42:08Z) - Exploring Category Structure with Contextual Language Models and Lexical
Semantic Networks [0.0]
We test a wider array of methods for probing CLMs for predicting typicality scores.
Our experiments, using BERT, show the importance of using the right type of CLM probes.
Results highlight the importance of polysemy in this task.
arXiv Detail & Related papers (2023-02-14T09:57:23Z) - Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z) - Coarse-to-Fine Memory Matching for Joint Retrieval and Classification [0.7081604594416339]
We present a novel end-to-end language model for joint retrieval and classification.
We evaluate it on the standard blind test set of the FEVER fact verification dataset.
We extend exemplar auditing to this setting for analyzing and constraining the model.
arXiv Detail & Related papers (2020-11-29T05:06:03Z) - Automated Concatenation of Embeddings for Structured Prediction [75.44925576268052]
We propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks.
We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model.
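The controller-and-reward loop described for ACE can be sketched roughly as follows. This is a hypothetical stand-in, not the ACE implementation: a randomized search over binary masks replaces the RL controller, the reward is whatever task-accuracy function the caller supplies, and all names are illustrative.

```python
import random

def concatenate(embedding_sets, mask):
    """Concatenate only the embedding types selected by the binary mask."""
    out = []
    for emb, keep in zip(embedding_sets, mask):
        if keep:
            out.extend(emb)
    return out

def search(embedding_sets, score_fn, steps=200, seed=0):
    """Randomized search over masks (a simplified stand-in for ACE's RL controller).

    score_fn plays the role of the reward: the accuracy a task model
    achieves when trained on the concatenated embeddings.
    """
    rng = random.Random(seed)
    n = len(embedding_sets)
    best_mask, best_score = None, float("-inf")
    for _ in range(steps):
        mask = [rng.randint(0, 1) for _ in range(n)]
        if not any(mask):
            continue  # an empty concatenation is not a valid candidate
        score = score_fn(concatenate(embedding_sets, mask))
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask, best_score
```

In the real method the controller is updated from the reward signal rather than sampling masks uniformly, but the interface is the same: propose a concatenation, measure task accuracy, keep the best.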
arXiv Detail & Related papers (2020-10-10T14:03:20Z) - Semantically Driven Sentence Fusion: Modeling and Evaluation [27.599227950466442]
Sentence fusion is the task of joining related sentences into coherent text.
Current training and evaluation schemes for this task are based on single reference ground-truths.
We show that this hinders models from robustly capturing the semantic relationship between input sentences.
arXiv Detail & Related papers (2020-10-06T10:06:01Z) - Extractive Summarization as Text Matching [123.09816729675838]
This paper proposes a paradigm shift in how neural extractive summarization systems are built.
We formulate the extractive summarization task as a semantic text matching problem.
We have driven the state-of-the-art extractive result on CNN/DailyMail to a new level (44.41 in ROUGE-1).
arXiv Detail & Related papers (2020-04-19T08:27:57Z) - Document Ranking with a Pretrained Sequence-to-Sequence Model [56.44269917346376]
We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words"
Our approach significantly outperforms an encoder-only model in a data-poor regime.
arXiv Detail & Related papers (2020-03-14T22:29:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.