Investigating Consistency in Query-Based Meeting Summarization: A
Comparative Study of Different Embedding Methods
- URL: http://arxiv.org/abs/2402.06907v1
- Date: Sat, 10 Feb 2024 08:25:30 GMT
- Title: Investigating Consistency in Query-Based Meeting Summarization: A
Comparative Study of Different Embedding Methods
- Authors: Chen Jia-Chen (Oscar), Guillem Senabre, Allane Caron
- Abstract summary: Text summarization is one of the best-known applications in the field of Natural Language Processing (NLP).
It aims to automatically generate a summary that captures the important information in a given context.
In this paper, we are inspired by "QMSum: A New Benchmark for Query-based Multi-domain Meeting Summarization" proposed by Microsoft.
We also propose our Locater model, designed to extract relevant spans from a given transcript and query, which are then summarized by a Summarizer model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As more and more advanced data analysis techniques emerge, people
expect them to be applied to increasingly complex tasks and to solve problems
in daily life. Text summarization is one of the best-known applications in the
field of Natural Language Processing (NLP). It aims to automatically generate a
summary that captures the important information in a given context, which
matters when one has to deal with piles of documents. Summarization techniques
help capture key points in a short time and bring convenience to everyday work.
One applicable situation is meeting summarization, especially for important
meetings, which tend to be long, complicated, multi-topic, and multi-person.
When people want to review specific content from such a meeting, it is hard and
time-consuming to find the related spans in the meeting transcript. However,
most previous work focuses on summarizing newsletters, scientific articles,
etc., which have a clear document structure and a formal format. For documents
with a complex structure, such as transcripts, we argue that those approaches
are not well suited to meeting summarization. In addition, summary consistency
is another issue commonly discussed in the NLP field. To tackle the challenges
of meeting summarization, we draw inspiration from "QMSum: A New Benchmark for
Query-based Multi-domain Meeting Summarization" proposed by Microsoft, and we
propose our Locater model, which extracts relevant spans from a given
transcript and query; these spans are then summarized by a Summarizer model.
Furthermore, we perform a comparative study applying different word embedding
techniques to improve summary consistency.
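As a rough illustration of the locate-then-summarize idea described above, the sketch below wires an off-the-shelf sentence embedder to a generic summarization model. It is only a minimal approximation of the paper's Locater/Summarizer pipeline: the model names ("all-MiniLM-L6-v2", "facebook/bart-large-cnn"), the top-k span selection, and the query-prefixed input are illustrative assumptions, not details taken from the paper.

```python
# Minimal locate-then-summarize sketch; NOT the paper's actual Locater/Summarizer.
# Model choices below are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

embedder = SentenceTransformer("all-MiniLM-L6-v2")                  # swappable embedding method
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def locate(query: str, utterances: list[str], top_k: int = 8) -> list[str]:
    """Rank transcript utterances by cosine similarity to the query and keep the best ones."""
    q_emb = embedder.encode(query, convert_to_tensor=True)
    u_emb = embedder.encode(utterances, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, u_emb)[0]
    top = scores.topk(k=min(top_k, len(utterances)))
    # Return the selected spans in their original transcript order.
    return [utterances[i] for i in sorted(top.indices.tolist())]

def summarize(query: str, utterances: list[str]) -> str:
    """Summarize the located spans, loosely conditioning the input on the query."""
    context = "Query: " + query + "\n" + " ".join(locate(query, utterances))
    result = summarizer(context, max_length=128, min_length=32, do_sample=False)
    return result[0]["summary_text"]
```

Swapping the embedder object for a different embedding method (for example, averaged static word vectors or another sentence-transformers checkpoint) and comparing the resulting summaries is one way a consistency comparison along the lines of the paper's study could be set up.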
Related papers
- Beyond Relevant Documents: A Knowledge-Intensive Approach for Query-Focused Summarization using Large Language Models [27.90653125902507]
We propose an approach that reframes query-focused summarization as a knowledge-intensive task.
The retrieval module efficiently retrieves potentially relevant documents from a large-scale knowledge corpus.
The summarization controller seamlessly integrates a powerful large language model (LLM)-based summarizer with a carefully tailored prompt.
arXiv Detail & Related papers (2024-08-19T18:54:20Z)
- QFMTS: Generating Query-Focused Summaries over Multi-Table Inputs [63.98556480088152]
Table summarization is a crucial task aimed at condensing information into concise and comprehensible textual summaries.
We propose a novel method to address these limitations by introducing query-focused multi-table summarization.
Our approach, which comprises a table serialization module, a summarization controller, and a large language model, generates query-dependent table summaries tailored to users' information needs.
arXiv Detail & Related papers (2024-05-08T15:05:55Z)
- Aspect-based Meeting Transcript Summarization: A Two-Stage Approach with Weak Supervision on Sentence Classification [91.13086984529706]
Aspect-based meeting transcript summarization aims to produce multiple summaries, one for each aspect of the meeting.
Traditional summarization methods produce one summary mixing information of all aspects.
We propose a two-stage method for aspect-based meeting transcript summarization.
arXiv Detail & Related papers (2023-11-07T19:06:31Z)
- QuOTeS: Query-Oriented Technical Summarization [0.2936007114555107]
We propose QuOTeS, an interactive system designed to retrieve sentences related to a summary of the research from a collection of potential references.
QuOTeS integrates techniques from Query-Focused Extractive Summarization and High-Recall Information Retrieval to provide Interactive Query-Focused Summarization of scientific documents.
The results show that QuOTeS provides a positive user experience and consistently provides query-focused summaries that are relevant, concise, and complete.
arXiv Detail & Related papers (2023-06-20T18:43:24Z)
- Aspect-Oriented Summarization through Query-Focused Extraction [23.62412515574206]
Real users' needs often align more closely with aspects, broad topics in a dataset that the user is interested in, than with specific queries.
We benchmark extractive query-focused training schemes, and propose a contrastive augmentation approach to train the model.
We evaluate on two aspect-oriented datasets and find this approach yields focused summaries, better than those from a generic summarization system.
arXiv Detail & Related papers (2021-10-15T18:06:21Z)
- Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z)
- QMSum: A New Benchmark for Query-based Multi-domain Meeting Summarization [45.83402681068943]
QMSum consists of 1,808 query-summary pairs over 232 meetings in multiple domains.
We investigate a locate-then-summarize method and evaluate a set of strong summarization baselines on the task.
arXiv Detail & Related papers (2021-04-13T05:00:35Z)
- From Standard Summarization to New Tasks and Beyond: Summarization with Manifold Information [77.89755281215079]
Text summarization is the research area aiming at creating a short and condensed version of the original document.
In real-world applications, most of the data is not in a plain text format.
This paper surveys these new summarization tasks and approaches for real-world applications.
arXiv Detail & Related papers (2020-05-10T14:59:36Z)
- A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining [52.11221075687124]
We propose a novel abstractive summary network that adapts to the meeting scenario.
We design a hierarchical structure to accommodate long meeting transcripts and a role vector to depict the difference among speakers.
Our model outperforms previous approaches in both automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-04-04T21:00:41Z)