Combining State-of-the-Art Models with Maximal Marginal Relevance for Few-Shot and Zero-Shot Multi-Document Summarization
- URL: http://arxiv.org/abs/2211.10808v1
- Date: Sat, 19 Nov 2022 21:46:31 GMT
- Title: Combining State-of-the-Art Models with Maximal Marginal Relevance for Few-Shot and Zero-Shot Multi-Document Summarization
- Authors: David Adams, Gandharv Suri, Yllias Chali
- Abstract summary: Multi-document summarization (MDS) poses many challenges to researchers beyond those posed by single-document summarization (SDS).
We propose a strategy for combining state-of-the-art models' outputs using maximal marginal relevance (MMR).
Our MMR-based approach shows improvement over some aspects of the current state-of-the-art results in both few-shot and zero-shot MDS applications.
- Score: 0.6690874707758508
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Natural Language Processing, multi-document summarization (MDS) poses many
challenges to researchers beyond those posed by single-document summarization
(SDS). These challenges include the increased search space and greater
potential for the inclusion of redundant information. While advancements in
deep learning approaches have led to the development of several advanced
language models capable of summarization, the variety of training data specific
to the problem of MDS remains relatively limited. Therefore, MDS approaches
which require little to no pretraining, known as few-shot or zero-shot
applications, respectively, could be beneficial additions to the current set of
tools available in summarization. To explore one possible approach, we devise a
strategy for combining state-of-the-art models' outputs using maximal marginal
relevance (MMR) with a focus on query relevance rather than document diversity.
Our MMR-based approach shows improvement over some aspects of the current
state-of-the-art results in both few-shot and zero-shot MDS applications while
maintaining a state-of-the-art standard of output by all available metrics.
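To make the core mechanism concrete: MMR greedily selects the next sentence s from a candidate pool that maximizes lambda * Sim(s, Q) - (1 - lambda) * max over already-selected t of Sim(s, t), where Q is the query and a large lambda emphasizes query relevance over diversity, matching the abstract's framing. The sketch below is a generic illustration of that selection loop, not the authors' actual pipeline; the Jaccard similarity, the lambda value, and the example pool are all illustrative assumptions.

```python
# Minimal sketch of MMR-based sentence selection (generic formulation,
# not the paper's exact implementation). Candidates would be sentences
# pooled from several models' output summaries.

def jaccard(a: set, b: set) -> float:
    """Token-overlap similarity; a stand-in for any sentence similarity."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def mmr_select(candidates: list[str], query: str, k: int = 5,
               lam: float = 0.8) -> list[str]:
    """Greedily pick k sentences maximizing
    lam * Sim(s, query) - (1 - lam) * max_{t in selected} Sim(s, t).
    A high lam favors query relevance over output diversity."""
    tok = lambda s: set(s.lower().split())
    q = tok(query)
    selected: list[str] = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(s: str) -> float:
            relevance = jaccard(tok(s), q)
            redundancy = max((jaccard(tok(s), tok(t)) for t in selected),
                             default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

# Example: pool sentences from multiple systems' summaries, then re-rank.
pool = [
    "The storm caused flooding across the region.",
    "Flooding hit the region after the storm.",
    "Officials opened emergency shelters on Tuesday.",
]
print(mmr_select(pool, query="storm flooding impact", k=2))
```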
Related papers
- A Survey on Mixture of Experts [11.801185267119298]
The mixture of experts (MoE) has emerged as an effective method for substantially scaling up model capacity with minimal overhead.
This survey seeks to bridge that gap, serving as an essential resource for researchers delving into the intricacies of MoE.
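For orientation, the canonical (sparsely gated) MoE layer computes a gate-weighted mixture of expert outputs; this is the standard textbook form, not a formulation specific to this survey.

```latex
% Canonical sparse MoE layer with N experts E_i and gating network g:
y(x) = \sum_{i=1}^{N} g_i(x)\, E_i(x),
\qquad g(x) = \mathrm{softmax}\!\big(\mathrm{TopK}(x^{\top} W_g)\big)
```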
arXiv Detail & Related papers (2024-06-26T16:34:33Z) - Simplifying Multimodality: Unimodal Approach to Multimodal Challenges in Radiology with General-Domain Large Language Model [3.012719451477384]
We introduce MID-M, a novel framework that leverages the in-context learning capabilities of a general-domain Large Language Model (LLM) to process multimodal data via image descriptions.
MID-M achieves a comparable or superior performance to task-specific fine-tuned LMMs and other general-domain ones, without the extensive domain-specific training or pre-training on multimodal data.
The robustness of MID-M against data quality issues demonstrates its practical utility in real-world medical domain applications.
arXiv Detail & Related papers (2024-04-29T13:23:33Z) - Model Composition for Multimodal Large Language Models [73.70317850267149]
We propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model.
Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters.
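The snippet only names the mechanism; in its simplest form, "merging LLM parameters" could be plain weight interpolation between architecture-compatible checkpoints. The sketch below assumes uniform interpolation and hypothetical models; NaiveMC's actual merging procedure may differ.

```python
# Hypothetical sketch of naive parameter merging: interpolate the weights
# of two LLMs that share an architecture. Uniform averaging is an
# assumption on our part; the paper's merging strategy may be more refined.
import torch

def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Element-wise interpolation of two compatible state dicts."""
    assert sd_a.keys() == sd_b.keys(), "models must share an architecture"
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Usage (hypothetical checkpoints): the merged weights are loaded back into
# one model, while each modality keeps its original encoder.
# merged = merge_state_dicts(model_a.state_dict(), model_b.state_dict())
# model_a.load_state_dict(merged)
```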
arXiv Detail & Related papers (2024-02-20T06:38:10Z) - A Hierarchical Encoding-Decoding Scheme for Abstractive Multi-document Summarization [66.08074487429477]
Pre-trained language models (PLMs) have achieved outstanding results in abstractive single-document summarization (SDS).
We propose a new method to better utilize a PLM to facilitate multi-document interactions for the multi-document summarization (MDS) task.
Our method outperforms its corresponding PLM backbone by up to 3 ROUGE-L points and is favored by humans.
arXiv Detail & Related papers (2023-05-15T10:03:31Z) - A Novel Unified Conditional Score-based Generative Framework for Multi-modal Medical Image Completion [54.512440195060584]
We propose the Unified Multi-Modal Conditional Score-based Generative Model (UMM-CSGM) to take advantage of the Score-based Generative Model (SGM) framework.
UMM-CSGM employs a novel multi-in multi-out Conditional Score Network (mm-CSN) to learn a comprehensive set of cross-modal conditional distributions.
Experiments on BraTS19 dataset show that the UMM-CSGM can more reliably synthesize the heterogeneous enhancement and irregular area in tumor-induced lesions.
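As background (our framing, not the paper's exact loss), a conditional score-based generative model is typically trained with a denoising score-matching objective of roughly the following form, where c is the conditioning modality:

```latex
% Standard conditional denoising score-matching objective for an SGM;
% shown for orientation only -- UMM-CSGM's exact loss may differ.
\min_{\theta} \; \mathbb{E}_{t}\, \mathbb{E}_{x_0, c}\, \mathbb{E}_{x_t \mid x_0}
\Big[ \lambda(t)\, \big\| s_\theta(x_t, c, t)
  - \nabla_{x_t} \log p_t(x_t \mid x_0) \big\|_2^2 \Big]
```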
arXiv Detail & Related papers (2022-07-07T16:57:21Z) - A Multi-Document Coverage Reward for RELAXed Multi-Document Summarization [11.02198476454955]
We propose fine-tuning an MDS baseline with a reward that balances a reference-based metric with coverage of the input documents.
Experimental results over the Multi-News and WCEP MDS datasets show significant improvements of up to +0.95 pp average ROUGE score and +3.17 pp METEOR score over the baseline.
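The snippet does not give the exact combination, but a natural reading of "balances a reference-based metric with coverage" is a weighted mixture; the form below is our assumption for illustration, not the paper's stated reward.

```latex
% Hypothetical form of the balanced reward for a candidate summary y,
% reference r, and input documents D_1..D_n (beta is a mixing weight):
R(y) = \beta\, \mathrm{ROUGE}(y, r)
  + (1 - \beta)\, \mathrm{Cov}\big(y, \{D_1, \dots, D_n\}\big)
```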
arXiv Detail & Related papers (2022-03-06T07:33:01Z) - Topic-Guided Abstractive Multi-Document Summarization [21.856615677793243]
A critical point of multi-document summarization (MDS) is to learn the relations among various documents.
We propose a novel abstractive MDS model, in which we represent multiple documents as a heterogeneous graph.
We employ a neural topic model to jointly discover latent topics that can act as cross-document semantic units.
arXiv Detail & Related papers (2021-10-21T15:32:30Z) - Learning Multimodal VAEs through Mutual Supervision [72.77685889312889]
MEME combines information between modalities implicitly through mutual supervision.
We demonstrate that MEME outperforms baselines on standard metrics across both partial and complete observation schemes.
arXiv Detail & Related papers (2021-06-23T17:54:35Z) - SupMMD: A Sentence Importance Model for Extractive Summarization using Maximum Mean Discrepancy [92.5683788430012]
SupMMD is a novel technique for generic and update summarization based on the maximum mean discrepancy (MMD) from kernel two-sample testing.
We show the efficacy of SupMMD in both generic and update summarization tasks by meeting or exceeding the current state-of-the-art on the DUC-2004 and TAC-2009 datasets.
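For reference, the standard (biased) empirical estimate of the squared maximum mean discrepancy between samples X = {x_1, ..., x_m} and Y = {y_1, ..., y_n} under a kernel k, the statistic underlying kernel two-sample testing, is:

```latex
% Empirical squared MMD under kernel k (standard two-sample statistic):
\widehat{\mathrm{MMD}}^2(X, Y) =
\frac{1}{m^2}\sum_{i,i'} k(x_i, x_{i'})
- \frac{2}{mn}\sum_{i,j} k(x_i, y_j)
+ \frac{1}{n^2}\sum_{j,j'} k(y_j, y_{j'})
```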
arXiv Detail & Related papers (2020-10-06T09:26:55Z) - Multi-document Summarization with Maximal Marginal Relevance-guided Reinforcement Learning [54.446686397551275]
We present RL-MMR, which unifies advanced neural SDS methods and statistical measures used in classical MDS.
RL-MMR casts MMR guidance on fewer promising candidates, which restrains the search space and thus leads to better representation learning.
arXiv Detail & Related papers (2020-09-30T21:50:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.