Combining State-of-the-Art Models with Maximal Marginal Relevance for
Few-Shot and Zero-Shot Multi-Document Summarization
- URL: http://arxiv.org/abs/2211.10808v1
- Date: Sat, 19 Nov 2022 21:46:31 GMT
- Title: Combining State-of-the-Art Models with Maximal Marginal Relevance for
Few-Shot and Zero-Shot Multi-Document Summarization
- Authors: David Adams, Gandharv Suri, Yllias Chali
- Abstract summary: Multi-document summarization (MDS) poses many challenges to researchers above those posed by single-document summarization (SDS).
We propose a strategy for combining state-of-the-art models' outputs using maximal marginal relevance (MMR).
Our MMR-based approach shows improvement over some aspects of the current state-of-the-art results in both few-shot and zero-shot MDS applications.
- Score: 0.6690874707758508
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Natural Language Processing, multi-document summarization (MDS) poses many
challenges to researchers above those posed by single-document summarization
(SDS). These challenges include the increased search space and greater
potential for the inclusion of redundant information. While advancements in
deep learning approaches have led to the development of several advanced
language models capable of summarization, the variety of training data specific
to the problem of MDS remains relatively limited. Therefore, MDS approaches
which require little to no pretraining, known as few-shot or zero-shot
applications, respectively, could be beneficial additions to the current set of
tools available in summarization. To explore one possible approach, we devise a
strategy for combining state-of-the-art models' outputs using maximal marginal
relevance (MMR) with a focus on query relevance rather than document diversity.
Our MMR-based approach shows improvement over some aspects of the current
state-of-the-art results in both few-shot and zero-shot MDS applications while
maintaining a state-of-the-art standard of output by all available metrics.
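To make the combination strategy concrete, below is a minimal Python sketch of standard MMR selection (Carbonell and Goldstein, 1998) applied to sentences pooled from multiple models' outputs. The jaccard similarity, the lam value, and all names here are illustrative assumptions rather than the paper's exact choices; setting lam near 1 mirrors the stated emphasis on query relevance over diversity.

```python
# A minimal sketch of standard MMR selection (Carbonell & Goldstein, 1998)
# over sentences pooled from several summarization models' outputs.
# jaccard(), lam, and k are illustrative assumptions, not the paper's
# exact similarity function or hyperparameters.
from typing import Callable, List

def jaccard(a: str, b: str) -> float:
    """Toy word-overlap similarity; a stand-in for any real Sim function."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def mmr_select(
    candidates: List[str],                     # sentences pooled from model outputs
    query: str,                                # query or cluster representation
    sim: Callable[[str, str], float] = jaccard,
    lam: float = 0.9,                          # near 1.0: favor query relevance over diversity
    k: int = 5,                                # summary length in sentences
) -> List[str]:
    selected: List[str] = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(c: str) -> float:
            relevance = sim(c, query)
            redundancy = max((sim(c, s) for s in selected), default=0.0)
            return lam * relevance - (1.0 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected
```

Lowering lam recovers MMR's classic diversity-seeking behavior; keeping the redundancy penalty small corresponds to the paper's focus on query relevance.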
Related papers
- RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training [55.54020926284334]
Multimodal Large Language Models (MLLMs) have recently received substantial interest, showing emerging potential as general-purpose models for various vision-language tasks.
Retrieval augmentation techniques have proven to be effective plugins for both LLMs and MLLMs.
In this study, we propose multimodal adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training (RA-BLIP), a novel retrieval-augmented framework for various MLLMs.
arXiv Detail & Related papers (2024-10-18T03:45:19Z)
- Towards Robust Multimodal Sentiment Analysis with Incomplete Data [20.75292807497547]
We present an innovative Language-dominated Noise-resistant Learning Network (LNLN) to achieve robust Multimodal Sentiment Analysis (MSA).
LNLN features a dominant modality correction (DMC) module and a dominant modality based multimodal learning (DMML) module, which enhance the model's robustness across various noise scenarios.
arXiv Detail & Related papers (2024-09-30T07:14:31Z)
- MMEvol: Empowering Multimodal Large Language Models with Evol-Instruct [148.39859547619156]
We propose MMEvol, a novel multimodal instruction data evolution framework.
MMEvol iteratively improves data quality through a refined combination of fine-grained perception, cognitive reasoning, and interaction evolution.
Our approach reaches state-of-the-art (SOTA) performance on nine tasks while using significantly less data than existing SOTA models.
arXiv Detail & Related papers (2024-09-09T17:44:00Z)
- Assessing Modality Bias in Video Question Answering Benchmarks with Multimodal Large Language Models [12.841405829775852]
We introduce the modality importance score (MIS) to identify bias in VidQA benchmarks and datasets.
We also propose an innovative method using state-of-the-art MLLMs to estimate the modality importance.
Our results indicate that current models do not effectively integrate information due to modality imbalance in existing datasets.
arXiv Detail & Related papers (2024-08-22T23:32:42Z)
- Simplifying Multimodality: Unimodal Approach to Multimodal Challenges in Radiology with General-Domain Large Language Model [3.012719451477384]
We introduce MID-M, a novel framework that leverages the in-context learning capabilities of a general-domain Large Language Model (LLM) to process multimodal data via image descriptions.
MID-M achieves comparable or superior performance to task-specific fine-tuned LMMs and other general-domain models, without extensive domain-specific training or pre-training on multimodal data.
The robustness of MID-M against data quality issues demonstrates its practical utility in real-world medical domain applications.
arXiv Detail & Related papers (2024-04-29T13:23:33Z)
- Model Composition for Multimodal Large Language Models [71.5729418523411]
We propose a new paradigm of model composition over existing MLLMs, creating a new model that retains the modal understanding capabilities of each original model.
Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters.
arXiv Detail & Related papers (2024-02-20T06:38:10Z)
- A Multi-Document Coverage Reward for RELAXed Multi-Document Summarization [11.02198476454955]
We propose fine-tuning an MDS baseline with a reward that balances a reference-based metric with coverage of the input documents (an illustrative sketch of such a combined reward follows this entry).
Experimental results on the Multi-News and WCEP MDS datasets show significant improvements of up to +0.95 pp average ROUGE score and +3.17 pp METEOR score over the baseline.
arXiv Detail & Related papers (2022-03-06T07:33:01Z)
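For the coverage-reward entry above, here is a minimal Python sketch of a reward that linearly mixes a reference-based term with input-document coverage. The unigram-overlap stand-ins and the mixing weight alpha are illustrative assumptions, not the paper's actual ROUGE-based reward.

```python
# An illustrative sketch of a reward that balances a reference-based metric
# with coverage of the input documents, per the entry above. unigram_recall
# and alpha are stand-in assumptions, not the paper's ROUGE-based reward.
from typing import List

def unigram_recall(summary: str, text: str) -> float:
    # Fraction of the text's unique words that appear in the summary.
    s, t = set(summary.lower().split()), set(text.lower().split())
    return len(s & t) / max(len(t), 1)

def coverage_reward(summary: str, reference: str,
                    inputs: List[str], alpha: float = 0.5) -> float:
    ref_term = unigram_recall(summary, reference)        # reference-based term
    cov_term = (sum(unigram_recall(summary, d) for d in inputs)
                / max(len(inputs), 1))                   # mean input coverage
    return alpha * ref_term + (1.0 - alpha) * cov_term   # balanced reward
```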
- Learning Multimodal VAEs through Mutual Supervision [72.77685889312889]
MEME combines information between modalities implicitly through mutual supervision.
We demonstrate that MEME outperforms baselines on standard metrics across both partial and complete observation schemes.
arXiv Detail & Related papers (2021-06-23T17:54:35Z)
- SupMMD: A Sentence Importance Model for Extractive Summarization using Maximum Mean Discrepancy [92.5683788430012]
SupMMD is a novel technique for generic and update summarization based on the maximum mean discrepancy (MMD) from kernel two-sample testing (a minimal sketch of the MMD statistic follows this entry).
We show the efficacy of SupMMD in both generic and update summarization tasks by meeting or exceeding the current state-of-the-art on the DUC-2004 and TAC-2009 datasets.
arXiv Detail & Related papers (2020-10-06T09:26:55Z)
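As referenced in the SupMMD entry above, here is a minimal NumPy sketch of the biased squared maximum mean discrepancy with an RBF kernel, the standard two-sample statistic. The kernel choice and bandwidth gamma are illustrative assumptions; this shows only the underlying statistic, not SupMMD's full sentence-importance model.

```python
# A minimal NumPy sketch of the biased squared maximum mean discrepancy
# (MMD^2) with an RBF kernel, the two-sample statistic SupMMD builds on.
# The kernel and bandwidth (gamma) are illustrative assumptions; this is
# the underlying statistic only, not SupMMD's sentence-importance model.
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    # Pairwise squared Euclidean distances, mapped through a Gaussian kernel.
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    # Biased estimator: E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    return float(rbf_kernel(x, x, gamma).mean()
                 + rbf_kernel(y, y, gamma).mean()
                 - 2.0 * rbf_kernel(x, y, gamma).mean())
```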
- Multi-document Summarization with Maximal Marginal Relevance-guided Reinforcement Learning [54.446686397551275]
We present RL-MMR, which unifies advanced neural SDS methods and statistical measures used in classical MDS.
RL-MMR casts MMR guidance on fewer promising candidates, which restrains the search space and thus leads to better representation learning.
arXiv Detail & Related papers (2020-09-30T21:50:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.