MODABS: Multi-Objective Learning for Dynamic Aspect-Based Summarization
- URL: http://arxiv.org/abs/2406.03479v2
- Date: Mon, 17 Jun 2024 19:56:37 GMT
- Title: MODABS: Multi-Objective Learning for Dynamic Aspect-Based Summarization
- Authors: Xiaobo Guo, Soroush Vosoughi
- Abstract summary: We introduce a novel multi-objective learning framework employing a Longformer-Encoder-Decoder for this task.
We show our method significantly outperforms baselines on three diverse datasets.
- Score: 29.111115148808196
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rapid proliferation of online content necessitates effective summarization methods, among which dynamic aspect-based summarization stands out. Unlike its traditional counterpart, which assumes a fixed set of known aspects, this approach adapts to the varied aspects of the input text. We introduce a novel multi-objective learning framework employing a Longformer-Encoder-Decoder for this task. The framework optimizes aspect number prediction, minimizes disparity between generated and reference summaries for each aspect, and maximizes dissimilarity across aspect-specific summaries. Extensive experiments show our method significantly outperforms baselines on three diverse datasets, largely due to the effective alignment of generated and reference aspect counts without sacrificing single-aspect summarization quality.
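The abstract describes three objectives combined into one training signal: predicting the number of aspects, minimizing the disparity between each generated and reference aspect summary, and maximizing dissimilarity across aspect-specific summaries. A minimal sketch of such a weighted multi-objective loss is below; the specific loss forms (squared count error, embedding MSE, cosine similarity) and the weight parameters are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def aspect_count_loss(pred_count: float, true_count: int) -> float:
    # Squared error on the predicted number of aspects
    # (hypothetical choice of count loss).
    return (pred_count - true_count) ** 2

def summary_alignment_loss(gen_embs: np.ndarray, ref_embs: np.ndarray) -> float:
    # Mean squared disparity between each generated summary embedding
    # and its reference counterpart.
    return float(np.mean((gen_embs - ref_embs) ** 2))

def cross_aspect_similarity(gen_embs: np.ndarray) -> float:
    # Mean pairwise cosine similarity across aspect-specific summaries;
    # the framework wants this term LOW (i.e., maximize dissimilarity).
    normed = gen_embs / np.linalg.norm(gen_embs, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(gen_embs)
    off_diag = sims[~np.eye(n, dtype=bool)]
    return float(off_diag.mean())

def multi_objective_loss(pred_count, true_count, gen_embs, ref_embs,
                         w_count=1.0, w_align=1.0, w_dissim=1.0):
    # Weighted sum: penalize count error and alignment error, and penalize
    # high cross-aspect similarity (rewarding dissimilar aspect summaries).
    return (w_count * aspect_count_loss(pred_count, true_count)
            + w_align * summary_alignment_loss(gen_embs, ref_embs)
            + w_dissim * cross_aspect_similarity(gen_embs))
```

With perfectly aligned summaries, a correct aspect count, and orthogonal aspect embeddings, every term vanishes; duplicated aspect summaries raise the similarity penalty even when alignment is perfect.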
Related papers
- Multi-Dimensional Optimization for Text Summarization via Reinforcement Learning [12.083649916114402]
We propose multi-objective reinforcement learning tailored to generate balanced summaries across all four dimensions.
Unlike prior ROUGE-based rewards relying on reference summaries, we use a QA-based reward model that aligns with human preferences.
Our approach achieved substantial performance gains compared to baseline models on representative summarization datasets.
arXiv Detail & Related papers (2024-06-01T05:15:12Z) - Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding [54.49214267905562]
We construct a transformer-based framework for multi-modal manipulation detection and grounding tasks.
Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment.
We propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality.
arXiv Detail & Related papers (2023-09-22T06:55:41Z) - Semi-supervised multi-view concept decomposition [30.699496411869834]
Concept Factorization (CF) has demonstrated superior performance in multi-view clustering tasks.
We propose a novel semi-supervised multi-view concept factorization model, named SMVCF.
We conduct experiments on four diverse datasets to evaluate the performance of SMVCF.
arXiv Detail & Related papers (2023-07-03T10:50:44Z) - Summary-Oriented Vision Modeling for Multimodal Abstractive Summarization [63.320005222549646]
Multimodal abstractive summarization (MAS) aims to produce a concise summary given the multimodal data (text and vision)
We propose to improve the summary quality through summary-oriented visual features.
Experiments on 44 languages, covering mid-high, low-, and zero-resource scenarios, verify the effectiveness and superiority of the proposed approach.
arXiv Detail & Related papers (2022-12-15T09:05:26Z) - Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z) - Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorously theoretical guarantee, our approach enables IB to grasp the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z) - Multi-view Information Bottleneck Without Variational Approximation [34.877573432746246]
We extend the information bottleneck principle to a supervised multi-view learning scenario.
We use the recently proposed matrix-based Rényi's $\alpha$-order entropy functional to optimize the resulting objective.
Empirical results in both synthetic and real-world datasets suggest that our method enjoys improved robustness to noise and redundant information in each view.
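The matrix-based Rényi $\alpha$-order entropy mentioned above is commonly defined over the eigenvalues of a trace-normalized Gram matrix, $S_\alpha(A) = \frac{1}{1-\alpha}\log_2 \sum_i \lambda_i(A)^\alpha$. A minimal sketch of that functional (not the cited paper's exact implementation, which builds on it for a multi-view objective):

```python
import numpy as np

def matrix_renyi_entropy(K: np.ndarray, alpha: float = 2.0) -> float:
    """Matrix-based Renyi alpha-order entropy of a symmetric PSD Gram matrix K:
    S_alpha(A) = 1/(1-alpha) * log2(sum_i lambda_i(A)^alpha),
    where A is K normalized to unit trace."""
    A = K / np.trace(K)                    # normalize so eigenvalues sum to 1
    eigvals = np.linalg.eigvalsh(A)        # eigenvalues of the symmetric matrix
    eigvals = eigvals[eigvals > 1e-12]     # drop numerical noise near zero
    return float(np.log2(np.sum(eigvals ** alpha)) / (1.0 - alpha))
```

For an identity Gram matrix on $n$ samples (maximally "spread" data), the entropy is $\log_2 n$, the matrix analogue of a uniform distribution.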
arXiv Detail & Related papers (2022-04-22T06:48:04Z) - Attentive Multi-View Deep Subspace Clustering Net [4.3386084277869505]
We propose a novel Attentive Multi-View Deep Subspace Nets (AMVDSN)
Our proposed method seeks to find a joint latent representation that explicitly considers both consensus and view-specific information.
The experimental results on seven real-world data sets have demonstrated the effectiveness of our proposed algorithm against some state-of-the-art subspace learning approaches.
arXiv Detail & Related papers (2021-12-23T12:57:26Z) - Deep Partial Multi-View Learning [94.39367390062831]
We propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets)
We first provide a formal definition of completeness and versatility for multi-view representation.
We then theoretically prove the versatility of the learned latent representations.
arXiv Detail & Related papers (2020-11-12T02:29:29Z) - Embedded Deep Bilinear Interactive Information and Selective Fusion for Multi-view Learning [70.67092105994598]
We propose a novel multi-view learning framework to make the multi-view classification better aimed at the above-mentioned two aspects.
In particular, we train different deep neural networks to learn various intra-view representations.
Experiments on six publicly available datasets demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-13T01:13:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.