An Enhanced MeanSum Method For Generating Hotel Multi-Review
Summarizations
- URL: http://arxiv.org/abs/2012.03656v2
- Date: Tue, 20 Apr 2021 14:43:28 GMT
- Title: An Enhanced MeanSum Method For Generating Hotel Multi-Review
Summarizations
- Authors: Saibo Geng, Diego Antognini
- Abstract summary: This work uses the Multi-Aspect Masker (MAM) as a content selector to address the multi-aspect issue.
We also propose a regularizer to control the length of the generated summaries.
Our improved model achieves higher ROUGE and Sentiment Accuracy scores than the original MeanSum method.
- Score: 0.06091702876917279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-document summarization is the process of taking multiple texts as input
and producing a short summary text based on the content of the input texts. Until
recently, multi-document summarizers were mostly supervised and extractive.
However, supervised methods require large datasets of paired document-summary
examples, which are rare and expensive to produce. In 2018, an unsupervised
multi-document abstractive summarization method (MeanSum) was proposed by Chu
and Liu and demonstrated competitive performance compared to extractive
methods. Despite good results on automatic metrics, MeanSum has
several limitations, notably its inability to deal with multiple aspects.
The aim of this work is to use the Multi-Aspect Masker (MAM) as a content selector to
address the multi-aspect issue. Moreover, we propose a regularizer to
control the length of the generated summaries. Through a series of experiments
on the TripAdvisor hotel dataset, we validate our assumptions and show
that our improved model achieves higher ROUGE and Sentiment Accuracy scores than the
original MeanSum method and is comparable to or better than the supervised
baseline.
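As a rough illustration of how a length regularizer of this kind can be attached to an unsupervised summarization objective, the sketch below adds a soft penalty on the expected summary length to a reconstruction-style loss. The penalty form, the target length, and the weight lambda_len are illustrative assumptions, not the exact formulation of this paper.

```python
import torch


def expected_length(eos_probs):
    """Expected summary length from per-step end-of-summary probabilities.

    eos_probs: (batch, max_steps) tensor with P(end-of-summary at step t).
    The expected length is the sum over steps of the probability that the
    summary has not ended yet.
    """
    still_generating = torch.cumprod(1.0 - eos_probs, dim=1)
    return still_generating.sum(dim=1)  # (batch,)


def length_regularized_loss(reconstruction_loss, eos_probs,
                            target_len=60.0, lambda_len=0.1):
    """Pull the expected summary length toward target_len (hypothetical values)."""
    penalty = (expected_length(eos_probs) - target_len).pow(2).mean()
    return reconstruction_loss + lambda_len * penalty


if __name__ == "__main__":
    # Dummy example: 4 summaries, up to 80 decoding steps.
    eos_probs = torch.rand(4, 80) * 0.05
    reconstruction_loss = torch.tensor(2.3)  # placeholder autoencoder loss
    print(length_regularized_loss(reconstruction_loss, eos_probs).item())
```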
Related papers
- Attributable and Scalable Opinion Summarization [79.87892048285819]
We generate abstractive summaries by decoding frequent encodings, and extractive summaries by selecting the sentences assigned to the same frequent encodings.
Our method is attributable, because the model identifies sentences used to generate the summary as part of the summarization process.
It scales easily to many hundreds of input reviews, because aggregation is performed in the latent space rather than over long sequences of tokens.
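A minimal sketch of the underlying idea of grouping review sentences by a discrete latent code and extracting from the most frequent codes; the k-means quantization and the centroid heuristic below are stand-ins for the paper's actual encoder and decoding procedure.

```python
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans


def extract_by_frequent_codes(sentences, embeddings, n_codes=32, n_select=5):
    """Quantize sentence embeddings into discrete codes, then keep one
    representative sentence for each of the most frequent codes.

    sentences:  list of N sentence strings pooled from all input reviews.
    embeddings: (N, d) array of sentence embeddings from any encoder.
    """
    n_codes = min(n_codes, len(sentences))
    codes = KMeans(n_clusters=n_codes, n_init=10).fit_predict(embeddings)
    summary = []
    for code, _count in Counter(codes).most_common(n_select):
        members = np.where(codes == code)[0]
        centroid = embeddings[members].mean(axis=0)
        # The sentence closest to the code centroid attributes the summary
        # content back to the input reviews.
        best = members[np.argmin(
            np.linalg.norm(embeddings[members] - centroid, axis=1))]
        summary.append(sentences[best])
    return summary
```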
arXiv Detail & Related papers (2023-05-19T11:30:37Z)
- Align and Attend: Multimodal Summarization with Dual Contrastive Losses [57.83012574678091]
The goal of multimodal summarization is to extract the most important information from different modalities to form output summaries.
Existing methods fail to leverage the temporal correspondence between different modalities and ignore the intrinsic correlation between different samples.
We introduce Align and Attend Multimodal Summarization (A2Summ), a unified multimodal transformer-based model which can effectively align and attend the multimodal input.
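As an illustration of the kind of cross-modal contrastive objective such models rely on, here is a minimal symmetric InfoNCE-style loss between text and video embeddings; the temperature and batch-level pairing are generic assumptions rather than A2Summ's exact dual losses.

```python
import torch
import torch.nn.functional as F


def cross_modal_contrastive_loss(text_emb, video_emb, temperature=0.07):
    """Symmetric InfoNCE: matched (text_i, video_i) pairs are pulled together,
    all other pairs in the batch are pushed apart.

    text_emb, video_emb: (batch, d) embeddings from the two modalities.
    """
    text = F.normalize(text_emb, dim=-1)
    video = F.normalize(video_emb, dim=-1)
    logits = text @ video.t() / temperature        # (batch, batch) similarities
    targets = torch.arange(text.size(0), device=text.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```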
arXiv Detail & Related papers (2023-03-13T17:01:42Z)
- MACSum: Controllable Summarization with Mixed Attributes [56.685735509260276]
MACSum is the first human-annotated summarization dataset for controlling mixed attributes.
We propose two simple and effective parameter-efficient approaches for the new task of mixed controllable summarization.
arXiv Detail & Related papers (2022-11-09T17:17:37Z)
- ACM -- Attribute Conditioning for Abstractive Multi Document Summarization [0.0]
We propose a model that incorporates attribute conditioning modules in order to decouple conflicting information by conditioning for a certain attribute in the output summary.
This approach shows strong gains in ROUGE score over baseline multi document summarization approaches.
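One common way to realize attribute conditioning is to prepend a control token naming the desired attribute to each input document before encoding; the token format below is hypothetical and only meant to illustrate the pattern, not the paper's conditioning modules.

```python
def condition_on_attribute(documents, attribute):
    """Prepend a (hypothetical) control token so a seq2seq summarizer can be
    steered toward one attribute and decouple conflicting information."""
    control = f"<attr={attribute}>"
    return [f"{control} {doc}" for doc in documents]


# Example: bias a hotel summary toward the 'cleanliness' aspect.
conditioned = condition_on_attribute(
    ["The room was spotless and the staff friendly.",
     "Pool area felt neglected, but housekeeping was great."],
    "cleanliness")
```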
arXiv Detail & Related papers (2022-05-09T00:00:14Z)
- Unsupervised Summarization with Customized Granularities [76.26899748972423]
We propose the first unsupervised multi-granularity summarization framework, GranuSum.
By inputting different numbers of events, GranuSum is capable of producing multi-granular summaries in an unsupervised manner.
arXiv Detail & Related papers (2022-01-29T05:56:35Z)
- Reinforcing Semantic-Symmetry for Document Summarization [15.113768658584979]
Document summarization condenses a long document into a short version with salient information and accurate semantic descriptions.
This paper introduces a new reinforcing semantic-symmetry learning model for document summarization.
A series of experiments have been conducted on two widely used benchmark datasets, CNN/Daily Mail and BigPatent.
arXiv Detail & Related papers (2021-12-14T17:41:37Z)
- Summ^N: A Multi-Stage Summarization Framework for Long Input Dialogues and Documents [13.755637074366813]
SummN is a simple, flexible, and effective multi-stage framework for input texts longer than the maximum context lengths of typical pretrained LMs.
It can process input text of arbitrary length by adjusting the number of stages while keeping the LM context size fixed.
Our experiments demonstrate that SummN significantly outperforms previous state-of-the-art methods.
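A minimal sketch of a multi-stage scheme in this spirit: chunk the input to fit a fixed context window, summarize each chunk, then repeat on the concatenated chunk summaries until a single pass suffices. The whitespace token count and fixed-size chunking are simplifications, not SummN's exact pipeline.

```python
def multi_stage_summarize(text, summarize_fn, max_context_tokens=1024):
    """summarize_fn: any single-pass summarizer mapping a string to a shorter
    string. Tokens are approximated by whitespace splitting; the loop assumes
    summarize_fn shortens its input, otherwise it would not terminate."""
    def n_tokens(s):
        return len(s.split())

    while n_tokens(text) > max_context_tokens:
        words = text.split()
        chunks = [" ".join(words[i:i + max_context_tokens])
                  for i in range(0, len(words), max_context_tokens)]
        # Coarse stage: summarize each chunk, then stitch the results together.
        text = " ".join(summarize_fn(chunk) for chunk in chunks)
    # Fine stage: one final pass over text that now fits the context window.
    return summarize_fn(text)
```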
arXiv Detail & Related papers (2021-10-16T06:19:54Z)
- Massive Multi-Document Summarization of Product Reviews with Weak Supervision [11.462916848094403]
Product review summarization is a type of Multi-Document Summarization (MDS) task.
We show that summarizing small samples of the reviews can result in loss of important information.
We propose a schema for summarizing a massive set of reviews on top of a standard summarization algorithm.
arXiv Detail & Related papers (2020-07-22T11:22:57Z)
- SummPip: Unsupervised Multi-Document Summarization with Sentence Graph Compression [61.97200991151141]
SummPip is an unsupervised method for multi-document summarization.
We convert the original documents to a sentence graph, taking both linguistic and deep representation into account.
We then apply spectral clustering to obtain multiple clusters of sentences, and finally compress each cluster to generate the final summary.
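A minimal sketch of a pipeline in this spirit: build a sentence-similarity graph, cluster it spectrally, and keep one representative sentence per cluster. SummPip additionally compresses each cluster into a new sentence, which is omitted here; the affinity construction is an assumption for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering


def graph_cluster_summarize(sentences, embeddings, n_clusters=5):
    """sentences: list of N sentences pooled from all documents.
    embeddings: (N, d) sentence embeddings; their pairwise dot products serve
    as the (non-negative) affinity matrix of the sentence graph."""
    affinity = np.clip(embeddings @ embeddings.T, 0.0, None)
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(affinity)
    summary = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        centroid = embeddings[idx].mean(axis=0)
        # Simplification: take the most central sentence instead of compressing
        # the cluster into a newly generated sentence.
        summary.append(sentences[idx[np.argmax(embeddings[idx] @ centroid)]])
    return summary
```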
arXiv Detail & Related papers (2020-07-17T13:01:15Z)
- Few-Shot Learning for Opinion Summarization [117.70510762845338]
Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents.
In this work, we show that even a handful of summaries is sufficient to bootstrap generation of the summary text.
Our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation.
arXiv Detail & Related papers (2020-04-30T15:37:38Z)
- Interpretable Multi-Headed Attention for Abstractive Summarization at Controllable Lengths [14.762731718325002]
Multi-level Summarizer (MLS) is a supervised method to construct abstractive summaries of a text document at controllable lengths.
MLS outperforms strong baselines by up to 14.70% in the METEOR score.
arXiv Detail & Related papers (2020-02-18T19:40:20Z)