Towards Personalized Review Summarization by Modeling Historical Reviews
from Customer and Product Separately
- URL: http://arxiv.org/abs/2301.11682v1
- Date: Fri, 27 Jan 2023 12:32:55 GMT
- Authors: Xin Cheng, Shen Gao, Yuchi Zhang, Yongliang Wang, Xiuying Chen,
Mingzhe Li, Dongyan Zhao and Rui Yan
- Abstract summary: Review summarization is a non-trivial task that aims to summarize the main idea of a product review on an e-commerce website.
We propose the Heterogeneous Historical Review aware Review Summarization Model (HHRRS).
We employ a multi-task framework that conducts review sentiment classification and summarization jointly.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Review summarization is a non-trivial task that aims to summarize the main
idea of a product review on an e-commerce website. Unlike document
summarization, which only needs to focus on the main facts described in the
document, review summarization should not only summarize the main aspects
mentioned in the review but also reflect the personal style of the review
author. Although existing review summarization methods have incorporated the
historical reviews of both the customer and the product, they usually simply
concatenate these two heterogeneous sources of information and model them
indiscriminately as one long sequence. Moreover, although rating information
can provide a high-level abstraction of customer preference, it has not been
used by the majority of methods. In this paper, we propose the Heterogeneous
Historical Review aware Review Summarization Model (HHRRS), which separately
models the two types of historical reviews together with the rating
information via a graph reasoning module with a contrastive loss. We employ a
multi-task framework that conducts review sentiment classification and
summarization jointly. Extensive experiments on four benchmark datasets
demonstrate the superiority of HHRRS on both tasks.
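The abstract names the ingredients of the objective (summarization, sentiment classification, and a contrastive loss) without stating its exact form. Below is a minimal, hypothetical sketch of what such a joint loss could look like, using an InfoNCE-style contrastive term that pulls a review embedding toward same-rating historical reviews and away from different-rating ones. The function names and the weights `alpha` and `beta` are assumptions for illustration, not the authors' actual formulation.

```python
import math

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss over cosine similarities.

    `anchor` is the current review embedding, `positive` an embedding of a
    historical review with the same rating, `negatives` a list of embeddings
    of reviews with different ratings (all plain lists of floats here).
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    pos = math.exp(cos(anchor, positive) / temperature)
    neg = sum(math.exp(cos(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

def joint_loss(summ_nll, sent_nll, contrast, alpha=1.0, beta=0.5):
    """Multi-task objective: summarization negative log-likelihood plus
    weighted sentiment-classification loss plus weighted contrastive term."""
    return summ_nll + alpha * sent_nll + beta * contrast
```

In a real system the embeddings would come from the encoder and the two NLL terms from the decoder and the classification head; the weighting scheme shown here is only one common choice.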
Related papers
- LFOSum: Summarizing Long-form Opinions with Large Language Models [7.839083566878183]
This paper introduces (1) a new dataset of long-form user reviews, each entity comprising over a thousand reviews, (2) two training-free LLM-based summarization approaches that scale to long inputs, and (3) automatic evaluation metrics.
Our dataset of user reviews is paired with in-depth and unbiased critical summaries by domain experts, serving as a reference for evaluation.
Our evaluation reveals that LLMs still face challenges in balancing sentiment and format adherence in long-form summaries, though open-source models can narrow the gap when relevant information is retrieved in a focused manner.
arXiv Detail & Related papers (2024-10-16T20:52:39Z)
- Podcast Summary Assessment: A Resource for Evaluating Summary Assessment Methods [42.08097583183816]
We describe a new dataset, the podcast summary assessment corpus.
This dataset has two unique aspects: (i) long-input documents based on speech podcasts; and (ii) an opportunity to detect inappropriate reference summaries in a podcast corpus.
arXiv Detail & Related papers (2022-08-28T18:24:41Z)
- Efficient Few-Shot Fine-Tuning for Opinion Summarization [83.76460801568092]
Abstractive summarization models are typically pre-trained on large amounts of generic texts, then fine-tuned on tens or hundreds of thousands of annotated samples.
We show that a few-shot method based on adapters can easily store in-domain knowledge.
We show that this self-supervised adapter pre-training improves summary quality over standard fine-tuning by 2.0 and 1.3 ROUGE-L points on the Amazon and Yelp datasets.
arXiv Detail & Related papers (2022-05-04T16:38:37Z)
- Aspect-Controllable Opinion Summarization [58.5308638148329]
We propose an approach that allows the generation of customized summaries based on aspect queries.
Using a review corpus, we create a synthetic training dataset of (review, summary) pairs enriched with aspect controllers.
We fine-tune a pretrained model using our synthetic dataset and generate aspect-specific summaries by modifying the aspect controllers.
arXiv Detail & Related papers (2021-09-07T16:09:17Z)
- A Unified Dual-view Model for Review Summarization and Sentiment Classification with Inconsistency Loss [51.448615489097236]
Acquiring accurate summarization and sentiment from user reviews is an essential component of modern e-commerce platforms.
We propose a novel dual-view model that jointly improves the performance of these two tasks.
Experiment results on four real-world datasets from different domains demonstrate the effectiveness of our model.
arXiv Detail & Related papers (2020-06-02T13:34:11Z)
- Topic Detection and Summarization of User Reviews [6.779855791259679]
We propose an effective new summarization method by analyzing both reviews and summaries.
A new dataset comprising product reviews and summaries for 1,028 products was collected from Amazon and CNET.
arXiv Detail & Related papers (2020-05-30T02:19:08Z)
- Few-Shot Learning for Opinion Summarization [117.70510762845338]
Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents.
In this work, we show that even a handful of summaries is sufficient to bootstrap generation of the summary text.
Our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation.
arXiv Detail & Related papers (2020-04-30T15:37:38Z)
- Unsupervised Opinion Summarization with Noising and Denoising [85.49169453434554]
We create a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions thereof.
At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise.
arXiv Detail & Related papers (2020-04-21T16:54:57Z)
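The noising-and-denoising entry above describes a self-supervision recipe: sample one review, treat it as the pseudo-summary, and generate noisy versions of it as the pseudo-inputs. The paper's exact noise functions are not given here; the sketch below uses token dropout and shuffling, which are common choices, and the function name `noisy_versions` is hypothetical.

```python
import random

def noisy_versions(review_tokens, n_versions=3, drop_prob=0.2, seed=0):
    """Generate noisy pseudo-reviews from one review that is treated as the
    'summary': randomly drop tokens and shuffle the remainder, so a model
    trained on (noisy version -> clean review) pairs learns to denoise."""
    rng = random.Random(seed)
    versions = []
    for _ in range(n_versions):
        # Keep each token with probability 1 - drop_prob, then shuffle order.
        kept = [t for t in review_tokens if rng.random() > drop_prob]
        rng.shuffle(kept)
        versions.append(kept)
    return versions
```

At training time each noisy version plays the role of an input review and the original review plays the role of the target summary; at test time the model instead receives genuine reviews.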
This list is automatically generated from the titles and abstracts of the papers in this site.