Every Bite Is an Experience: Key Point Analysis of Business Reviews
- URL: http://arxiv.org/abs/2106.06758v1
- Date: Sat, 12 Jun 2021 12:22:12 GMT
- Title: Every Bite Is an Experience: Key Point Analysis of Business Reviews
- Authors: Roy Bar-Haim, Lilach Eden, Yoav Kantor, Roni Friedman, Noam Slonim
- Abstract summary: Key Point Analysis (KPA) has been proposed as a summarization framework that provides both textual and quantitative summary of the main points in the data.
We show empirically that these novel extensions of KPA substantially improve its performance.
- Score: 12.364867281334096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous work on review summarization focused on measuring the sentiment
toward the main aspects of the reviewed product or business, or on creating a
textual summary. These approaches provide only a partial view of the data:
aspect-based sentiment summaries lack sufficient explanation or justification
for the aspect rating, while textual summaries do not quantify the significance
of each element, and are not well-suited for representing conflicting views.
Recently, Key Point Analysis (KPA) has been proposed as a summarization
framework that provides both textual and quantitative summary of the main
points in the data. We adapt KPA to review data by introducing Collective Key
Point Mining for better key point extraction; integrating sentiment analysis
into KPA; identifying good key point candidates for review summaries; and
leveraging the massive amount of available reviews and their metadata. We show
empirically that these novel extensions of KPA substantially improve its
performance. We demonstrate that promising results can be achieved without any
domain-specific annotation, while human supervision can lead to further
improvement.
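The abstract describes KPA as producing both a textual and a quantitative summary: salient key points, each with a count of how many review sentences support it. A toy sketch of that idea follows; it is not the authors' system — the key points, the token-overlap similarity, and the matching threshold are all illustrative stand-ins for the learned matching models the paper uses.

```python
from collections import Counter

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two sentences."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def kpa_summary(key_points, sentences, threshold=0.2):
    """Match each review sentence to its best key point; count prevalence."""
    counts = Counter()
    for sent in sentences:
        best_kp, best_sim = None, 0.0
        for kp in key_points:
            sim = token_overlap(kp, sent)
            if sim > best_sim:
                best_kp, best_sim = kp, sim
        if best_kp is not None and best_sim >= threshold:
            counts[best_kp] += 1
    # Quantitative summary: key points ranked by supporting-sentence count
    return counts.most_common()

key_points = ["the food was great", "service was slow"]
sentences = [
    "the food here was truly great",
    "great food would come again",
    "our service was painfully slow",
]
print(kpa_summary(key_points, sentences))
# → [('the food was great', 2), ('service was slow', 1)]
```

The output pairs each key point with its prevalence, which is the quantitative half of a KPA summary; a real system would use a trained matching model rather than token overlap.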
Related papers
- Prompted Aspect Key Point Analysis for Quantitative Review Summarization [27.691150599517364]
Key Point Analysis aims for quantitative summarization that provides key points (KPs) as succinct textual summaries and quantities measuring their prevalence.
Recent abstractive approaches still generate KPs based on sentences, often leading to KPs with overlapping and hallucinated opinions.
We propose Prompted Aspect Key Point Analysis (PAKPA) for quantitative review summarization.
arXiv Detail & Related papers (2024-07-19T06:07:32Z)
- FENICE: Factuality Evaluation of summarization based on Natural language Inference and Claim Extraction [85.26780391682894]
We propose Factuality Evaluation of summarization based on Natural language Inference and Claim Extraction (FENICE)
FENICE leverages an NLI-based alignment between information in the source document and a set of atomic facts, referred to as claims, extracted from the summary.
Our metric sets a new state of the art on AGGREFACT, the de-facto benchmark for factuality evaluation.
arXiv Detail & Related papers (2024-03-04T17:57:18Z)
- Incremental Extractive Opinion Summarization Using Cover Trees [81.59625423421355]
In online marketplaces, user reviews accumulate over time, and opinion summaries need to be updated periodically.
In this work, we study the task of extractive opinion summarization in an incremental setting.
We present an efficient algorithm for accurately computing the CentroidRank summaries in an incremental setting.
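CentroidRank-style extractive summarization ranks sentences by similarity to the centroid of all sentence embeddings; in the incremental setting, the centroid must be updated as new reviews arrive. The sketch below shows only the running-mean centroid update and a centroid-similarity ranking — it omits the paper's cover-tree acceleration entirely, and the class name and toy vectors are illustrative, not from the paper.

```python
import math

class IncrementalCentroidRank:
    """Toy incremental centroid-based extractive summarizer.

    Maintains a running mean of sentence vectors, so each new review
    updates the centroid in O(d) without revisiting the full history.
    """

    def __init__(self, dim):
        self.centroid = [0.0] * dim
        self.n = 0
        self.items = []  # list of (sentence, vector) pairs

    def add(self, sentence, vec):
        """Incorporate one new sentence: running-mean update c += (v - c) / n."""
        self.n += 1
        for i, v in enumerate(vec):
            self.centroid[i] += (v - self.centroid[i]) / self.n
        self.items.append((sentence, vec))

    def top_k(self, k):
        """Return the k sentences whose vectors best align with the centroid."""
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0
        ranked = sorted(self.items,
                        key=lambda sv: cos(sv[1], self.centroid),
                        reverse=True)
        return [s for s, _ in ranked[:k]]

ranker = IncrementalCentroidRank(dim=2)
ranker.add("a", [1.0, 0.0])
ranker.add("b", [0.0, 1.0])
ranker.add("c", [1.0, 1.0])
print(ranker.top_k(1))
# → ['c']
```

The running-mean update is what makes the procedure incremental; the paper's contribution is doing the subsequent nearest-to-centroid retrieval efficiently via cover trees rather than the full re-ranking shown here.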
arXiv Detail & Related papers (2024-01-16T02:00:17Z)
- Do You Hear The People Sing? Key Point Analysis via Iterative Clustering and Abstractive Summarisation [12.548947151123555]
Argument summarisation is a promising but currently under-explored field.
One of the main challenges in Key Point Analysis is finding high-quality key point candidates.
Evaluating key points is crucial in ensuring that the automatically generated summaries are useful.
arXiv Detail & Related papers (2023-05-25T12:43:29Z)
- EntSUM: A Data Set for Entity-Centric Summarization [27.845014142019917]
Controllable summarization aims to provide summaries that take into account user-specified aspects and preferences.
We introduce EntSUM, a human-annotated data set for controllable summarization with a focus on named entities as the aspects to control.
arXiv Detail & Related papers (2022-04-05T13:45:54Z)
- Aspect-Controllable Opinion Summarization [58.5308638148329]
We propose an approach that allows the generation of customized summaries based on aspect queries.
Using a review corpus, we create a synthetic training dataset of (review, summary) pairs enriched with aspect controllers.
We fine-tune a pretrained model using our synthetic dataset and generate aspect-specific summaries by modifying the aspect controllers.
arXiv Detail & Related papers (2021-09-07T16:09:17Z)
- Summarizing Text on Any Aspects: A Knowledge-Informed Weakly-Supervised Approach [89.56158561087209]
We study summarization on arbitrary aspects relevant to the document.
Due to the lack of supervision data, we develop a new weak supervision construction method and an aspect modeling scheme.
Experiments show our approach achieves performance boosts on summarizing both real and synthetic documents.
arXiv Detail & Related papers (2020-10-14T03:20:46Z)
- Weakly-Supervised Aspect-Based Sentiment Analysis via Joint Aspect-Sentiment Topic Embedding [71.2260967797055]
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn <sentiment, aspect> joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
arXiv Detail & Related papers (2020-10-13T21:33:24Z)
- Quantitative Argument Summarization and Beyond: Cross-Domain Key Point Analysis [17.875273745811775]
We develop a method for automatic extraction of key points, which enables fully automatic analysis.
We demonstrate that the applicability of key point analysis goes well beyond argumentation data.
An additional contribution is an in-depth evaluation of argument-to-key point matching models.
arXiv Detail & Related papers (2020-10-11T23:01:51Z)
- Unsupervised Reference-Free Summary Quality Evaluation via Contrastive Learning [66.30909748400023]
We propose to evaluate the summary qualities without reference summaries by unsupervised contrastive learning.
Specifically, we design a new metric which covers both linguistic qualities and semantic informativeness based on BERT.
Experiments on Newsroom and CNN/Daily Mail demonstrate that our new evaluation method outperforms other metrics even without reference summaries.
arXiv Detail & Related papers (2020-10-05T05:04:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.