How Useful are Reviews for Recommendation? A Critical Review and
Potential Improvements
- URL: http://arxiv.org/abs/2005.12210v1
- Date: Mon, 25 May 2020 16:30:05 GMT
- Title: How Useful are Reviews for Recommendation? A Critical Review and
Potential Improvements
- Authors: Noveen Sachdeva, Julian McAuley
- Abstract summary: We investigate a growing body of work that seeks to improve recommender systems through the use of review text.
Our initial findings reveal several discrepancies in reported results, partly due to copying results across papers despite changes in experimental settings or data pre-processing.
Further investigation calls for discussion on a much larger problem about the "importance" of user reviews for recommendation.
- Score: 8.471274313213092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate a growing body of work that seeks to improve recommender
systems through the use of review text. Generally, these papers argue that
since reviews 'explain' users' opinions, they ought to be useful to infer the
underlying dimensions that predict ratings or purchases. Schemes to incorporate
reviews range from simple regularizers to neural network approaches. Our
initial findings reveal several discrepancies in reported results, partly due
to (e.g.) copying results across papers despite changes in experimental
settings or data pre-processing. First, we attempt a comprehensive analysis to
resolve these ambiguities. Further investigation calls for discussion on a much
larger problem about the "importance" of user reviews for recommendation.
Through a wide range of experiments, we observe several cases where
state-of-the-art methods fail to outperform existing baselines, especially as
we deviate from a few narrowly-defined settings where reviews are useful. We
conclude by providing hypotheses for our observations, that seek to
characterize under what conditions reviews are likely to be helpful. Through
this work, we aim to evaluate the direction in which the field is progressing
and encourage robust empirical evaluation.
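The abstract's observation that state-of-the-art review-based models often fail to beat existing baselines can be made concrete with a bias-only rating predictor (global mean plus damped user and item offsets), one of the standard simple baselines in this literature. This is an illustrative sketch only; the damping constant and function names are ours, not from the paper:

```python
from collections import defaultdict

def fit_bias_baseline(ratings, damping=5.0):
    """Fit a bias-only baseline: prediction = mu + b_u + b_i.

    `ratings` is a list of (user, item, rating) triples; `damping`
    shrinks the biases of users/items with few observed ratings.
    """
    mu = sum(r for _, _, r in ratings) / len(ratings)

    # Item biases: damped mean deviation from the global mean.
    item_dev, item_cnt = defaultdict(float), defaultdict(int)
    for _, i, r in ratings:
        item_dev[i] += r - mu
        item_cnt[i] += 1
    b_i = {i: item_dev[i] / (damping + item_cnt[i]) for i in item_dev}

    # User biases: damped mean deviation after removing the item bias.
    user_dev, user_cnt = defaultdict(float), defaultdict(int)
    for u, i, r in ratings:
        user_dev[u] += r - mu - b_i[i]
        user_cnt[u] += 1
    b_u = {u: user_dev[u] / (damping + user_cnt[u]) for u in user_dev}

    def predict(u, i):
        # Unseen users/items fall back to the global mean.
        return mu + b_u.get(u, 0.0) + b_i.get(i, 0.0)

    return predict
```

A model that incorporates review text has to beat predictors of roughly this strength before the reviews can be credited with any gain.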
Related papers
- Generative Adversarial Reviews: When LLMs Become the Critic [1.2430809884830318]

We introduce Generative Agent Reviewers (GAR), leveraging LLM-empowered agents to simulate faithful peer reviewers.
Central to this approach is a graph-based representation of manuscripts, condensing content and logically organizing information.
Our experiments demonstrate that GAR performs comparably to human reviewers in providing detailed feedback and predicting paper outcomes.
arXiv Detail & Related papers (2024-12-09T06:58:17Z)
- On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction [5.381004207943597]
Existing methods for identifying helpful reviews primarily focus on review text and ignore two key factors: (1) who posts a review and (2) when it is posted.
We introduce a dataset and develop a model that integrates the reviewer's expertise, derived from the past review history, and the temporal dynamics of the reviews to automatically assess review helpfulness.
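As a toy illustration of blending the three signals this entry names (review text, reviewer expertise from past history, and timing), one could score helpfulness as a weighted combination of hand-crafted features. The features and weights below are purely hypothetical and are not the paper's learned model:

```python
import math

def helpfulness_score(text_len, past_helpful_votes, days_since_posted,
                      w_text=0.4, w_expertise=0.4, w_time=0.2):
    """Linear blend of three illustrative signals: review length as a
    crude text feature, the reviewer's past helpful votes as an
    expertise proxy, and an exponential recency decay."""
    text_feat = math.log1p(text_len)
    expertise_feat = math.log1p(past_helpful_votes)
    recency_feat = math.exp(-days_since_posted / 365.0)  # decays over ~a year
    return (w_text * text_feat
            + w_expertise * expertise_feat
            + w_time * recency_feat)
```

Under this sketch, a recent review from an experienced reviewer outranks an old review of the same length from a first-time reviewer, which is the intuition the paper's model formalizes.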
arXiv Detail & Related papers (2023-02-22T23:41:22Z)
- On Faithfulness and Coherence of Language Explanations for Recommendation Systems [8.143715142450876]
This work probes state-of-the-art models and their review generation component.
We show that the generated explanations are brittle and need further evaluation before being taken as literal rationales for the estimated ratings.
arXiv Detail & Related papers (2022-09-12T17:00:31Z)
- Measuring "Why" in Recommender Systems: a Comprehensive Survey on the Evaluation of Explainable Recommendation [87.82664566721917]
This survey is based on more than 100 papers from top-tier conferences like IJCAI, AAAI, TheWebConf, Recsys, UMAP, and IUI.
arXiv Detail & Related papers (2022-02-14T02:58:55Z)
- Learning Opinion Summarizers by Selecting Informative Reviews [81.47506952645564]
We collect a large dataset of summaries paired with user reviews for over 31,000 products, enabling supervised training.
The content of many reviews is not reflected in the human-written summaries, and, thus, the summarizer trained on random review subsets hallucinates.
We formulate the task as jointly learning to select informative subsets of reviews and summarizing the opinions expressed in these subsets.
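The select-then-summarize formulation can be approximated, for intuition, by a greedy coverage-based selector: repeatedly pick the review that adds the most not-yet-covered words. Word coverage here is a crude stand-in for the learned informativeness objective described in the paper:

```python
def select_informative(reviews, k):
    """Greedily pick up to k reviews that together cover the most
    distinct words across the whole review set -- a simple proxy
    for a learned informative-subset selector."""
    token_sets = [set(r.lower().split()) for r in reviews]
    covered, chosen = set(), []
    for _ in range(min(k, len(reviews))):
        # Pick the review contributing the most new words.
        best = max(
            (i for i in range(len(reviews)) if i not in chosen),
            key=lambda i: len(token_sets[i] - covered),
        )
        chosen.append(best)
        covered |= token_sets[best]
    return [reviews[i] for i in chosen]
```

Selecting before summarizing is what limits hallucination: the summarizer only sees reviews whose content the summary is expected to reflect.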
arXiv Detail & Related papers (2021-09-09T15:01:43Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast it as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- SIFN: A Sentiment-aware Interactive Fusion Network for Review-based Item Recommendation [48.1799451277808]
We propose a Sentiment-aware Interactive Fusion Network (SIFN) for review-based item recommendation.
We first encode user/item reviews via BERT and propose a lightweight sentiment learner to extract semantic features of each review.
Then, we propose a sentiment prediction task that guides the sentiment learner to extract sentiment-aware features via explicit sentiment labels.
arXiv Detail & Related papers (2021-08-18T08:04:38Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
- Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: sentence encoder (level one), intra-review encoder (level two) and inter-review encoder (level three).
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
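The three-level hierarchy above can be sketched with simple mean pooling standing in for HabNet's bi-directional self-attention encoders. This shows only the structure of the aggregation (words into sentences, sentences into reviews, reviews into one paper vector), not the paper's actual model:

```python
def encode_paper(reviews):
    """Mean-pooling stand-in for three hierarchical encoder levels.

    `reviews` is a list of reviews; each review is a list of
    sentences; each sentence is a list of word vectors (lists of
    floats, all the same length).
    """
    def mean(vectors):
        n = len(vectors)
        return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

    review_vecs = []
    for review in reviews:
        # Level one: pool word vectors into sentence vectors.
        sentence_vecs = [mean(sentence) for sentence in review]
        # Level two: pool sentence vectors into one review vector.
        review_vecs.append(mean(sentence_vecs))
    # Level three: pool review vectors into a single paper vector.
    return mean(review_vecs)
```

The pooled paper vector would then feed a rating predictor; replacing each `mean` with a self-attention layer recovers the shape of the HabNet design.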
arXiv Detail & Related papers (2020-11-02T08:07:50Z)
- Aspect-based Sentiment Analysis of Scientific Reviews [12.472629584751509]
We show that the distribution of aspect-based sentiments obtained from a review is significantly different for accepted and rejected papers.
As a second objective, we quantify the extent of disagreement among the reviewers refereeing a paper.
We also investigate the extent of disagreement between the reviewers and the chair and find that the inter-reviewer disagreement may have a link to the disagreement with the chair.
arXiv Detail & Related papers (2020-06-05T07:06:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.