User and Item-aware Estimation of Review Helpfulness
- URL: http://arxiv.org/abs/2011.10456v1
- Date: Fri, 20 Nov 2020 15:35:56 GMT
- Title: User and Item-aware Estimation of Review Helpfulness
- Authors: Noemi Mauro and Liliana Ardissono and Giovanna Petrone
- Abstract summary: We investigate the role of deviations in the properties of reviews as helpfulness determinants.
We propose a novel helpfulness estimation model that extends previous ones.
Our model is thus an effective tool to select relevant user feedback for decision-making.
- Score: 4.640835690336653
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In online review sites, the analysis of user feedback for assessing its
helpfulness for decision-making is usually carried out by locally studying the
properties of individual reviews. However, global properties should be
considered as well to precisely evaluate the quality of user feedback. In this
paper we investigate the role of deviations in the properties of reviews as
helpfulness determinants with the intuition that "out of the core" feedback
helps item evaluation. We propose a novel helpfulness estimation model that
extends previous ones with the analysis of deviations in rating, length and
polarity with respect to the reviews written by the same person, or concerning
the same item. A regression analysis carried out on two large datasets of
reviews extracted from the Yelp social network shows that user-based deviations in
review length and rating clearly influence perceived helpfulness. Moreover, an
experiment on the same datasets shows that the integration of our helpfulness
estimation model improves the performance of a collaborative recommender system
by enhancing the selection of high-quality data for rating estimation. Our
model is thus an effective tool to select relevant user feedback for
decision-making.
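The deviation features described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the dictionary field names, and the plain-mean baseline are all assumptions made for the example.

```python
from statistics import mean

def deviation_features(review, user_reviews, item_reviews):
    """Compute user- and item-based deviations in rating, length and
    polarity for one review, in the spirit of the abstract.

    Each review is a dict with hypothetical 'rating', 'length' and
    'polarity' fields; the baseline is the mean over the peer set
    (same author's reviews, or reviews of the same item)."""
    feats = {}
    for scope, peers in (("user", user_reviews), ("item", item_reviews)):
        for prop in ("rating", "length", "polarity"):
            baseline = mean(r[prop] for r in peers)
            feats[f"{scope}_{prop}_dev"] = review[prop] - baseline
    return feats

# Example: one review compared against the author's other reviews
# and against other reviews of the same item.
r = {"rating": 5, "length": 320, "polarity": 0.8}
by_user = [{"rating": 3, "length": 100, "polarity": 0.2},
           {"rating": 4, "length": 140, "polarity": 0.4}]
on_item = [{"rating": 4, "length": 200, "polarity": 0.5},
           {"rating": 2, "length": 180, "polarity": -0.1}]
print(deviation_features(r, by_user, on_item))
```

Features of this shape could then feed the kind of regression analysis the abstract reports, with large user-based length and rating deviations expected to correlate with perceived helpfulness.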
Related papers
- Direct Judgement Preference Optimization [66.83088028268318]
We train large language models (LLMs) as generative judges to evaluate and critique other models' outputs.
We employ three approaches to collect the preference pairs for different use cases, each aimed at improving our generative judge from a different perspective.
Our model robustly counters inherent biases such as position and length bias, flexibly adapts to any evaluation protocol specified by practitioners, and provides helpful language feedback for improving downstream generator models.
arXiv Detail & Related papers (2024-09-23T02:08:20Z)
- Analytical and Empirical Study of Herding Effects in Recommendation Systems [72.6693986712978]
We study how to manage product ratings via rating aggregation rules and shortlisted representative reviews.
We show that proper recency aware rating aggregation rules can improve the speed of convergence in Amazon and TripAdvisor.
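A recency-aware aggregation rule of the kind mentioned here can be illustrated with an exponentially decaying weight on older ratings; this is a generic sketch, not the paper's rule, and the decay parameter is an assumption for the example.

```python
def recency_weighted_rating(ratings, decay=0.9):
    """Aggregate ratings so newer ones count more.

    `ratings` is ordered oldest to newest; each step back in time
    multiplies a rating's weight by `decay` (illustrative only)."""
    n = len(ratings)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

# An early burst of 5-star ratings is tempered by later, lower ratings,
# pulling the aggregate below the plain mean of 3.8.
print(recency_weighted_rating([5, 5, 5, 2, 2]))
```

Down-weighting old ratings in this way lets the displayed score track the item's current quality more quickly, which is one intuition behind recency-aware rules speeding up convergence.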
arXiv Detail & Related papers (2024-08-20T14:29:23Z)
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt-alignment.
We show that fine-grained feedback is not automatically superior to coarse-grained feedback.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- Review-based Recommender Systems: A Survey of Approaches, Challenges and Future Perspectives [11.835903510784735]
Review-based recommender systems have emerged as a significant sub-field in this domain.
We present a categorization of these systems and summarize the state-of-the-art methods, analyzing their unique features, effectiveness, and limitations.
We propose potential directions for future research, including the integration of multimodal data, multi-criteria rating information, and ethical considerations.
arXiv Detail & Related papers (2024-05-09T05:45:18Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- Exploiting Correlated Auxiliary Feedback in Parameterized Bandits [56.84649080789685]
We study a novel variant of the parameterized bandits problem in which the learner can observe additional auxiliary feedback that is correlated with the observed reward.
The auxiliary feedback is readily available in many real-life applications; e.g., an online platform that wants to recommend the best-rated services to its users can observe the user's rating of a service (reward) and collect additional information like service delivery time (auxiliary feedback).
arXiv Detail & Related papers (2023-11-05T17:27:06Z)
- On Faithfulness and Coherence of Language Explanations for Recommendation Systems [8.143715142450876]
This work probes state-of-the-art models and their review generation component.
We show that the generated explanations are brittle and need further evaluation before being taken as literal rationales for the estimated ratings.
arXiv Detail & Related papers (2022-09-12T17:00:31Z)
- Correcting the User Feedback-Loop Bias for Recommendation Systems [34.44834423714441]
We propose a systematic and dynamic way to correct user feedback-loop bias in recommendation systems.
Our method includes a deep-learning component to learn each user's dynamic rating history embedding.
We empirically validated the existence of such user feedback-loop bias in real world recommendation systems.
arXiv Detail & Related papers (2021-09-13T15:02:55Z)
- SIFN: A Sentiment-aware Interactive Fusion Network for Review-based Item Recommendation [48.1799451277808]
We propose a Sentiment-aware Interactive Fusion Network (SIFN) for review-based item recommendation.
We first encode user/item reviews via BERT and propose a light-weighted sentiment learner to extract semantic features of each review.
Then, we propose a sentiment prediction task that guides the sentiment learner to extract sentiment-aware features via explicit sentiment labels.
arXiv Detail & Related papers (2021-08-18T08:04:38Z)
- How Useful are Reviews for Recommendation? A Critical Review and Potential Improvements [8.471274313213092]
We investigate a growing body of work that seeks to improve recommender systems through the use of review text.
Our initial findings reveal several discrepancies in reported results, partly due to copying results across papers despite changes in experimental settings or data pre-processing.
Further investigation calls for discussion on a much larger problem about the "importance" of user reviews for recommendation.
arXiv Detail & Related papers (2020-05-25T16:30:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.