On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction
- URL: http://arxiv.org/abs/2303.00923v1
- Date: Wed, 22 Feb 2023 23:41:22 GMT
- Title: On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction
- Authors: Mir Tafseer Nayeem, Davood Rafiei
- Abstract summary: Existing methods for identifying helpful reviews primarily focus on review text and ignore the two key factors of (1) who posts the reviews and (2) when the reviews are posted.
We introduce a dataset and develop a model that integrates the reviewer's expertise, derived from the past review history, and the temporal dynamics of the reviews to automatically assess review helpfulness.
- Score: 5.381004207943597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Helpful reviews have been essential for the success of e-commerce services,
as they help customers make quick purchase decisions and benefit the merchants
in their sales. While many reviews are informative, others provide little value
and may contain spam, excessive appraisal, or unexpected biases. With the large
volume of reviews and their uneven quality, the problem of detecting helpful
reviews has drawn much attention lately. Existing methods for identifying
helpful reviews primarily focus on review text and ignore the two key factors
of (1) who posts the reviews and (2) when the reviews are posted. Moreover, the
helpfulness votes suffer from scarcity for less popular products and recently
submitted (a.k.a., cold-start) reviews. To address these challenges, we
introduce a dataset and develop a model that integrates the reviewer's
expertise, derived from the past review history of the reviewers, and the
temporal dynamics of the reviews to automatically assess review helpfulness. We
conduct experiments on our dataset to demonstrate the effectiveness of
incorporating these factors and report improved results compared to several
well-established baselines.
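The paper itself details the model; as a rough, non-authoritative illustration of the idea, the sketch below combines a TF-IDF text representation with hand-crafted reviewer-expertise and temporal features for helpfulness prediction. The specific features, the toy data, and the gradient-boosting regressor are all assumptions made for illustration, not the authors' implementation.

```python
# Minimal illustrative sketch (NOT the authors' model): combine review text with
# reviewer-expertise and temporal signals to predict a helpfulness score.
# The feature definitions and toy data below are assumptions for illustration.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_extraction.text import TfidfVectorizer

DAY = 86400.0  # seconds per day

def expertise_features(past_votes):
    """Crude expertise proxies from the helpful votes on a reviewer's past reviews."""
    v = np.asarray(past_votes if past_votes else [0.0], dtype=float)
    return [float(len(past_votes)), float(v.mean()), float(v.max())]

def temporal_features(review_ts, first_review_ts, now_ts):
    """Crude temporal signals: position in the product's review timeline, review age."""
    return [(review_ts - first_review_ts) / DAY, (now_ts - review_ts) / DAY]

# Toy rows: (text, past helpful votes, review time, product's first review time, label)
rows = [
    ("Battery lasts two days, detailed comparison inside", [5, 9, 3], 2000 * DAY, 1990 * DAY, 0.9),
    ("good", [], 2100 * DAY, 1990 * DAY, 0.1),
    ("Broke after a week, photos attached", [2], 2050 * DAY, 1990 * DAY, 0.7),
    ("AMAZING BUY NOW!!!", [0, 0], 2110 * DAY, 1990 * DAY, 0.0),
]
now = 2200 * DAY
texts = [r[0] for r in rows]
labels = [r[4] for r in rows]
side = np.asarray([expertise_features(r[1]) + temporal_features(r[2], r[3], now) for r in rows])

vec = TfidfVectorizer()
X = hstack([vec.fit_transform(texts), csr_matrix(side)]).tocsr()
model = GradientBoostingRegressor(n_estimators=50).fit(X, labels)
print(model.predict(X))  # in-sample sanity check on the toy data
```

The point of the sketch is only that expertise and temporal signals can be concatenated with a text representation; the paper derives its expertise signal from the reviewer's past review history and models temporal dynamics in its own way.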
Related papers
- Review Helpfulness Scores vs. Review Unhelpfulness Scores: Two Sides of the Same Coin or Different Coins? [1.0738561302102214]
We find that review unhelpfulness scores are not driven by intrinsic review characteristics.
Users who receive review unhelpfulness votes are more likely to cast unhelpfulness votes for other reviews.
arXiv Detail & Related papers (2024-04-24T10:35:17Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversational setting has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of a turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- Towards Personalized Review Summarization by Modeling Historical Reviews from Customer and Product Separately [59.61932899841944]
Review summarization is a non-trivial task that aims to summarize the main idea of a product review on an e-commerce website.
We propose the Heterogeneous Historical Review aware Review Summarization Model (HHRRS).
We employ a multi-task framework that conducts the review sentiment classification and summarization jointly.
arXiv Detail & Related papers (2023-01-27T12:32:55Z)
- RevCore: Review-augmented Conversational Recommendation [45.70198581510986]
We design a novel end-to-end framework, namely, Review-augmented Conversational Recommender (RevCore), where reviews are seamlessly incorporated to enrich item information.
In detail, we extract sentiment-consistent reviews, perform review-enriched and entity-based recommendations for item suggestions, and use a review-attentive encoder-decoder for response generation.
arXiv Detail & Related papers (2021-06-02T05:46:01Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
- Improving Opinion Spam Detection by Cumulative Relative Frequency Distribution [0.9176056742068814]
Various approaches have been proposed for detecting opinion spam in online reviews.
We re-engineered a set of effective features used for classifying opinion spam.
We show that the use of distributional features improves the performance of classifiers.
arXiv Detail & Related papers (2020-12-27T10:23:44Z)
- User and Item-aware Estimation of Review Helpfulness [4.640835690336653]
We investigate the role of deviations in the properties of reviews as helpfulness determinants.
We propose a novel helpfulness estimation model that extends previous ones.
Our model is thus an effective tool to select relevant user feedback for decision-making.
arXiv Detail & Related papers (2020-11-20T15:35:56Z)
- ReviewRobot: Explainable Paper Review Generation based on Knowledge Synthesis [62.76038841302741]
We build a novel ReviewRobot to automatically assign a review score and write comments for multiple categories such as novelty and meaningful comparison.
Experimental results show that our review score predictor reaches 71.4%-100% accuracy.
Human assessment by domain experts shows that 41.7%-70.5% of the comments generated by ReviewRobot are valid and constructive, and better than human-written ones 20% of the time.
arXiv Detail & Related papers (2020-10-13T02:17:58Z)
- How Useful are Reviews for Recommendation? A Critical Review and Potential Improvements [8.471274313213092]
We investigate a growing body of work that seeks to improve recommender systems through the use of review text.
Our initial findings reveal several discrepancies in reported results, partly due to copying results across papers despite changes in experimental settings or data pre-processing.
Further investigation raises a much larger question about the "importance" of user reviews for recommendation.
arXiv Detail & Related papers (2020-05-25T16:30:05Z)
- Context-aware Helpfulness Prediction for Online Product Reviews [34.47368084659301]
We propose a deep learning model that predicts the helpfulness score of a review.
The model is based on a convolutional neural network (CNN) and a context-aware encoding mechanism.
We validated our model on a human-annotated dataset, and the results show that it significantly outperforms existing models for helpfulness prediction.
arXiv Detail & Related papers (2020-04-27T18:19:26Z)
- Automating App Review Response Generation [67.58267006314415]
We propose RRGen, a novel approach that automatically generates review responses by learning knowledge relations between reviews and their responses.
Experiments on 58 apps and 309,246 review-response pairs highlight that RRGen outperforms the baselines by at least 67.4% in terms of BLEU-4 (a short sketch of this metric follows the list).
arXiv Detail & Related papers (2020-02-10T05:23:38Z)
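As a side note on the metric cited in the RRGen entry above: BLEU-4 measures n-gram overlap (up to 4-grams) between a generated response and a reference response. Below is a minimal sketch using NLTK; the review-response texts are invented, and RRGen's exact evaluation setup may differ.

```python
# Minimal BLEU-4 sketch with NLTK; the response texts are invented examples.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "thanks for the feedback , we will fix the crash in the next update".split()
candidate = "thanks for your feedback , a fix for the crash ships in the next update".split()

# weights=(0.25, 0.25, 0.25, 0.25) averages 1- to 4-gram precisions, i.e. BLEU-4;
# smoothing avoids a zero score when some higher-order n-gram has no overlap.
score = sentence_bleu(
    [reference], candidate,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU-4: {score:.3f}")
```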