ReAct: A Review Comment Dataset for Actionability (and more)
- URL: http://arxiv.org/abs/2210.00443v1
- Date: Sun, 2 Oct 2022 07:09:38 GMT
- Title: ReAct: A Review Comment Dataset for Actionability (and more)
- Authors: Gautam Choudhary, Natwar Modani, Nitish Maurya
- Abstract summary: We introduce an annotated review comment dataset ReAct.
The review comments are sourced from the OpenReview site.
We crowd-source annotations for these comments, covering actionability and comment type.
- Score: 0.8885727065823155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Review comments play an important role in the evolution of documents. For a
large document, the number of review comments may become large, making it
difficult for the authors to quickly grasp what the comments are about. It is
therefore important to determine which comments require action on the part of the
document authors, and to identify the types of these comments. In this paper, we
introduce ReAct, an annotated review comment dataset. The review comments are
sourced from the OpenReview site. We crowd-source annotations for these comments
for both actionability and comment type. We analyze the properties of the dataset
and validate the quality of the annotations. We release the dataset
(https://github.com/gtmdotme/ReAct) to the research community as a major
contribution. We also benchmark the data with standard baselines for
classification tasks and analyze their performance.
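As a rough illustration of what "standard baselines" for such a classification task can look like, here is a minimal sketch (not the paper's actual setup) using TF-IDF features and logistic regression. The file name and the "comment" and "actionability" column names are assumptions about the released data, not confirmed by the abstract.

```python
# Hypothetical baseline sketch for the ReAct actionability task.
# Assumes a CSV export of the dataset with "comment" and "actionability"
# columns; adjust names to match the actual release on GitHub.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("react_annotations.csv")  # assumed file name
X_train, X_test, y_train, y_test = train_test_split(
    df["comment"], df["actionability"], test_size=0.2, random_state=42
)

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # unigram + bigram features
    LogisticRegression(max_iter=1000),              # simple linear classifier
)
baseline.fit(X_train, y_train)
print(classification_report(y_test, baseline.predict(X_test)))
```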
Related papers
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- Incremental Extractive Opinion Summarization Using Cover Trees [81.59625423421355]
In online marketplaces, user reviews accumulate over time, and opinion summaries need to be updated periodically.
In this work, we study the task of extractive opinion summarization in an incremental setting.
We present an efficient algorithm for accurately computing the CentroidRank summaries in an incremental setting.
arXiv Detail & Related papers (2024-01-16T02:00:17Z)
- ViCo: Engaging Video Comment Generation with Human Preference Rewards [68.50351391812723]
We propose ViCo with three novel designs to tackle the challenges of generating engaging video comments.
To quantify the engagement of comments, we utilize the number of "likes" each comment receives as a proxy of human preference.
To automatically evaluate the engagement of comments, we train a reward model to align its judgment to the above proxy.
arXiv Detail & Related papers (2023-08-22T04:01:01Z)
- Exploring the Advances in Identifying Useful Code Review Comments [0.0]
This paper reflects the evolution of research on the usefulness of code review comments.
It examines papers that define the usefulness of code review comments, mine and annotate datasets, study developers' perceptions, analyze factors from different aspects, and use machine learning classifiers to automatically predict the usefulness of code review comments.
arXiv Detail & Related papers (2023-07-03T00:41:20Z)
- On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction [5.381004207943597]
Existing methods for identifying helpful reviews primarily focus on review text and ignore two key factors: (1) who posts the reviews and (2) when the reviews are posted.
We introduce a dataset and develop a model that integrates the reviewer's expertise, derived from the past review history, and the temporal dynamics of the reviews to automatically assess review helpfulness.
arXiv Detail & Related papers (2023-02-22T23:41:22Z)
- Towards Personalized Review Summarization by Modeling Historical Reviews from Customer and Product Separately [59.61932899841944]
Review summarization is a non-trivial task that aims to summarize the main idea of a product review on an e-commerce website.
We propose the Heterogeneous Historical Review aware Review Summarization Model (HHRRS).
We employ a multi-task framework that conducts the review sentiment classification and summarization jointly.
arXiv Detail & Related papers (2023-01-27T12:32:55Z)
- Abstractive Opinion Tagging [65.47649273721679]
In e-commerce, opinion tags refer to a ranked list of tags provided by the e-commerce platform that reflect characteristics of reviews of an item.
Current mechanisms for generating opinion tags rely on either manual labelling or heuristic methods, which are time-consuming and ineffective.
We propose an abstractive opinion tagging framework, named AOT-Net, to generate a ranked list of opinion tags given a large number of reviews.
arXiv Detail & Related papers (2021-01-18T05:08:15Z)
- Improving Document-Level Sentiment Analysis with User and Product Context [16.47527363427252]
We investigate incorporating additional review text available at the time of sentiment prediction.
We achieve this by explicitly storing representations of reviews written by the same user and about the same product.
Experimental results on the IMDB, Yelp 2013, and Yelp 2014 datasets show improvements over the state of the art of more than 2 percentage points in the best case.
arXiv Detail & Related papers (2020-11-18T10:59:14Z)
- ReviewRobot: Explainable Paper Review Generation based on Knowledge Synthesis [62.76038841302741]
We build a novel ReviewRobot to automatically assign a review score and write comments for multiple categories such as novelty and meaningful comparison.
Experimental results show that our review score predictor reaches 71.4%-100% accuracy.
Human assessment by domain experts shows that 41.7%-70.5% of the comments generated by ReviewRobot are valid and constructive, and better than human-written ones 20% of the time.
arXiv Detail & Related papers (2020-10-13T02:17:58Z)
- Aspect-based Sentiment Analysis of Scientific Reviews [12.472629584751509]
We show that the distribution of aspect-based sentiments obtained from a review is significantly different for accepted and rejected papers.
As a second objective, we quantify the extent of disagreement among the reviewers refereeing a paper.
We also investigate the extent of disagreement between the reviewers and the chair, and find that inter-reviewer disagreement may be linked to disagreement with the chair.
arXiv Detail & Related papers (2020-06-05T07:06:01Z)
- How Useful are Reviews for Recommendation? A Critical Review and Potential Improvements [8.471274313213092]
We investigate a growing body of work that seeks to improve recommender systems through the use of review text.
Our initial findings reveal several discrepancies in reported results, partly due to copying results across papers despite changes in experimental settings or data pre-processing.
Further investigation calls for discussion on a much larger problem about the "importance" of user reviews for recommendation.
arXiv Detail & Related papers (2020-05-25T16:30:05Z)