ReAct: A Review Comment Dataset for Actionability (and more)
- URL: http://arxiv.org/abs/2210.00443v1
- Date: Sun, 2 Oct 2022 07:09:38 GMT
- Title: ReAct: A Review Comment Dataset for Actionability (and more)
- Authors: Gautam Choudhary, Natwar Modani, Nitish Maurya
- Abstract summary: We introduce an annotated review comment dataset ReAct.
The review comments are sourced from the OpenReview site.
We crowd-source annotations for these reviews, covering actionability and comment type.
- Score: 0.8885727065823155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Review comments play an important role in the evolution of documents. For a
large document, the number of review comments may become large, making it
difficult for the authors to quickly grasp what the comments are about. It is
important to identify the nature of the comments to identify which comments
require some action on the part of document authors, along with identifying the
types of these comments. In this paper, we introduce an annotated review
comment dataset ReAct. The review comments are sourced from the OpenReview
site. We crowd-source annotations for these reviews, covering actionability and
comment type. We analyze the properties of the dataset and validate the quality of
annotations. We release the dataset (https://github.com/gtmdotme/ReAct) to the
research community as a major contribution. We also benchmark our data with
standard baselines for classification tasks and analyze their performance.
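The abstract mentions benchmarking the data with standard baselines for classification tasks. As a hedged illustration only (not the paper's actual models, features, or data), a minimal bag-of-words Naive Bayes baseline for actionability classification could look like the following; the training comments below are invented for demonstration:

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase whitespace tokenization; a real baseline would use a proper tokenizer."""
    return text.lower().split()

class NaiveBayesBaseline:
    """Multinomial Naive Bayes with add-one smoothing over bag-of-words counts."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)        # document counts per class (priors)
        self.word_counts = defaultdict(Counter)    # per-class word frequencies
        self.vocab = set()
        for text, label in zip(texts, labels):
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        best_label, best_logprob = None, float("-inf")
        for label in self.class_counts:
            logprob = math.log(self.class_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in tokenize(text):
                if word in self.vocab:  # ignore out-of-vocabulary words
                    logprob += math.log((self.word_counts[label][word] + 1) / denom)
            if logprob > best_logprob:
                best_label, best_logprob = label, logprob
        return best_label

# Toy, invented examples: actionable comments request a change; others do not.
train_texts = [
    "please add an ablation study",
    "please fix the typo in equation 3",
    "add more baselines to the comparison",
    "the paper is well written",
    "i enjoyed reading this paper",
    "the results are interesting",
]
train_labels = ["actionable"] * 3 + ["non-actionable"] * 3

clf = NaiveBayesBaseline().fit(train_texts, train_labels)
print(clf.predict("please add more experiments"))  # actionable
```

Such a baseline only establishes a floor; the paper's benchmarks would replace the toy tokenizer and counts with stronger standard classifiers.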
Related papers
- Hold On! Is My Feedback Useful? Evaluating the Usefulness of Code Review Comments [0.0]
This paper investigates the usefulness of Code Review Comments (CR comments) through textual feature-based and featureless approaches.
Our models outperform the baseline, achieving state-of-the-art performance.
Our analyses portray the similarities and differences of domains, projects, datasets, models, and features for predicting the usefulness of CR comments.
arXiv Detail & Related papers (2025-01-12T07:22:13Z)
- SEAGraph: Unveiling the Whole Story of Paper Review Comments [26.39115060771725]
In the traditional peer review process, authors often receive vague or insufficiently detailed feedback.
This raises the critical question of how to enhance authors' comprehension of review comments.
We present SEAGraph, a novel framework developed to clarify review comments by uncovering the underlying intentions.
arXiv Detail & Related papers (2024-12-16T16:24:36Z)
- Evaluating D-MERIT of Partial-annotation on Information Retrieval [77.44452769932676]
Retrieval models are often evaluated on partially-annotated datasets.
We show that using partially-annotated datasets in evaluation can paint a distorted picture.
arXiv Detail & Related papers (2024-06-23T08:24:08Z)
- Incremental Extractive Opinion Summarization Using Cover Trees [81.59625423421355]
In online marketplaces, user reviews accumulate over time, and opinion summaries need to be updated periodically.
In this work, we study the task of extractive opinion summarization in an incremental setting.
We present an efficient algorithm for accurately computing the CentroidRank summaries in an incremental setting.
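As a rough, hypothetical sketch of the underlying idea only (not the paper's cover-tree algorithm or its incremental update), centroid-based extractive scoring ranks each sentence by cosine similarity to the mean of all sentence embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def centroid_rank(vectors, k):
    """Return indices of the k sentence vectors closest to the corpus centroid."""
    dim = len(vectors[0])
    n = len(vectors)
    centroid = [sum(v[i] for v in vectors) / n for i in range(dim)]
    order = sorted(range(n), key=lambda i: cosine(vectors[i], centroid), reverse=True)
    return order[:k]

# Toy 2-d "sentence embeddings": the first two sentences agree, the third is an outlier.
reviews = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(centroid_rank(reviews, 1))  # [1]
```

The incremental setting studied in the paper concerns updating such a ranking efficiently as new reviews arrive, rather than recomputing it from scratch.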
arXiv Detail & Related papers (2024-01-16T02:00:17Z)
- ViCo: Engaging Video Comment Generation with Human Preference Rewards [68.50351391812723]
We propose ViCo with three novel designs to tackle the challenges for generating engaging Video Comments.
To quantify the engagement of comments, we utilize the number of "likes" each comment receives as a proxy of human preference.
To automatically evaluate the engagement of comments, we train a reward model to align its judgment to the above proxy.
arXiv Detail & Related papers (2023-08-22T04:01:01Z)
- Exploring the Advances in Identifying Useful Code Review Comments [0.0]
This paper reflects the evolution of research on the usefulness of code review comments.
It examines papers that define the usefulness of code review comments, mine and annotate datasets, study developers' perceptions, analyze factors from different aspects, and use machine learning classifiers to automatically predict the usefulness of code review comments.
arXiv Detail & Related papers (2023-07-03T00:41:20Z)
- On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction [5.381004207943597]
Existing methods for identifying helpful reviews focus primarily on the review text and ignore two key factors: (1) who posts the reviews and (2) when the reviews are posted.
We introduce a dataset and develop a model that integrates the reviewer's expertise, derived from the past review history, and the temporal dynamics of the reviews to automatically assess review helpfulness.
arXiv Detail & Related papers (2023-02-22T23:41:22Z)
- Towards Personalized Review Summarization by Modeling Historical Reviews from Customer and Product Separately [59.61932899841944]
Review summarization is a non-trivial task that aims to summarize the main idea of a product review on an e-commerce website.
We propose the Heterogeneous Historical Review aware Review Summarization Model (HHRRS).
We employ a multi-task framework that conducts the review sentiment classification and summarization jointly.
arXiv Detail & Related papers (2023-01-27T12:32:55Z)
- Abstractive Opinion Tagging [65.47649273721679]
In e-commerce, opinion tags refer to a ranked list of tags provided by the e-commerce platform that reflect characteristics of reviews of an item.
Current mechanisms for generating opinion tags rely on manual labelling methods, which are time-consuming and ineffective.
We propose an abstractive opinion tagging framework, named AOT-Net, to generate a ranked list of opinion tags given a large number of reviews.
arXiv Detail & Related papers (2021-01-18T05:08:15Z)
- ReviewRobot: Explainable Paper Review Generation based on Knowledge Synthesis [62.76038841302741]
We build a novel ReviewRobot to automatically assign a review score and write comments for multiple categories such as novelty and meaningful comparison.
Experimental results show that our review score predictor reaches 71.4%-100% accuracy.
Human assessment by domain experts shows that 41.7%-70.5% of the comments generated by ReviewRobot are valid and constructive, and better than human-written ones 20% of the time.
arXiv Detail & Related papers (2020-10-13T02:17:58Z)
- How Useful are Reviews for Recommendation? A Critical Review and Potential Improvements [8.471274313213092]
We investigate a growing body of work that seeks to improve recommender systems through the use of review text.
Our initial findings reveal several discrepancies in reported results, partly due to copying results across papers despite changes in experimental settings or data pre-processing.
Further investigation calls for discussion on a much larger problem about the "importance" of user reviews for recommendation.
arXiv Detail & Related papers (2020-05-25T16:30:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.