Understanding and Supporting Peer Review Using AI-reframed Positive Summary
- URL: http://arxiv.org/abs/2503.10264v1
- Date: Thu, 13 Mar 2025 11:22:12 GMT
- Title: Understanding and Supporting Peer Review Using AI-reframed Positive Summary
- Authors: Chi-Lan Yang, Alarith Uhde, Naomi Yamashita, Hideaki Kuzuoka,
- Abstract summary: This study explored the impact of appending an automatically generated positive summary to the peer reviews of a writing task. We found that adding an AI-reframed positive summary to otherwise harsh feedback increased authors' critique acceptance. We discuss the implications of using AI in peer feedback, focusing on how it can influence critique acceptance and support research communities.
- Score: 18.686807993563168
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: While peer review enhances writing and research quality, harsh feedback can frustrate and demotivate authors. Hence, it is essential to explore how critiques should be delivered to motivate authors and enable them to keep iterating their work. In this study, we explored the impact of appending an automatically generated positive summary to the peer reviews of a writing task, alongside varying levels of overall evaluations (high vs. low), on authors' feedback reception, revision outcomes, and motivation to revise. Through a 2x2 online experiment with 137 participants, we found that adding an AI-reframed positive summary to otherwise harsh feedback increased authors' critique acceptance, whereas low overall evaluations of their work led to increased revision efforts. We discuss the implications of using AI in peer feedback, focusing on how AI-driven critiques can influence critique acceptance and support research communities in fostering productive and friendly peer feedback practices.
Related papers
- LazyReview A Dataset for Uncovering Lazy Thinking in NLP Peer Reviews [74.87393214734114]
This work introduces LazyReview, a dataset of peer-review sentences annotated with fine-grained lazy thinking categories.
Large Language Models (LLMs) struggle to detect these instances in a zero-shot setting.
Instruction-based fine-tuning on our dataset significantly boosts performance by 10-20 points.
arXiv Detail & Related papers (2025-04-15T10:07:33Z) - Evaluating Trust in AI, Human, and Co-produced Feedback Among Undergraduate Students [2.935250567679577]
Students generally preferred AI and co-produced feedback over human feedback in terms of perceived usefulness and objectivity.
Male students consistently rated all feedback types as less valuable than female and non-binary students did.
These insights inform evidence-based guidelines for integrating AI into higher education feedback systems.
arXiv Detail & Related papers (2025-04-15T08:06:36Z) - Can LLM feedback enhance review quality? A randomized study of 20K reviews at ICLR 2025 [115.86204862475864]
Review Feedback Agent provides automated feedback on vague comments, content misunderstandings, and unprofessional remarks to reviewers.
It was implemented at ICLR 2025 as a large randomized control study.
27% of reviewers who received feedback updated their reviews, and over 12,000 feedback suggestions from the agent were incorporated by those reviewers.
arXiv Detail & Related papers (2025-04-13T22:01:25Z) - Identifying Aspects in Peer Reviews [61.374437855024844]
We develop a data-driven schema for deriving fine-grained aspects from a corpus of peer reviews.
We introduce a dataset of peer reviews augmented with aspects and show how it can be used for community-level review analysis.
arXiv Detail & Related papers (2025-04-09T14:14:42Z) - ReviewEval: An Evaluation Framework for AI-Generated Reviews [9.35023998408983]
This research introduces a comprehensive evaluation framework for AI-generated reviews. It measures alignment with human evaluations, verifies factual accuracy, assesses analytical depth, and identifies actionable insights. Our framework establishes standardized metrics for evaluating AI-based review systems.
arXiv Detail & Related papers (2025-02-17T12:22:11Z) - SEAGraph: Unveiling the Whole Story of Paper Review Comments [26.39115060771725]
In the traditional peer review process, authors often receive vague or insufficiently detailed feedback. This raises the critical question of how to enhance authors' comprehension of review comments. We present SEAGraph, a novel framework developed to clarify review comments by uncovering the underlying intentions.
arXiv Detail & Related papers (2024-12-16T16:24:36Z) - Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML).
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings.
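At its core, that calibration can be read as isotonic regression: adjust the raw review scores to the closest score vector (in squared error) that is monotone with respect to the author's ranking. The following is a minimal Python sketch of this idea using the pool-adjacent-violators algorithm; the function name, data layout, and example values are illustrative assumptions, not taken from the ICML study.

```python
import numpy as np

def isotonic_calibrate(raw_scores, author_ranking):
    """Calibrate raw review scores so they respect an author-provided ranking.

    raw_scores: per-paper review score (higher = better), in submission order.
    author_ranking: paper indices ordered from the author's best to worst.
    Returns calibrated scores that are non-increasing along the ranking and
    as close as possible to the raw scores in squared error.
    """
    # Arrange scores from the author's claimed best paper to their worst.
    ordered = [float(raw_scores[i]) for i in author_ranking]

    # Pool-adjacent-violators: enforce a non-increasing sequence.
    blocks = []  # each block is [pooled_value, weight]
    for v in ordered:
        blocks.append([v, 1.0])
        # Merge neighbouring blocks whenever the sequence would increase.
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])

    # Expand pooled blocks back into one calibrated score per paper.
    calibrated_ordered = np.concatenate(
        [np.full(int(w), v) for v, w in blocks]
    )

    # Return the calibrated scores in the original submission order.
    calibrated = np.empty(len(raw_scores))
    for position, paper_idx in enumerate(author_ranking):
        calibrated[paper_idx] = calibrated_ordered[position]
    return calibrated

# Example: scores [6.0, 7.0, 3.0], but the author ranks paper 0 above paper 1.
# Papers 0 and 1 get pooled to 6.5; paper 2 stays at 3.0.
print(isotonic_calibrate([6.0, 7.0, 3.0], author_ranking=[0, 1, 2]))
# -> [6.5 6.5 3. ]
```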
arXiv Detail & Related papers (2024-08-24T01:51:23Z) - Exploring LLM Prompting Strategies for Joint Essay Scoring and Feedback Generation [13.854903594424876]
Large language models (LLMs) have demonstrated strong performance in generating coherent and contextually relevant text.
This work explores several prompting strategies for LLM-based zero-shot and few-shot generation of essay feedback.
Inspired by Chain-of-Thought prompting, we study how and to what extent automated essay scoring (AES) can benefit the quality of generated feedback.
arXiv Detail & Related papers (2024-04-24T12:48:06Z) - Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [55.33653554387953]
Research in Pattern Analysis and Machine Intelligence (PAMI) has led to numerous literature reviews aimed at collecting and organizing fragmented information. This paper presents a thorough analysis of these literature reviews within the PAMI field. We address three core research questions: (1) What are the prevalent structural and statistical characteristics of PAMI literature reviews? (2) What strategies can researchers employ to efficiently navigate the growing corpus of reviews? (3) What are the advantages and limitations of AI-generated reviews compared to human-authored ones?
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast it as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z) - Auctions and Peer Prediction for Academic Peer Review [11.413240461538589]
We propose a novel peer prediction mechanism (H-DIPP) building on recent work in the information elicitation literature.
The revenue raised in the submission stage auction is used to pay reviewers based on the quality of their reviews in the reviewing stage.
arXiv Detail & Related papers (2021-08-27T23:47:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.