Understanding Peer Review of Software Engineering Papers
- URL: http://arxiv.org/abs/2009.01209v2
- Date: Thu, 17 Jun 2021 22:52:25 GMT
- Title: Understanding Peer Review of Software Engineering Papers
- Authors: Neil A. Ernst and Jeffrey C. Carver and Daniel Mendez and Marco
Torchiano
- Abstract summary: We aim to understand how reviewers, including those who have won awards for reviewing, perform their reviews of software engineering papers.
The most important features of papers that result in positive reviews are clear and supported validation, an interesting problem, and novelty.
Authors should make the contribution of the work very clear in their paper.
- Score: 5.744593856232663
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Peer review is a key activity intended to preserve the quality and integrity
of scientific publications. However, in practice it is far from perfect.
We aim to understand how reviewers, including those who have won awards
for reviewing, perform their reviews of software engineering papers to identify
both what makes a good reviewing approach and what makes a good paper.
We first conducted a series of in-person interviews with well-respected
reviewers in the software engineering field. Then, we used the results of those
interviews to develop a questionnaire used in an online survey and sent out to
reviewers from well-respected venues covering a number of software engineering
disciplines, some of whom had won awards for their reviewing efforts.
We analyzed the responses from the interviews and from 175 reviewers who
completed the online survey (including both reviewers who had won awards and
those who had not). We report several descriptive results, including: 45% of
award-winners review 20+ conference papers a year, while only 28% of
non-award winners review that many, and 88% of reviewers spend more than two
hours on a journal review. We also report qualitative results. For a good
review, the most important criteria were that it be factual and helpful,
ranked above others such as being detailed or kind. The most important features
of papers that result in positive reviews are clear and supported validation,
an interesting problem, and novelty. Conversely, negative reviews tend to
result from papers that have a mismatch between the method and the claims and
from those with overly grandiose claims.
The main recommendation for authors is to make the contribution of the work
very clear in their paper. In addition, reviewers viewed data availability
and its consistency as important.
Related papers
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
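The indicator definitions are not given in this summary; as a hedged sketch only, a common form of field normalization divides a paper's citation count by the mean citation count of its field-and-year cohort. The data layout below is an assumption, not the paper's actual metric:

```python
from collections import defaultdict

def field_normalized_scores(papers):
    """Hedged sketch of a field-normalized indicator, not the paper's
    actual metric: score = citations / mean citations of the paper's
    (field, year) cohort. `papers` is an assumed list of dicts with
    'id', 'field', 'year', and 'citations' keys."""
    totals = defaultdict(lambda: [0, 0])  # (field, year) -> [sum, count]
    for p in papers:
        key = (p["field"], p["year"])
        totals[key][0] += p["citations"]
        totals[key][1] += 1
    scores = {}
    for p in papers:
        total, count = totals[(p["field"], p["year"])]
        mean = total / count
        # A score above 1.0 means the paper is cited above its cohort average.
        scores[p["id"]] = p["citations"] / mean if mean > 0 else 0.0
    return scores
```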
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - Improving Code Reviewer Recommendation: Accuracy, Latency, Workload, and
Bystanders [6.538051328482194]
We build upon RevRecV1, the recommender that has been in production since 2018.
We find that reviewers were being assigned based on prior authorship of files.
Having an individual who is responsible for the review reduces the time taken for reviews by 11%.
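RevRecV1 itself is not described in this summary; a minimal hedged sketch of the authorship-based assignment mentioned above (all names and the data layout are assumptions) could look like:

```python
from collections import Counter

def recommend_reviewers(changed_files, authorship, top_k=3):
    """Hedged sketch, not RevRecV1: score each engineer by how many of
    the changed files they previously authored or modified.
    `authorship` maps a file path to a list of engineers (assumed format)."""
    scores = Counter()
    for path in changed_files:
        for engineer in authorship.get(path, []):
            scores[engineer] += 1  # one point per previously-touched file
    return [engineer for engineer, _ in scores.most_common(top_k)]

# Example: "ana" touched both changed files, so she ranks first.
authorship = {"app/api.py": ["ana", "bo"], "app/db.py": ["ana"]}
print(recommend_reviewers(["app/api.py", "app/db.py"], authorship))
```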
arXiv Detail & Related papers (2023-12-28T17:55:13Z) - Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers.
To mitigate miscalibration of these scores, conferences have started to ask
reviewers to additionally provide a ranking of the papers they have reviewed.
However, there is no standard procedure for using this ranking information,
and Area Chairs may use it in different ways.
We take a principled approach to integrate the ranking information into the scores.
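The paper's actual integration procedure is not reproduced here; one hedged sketch of the idea is to project a reviewer's scores onto the closest vector consistent with their stated ranking, a least-squares fit under order constraints solvable with pool-adjacent-violators:

```python
def reconcile_scores_with_ranking(scores, ranking):
    """Hedged sketch: given scores[p] for each paper p and `ranking`
    listing papers from best to worst, return the least-squares closest
    scores that are non-increasing along the ranking (pool-adjacent-
    violators on the reversed, worst-to-best order)."""
    ordered = [scores[p] for p in reversed(ranking)]  # worst -> best
    blocks = []  # each block: [mean, size]; merged until non-decreasing
    for value in ordered:
        blocks.append([value, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    fitted = [m for m, n in blocks for _ in range(n)]
    return dict(zip(reversed(ranking), fitted))

# A reviewer ranked p2 above p1 but scored p1 higher; the conflict is
# resolved by averaging: both papers end up at 6.5.
print(reconcile_scores_with_ranking({"p1": 7, "p2": 6}, ["p2", "p1"]))
```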
arXiv Detail & Related papers (2022-04-05T19:39:13Z) - Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast final decision-making as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
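As a hedged sketch of the general idea rather than the paper's framework, pairwise preferences derived from reviews can be fitted with a Bradley-Terry model whose estimated strengths induce a ranking:

```python
def bradley_terry_rank(papers, preferences, iters=200):
    """Hedged sketch: fit Bradley-Terry strengths from (winner, loser)
    preference pairs using the classic MM update, assuming
    P(i preferred over j) = s_i / (s_i + s_j), then rank by strength."""
    s = {p: 1.0 for p in papers}
    wins = {p: 0 for p in papers}
    for winner, _ in preferences:
        wins[winner] += 1
    for _ in range(iters):
        denom = {p: 0.0 for p in papers}
        for winner, loser in preferences:
            d = 1.0 / (s[winner] + s[loser])
            denom[winner] += d
            denom[loser] += d
        s = {p: (wins[p] / denom[p] if denom[p] else s[p]) for p in papers}
        z = sum(s.values())
        s = {p: v / z for p, v in s.items()}  # normalize each iteration
    return sorted(papers, key=lambda p: -s[p])

# p1 was preferred over p2 twice; p2 over p3 once -> ranking p1, p2, p3.
print(bradley_terry_rank(["p1", "p2", "p3"],
                         [("p1", "p2"), ("p1", "p2"), ("p2", "p3")]))
```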
arXiv Detail & Related papers (2021-09-02T19:41:47Z) - Making Paper Reviewing Robust to Bid Manipulation Attacks [44.34601846490532]
Anecdotal evidence suggests that some reviewers bid on papers by "friends" or colluding authors.
We develop a novel approach for paper bidding and assignment that is much more robust against such attacks.
In addition to being more robust, the quality of our paper review assignments is comparable to that of current, non-robust assignment approaches.
arXiv Detail & Related papers (2021-02-09T21:24:16Z) - Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
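Their targeted models and annotated dataset are not reproduced here; a hedged sketch of a first-pass review built with a generic off-the-shelf summarizer (the model choice and input format are assumptions) might look like:

```python
from transformers import pipeline  # pip install transformers

# Hypothetical stand-in for the paper's targeted summarization models.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def first_pass_review(sections):
    """Hedged sketch: turn each paper section into one review point.
    `sections` maps a section name to its text (assumed input format)."""
    points = []
    for name, text in sections.items():
        summary = summarizer(text[:3000], max_length=60, min_length=15,
                             do_sample=False, truncation=True)[0]["summary_text"]
        points.append(f"- {name}: {summary}")
    return "\n".join(points)
```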
arXiv Detail & Related papers (2021-01-30T07:16:53Z) - Prior and Prejudice: The Novice Reviewers' Bias against Resubmissions in
Conference Peer Review [35.24369486197371]
Modern machine learning and computer science conferences are experiencing a surge in the number of submissions that challenges the quality of peer review.
Several conferences have started encouraging or even requiring authors to declare the previous submission history of their papers.
We investigate whether reviewers exhibit a bias caused by the knowledge that the submission under review was previously rejected at a similar venue.
arXiv Detail & Related papers (2020-11-30T09:35:37Z) - ReviewRobot: Explainable Paper Review Generation based on Knowledge
Synthesis [62.76038841302741]
We build a novel ReviewRobot to automatically assign a review score and write comments for multiple categories such as novelty and meaningful comparison.
Experimental results show that our review score predictor reaches 71.4%-100% accuracy.
Human assessment by domain experts shows that 41.7%-70.5% of the comments generated by ReviewRobot are valid and constructive, and that they are better than human-written ones 20% of the time.
arXiv Detail & Related papers (2020-10-13T02:17:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.