What Can We Do to Improve Peer Review in NLP?
- URL: http://arxiv.org/abs/2010.03863v1
- Date: Thu, 8 Oct 2020 09:32:21 GMT
- Title: What Can We Do to Improve Peer Review in NLP?
- Authors: Anna Rogers, Isabelle Augenstein
- Abstract summary: We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons.
There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
- Score: 69.11622020605431
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Peer review is our best tool for judging the quality of conference
submissions, but it is becoming increasingly spurious. We argue that a part of
the problem is that the reviewers and area chairs face a poorly defined task
forcing apples-to-oranges comparisons. There are several potential ways
forward, but the key difficulty is creating the incentives and mechanisms for
their consistent implementation in the NLP community.
Related papers
- Group Fairness in Peer Review [44.580732477017904]
This paper introduces a notion of group fairness, called the core, which requires that every possible community (subset of researchers) be treated in a way that prevents them from unilaterally benefiting from withdrawing from a large conference.
We study a simple peer review model, prove that it always admits a reviewing assignment in the core, and design an efficient algorithm to find one such assignment.
arXiv Detail & Related papers (2024-10-04T14:48:10Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- When Reviewers Lock Horn: Finding Disagreement in Scientific Peer Reviews [24.875901048855077]
We introduce a novel task of automatically identifying contradictions among reviewers on a given article.
To the best of our knowledge, we make the first attempt to identify disagreements among peer reviewers automatically.
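As a rough illustration of how pairwise disagreement between two reviews could be surfaced with an off-the-shelf NLI model (a generic sketch, not the paper's method; the model choice and the sentence-pairing strategy are assumptions):

```python
# Generic sketch: flag potentially contradictory claims between two reviews by
# scoring sentence pairs with a public NLI checkpoint (roberta-large-mnli).
# This illustrates the task only; it is not the approach proposed in the paper.
from itertools import product
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

review_a = ["The proposed method is clearly novel.",
            "The experiments are thorough."]
review_b = ["The approach offers little novelty over prior work.",
            "The evaluation covers a reasonable set of baselines."]

for sent_a, sent_b in product(review_a, review_b):
    pred = nli([{"text": sent_a, "text_pair": sent_b}])[0]
    if pred["label"] == "CONTRADICTION" and pred["score"] > 0.9:
        print(f"possible disagreement ({pred['score']:.2f}):\n  A: {sent_a}\n  B: {sent_b}")
```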
arXiv Detail & Related papers (2023-10-28T11:57:51Z)
- PCL: Peer-Contrastive Learning with Diverse Augmentations for Unsupervised Sentence Embeddings [69.87899694963251]
We propose a novel Peer-Contrastive Learning (PCL) with diverse augmentations.
PCL constructs diverse contrastive positives and negatives at the group level for unsupervised sentence embeddings.
PCL can perform peer-positive contrast as well as peer-network cooperation, which offers an inherent anti-bias ability.
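A minimal numerical sketch of the general idea of contrasting an anchor against a group of augmented positives (the loss below is a standard multi-positive InfoNCE written for illustration, not PCL's actual objective):

```python
# Illustrative multi-positive contrastive loss: each anchor has several augmented
# "peer" views as positives, and the views of all other anchors act as negatives.
import numpy as np

def multi_positive_info_nce(anchors, positives, temperature=0.05):
    """anchors: (N, d); positives: (N, K, d), K augmented views per anchor."""
    n, k, _ = positives.shape
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=2, keepdims=True)
    sims = np.einsum("nd,mkd->nmk", a, p) / temperature          # (N, N, K) similarities
    log_z = np.log(np.exp(sims.reshape(n, n * k)).sum(axis=1))   # partition term per anchor
    pos = sims[np.arange(n), np.arange(n), :]                    # own views, (N, K)
    return (log_z[:, None] - pos).mean()                         # -log softmax, averaged

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 32))
positives = anchors[:, None, :] + 0.1 * rng.normal(size=(8, 4, 32))  # toy "augmentations"
print(multi_positive_info_nce(anchors, positives))
```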
arXiv Detail & Related papers (2022-01-28T13:02:41Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast final decision making in peer review as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
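As a generic illustration of turning reviewer scores into a ranking via preference learning (a Bradley-Terry sketch on toy data, not the paper's evaluation framework):

```python
# Generic sketch: rank papers from pairwise preferences with a Bradley-Terry model
# fit by the classic MM updates. The toy reviewer scores below are illustrative.
import numpy as np

def bradley_terry(comparisons, n_items, n_iter=200):
    """comparisons: list of (winner, loser) index pairs. Returns item strengths."""
    strengths = np.ones(n_items)
    for _ in range(n_iter):
        wins = np.zeros(n_items)
        denom = np.zeros(n_items)
        for w, l in comparisons:
            wins[w] += 1
            denom[w] += 1.0 / (strengths[w] + strengths[l])
            denom[l] += 1.0 / (strengths[w] + strengths[l])
        strengths = np.where(denom > 0, wins / np.maximum(denom, 1e-12), strengths)
        strengths /= strengths.sum()          # fix the arbitrary scale
    return strengths

# Whenever a reviewer scored paper i above paper j, record the preference (i, j).
reviewer_scores = {"r1": {0: 4, 1: 2, 2: 3}, "r2": {0: 3, 1: 2}, "r3": {1: 5, 2: 4}}
pairs = [(i, j) for scores in reviewer_scores.values()
         for i in scores for j in scores if scores[i] > scores[j]]

strengths = bradley_terry(pairs, n_items=3)
print("ranking (best first):", np.argsort(-strengths))
```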
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Peer Selection with Noisy Assessments [43.307040330622186]
We extend PeerNomination, the most accurate peer reviewing algorithm to date, into WeightedPeerNomination.
We show analytically that a weighting scheme can improve the overall accuracy of the selection significantly.
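A toy sketch of why weighting noisy assessors can help selection; the agreement-based weights below are an illustrative assumption, not the WeightedPeerNomination scheme itself:

```python
# Generic sketch: select the top-k agents from noisy peer assessments, weighting
# each reviewer by how well their scores agree with the per-agent consensus.
import numpy as np

rng = np.random.default_rng(1)
n_agents, k = 8, 3
quality = rng.random(n_agents)                  # hidden ground-truth quality
noise_level = rng.uniform(0.02, 0.5, n_agents)  # some reviewers are much noisier

# scores[i, j]: reviewer i's assessment of agent j (no self-assessment).
scores = quality[None, :] + noise_level[:, None] * rng.normal(size=(n_agents, n_agents))
np.fill_diagonal(scores, np.nan)

# Weight each reviewer by agreement with the per-agent median of all reviewers.
consensus = np.nanmedian(scores, axis=0)
deviation = np.nanmean(np.abs(scores - consensus[None, :]), axis=1)
weights = 1.0 / (deviation + 1e-6)
weights /= weights.sum()

# Weighted aggregate score per agent, then select the top k.
weighted = (np.nansum(weights[:, None] * scores, axis=0)
            / np.nansum(weights[:, None] * ~np.isnan(scores), axis=0))
selected = np.argsort(-weighted)[:k]
print("selected:", selected, "true top-k:", np.argsort(-quality)[:k])
```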
arXiv Detail & Related papers (2021-07-21T14:47:11Z)
- Confidence-Budget Matching for Sequential Budgeted Learning [69.77435313099366]
We formalize decision-making problems with a querying budget.
We consider multi-armed bandits, linear bandits, and reinforcement learning problems.
We show that CBM-based algorithms perform well in the presence of adversity.
arXiv Detail & Related papers (2021-02-05T19:56:31Z)
- Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments [96.114824979298]
Three important challenges in conference peer review are reviewers maliciously attempting to get assigned to certain papers, "torpedo reviewing", and reviewer de-anonymization on release of assignment data.
We present a framework that brings all these challenges under a common umbrella and a (randomized) algorithm for reviewer assignment.
Our algorithms can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the total optimal similarity.
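A minimal sketch of the underlying idea: maximize total reviewer-paper similarity subject to a cap q on every individual assignment probability. The toy data, loads, and the scipy LP formulation below are illustrative assumptions; the paper's algorithm additionally samples concrete assignments whose marginals match the fractional solution.

```python
# Toy sketch of probability-capped reviewer assignment: maximize total similarity
# while keeping every individual assignment probability at or below q, so no
# reviewer can guarantee being assigned a target paper.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_rev, n_pap = 5, 3
S = rng.random((n_rev, n_pap))   # reviewer-paper similarity scores (toy values)
q = 0.5                          # cap on any single assignment probability
revs_per_paper = 2               # reviews each paper must receive in expectation
papers_per_rev = 2               # maximum expected load per reviewer

# Variables: fractional assignment x[r, p] in [0, q], flattened row-major.
c = -S.flatten()                 # linprog minimizes, so negate similarity

# Each paper p receives exactly revs_per_paper reviews in expectation.
A_eq = np.zeros((n_pap, n_rev * n_pap))
for p in range(n_pap):
    A_eq[p, p::n_pap] = 1.0
b_eq = np.full(n_pap, float(revs_per_paper))

# Each reviewer r handles at most papers_per_rev papers in expectation.
A_ub = np.zeros((n_rev, n_rev * n_pap))
for r in range(n_rev):
    A_ub[r, r * n_pap:(r + 1) * n_pap] = 1.0
b_ub = np.full(n_rev, float(papers_per_rev))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, q)] * (n_rev * n_pap))
marginals = res.x.reshape(n_rev, n_pap)
print(np.round(marginals, 2))    # no entry exceeds q = 0.5
```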
arXiv Detail & Related papers (2020-06-29T23:55:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.