Decentralized Peer Review in Open Science: A Mechanism Proposal
- URL: http://arxiv.org/abs/2404.18148v1
- Date: Sun, 28 Apr 2024 11:42:54 GMT
- Title: Decentralized Peer Review in Open Science: A Mechanism Proposal
- Authors: Andreas Finke, Thomas Hensel
- Abstract summary: We propose a community-owned and -governed system for peer review.
The system aims to increase quality and speed of peer review while lowering the chance and impact of erroneous judgements.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Peer review is a laborious, yet essential, part of academic publishing with crucial impact on the scientific endeavor. The current lack of incentives and transparency harms the credibility of this process. Researchers are neither rewarded for superior nor penalized for bad reviews. Additionally, confidential reports cause a loss of insights and make the review process vulnerable to scientific misconduct. We propose a community-owned and -governed system that 1) remunerates reviewers for their efforts, 2) publishes the (anonymized) reports for scrutiny by the community, 3) tracks reputation of reviewers and 4) provides digital certificates. Automated by transparent smart-contract blockchain technology, the system aims to increase quality and speed of peer review while lowering the chance and impact of erroneous judgements.
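The four functions of the proposed system can be sketched in code. This is a minimal illustrative sketch only, assuming a simple ledger model; the names (`ReviewLedger`, `submit_review`, the 10% payout rate) are hypothetical and do not reflect the authors' actual smart-contract design.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str   # pseudonymous reviewer identifier
    report: str     # anonymized report, to be published for scrutiny
    quality: float  # community-assessed quality in [0, 1]

@dataclass
class ReviewLedger:
    reward_pool: float                              # funds for remunerating reviewers
    reputation: dict = field(default_factory=dict)  # reviewer -> cumulative score
    published: list = field(default_factory=list)   # public anonymized reports
    certificates: list = field(default_factory=list)

    def submit_review(self, review: Review) -> float:
        """Publish the report, pay the reviewer, update reputation, certify."""
        self.published.append(review.report)              # (2) public report
        payout = self.reward_pool * review.quality * 0.1  # (1) remuneration
        self.reward_pool -= payout
        rep = self.reputation.get(review.reviewer, 0.0)
        self.reputation[review.reviewer] = rep + review.quality        # (3) reputation
        self.certificates.append((review.reviewer, review.report))    # (4) certificate
        return payout

ledger = ReviewLedger(reward_pool=100.0)
paid = ledger.submit_review(Review("rev-1", "Sound methods; minor issues.", 0.8))
```

In the actual proposal these state transitions would be executed by a transparent smart contract rather than a trusted server, so every payout and reputation update is publicly auditable.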
Related papers
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z) - How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions? [87.00095008723181]
Authors have roughly a three-fold overestimate of the acceptance probability of their papers.
Female authors exhibit a marginally higher (statistically significant) miscalibration than male authors.
At least 30% of respondents of both accepted and rejected papers said that their perception of their own paper improved after the review process.
arXiv Detail & Related papers (2022-11-22T15:59:30Z) - Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast it as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z) - Auctions and Peer Prediction for Academic Peer Review [11.413240461538589]
We propose a novel peer prediction mechanism (H-DIPP) building on recent work in the information elicitation literature.
The revenue raised in the submission stage auction is used to pay reviewers based on the quality of their reviews in the reviewing stage.
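The funding flow described above (auction revenue funding quality-weighted reviewer payments) can be sketched as follows. This is an illustrative simplification with hypothetical names, not the actual H-DIPP mechanism, which involves a full peer-prediction scoring rule.

```python
def pay_reviewers(auction_revenue: float, quality_scores: dict) -> dict:
    """Split submission-stage auction revenue among reviewers
    in proportion to their assessed review quality."""
    total = sum(quality_scores.values())
    if total == 0:
        return {r: 0.0 for r in quality_scores}
    return {r: auction_revenue * q / total for r, q in quality_scores.items()}

# Two reviewers share 90 units of auction revenue, weighted 2:1 by quality.
payments = pay_reviewers(90.0, {"alice": 2.0, "bob": 1.0})
```

The key design point is that the same transaction funds both stages: authors' bids in the submission auction create the budget from which review effort is rewarded.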
arXiv Detail & Related papers (2021-08-27T23:47:15Z) - Making Paper Reviewing Robust to Bid Manipulation Attacks [44.34601846490532]
Anecdotal evidence suggests that some reviewers bid on papers by "friends" or colluding authors.
We develop a novel approach for paper bidding and assignment that is much more robust against such attacks.
In addition to being more robust, the quality of our paper review assignments is comparable to that of current, non-robust assignment approaches.
arXiv Detail & Related papers (2021-02-09T21:24:16Z) - Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z) - Ants-Review: a Protocol for Incentivized Open Peer-Reviews on Ethereum [0.0]
We propose a blockchain-based incentive system that rewards scientists for peer-reviewing other scientists' work.
Authors can issue a bounty for open anonymous peer-review on smart contracts called Ants-Review.
If the requirements are met, peer reviews are accepted and paid by the approver in proportion to their quality.
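The bounty flow above can be sketched as a simple escrow. The class and method names here are assumptions for illustration, not the actual Ants-Review contract interface.

```python
class AntsReviewBounty:
    """Author locks a bounty; an approver accepts qualifying reviews;
    settlement pays accepted reviewers in proportion to quality."""

    def __init__(self, author: str, amount: float):
        self.author = author
        self.escrow = amount   # bounty locked by the author
        self.accepted = {}     # reviewer -> quality score

    def approve(self, reviewer: str, quality: float, meets_requirements: bool):
        """Approver accepts a review only if the stated requirements are met."""
        if meets_requirements:
            self.accepted[reviewer] = quality

    def settle(self) -> dict:
        """Release the escrowed bounty, split proportionally to quality."""
        total = sum(self.accepted.values())
        if total == 0:
            return {}
        payouts = {r: self.escrow * q / total for r, q in self.accepted.items()}
        self.escrow = 0.0
        return payouts

bounty = AntsReviewBounty("author-1", 50.0)
bounty.approve("rev-a", 3.0, meets_requirements=True)
bounty.approve("rev-b", 1.0, meets_requirements=True)
bounty.approve("rev-c", 2.0, meets_requirements=False)  # rejected, unpaid
payouts = bounty.settle()
```

Escrowing the bounty up front is what lets reviewers remain anonymous yet still trust they will be paid once the approver accepts their review.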
arXiv Detail & Related papers (2021-01-22T23:32:41Z) - Prior and Prejudice: The Novice Reviewers' Bias against Resubmissions in Conference Peer Review [35.24369486197371]
Modern machine learning and computer science conferences are experiencing a surge in the number of submissions that challenges the quality of peer review.
Several conferences have started encouraging or even requiring authors to declare the previous submission history of their papers.
We investigate whether reviewers exhibit a bias caused by the knowledge that the submission under review was previously rejected at a similar venue.
arXiv Detail & Related papers (2020-11-30T09:35:37Z) - What Can We Do to Improve Peer Review in NLP? [69.11622020605431]
We argue that part of the problem is that reviewers and area chairs face a poorly defined task, forcing apples-to-oranges comparisons.
There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
arXiv Detail & Related papers (2020-10-08T09:32:21Z) - Understanding Peer Review of Software Engineering Papers [5.744593856232663]
We aim to understand how reviewers, including those who have won awards for reviewing, perform their reviews of software engineering papers.
The most important features of papers that result in positive reviews are clear and supported validation, an interesting problem, and novelty.
Authors should make the contribution of the work very clear in their paper.
arXiv Detail & Related papers (2020-09-02T17:31:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.