Ants-Review: a Protocol for Incentivized Open Peer-Reviews on Ethereum
- URL: http://arxiv.org/abs/2101.09378v1
- Date: Fri, 22 Jan 2021 23:32:41 GMT
- Title: Ants-Review: a Protocol for Incentivized Open Peer-Reviews on Ethereum
- Authors: Bianca Trovò and Nazzareno Massari
- Abstract summary: We propose a blockchain-based incentive system that rewards scientists for peer-reviewing other scientists' work.
Authors can issue a bounty for open anonymous peer-review through smart contracts called Ants-Review.
If requirements are met, peer-reviews are accepted and paid by the approver proportionally to their quality.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Peer-review is a necessary quality-control step for scientific publications, but it lacks proper incentives. The process, though very costly in time and intellectual investment, is neither remunerated by journals nor openly recognized by the academic community as a relevant scientific output for a researcher. As a result, scientific dissemination suffers in timeliness, quality, and fairness. To address this, we propose a blockchain-based incentive system that rewards scientists for peer-reviewing other scientists' work and that builds up trust and reputation. We designed a privacy-oriented protocol of smart contracts called Ants-Review that allows authors to issue a bounty for open anonymous peer-reviews on Ethereum. If the requirements are met, peer-reviews are accepted and paid by the approver proportionally to their assessed quality. To promote ethical behavior and inclusiveness, the system implements a gamified mechanism that allows the whole community to evaluate the peer-reviews and vote for the best ones.
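The abstract outlines a bounty lifecycle: an author escrows funds, reviewers submit anonymous reviews, an approver accepts and pays them in proportion to assessed quality, and a community vote is layered on top. The Solidity sketch below is a minimal, hypothetical illustration of that flow under those assumptions; the contract, function, and field names (AntsReviewSketch, issueBounty, submitReview, acceptReview, voteForReview) are invented for exposition and are not the paper's actual API.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Minimal sketch of an Ants-Review-style bounty flow (illustrative only).
contract AntsReviewSketch {
    struct Bounty {
        address issuer;    // author who funds the bounty
        address approver;  // party entitled to accept and pay reviews
        uint256 balance;   // ETH escrowed for accepted reviews
        uint256 deadline;  // unix time after which submissions close
    }

    struct Review {
        address payable reviewer;
        string reviewHash; // e.g. IPFS hash of the anonymized review text
        bool accepted;
    }

    Bounty[] public bounties;
    mapping(uint256 => Review[]) private _reviews;
    // bountyId => reviewId => community vote tally (gamified evaluation)
    mapping(uint256 => mapping(uint256 => uint256)) public votes;

    // An author escrows ETH as a bounty for open anonymous peer-review.
    function issueBounty(address approver, uint256 deadline)
        external
        payable
        returns (uint256 bountyId)
    {
        require(msg.value > 0, "no reward escrowed");
        bounties.push(Bounty(msg.sender, approver, msg.value, deadline));
        return bounties.length - 1;
    }

    // A reviewer submits a pointer to an off-chain, anonymized review.
    function submitReview(uint256 bountyId, string calldata reviewHash) external {
        require(block.timestamp < bounties[bountyId].deadline, "submissions closed");
        _reviews[bountyId].push(Review(payable(msg.sender), reviewHash, false));
    }

    // The approver accepts a review and pays a quality-weighted share of
    // the escrowed balance; `amount` encodes the assessed quality.
    function acceptReview(uint256 bountyId, uint256 reviewId, uint256 amount) external {
        Bounty storage b = bounties[bountyId];
        require(msg.sender == b.approver, "not the approver");
        Review storage r = _reviews[bountyId][reviewId];
        require(!r.accepted, "already accepted");
        require(amount <= b.balance, "exceeds escrow");
        r.accepted = true;
        b.balance -= amount;
        r.reviewer.transfer(amount);
    }

    // Anyone can upvote a review. Note: no sybil resistance here; a real
    // system would weight votes, e.g. by a reputation or token balance.
    function voteForReview(uint256 bountyId, uint256 reviewId) external {
        require(reviewId < _reviews[bountyId].length, "no such review");
        votes[bountyId][reviewId] += 1;
    }
}
```

Even this toy version shows why the paper stresses a privacy-oriented design: review content lives off-chain behind a hash, preserving reviewer anonymity, while payments and votes settle transparently on-chain.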
Related papers
- BeerReview: A Blockchain-enabled Peer Review Platform [9.059774441296247]
BeerReview is a blockchain-enabled peer review platform.
It offers a robust solution, enabling experts and scholars to participate actively in the review process without concerns about plagiarism or security threats.
arXiv Detail & Related papers (2024-05-30T16:19:13Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- Decentralized Peer Review in Open Science: A Mechanism Proposal [0.0]
We propose a community-owned and -governed system for peer review.
The system aims to increase quality and speed of peer review while lowering the chance and impact of erroneous judgements.
arXiv Detail & Related papers (2024-04-28T11:42:54Z)
- ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models [56.08917291606421]
ResearchAgent is a large language model-powered research idea writing agent.
It generates problems, methods, and experiment designs while iteratively refining them based on scientific literature.
We experimentally validate our ResearchAgent on scientific publications across multiple disciplines.
arXiv Detail & Related papers (2024-04-11T13:36:29Z)
- When Reviewers Lock Horn: Finding Disagreement in Scientific Peer Reviews [24.875901048855077]
We introduce a novel task of automatically identifying contradictions among reviewers on a given article.
To the best of our knowledge, this is the first attempt to identify disagreements among peer reviewers automatically.
arXiv Detail & Related papers (2023-10-28T11:57:51Z)
- A Critical Examination of the Ethics of AI-Mediated Peer Review [0.0]
Recent advancements in artificial intelligence (AI) systems offer promise and peril for scholarly peer review.
Human peer review systems are also fraught with related problems, such as biases, abuses, and a lack of transparency.
The legitimacy of AI-driven peer review hinges on the alignment with the scientific ethos.
arXiv Detail & Related papers (2023-09-02T18:14:10Z)
- How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions? [87.00095008723181]
Authors have roughly a three-fold overestimate of the acceptance probability of their papers.
Female authors exhibit a marginally higher (statistically significant) miscalibration than male authors.
At least 30% of respondents of both accepted and rejected papers said that their perception of their own paper improved after the review process.
arXiv Detail & Related papers (2022-11-22T15:59:30Z)
- Yes-Yes-Yes: Donation-based Peer Reviewing Data Collection for ACL Rolling Review and Beyond [58.71736531356398]
We present an in-depth discussion of peer reviewing data, outline the ethical and legal desiderata for peer reviewing data collection, and propose the first continuous, donation-based data collection workflow.
We report on the ongoing implementation of this workflow at the ACL Rolling Review and deliver the first insights obtained with the newly collected data.
arXiv Detail & Related papers (2022-01-27T11:02:43Z)
- Auctions and Peer Prediction for Academic Peer Review [11.413240461538589]
We propose a novel peer prediction mechanism (H-DIPP) building on recent work in the information elicitation literature.
The revenue raised in the submission stage auction is used to pay reviewers based on the quality of their reviews in the reviewing stage.
arXiv Detail & Related papers (2021-08-27T23:47:15Z)
- A Measure of Research Taste [91.3755431537592]
We present a citation-based measure that rewards both productivity and taste.
The presented measure, CAP, balances the impact of publications and their quantity.
We analyze the characteristics of CAP for highly-cited researchers in biology, computer science, economics, and physics.
arXiv Detail & Related papers (2021-05-17T18:01:47Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.