State-of-Art-Reviewing: A Radical Proposal to Improve Scientific Publication
- URL: http://arxiv.org/abs/2003.14415v1
- Date: Tue, 31 Mar 2020 17:58:36 GMT
- Title: State-of-Art-Reviewing: A Radical Proposal to Improve Scientific Publication
- Authors: Samuel Albanie, Jaime Thewmore, Robert McCraith, Joao F. Henriques
- Abstract summary: State-Of-the-Art Review (SOAR) is a neoteric reviewing pipeline that serves as a 'plug-and-play' replacement for peer review.
At the heart of our approach is an interpretation of the review process as a multi-objective, massively distributed and extremely-high-latency optimisation.
- Score: 19.10668029301668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Peer review forms the backbone of modern scientific manuscript evaluation.
But after two hundred and eighty-nine years of egalitarian service to the
scientific community, does this protocol remain fit for purpose in 2020? In
this work, we answer this question in the negative (strong reject, high
confidence) and propose instead State-Of-the-Art Review (SOAR), a neoteric
reviewing pipeline that serves as a 'plug-and-play' replacement for peer
review. At the heart of our approach is an interpretation of the review process
as a multi-objective, massively distributed and extremely-high-latency
optimisation, which we scalarise and solve efficiently for PAC and CMT-optimal
solutions. We make the following contributions: (1) We propose a highly
scalable, fully automatic methodology for review, drawing inspiration from
best-practices from premier computer vision and machine learning conferences;
(2) We explore several instantiations of our approach and demonstrate that SOAR
can be used to both review prints and pre-review pre-prints; (3) We wander
listlessly in vain search of catharsis from our latest rounds of savage CVPR
rejections.
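The abstract frames reviewing as a multi-objective optimisation that is scalarised before solving. Purely as a hedged illustration of that framing (the criteria, weights, and decision threshold below are invented and are not the authors' actual SOAR pipeline), a weighted-sum scalarisation of per-criterion review scores might look like:

```python
# Hypothetical illustration of scalarising a multi-objective review:
# each criterion gets a weight, and the weighted sum yields one scalar
# score that can be thresholded into an accept/reject decision.
# Criteria, weights, and threshold are invented for this sketch.

CRITERIA_WEIGHTS = {
    "novelty": 0.3,
    "technical_soundness": 0.3,
    "clarity": 0.2,
    "significance": 0.2,
}

def scalarise(scores: dict[str, float]) -> float:
    """Collapse per-criterion scores (e.g. on a 1-10 scale) into one scalar."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

def decide(scores: dict[str, float], threshold: float = 6.0) -> str:
    return "accept" if scalarise(scores) >= threshold else "reject"

print(decide({"novelty": 7, "technical_soundness": 5, "clarity": 8, "significance": 6}))  # accept
```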
Related papers
- Deep Transfer Learning Based Peer Review Aggregation and Meta-review Generation for Scientific Articles [2.0778556166772986]
We address two peer review aggregation challenges: paper acceptance decision-making and meta-review generation.
First, we propose to automate acceptance decision prediction by applying traditional machine learning algorithms.
Second, for meta-review generation, we propose a transfer learning model based on the T5 model.
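A minimal sketch of what a T5-based meta-review generator could look like, using the Hugging Face transformers API; the checkpoint name, prompt format, and generation settings are assumptions rather than the paper's setup, which would additionally fine-tune the model on (reviews, meta-review) pairs:

```python
# Hedged sketch: summarising concatenated reviews with a pretrained T5
# checkpoint. The prompt format and generation parameters are illustrative
# assumptions; in practice the model would first be fine-tuned on
# (reviews, meta-review) training pairs.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

reviews = [
    "Review 1: The method is novel but the evaluation is limited.",
    "Review 2: Strong results, though clarity could be improved.",
]
prompt = "summarize: " + " ".join(reviews)

inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```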
arXiv Detail & Related papers (2024-10-05T15:40:37Z)
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML)
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings.
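A minimal sketch of the calibration step, assuming the generic isotonic-regression projection onto the author's ranking; the scores and ranking below are invented and this is not the conference's implementation:

```python
# Sketch of the Isotonic Mechanism's calibration step: project raw review
# scores onto the set of score vectors consistent with the author-provided
# ranking (best paper first), via isotonic regression. Example data is invented.
import numpy as np
from sklearn.isotonic import IsotonicRegression

raw_scores = np.array([6.8, 5.5, 4.0])   # mean review score per paper (papers 0, 1, 2)
author_ranking = [1, 0, 2]               # paper indices, author's best paper first

# Reorder scores by the author's ranking, then enforce a non-increasing fit.
ordered = raw_scores[author_ranking]
calibrated_ordered = IsotonicRegression(increasing=False).fit_transform(
    np.arange(len(ordered)), ordered
)

# Map calibrated scores back to the original paper order.
calibrated = np.empty_like(raw_scores)
calibrated[author_ranking] = calibrated_ordered
print(calibrated)  # [6.15 6.15 4.  ] -- scores pooled where they disagreed with the ranking
```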
arXiv Detail & Related papers (2024-08-24T01:51:23Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- Unveiling the Sentinels: Assessing AI Performance in Cybersecurity Peer Review [4.081120388114928]
In the field of cybersecurity, the practice of double-blind peer review is the de facto standard.
This paper touches on the holy grail of peer reviewing and aims to shed light on the performance of AI in reviewing for academic security conferences.
We investigate the predictability of reviewing outcomes by comparing the results obtained from human reviewers and machine-learning models.
arXiv Detail & Related papers (2023-09-11T13:51:40Z)
- Has the Machine Learning Review Process Become More Arbitrary as the Field Has Grown? The NeurIPS 2021 Consistency Experiment [86.77085171670323]
We present a larger-scale variant of the 2014 NeurIPS experiment in which 10% of conference submissions were reviewed by two independent committees to quantify the randomness in the review process.
We observe that the two committees disagree on their accept/reject recommendations for 23% of the papers and that, consistent with the results from 2014, approximately half of the list of accepted papers would change if the review process were randomly rerun.
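A rough back-of-the-envelope check of how that disagreement rate translates into turnover of the accepted list, assuming symmetric disagreement and an overall acceptance rate of about 25% (an assumption for illustration, not a figure from the paper):

```python
# Back-of-the-envelope: if two committees disagree on a fraction `d` of all
# papers, and disagreements split symmetrically (half are papers committee 1
# accepted and committee 2 rejected), then with acceptance rate `a` the share
# of committee 1's accepted list that would change is roughly (d / 2) / a.
# d is the reported 23% disagreement; a = 0.25 is an assumed acceptance rate.
d = 0.23
a = 0.25
turnover = (d / 2) / a
print(f"~{turnover:.0%} of accepted papers would change on a rerun")  # ~46%
```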
arXiv Detail & Related papers (2023-06-05T21:26:12Z)
- Towards a Standardised Performance Evaluation Protocol for Cooperative MARL [2.2977300225306583]
Multi-agent reinforcement learning (MARL) has emerged as a useful approach to solving decentralised decision-making problems at scale.
We take a closer look at this rapid development with a focus on evaluation methodologies employed across a large body of research in cooperative MARL.
We propose a standardised performance evaluation protocol for cooperative MARL.
arXiv Detail & Related papers (2022-09-21T16:40:03Z)
- Automated scholarly paper review: Concepts, technologies, and challenges [5.431798850623952]
Recent years have seen the application of artificial intelligence (AI) in assisting the peer review process.
Yet as long as humans are involved in the process, the limitations of peer review remain inevitable.
arXiv Detail & Related papers (2021-11-15T04:44:57Z)
- Towards Explainable Scientific Venue Recommendations [0.09668407688201358]
We propose an unsophisticated method that advances the state-of-the-art in this area.
First, we enhance the interpretability of recommendations through non-negative matrix factorization based topic models.
Second, surprisingly, we obtain competitive recommendation performance while using simpler learning methods.
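A small sketch of the interpretable ingredient described here: factorising TF-IDF vectors of abstracts with non-negative matrix factorisation and recommending the venue that is closest in topic space. The corpus, venue labels, and hyperparameters are placeholders, not the paper's setup:

```python
# Sketch: NMF topic model over abstracts, then recommend the venue whose
# papers are most similar in topic space. The top topic weights of a
# recommendation double as an explanation. All data here is illustrative.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "peer review automation with language models",
    "reinforcement learning for multi-agent coordination",
    "topic models for scientific document recommendation",
]
venues = ["ACL", "NeurIPS", "JCDL"]  # one toy venue label per abstract

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)

nmf = NMF(n_components=2, init="nndsvda", random_state=0)
doc_topics = nmf.fit_transform(X)            # interpretable document-topic weights

query = tfidf.transform(["automating meta-review generation for peer review"])
q_topics = nmf.transform(query)

# Cosine similarity in topic space picks the closest venue.
sims = (doc_topics @ q_topics.T).ravel() / (
    np.linalg.norm(doc_topics, axis=1) * np.linalg.norm(q_topics) + 1e-9
)
print(venues[int(np.argmax(sims))])
```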
arXiv Detail & Related papers (2021-09-21T10:25:26Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast it as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
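As a hedged illustration of casting review outcomes as preference learning over pairs of papers (the synthetic features, labels, and pairwise logistic model below are assumptions, not the paper's actual framework):

```python
# Sketch: pairwise preference learning. Each training pair (i, j) means paper i
# was preferred to paper j; we fit a linear scorer on feature differences with
# logistic regression and rank papers by the learned score. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_papers, n_features = 20, 5
X = rng.normal(size=(n_papers, n_features))      # e.g. review-text features + scores
true_quality = X @ rng.normal(size=n_features)   # hidden quality used to label pairs

# Build pairwise training data: label 1 if the first paper is preferred.
pairs = [(i, j) for i in range(n_papers) for j in range(n_papers) if i != j]
X_pair = np.array([X[i] - X[j] for i, j in pairs])
y_pair = np.array([int(true_quality[i] > true_quality[j]) for i, j in pairs])

clf = LogisticRegression(max_iter=1000).fit(X_pair, y_pair)
ranking = np.argsort(-(X @ clf.coef_.ravel()))   # papers ranked best-first
print(ranking[:5])
```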
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
- Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: sentence encoder (level one), intra-review encoder (level two) and inter-review encoder (level three).
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
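A compact sketch of the three-level encoder idea (sentence → intra-review → inter-review); the layer sizes, mean pooling, and rating head are illustrative guesses, not the HabNet implementation:

```python
# Sketch of a three-level hierarchical encoder: sentences are encoded and
# pooled into review vectors, reviews into a paper vector, and a linear head
# predicts a rating. Dimensions and pooling are illustrative assumptions.
import torch
import torch.nn as nn

class HierarchicalReviewEncoder(nn.Module):
    def __init__(self, vocab_size=10000, dim=64, nhead=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        make_layer = lambda: nn.TransformerEncoderLayer(dim, nhead, batch_first=True)
        self.sentence_enc = nn.TransformerEncoder(make_layer(), num_layers=1)      # level one
        self.intra_review_enc = nn.TransformerEncoder(make_layer(), num_layers=1)  # level two
        self.inter_review_enc = nn.TransformerEncoder(make_layer(), num_layers=1)  # level three
        self.head = nn.Linear(dim, 1)

    def forward(self, tokens):
        # tokens: (reviews, sentences, words) token ids for one paper
        r, s, w = tokens.shape
        x = self.embed(tokens.view(r * s, w))                         # word embeddings
        sent = self.sentence_enc(x).mean(dim=1).view(r, s, -1)        # sentence vectors
        rev = self.intra_review_enc(sent).mean(dim=1)                 # review vectors
        paper = self.inter_review_enc(rev.unsqueeze(0)).mean(dim=1)   # paper vector
        return self.head(paper).squeeze(-1)                           # predicted rating

model = HierarchicalReviewEncoder()
dummy = torch.randint(0, 10000, (3, 4, 12))  # 3 reviews, 4 sentences, 12 tokens each
print(model(dummy))
```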
arXiv Detail & Related papers (2020-11-02T08:07:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.