Some Ethical Issues in the Review Process of Machine Learning
Conferences
- URL: http://arxiv.org/abs/2106.00810v1
- Date: Tue, 1 Jun 2021 21:22:41 GMT
- Title: Some Ethical Issues in the Review Process of Machine Learning
Conferences
- Authors: Alessio Russo
- Abstract summary: Recent successes in the Machine Learning community have led to a steep increase in the number of papers submitted to conferences.
This increase made more prominent some of the issues that affect the current review process used by these conferences.
We study the problem of reviewers' recruitment, infringements of the double-blind process, fraudulent behaviors, biases in numerical ratings, and the appendix phenomenon.
- Score: 0.38073142980733
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent successes in the Machine Learning community have led to a steep
increase in the number of papers submitted to conferences. This increase made
more prominent some of the issues that affect the current review process used
by these conferences. The review process has several issues that may undermine
the nature of scientific research, which should be fully objective,
apolitical, unbiased and free of misconduct (such as plagiarism, cheating,
improper influence, and other improprieties). In this work, we study the
problem of reviewers' recruitment, infringements of the double-blind process,
fraudulent behaviors, biases in numerical ratings, and the appendix phenomenon
(i.e., the fact that it is becoming more common to publish results in the
appendix section of a paper). For each of these problems, we provide a short
description and possible solutions. The goal of this work is to raise awareness
in the Machine Learning community regarding these issues.
Related papers
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z) - No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment
using Adversarial Learning [25.70062566419791]
We show that this automation can be manipulated using adversarial learning.
We propose an attack that adapts a given paper so that it misleads the assignment and selects its own reviewers.
arXiv Detail & Related papers (2023-03-25T11:34:27Z) - Fairness in Recommender Systems: Research Landscape and Future
Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z) - Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers.
To mitigate this issue, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, and Area Chairs may use it in different ways.
We take a principled approach to integrate the ranking information into the scores.
arXiv Detail & Related papers (2022-04-05T19:39:13Z) - Generating Summaries for Scientific Paper Review [29.12631698162247]
The increase of submissions for top venues in machine learning and NLP has caused a problem of excessive burden on reviewers.
An automatic system for assisting with the reviewing process could be a solution for ameliorating the problem.
In this paper, we explore automatic review summary generation for scientific papers.
arXiv Detail & Related papers (2021-09-28T21:43:53Z) - Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast it as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z) - Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z) - A Large Scale Randomized Controlled Trial on Herding in Peer-Review
Discussions [33.261698377782075]
We aim to understand whether reviewers and more senior decision makers get disproportionately influenced by the first argument presented in a discussion.
Specifically, we design and execute a randomized controlled trial with the goal of testing for the conditional causal effect of the discussion initiator's opinion on the outcome of a paper.
arXiv Detail & Related papers (2020-11-30T18:23:07Z) - The Influence of Domain-Based Preprocessing on Subject-Specific
Clustering [55.41644538483948]
The sudden shift to predominantly online teaching at universities has increased the workload of academics, partly due to the volume of incoming questions.
One way to deal with this problem is to cluster these questions by topic.
In this paper, we explore the realms of tagging data sets, focusing on identifying code excerpts and providing empirical results.
arXiv Detail & Related papers (2020-11-16T17:47:19Z) - Evolving Methods for Evaluating and Disseminating Computing Research [4.0318506932466445]
Social and technical trends have significantly changed methods for evaluating and disseminating computing research.
Traditional venues for reviewing and publishing, such as conferences and journals, worked effectively in the past.
Many conferences have seen large increases in the number of submissions.
Dissemination of research ideas has expanded dramatically through publication venues such as arXiv.org and social media networks.
arXiv Detail & Related papers (2020-07-02T16:50:28Z) - Text and Causal Inference: A Review of Using Text to Remove Confounding
from Causal Estimates [15.69581581445705]
An individual's entire history of social media posts or the content of a news article could provide a rich measurement of confounders.
Despite increased attention on adjusting for confounding using text, there are still many open problems.
arXiv Detail & Related papers (2020-05-01T23:20:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.