Matching Papers and Reviewers at Large Conferences
- URL: http://arxiv.org/abs/2202.12273v2
- Date: Fri, 25 Feb 2022 20:06:18 GMT
- Title: Matching Papers and Reviewers at Large Conferences
- Authors: Kevin Leyton-Brown, Mausam, Yatin Nandwani, Hedayat Zarkoob, Chris Cameron, Neil Newman, and Dinesh Raghu
- Score: 25.79501640609188
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies a novel reviewer-paper matching approach that was recently
deployed in the 35th AAAI Conference on Artificial Intelligence (AAAI 2021),
and has since been adopted by other conferences including AAAI 2022 and ICML
2022. This approach has three main elements: (1) collecting and processing
input data to identify problematic matches and generate reviewer-paper scores;
(2) formulating and solving an optimization problem to find good reviewer-paper
matchings; and (3) the introduction of a novel, two-phase reviewing process
that shifted reviewing resources away from papers likely to be rejected and
towards papers closer to the decision boundary. This paper also describes an
evaluation of these innovations based on an extensive post-hoc analysis on real
data -- including a comparison with the matching algorithm used in AAAI's
previous (2020) iteration -- and supplements this with additional numerical
experimentation.
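Element (2) of the approach amounts to a constrained assignment problem. Below is a minimal sketch of one standard way to pose reviewer-paper matching as a linear program in SciPy: maximize total reviewer-paper score subject to each paper receiving a fixed number of reviewers, each reviewer staying under a load cap, and conflicted pairs being excluded. The coverage `k`, the load cap, and the conflict handling are illustrative assumptions, not the exact formulation deployed at AAAI 2021.

```python
# Sketch only: reviewer-paper matching posed as a linear program (SciPy).
import numpy as np
from scipy.optimize import linprog

def match_reviewers(S, conflicts, k=3, load=6):
    """S         : (R, P) array of reviewer-paper scores
    conflicts : set of (reviewer, paper) pairs that must not be matched
    k         : reviewers required per paper
    load      : maximum papers per reviewer
    """
    R, P = S.shape
    n = R * P                                    # one variable x[r, p] per pair
    c = -np.asarray(S, dtype=float).reshape(n)   # linprog minimizes, so negate scores

    # Coverage: each paper gets exactly k reviewers.
    A_eq = np.zeros((P, n))
    for p in range(P):
        A_eq[p, p::P] = 1.0                      # selects x[r, p] for every reviewer r
    b_eq = np.full(P, float(k))

    # Load: each reviewer handles at most `load` papers.
    A_ub = np.zeros((R, n))
    for r in range(R):
        A_ub[r, r * P:(r + 1) * P] = 1.0
    b_ub = np.full(R, float(load))

    # Conflicts are excluded by pinning their variables to zero.
    bounds = [(0.0, 0.0) if (i // P, i % P) in conflicts else (0.0, 1.0)
              for i in range(n)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs-ds")
    if res.status != 0:
        raise RuntimeError(res.message)
    # The transportation-style constraint matrix is totally unimodular, so
    # simplex vertex solutions are integral; rounding only cleans up floats.
    return np.rint(res.x.reshape(R, P)).astype(int)

rng = np.random.default_rng(0)
assignment = match_reviewers(rng.random((10, 8)), conflicts={(0, 0)})
print(assignment.sum(axis=0))                    # 3 reviewers per paper
```

A deployed system adds further constraints and scale considerations beyond this dense toy formulation; the skeleton above only illustrates the core assignment structure behind this family of matching problems.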
Related papers
- JudgeRank: Leveraging Large Language Models for Reasoning-Intensive Reranking [81.88787401178378]
We introduce JudgeRank, a novel agentic reranker that emulates human cognitive processes when assessing document relevance.
We evaluate JudgeRank on the reasoning-intensive BRIGHT benchmark, demonstrating substantial performance improvements over first-stage retrieval methods.
In addition, JudgeRank performs on par with fine-tuned state-of-the-art rerankers on the popular BEIR benchmark, validating its zero-shot generalization capability.
arXiv Detail & Related papers (2024-10-31T18:43:12Z)
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML).
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings (a minimal sketch of this kind of calibration appears after the related-papers list).
arXiv Detail & Related papers (2024-08-24T01:51:23Z)
- RelevAI-Reviewer: A Benchmark on AI Reviewers for Survey Paper Relevance [0.8089605035945486]
We propose RelevAI-Reviewer, an automatic system that conceptualizes the task of survey paper review as a classification problem.
We introduce a novel dataset comprising 25,164 instances. Each instance contains one prompt and four candidate papers, each varying in relevance to the prompt.
We develop a machine learning (ML) model capable of determining the relevance of each paper and identifying the most pertinent one.
arXiv Detail & Related papers (2024-06-13T06:42:32Z)
- A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z)
- A Comparison of Approaches for Imbalanced Classification Problems in the Context of Retrieving Relevant Documents for an Analysis [0.0]
The study compares query expansion techniques, topic model-based classification rules, and active as well as passive supervised learning.
Results show that, in most of the studied settings, query expansion techniques and topic model-based classification rules tend to decrease rather than increase retrieval performance.
arXiv Detail & Related papers (2022-05-03T16:22:42Z)
- Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers, but these quantized scores often contain many ties and therefore lose information.
To mitigate this issue, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, and Area Chairs may use it in different ways.
We take a principled approach to integrating the ranking information into the scores.
arXiv Detail & Related papers (2022-04-05T19:39:13Z)
- Tradeoffs in Sentence Selection Techniques for Open-Domain Question Answering [54.541952928070344]
We describe two groups of models for sentence selection: QA-based approaches, which run a full-fledged QA system to identify answer candidates, and retrieval-based models, which find parts of each passage specifically related to each question.
We show that very lightweight QA models can do well at this task, but retrieval-based models are faster still.
arXiv Detail & Related papers (2020-09-18T23:39:15Z)
- A SUPER* Algorithm to Optimize Paper Bidding in Peer Review [39.99497980352629]
We present an algorithm called SUPER*, inspired by the A* algorithm, for this goal.
Under a community model for the similarities, we prove that SUPER* is near-optimal whereas the popular baselines are considerably suboptimal.
In experiments on real data from ICLR 2018 and synthetic data, we find that SUPER* considerably outperforms baselines deployed in existing systems.
arXiv Detail & Related papers (2020-03-31T17:58:36Z)
- State-of-Art-Reviewing: A Radical Proposal to Improve Scientific Publication [19.10668029301668]
State-Of-the-Art Review (SOAR) is a neoteric reviewing pipeline that serves as a 'plug-and-play' replacement for peer review.
At the heart of our approach is an interpretation of the review process as a multi-objective, massively distributed and extremely-high-latency optimisation.
arXiv Detail & Related papers (2020-02-15T02:22:42Z)
- Recognizing Families In the Wild: White Paper for the 4th Edition Data Challenge [91.55319616114943]
This paper summarizes the supported tasks (i.e., kinship verification, tri-subject verification, and search & retrieval of missing children) in the Recognizing Families In the Wild (RFIW) evaluation.
The purpose of this paper is to describe the 2020 RFIW challenge, end-to-end, along with forecasts in promising future directions.
arXiv Detail & Related papers (2020-02-15T02:22:42Z)
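As referenced in the ICML 2023 ranking entry above, here is a minimal sketch of the kind of calibration the Isotonic Mechanism performs: raw review scores are projected, in the least-squares sense, onto the ordering an author reports for their own papers. The helper name `calibrate`, the toy scores, and the use of scikit-learn's IsotonicRegression are illustrative assumptions, not details taken from that paper.

```python
# Sketch only: calibrate review scores against an author-provided ranking
# via isotonic regression (least-squares projection onto the reported order).
import numpy as np
from sklearn.isotonic import IsotonicRegression

def calibrate(raw_scores, author_ranking):
    """raw_scores     : average review score per paper
    author_ranking : paper indices ordered from best to worst (author's view)
    returns        : calibrated scores consistent with the reported ranking
    """
    order = np.asarray(author_ranking)
    ordered = np.asarray(raw_scores, dtype=float)[order]
    positions = np.arange(len(ordered))
    # Non-increasing fit: a paper the author ranks higher cannot end up with a
    # lower calibrated score than a paper the author ranks below it.
    fitted = IsotonicRegression(increasing=False).fit_transform(positions, ordered)
    calibrated = np.empty_like(fitted)
    calibrated[order] = fitted               # undo the reordering
    return calibrated

# Papers 0 and 1 violate the reported order, so their scores are pooled to
# 5.75 each, while paper 2 stays at 4.0.
print(calibrate([5.0, 6.5, 4.0], author_ranking=[0, 1, 2]))
```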
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.