Joint aggregation of cardinal and ordinal evaluations with an application to a student paper competition
- URL: http://arxiv.org/abs/2101.04765v1
- Date: Tue, 12 Jan 2021 21:36:50 GMT
- Title: Joint aggregation of cardinal and ordinal evaluations with an application to a student paper competition
- Authors: Dorit S. Hochbaum and Erick Moreno-Centeno
- Abstract summary: An important problem in decision theory concerns the aggregation of individual rankings/ratings into a collective evaluation.
We illustrate a new aggregation method in the context of the 2007 MSOM student paper competition.
- Score: 0.5076419064097732
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An important problem in decision theory concerns the aggregation of
individual rankings/ratings into a collective evaluation. We illustrate a new
aggregation method in the context of the 2007 MSOM student paper competition.
The aggregation problem in this competition poses two challenges. Firstly, each
paper was reviewed only by a very small fraction of the judges; thus the
aggregate evaluation is highly sensitive to the subjective scales chosen by the
judges. Secondly, the judges provided both cardinal and ordinal evaluations
(ratings and rankings) of the papers they reviewed. The contribution here is a
new robust methodology that jointly aggregates ordinal and cardinal evaluations
into a collective evaluation. This methodology is particularly suitable in
cases of incomplete evaluations -- i.e., when the individuals evaluate only a
strict subset of the objects. This approach is potentially useful in managerial
decision-making problems, such as a committee selecting projects from a large
set or capital budgeting involving multiple priorities.
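The paper's own model is more refined, but as a rough illustration of what joint cardinal-ordinal aggregation under incomplete reviews can look like, the sketch below fits latent paper scores and per-judge offsets to the observed ratings by least squares, with a squared-hinge penalty for violated rankings. The toy data, the penalty weight `lam`, and the `margin` are all hypothetical, and the per-judge offset model is a simplification, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: 3 judges, 3 papers, each judge saw only a subset of the papers.
ratings  = {(0, 0): 9.0, (0, 1): 6.0, (1, 1): 4.0, (1, 2): 2.0,
            (2, 0): 8.0, (2, 2): 5.0}          # (judge, paper) -> rating
rankings = [(0, 0, 1), (1, 1, 2), (2, 0, 2)]   # (judge, better paper, worse paper)
n_judges, n_papers = 3, 3
lam, margin = 1.0, 0.5                         # hypothetical penalty weight / margin

def loss(x):
    a, s = x[:n_judges], x[n_judges:]          # per-judge offsets, latent paper scores
    cardinal = sum((a[i] + s[j] - r) ** 2 for (i, j), r in ratings.items())
    ordinal  = sum(max(0.0, margin - (s[jb] - s[jw])) ** 2
                   for _, jb, jw in rankings)
    return cardinal + lam * ordinal + 1e-6 * np.sum(a ** 2)  # tiny ridge pins the gauge

res = minimize(loss, np.zeros(n_judges + n_papers))
print("aggregate order (best first):", np.argsort(-res.x[n_judges:]))
```

The per-judge offset is what absorbs each judge's subjective scale, and the ordinal penalty lets rankings pull scores apart even where ratings are sparse, which is the situation the abstract describes.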
Related papers
- Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition [70.60872754129832]
The first NeurIPS competition on unlearning sought to stimulate the development of novel algorithms.
Nearly 1,200 teams from across the world participated.
We analyze top solutions and delve into discussions on benchmarking unlearning.
arXiv Detail & Related papers (2024-06-13T12:58:00Z)
- Evaluating Agents using Social Choice Theory [21.26784305333596]
We argue that many general evaluation problems can be viewed through the lens of voting theory.
Each task is interpreted as a separate voter, so only ordinal rankings or pairwise comparisons of agents are required to produce an overall evaluation.
These evaluations are interpretable and flexible, while avoiding many of the problems currently facing cross-task evaluation.
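The abstract does not name a voting rule; as a minimal stand-in, a Borda count over tasks-as-voters looks like this (toy agents and rankings):

```python
# Tasks act as voters: each contributes an ordinal ranking of the agents,
# and Borda points are summed into one collective evaluation.
rankings = [["B", "A", "C"],   # task 1's ranking of agents, best first
            ["A", "B", "C"],   # task 2
            ["A", "C", "B"]]   # task 3

scores = {}
for ranking in rankings:
    for pos, agent in enumerate(ranking):
        scores[agent] = scores.get(agent, 0) + (len(ranking) - 1 - pos)
print(sorted(scores, key=scores.get, reverse=True), scores)
```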
arXiv Detail & Related papers (2023-12-05T20:40:37Z)
- A Universal Unbiased Method for Classification from Aggregate Observations [115.20235020903992]
This paper presents a novel universal method for classification from aggregate observations (CFAO), which provides an unbiased estimator of the classification risk for arbitrary losses.
The proposed method not only guarantees risk consistency, thanks to the unbiased risk estimator, but is also compatible with arbitrary losses.
arXiv Detail & Related papers (2023-06-20T07:22:01Z)
- Do You Hear The People Sing? Key Point Analysis via Iterative Clustering and Abstractive Summarisation [12.548947151123555]
Argument summarisation is a promising but currently under-explored field.
One of the main challenges in Key Point Analysis is finding high-quality key point candidates.
Evaluating key points is crucial to ensuring that the automatically generated summaries are useful.
arXiv Detail & Related papers (2023-05-25T12:43:29Z)
- A Dataset on Malicious Paper Bidding in Peer Review [84.68308372858755]
Malicious reviewers strategically bid in order to unethically manipulate the paper assignment.
A critical impediment towards creating and evaluating methods to mitigate this issue is the lack of publicly available data on malicious paper bidding.
We release a novel dataset, collected from a mock conference activity where participants were instructed to bid either honestly or maliciously.
arXiv Detail & Related papers (2022-06-24T20:23:33Z)
- Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers.
To mitigate this issue, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, and Area Chairs may use it in different ways.
We take a principled approach to integrate the ranking information into the scores.
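As a hedged illustration only (a naive heuristic, not the paper's principled approach): one way to make a reviewer's quantized scores consistent with their ranking is to reassign the same multiset of scores along the ranked order.

```python
# A naive heuristic (not the paper's method): reassign a reviewer's own
# scores so that they agree with the reviewer's ranking.
def reconcile(scores, ranking):
    """scores: {paper: score}; ranking: papers listed from best to worst."""
    values = sorted(scores.values(), reverse=True)
    return dict(zip(ranking, values))

# The reviewer ranked p3 above p2 but scored p2 higher; the repair keeps
# the same score multiset while honoring the ranking.
print(reconcile({"p1": 6, "p2": 8, "p3": 6}, ["p1", "p3", "p2"]))
# -> {'p1': 8, 'p3': 6, 'p2': 6}
```

This keeps the reviewer's score budget fixed while removing rank violations; per the abstract, the paper's integration is more principled than this.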
arXiv Detail & Related papers (2022-04-05T19:39:13Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast peer review decision-making as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
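The abstract does not specify the preference-learning model; shown here purely as an assumed stand-in is a Bradley-Terry fit to toy pairwise comparisons, with a hypothetical learning rate.

```python
# Bradley-Terry ranking from pairwise preferences (illustrative, not the
# paper's framework): fit latent qualities by gradient ascent.
import numpy as np

papers = ["p1", "p2", "p3"]
wins = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p1", "p2")]  # (winner, loser)
idx = {p: i for i, p in enumerate(papers)}
theta = np.zeros(len(papers))            # latent quality scores

for _ in range(500):                     # gradient ascent on the log-likelihood
    grad = np.zeros_like(theta)
    for w, l in wins:
        p = 1 / (1 + np.exp(theta[idx[l]] - theta[idx[w]]))  # P(w beats l)
        grad[idx[w]] += 1 - p
        grad[idx[l]] -= 1 - p
    theta += 0.1 * grad
print(sorted(papers, key=lambda p: -theta[idx[p]]))
```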
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Debiasing Evaluations That are Biased by Evaluations [32.135315382120154]
We consider the problem of mitigating outcome-induced biases in ratings when some information about the outcome is available.
We propose a debiasing method by solving a regularized optimization problem under this ordering constraint.
We also provide a carefully designed cross-validation method that adaptively chooses the appropriate amount of regularization.
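A loose soft-penalty sketch of this idea, with toy ratings, a hypothetical outcome ordering, and an arbitrary regularization strength `lam` (the paper's actual estimator and its hard ordering constraint are not reproduced here):

```python
# A loose sketch (not the paper's estimator): fit a bias term that is shrunk
# by a regularizer and pushed to be monotone along the outcome ordering,
# then subtract it from the observed ratings.
import numpy as np
from scipy.optimize import minimize

y     = np.array([7.0, 5.0, 8.0, 4.0])  # observed ratings (toy data)
order = np.array([2, 0, 1, 3])          # items sorted by outcome, best first
lam   = 0.5                             # hypothetical regularization strength

def objective(b):
    fit  = np.sum((y - b) ** 2)                             # explain the ratings
    reg  = lam * np.sum(b ** 2)                             # shrink the bias
    mono = np.sum(np.maximum(0.0, np.diff(b[order])) ** 2)  # bias non-increasing in outcome
    return fit + reg + 1e3 * mono

b_hat = minimize(objective, np.zeros_like(y)).x
print("debiased ratings:", y - b_hat)
```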
arXiv Detail & Related papers (2020-12-01T18:20:43Z)
- Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments [96.114824979298]
Among the important challenges in conference peer review are reviewers maliciously attempting to get assigned to certain papers and "torpedo reviewing".
We present a framework that brings all these challenges under a common umbrella and a (randomized) algorithm for reviewer assignment.
Our algorithms can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the total optimal similarity.
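A minimal sketch of a fractional step such a framework might use, under assumed toy similarities, one reviewer per paper, and a reviewer load cap of two: cap each reviewer-paper probability at q = 0.5 and maximize total similarity with a linear program.

```python
# Fractional reviewer assignment with capped probabilities (illustrative).
import numpy as np
from scipy.optimize import linprog

sim = np.array([[0.9, 0.1, 0.6],
                [0.4, 0.8, 0.3],
                [0.5, 0.7, 0.2]])       # reviewer x paper similarities (toy)
R, P, q, load = 3, 3, 0.5, 2

c = -sim.ravel()                        # linprog minimizes, so negate similarity
A_eq = np.zeros((P, R * P))             # each paper gets exactly 1.0 of reviewer mass
for p in range(P):
    A_eq[p, p::P] = 1.0
A_ub = np.zeros((R, R * P))             # each reviewer carries at most `load` papers
for r in range(R):
    A_ub[r, r * P:(r + 1) * P] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=np.full(R, load),
              A_eq=A_eq, b_eq=np.ones(P), bounds=[(0, q)] * (R * P))
print(np.round(res.x.reshape(R, P), 2))  # marginal assignment probabilities
```

An integral assignment can then be sampled so that its marginals match this matrix, for example via a Birkhoff-von Neumann style decomposition; the per-entry cap q is what limits any malicious reviewer's chance of landing their desired paper.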
arXiv Detail & Related papers (2020-06-29T23:55:53Z)
- Rough Set based Aggregate Rank Measure & its Application to Supervised Multi Document Summarization [0.0]
The paper proposes a novel Rough Set based membership measure called the Rank Measure.
It is used to rank elements with respect to a particular class.
The results showed a significant improvement in accuracy.
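For context, the classical rough membership function (not the paper's novel Rank Measure, whose definition the abstract does not give) ranks elements by the fraction of their indiscernibility class that falls inside the target class:

```python
# Classical rough membership: mu(x) = |[x] & X| / |[x]|, where [x] is the
# set of elements indiscernible from x under the available attributes.
from collections import defaultdict

attrs  = {"a": (1, 0), "b": (1, 0), "c": (1, 1), "d": (0, 1), "e": (0, 1)}
target = {"a", "c", "d"}                 # toy target class X

blocks = defaultdict(set)                # indiscernibility classes [x]
for x, v in attrs.items():
    blocks[v].add(x)

membership = {x: len(blocks[attrs[x]] & target) / len(blocks[attrs[x]])
              for x in attrs}
ranking = sorted(attrs, key=membership.get, reverse=True)
print(ranking, membership)
```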
arXiv Detail & Related papers (2020-02-09T01:03:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.