Aggregating Incomplete and Noisy Rankings
- URL: http://arxiv.org/abs/2011.00810v2
- Date: Sun, 27 Jun 2021 13:01:15 GMT
- Title: Aggregating Incomplete and Noisy Rankings
- Authors: Dimitris Fotakis, Alkis Kalavasis, Konstantinos Stavropoulos
- Abstract summary: We consider the problem of learning the true ordering of a set of alternatives from largely incomplete and noisy rankings.
Our selective Mallows model outputs a noisy ranking on any given subset of alternatives.
We show how to efficiently compute the maximum likelihood complete ranking from selective Mallows rankings.
- Score: 13.267203883254087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of learning the true ordering of a set of
alternatives from largely incomplete and noisy rankings. We introduce a natural
generalization of both the classical Mallows model of ranking distributions and
the extensively studied model of noisy pairwise comparisons. Our selective
Mallows model outputs a noisy ranking on any given subset of alternatives,
based on an underlying Mallows distribution. Assuming a sequence of subsets
where each pair of alternatives appears frequently enough, we obtain strong
asymptotically tight upper and lower bounds on the sample complexity of
learning the underlying complete ranking and the (identities and the) ranking
of the top-k alternatives from selective Mallows rankings. Moreover, building
on the work of (Braverman and Mossel, 2009), we show how to efficiently compute
the maximum likelihood complete ranking from selective Mallows rankings.
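To make the model concrete, here is a minimal sketch of sampling from a Mallows distribution via the standard repeated insertion method, and of producing a selective Mallows sample by restricting a complete draw to a queried subset. The function names and the dispersion parameter `phi` are illustrative choices, not the paper's notation, and the restriction step assumes the selective model coincides with the marginal of the complete Mallows distribution on the subset.

```python
import random

def sample_mallows(items, phi, rng=random):
    """Sample a ranking from a Mallows distribution with reference order
    `items` and dispersion phi (phi = 1 is uniform; phi -> 0 concentrates
    on the reference order), using the repeated insertion method."""
    ranking = []
    for i, item in enumerate(items):
        # Inserting the (i+1)-th item at position j (0 = front) creates
        # i - j new inversions, so its weight is phi ** (i - j).
        weights = [phi ** (i - j) for j in range(i + 1)]
        r = rng.random() * sum(weights)
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                ranking.insert(j, item)
                break
    return ranking

def sample_selective_mallows(items, subset, phi, rng=random):
    """A selective Mallows sample: draw a complete Mallows ranking, then
    keep only the alternatives in the queried subset."""
    keep = set(subset)
    return [x for x in sample_mallows(items, phi, rng) if x in keep]
```

With `phi = 0` every draw collapses to the reference order, which gives a quick sanity check; intermediate values of `phi` interpolate toward the uniform distribution over permutations.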
Related papers
- Ranking with Slot Constraints [25.715799711674457]
We devise a new ranking algorithm, called MatchRank, for slot-constrained problems.
The goal of MatchRank is to produce rankings that maximize the number of filled slots if candidates are evaluated by a human decision maker in the order of the ranking.
Our theoretical analysis shows that MatchRank has a strong approximation guarantee without any independence assumptions between slots or candidates.
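The slot-filling objective can be illustrated with a small sketch. This is a greedy *evaluation* of how many slots a given ranking fills, not the MatchRank algorithm itself; the function name and the eligibility map are hypothetical.

```python
def filled_slots(ranking, eligible):
    """Count how many slots get filled when candidates are evaluated in
    ranking order and each accepted candidate takes the first open slot
    they are eligible for (a greedy evaluation sketch; MatchRank itself
    optimizes the ranking against a matching-based objective)."""
    open_slots = {s for slots in eligible.values() for s in slots}
    filled = 0
    for cand in ranking:
        for slot in eligible.get(cand, ()):
            if slot in open_slots:
                open_slots.remove(slot)
                filled += 1
                break
    return filled
```

The sketch shows why ranking order matters under slot constraints: placing a flexible candidate ahead of a constrained one can waste the only slot the constrained candidate qualifies for.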
arXiv Detail & Related papers (2023-10-27T03:14:50Z)
- Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models [63.714662435555674]
Large language models (LLMs) exhibit positional bias in how they use context.
We propose permutation self-consistency, a form of self-consistency over ranking list outputs of black-box LLMs.
Our approach improves scores over conventional inference by up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B).
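One simple way to aggregate rankings produced from shuffled inputs is to sort items by their average position across runs. This Borda-style averaging is an illustrative stand-in, not necessarily the paper's exact aggregation rule.

```python
from collections import defaultdict

def aggregate_by_mean_rank(rankings):
    """Combine several rankings of the same items by sorting items on
    their average position across the rankings (a Borda-style heuristic
    for aggregating outputs obtained from permuted inputs)."""
    positions = defaultdict(list)
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            positions[item].append(pos)
    return sorted(positions,
                  key=lambda item: sum(positions[item]) / len(positions[item]))
```

Averaging over permuted inputs cancels a position-dependent bias that is consistent across runs, which is the intuition behind self-consistency over ranking outputs.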
arXiv Detail & Related papers (2023-10-11T17:59:02Z)
- Partitioned Saliency Ranking with Dense Pyramid Transformers [4.449304130658638]
Saliency ranking has emerged as a challenging task focused on assessing the degree of saliency at the instance level.
Previous approaches perform saliency ranking by directly sorting the rank scores of salient instances, which does not explicitly resolve the inherent ambiguities.
We propose the ranking by partition paradigm, which segments unordered salient instances into partitions and then ranks them based on the correlations among these partitions.
arXiv Detail & Related papers (2023-08-01T02:33:10Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Heuristic Search for Rank Aggregation with Application to Label Ranking [16.275063634853584]
We propose an effective hybrid evolutionary ranking algorithm to solve the rank aggregation problem.
The algorithm features a semantic crossover based on concordant pairs and a late acceptance local search reinforced by an efficient incremental evaluation technique.
Experiments are conducted to assess the algorithm, indicating a highly competitive performance on benchmark instances.
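Rank aggregation of this kind typically minimizes the total Kendall tau distance to the input rankings (the Kemeny objective), and "concordant pairs" refers to pairs ordered the same way in two rankings. A minimal sketch of the objective, with a brute-force consensus that is only feasible for a handful of items:

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Kendall tau distance: number of item pairs ordered differently
    in the two rankings (discordant pairs)."""
    pos = {item: i for i, item in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2) if pos[a] > pos[b])

def kemeny_aggregate(rankings):
    """Exact Kemeny consensus by exhaustive search over permutations.
    This is exponential in the number of items, which is why heuristics
    such as evolutionary search are needed at realistic sizes."""
    return min(permutations(rankings[0]),
               key=lambda cand: sum(kendall_tau(cand, r) for r in rankings))
```

Kemeny aggregation is NP-hard in general, so the exhaustive search above only serves to define the objective that heuristic methods approximate.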
arXiv Detail & Related papers (2022-01-11T11:43:17Z)
- Adaptive Sampling for Heterogeneous Rank Aggregation from Noisy Pairwise Comparisons [85.5955376526419]
In rank aggregation problems, users exhibit various accuracy levels when comparing pairs of items.
We propose an elimination-based active sampling strategy, which estimates the ranking of items via noisy pairwise comparisons.
We prove that our algorithm can return the true ranking of items with high probability.
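A simple baseline for this setting ranks items by empirical win counts over repeated noisy pairwise queries. The sketch below uses a homogeneous noise model with a single accuracy parameter `p`, which is simpler than the paper's heterogeneous-user setting, and a non-adaptive Copeland-style estimator rather than the paper's elimination strategy; all names here are illustrative.

```python
import random

def noisy_compare(i, j, true_rank, p=0.8, rng=random):
    """Judge whether i beats j; the judgment is correct with
    probability p (a standard noisy-comparison model)."""
    correct = true_rank.index(i) < true_rank.index(j)
    return correct if rng.random() < p else not correct

def estimate_ranking(items, compare, rounds=50):
    """Rank items by empirical win count over repeated pairwise queries
    (a non-adaptive baseline; an elimination-based strategy would stop
    querying pairs once their relative order is confidently decided)."""
    wins = {x: 0 for x in items}
    for _ in range(rounds):
        for a in items:
            for b in items:
                if a != b and compare(a, b):
                    wins[a] += 1
    return sorted(items, key=lambda x: -wins[x])
```

With noiseless comparisons (`p = 1.0`) a single round already recovers the true order; the point of adaptive sampling is to spend fewer comparisons when `p` is close to 1/2 for some users or pairs.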
arXiv Detail & Related papers (2021-10-08T13:51:55Z)
- Consensus-Guided Correspondence Denoising [67.35345850146393]
We propose to denoise correspondences with a local-to-global consensus learning framework that robustly identifies correspondences.
A novel "pruning" block is introduced to distill reliable candidates from initial matches according to their consensus scores estimated by dynamic graphs from local to global regions.
Our method outperforms state-of-the-art methods on robust line fitting, wide-baseline image matching, and image localization benchmarks by noticeable margins.
arXiv Detail & Related papers (2021-01-03T09:10:00Z)
- Concentric mixtures of Mallows models for top-$k$ rankings: sampling and identifiability [0.0]
We consider mixtures of two Mallows models for top-$k$ rankings.
We propose efficient sampling algorithms for Mallows top-$k$ rankings.
arXiv Detail & Related papers (2020-10-27T13:00:37Z)
- SetRank: A Setwise Bayesian Approach for Collaborative Ranking from Implicit Feedback [50.13745601531148]
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender systems.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present a theoretical analysis of SetRank showing that the excess risk bound is proportional to $\sqrt{M/N}$.
arXiv Detail & Related papers (2020-02-23T06:40:48Z)
- Fast and Robust Rank Aggregation against Model Misspecification [105.54181634234266]
In rank aggregation (RA), a collection of preferences from different users is summarized into a total order under the assumption that users are homogeneous.
Model misspecification arises in RA when this homogeneity assumption fails to hold in complex real-world settings.
We propose CoarsenRank, which possesses robustness against model misspecification.
arXiv Detail & Related papers (2019-05-29T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information provided and is not responsible for any consequences of its use.