Concentric mixtures of Mallows models for top-$k$ rankings: sampling and
identifiability
- URL: http://arxiv.org/abs/2010.14260v2
- Date: Thu, 5 Nov 2020 10:59:28 GMT
- Title: Concentric mixtures of Mallows models for top-$k$ rankings: sampling and
identifiability
- Authors: Fabien Collas and Ekhine Irurozki
- Abstract summary: We consider mixtures of two Mallows models for top-$k$ rankings.
We propose efficient sampling algorithms for Mallows top-$k$ rankings.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider mixtures of two Mallows models for top-$k$
rankings, both with the same location parameter but with different scale
parameters, i.e., a mixture of concentric Mallows models. This situation arises
when we have a heterogeneous population of voters formed by two homogeneous
populations, one of which is a subpopulation of expert voters while the other
includes the non-expert voters. We propose efficient sampling algorithms for
Mallows top-$k$ rankings. We show the identifiability of both components, and
the learnability of their respective parameters in this setting by, first,
bounding the sample complexity for the Borda algorithm with top-$k$ rankings
and, second, proposing a polynomial-time algorithm for separating the
rankings of each component. Finally, since rank aggregation suffers from the
large amount of noise introduced by the non-expert voters, we adapt the Borda
algorithm to recover the ground-truth consensus ranking, which is especially
consistent with the expert rankings.
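The setup in the abstract can be sketched in a few lines of Python; this is a minimal illustration, not the authors' implementation. It samples full rankings from a Mallows model via the standard repeated insertion method, truncates them to top-$k$, mixes two components with the same centre (the identity) but different dispersions, and aggregates with a positional Borda count. All function names and parameter choices below are illustrative assumptions.

```python
import random

def sample_mallows(n, phi, rng=random):
    """Sample a full ranking of items 0..n-1 from a Mallows model centred
    at the identity, using the repeated insertion method: item i is inserted
    at position j with probability proportional to phi**(i - j)."""
    ranking = []
    for i in range(n):
        weights = [phi ** (i - j) for j in range(i + 1)]
        total = sum(weights)
        r = rng.random() * total
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if r < acc:
                ranking.insert(j, i)
                break
    return ranking

def sample_concentric_mixture(n, k, phi_expert, phi_noisy, p_expert, m,
                              rng=random):
    """Draw m top-k rankings from a two-component concentric Mallows
    mixture: same centre, different dispersion parameters."""
    sample = []
    for _ in range(m):
        phi = phi_expert if rng.random() < p_expert else phi_noisy
        sample.append(sample_mallows(n, phi, rng)[:k])  # keep top-k only
    return sample

def borda_top_k(rankings, n, k):
    """Borda-style aggregation for top-k rankings: the item at position j
    scores k - j, unranked items score 0. Returns items by total score."""
    scores = [0] * n
    for ranking in rankings:
        for j, item in enumerate(ranking):
            scores[item] += k - j
    return sorted(range(n), key=lambda i: -scores[i])
```

With a small `phi_expert` (concentrated expert votes) and `phi_noisy` close to 1 (near-uniform noise), the Borda aggregate of a mixed sample tends toward the identity prefix, which is the intuition behind the paper's adapted Borda procedure.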
Related papers
- Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models [63.714662435555674]
Large language models (LLMs) exhibit positional bias in how they use context.
We propose permutation self-consistency, a form of self-consistency over ranking list outputs of black-box LLMs.
Our approach improves scores from conventional inference by up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B).
arXiv Detail & Related papers (2023-10-11T17:59:02Z) - Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z) - Active Ranking of Experts Based on their Performances in Many Tasks [72.96112117037465]
We consider the problem of ranking n experts based on their performances on d tasks.
We make a monotonicity assumption stating that for each pair of experts, one outperforms the other on all tasks.
arXiv Detail & Related papers (2023-06-05T06:55:39Z) - Byzantine Spectral Ranking [3.4272203252843294]
We study the problem of rank aggregation where the goal is to obtain a global ranking by aggregating pair-wise comparisons of voters over a set of items.
We introduce the Byzantine Spectral Ranking Algorithm, which produces a reliable ranking when the number of good voters exceeds the number of Byzantine voters.
arXiv Detail & Related papers (2022-11-15T05:18:19Z) - MANI-Rank: Multiple Attribute and Intersectional Group Fairness for
Consensus Ranking [6.231376714841276]
Group fairness in rankings and in particular rank aggregation remains in its infancy.
Recent work introduced the concept of fair rank aggregation for combining rankings but restricted to the case when candidates have a single binary protected attribute.
Yet it remains an open problem how to create a consensus ranking that represents the preferences of all rankers.
We are the first to define and solve this open Multi-attribute Fair Consensus Ranking problem.
arXiv Detail & Related papers (2022-07-20T16:36:20Z) - Heuristic Search for Rank Aggregation with Application to Label Ranking [16.275063634853584]
We propose an effective hybrid evolutionary ranking algorithm to solve the rank aggregation problem.
The algorithm features a semantic crossover based on concordant pairs and a late acceptance local search reinforced by an efficient incremental evaluation technique.
Experiments are conducted to assess the algorithm, indicating a highly competitive performance on benchmark instances.
arXiv Detail & Related papers (2022-01-11T11:43:17Z) - Adaptive Sampling for Heterogeneous Rank Aggregation from Noisy Pairwise
Comparisons [85.5955376526419]
In rank aggregation problems, users exhibit various accuracy levels when comparing pairs of items.
We propose an elimination-based active sampling strategy, which estimates the ranking of items via noisy pairwise comparisons.
We prove that our algorithm can return the true ranking of items with high probability.
arXiv Detail & Related papers (2021-10-08T13:51:55Z) - Optimal Full Ranking from Pairwise Comparisons [16.852801934478475]
The minimax rate of ranking exhibits a transition between an exponential rate and a rate depending on the magnitude of the signal-to-noise ratio of the problem.
To achieve the minimax rate, we propose a divide-and-conquer ranking algorithm that first divides the $n$ players into groups of similar skills and then computes local MLE within each group.
arXiv Detail & Related papers (2021-01-21T03:34:44Z) - Aggregating Incomplete and Noisy Rankings [13.267203883254087]
We consider the problem of learning the true ordering of a set of alternatives from largely incomplete and noisy rankings.
Our selective Mallows model outputs a noisy ranking on any given subset of alternatives.
We show how to efficiently compute the maximum likelihood complete ranking from selective Mallows rankings.
arXiv Detail & Related papers (2020-11-02T08:18:33Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z) - SetRank: A Setwise Bayesian Approach for Collaborative Ranking from
Implicit Feedback [50.13745601531148]
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender system.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present the theoretical analysis of SetRank to show that the bound of excess risk can be proportional to $\sqrt{M/N}$.
arXiv Detail & Related papers (2020-02-23T06:40:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.