Learning Rich Rankings
- URL: http://arxiv.org/abs/2312.15081v1
- Date: Fri, 22 Dec 2023 21:40:57 GMT
- Title: Learning Rich Rankings
- Authors: Arjun Seshadri, Stephen Ragain, Johan Ugander
- Abstract summary: We develop a contextual repeated selection (CRS) model to bring a natural multimodality and richness to the rankings space.
We provide theoretical guarantees for maximum likelihood estimation under the model through structure-dependent tail risk and expected risk bounds.
We also furnish the first tight bounds on the expected risk of maximum likelihood estimators for the multinomial logit (MNL) choice model and the Plackett-Luce (PL) ranking model.
- Score: 7.940293148084844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although the foundations of ranking are well established, the ranking
literature has primarily been focused on simple, unimodal models, e.g. the
Mallows and Plackett-Luce models, that define distributions centered around a
single total ordering. Explicit mixture models have provided some tools for
modelling multimodal ranking data, though learning such models from data is
often difficult. In this work, we contribute a contextual repeated selection
(CRS) model that leverages recent advances in choice modeling to bring a
natural multimodality and richness to the rankings space. We provide rigorous
theoretical guarantees for maximum likelihood estimation under the model
through structure-dependent tail risk and expected risk bounds. As a
by-product, we also furnish the first tight bounds on the expected risk of
maximum likelihood estimators for the multinomial logit (MNL) choice model and
the Plackett-Luce (PL) ranking model, as well as the first tail risk bound on
the PL ranking model. The CRS model significantly outperforms existing methods
for modeling real world ranking data in a variety of settings, from racing to
rank choice voting.
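As an illustrative sketch (not the paper's implementation), the Plackett-Luce model discussed in the abstract can be viewed as a sequence of multinomial logit (MNL) choices: each position in the ranking is filled by a softmax choice over the items not yet ranked. The function names and item-utility parameterization below are assumptions for illustration only; the CRS model generalizes this picture by letting utilities depend on the remaining choice set.

```python
import numpy as np

def sample_pl_ranking(utilities, rng):
    """Sample a full ranking from a Plackett-Luce model by repeated
    softmax (MNL) choice over the items not yet ranked."""
    remaining = list(range(len(utilities)))
    ranking = []
    while remaining:
        u = np.array([utilities[i] for i in remaining])
        p = np.exp(u - u.max())          # stable softmax over remaining items
        p /= p.sum()
        choice = rng.choice(len(remaining), p=p)
        ranking.append(remaining.pop(choice))
    return ranking

def pl_log_likelihood(ranking, utilities):
    """Log-likelihood of a full ranking under Plackett-Luce:
    the sum of sequential MNL log choice probabilities."""
    ll = 0.0
    remaining = list(ranking)
    for item in ranking:
        u = np.array([utilities[j] for j in remaining])
        ll += utilities[item] - np.logaddexp.reduce(u)  # log softmax prob
        remaining.remove(item)
    return ll
```

With equal utilities every ranking of n items is equally likely, so the log-likelihood of any ranking is -log(n!); maximum likelihood estimation for PL (the object of the paper's risk bounds) maximizes this log-likelihood over the utility vector.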
Related papers
- Are all models wrong? Fundamental limits in distribution-free empirical model falsification [5.059120569845977]
We establish a model-agnostic, fundamental hardness result for the problem of constructing a lower bound on the best test error achievable over a model class.
We examine its implications on specific model classes such as tree-based methods and linear regression.
arXiv Detail & Related papers (2025-02-10T18:44:30Z) - Ranked from Within: Ranking Large Multimodal Models for Visual Question Answering Without Labels [64.94853276821992]
Large multimodal models (LMMs) are increasingly deployed across diverse applications.
Traditional evaluation methods are largely dataset-centric, relying on fixed, labeled datasets and supervised metrics.
We explore unsupervised model ranking for LMMs by leveraging their uncertainty signals, such as softmax probabilities.
arXiv Detail & Related papers (2024-12-09T13:05:43Z) - Model Selection Through Model Sorting [1.534667887016089]
We propose a model order selection method called nested empirical risk (NER).
On the UCR datasets, the NER method dramatically reduces the complexity of classification.
arXiv Detail & Related papers (2024-09-15T09:43:59Z) - Statistical Models of Top-$k$ Partial Orders [7.121002367542985]
We introduce and taxonomize approaches for jointly modeling distributions over top-$k$ partial orders and list lengths $k$.
Using data consisting of partial rankings from San Francisco school choice and San Francisco ranked choice elections, we evaluate how well the models predict observed data.
arXiv Detail & Related papers (2024-06-22T17:04:24Z) - EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z) - A Two-Phase Recall-and-Select Framework for Fast Model Selection [13.385915962994806]
We propose a two-phase (coarse-recall and fine-selection) model selection framework.
It aims to enhance the efficiency of selecting a robust model by leveraging the models' training performances on benchmark datasets.
It has been demonstrated that the proposed methodology selects a high-performing model about 3x faster than conventional baseline methods.
arXiv Detail & Related papers (2024-03-28T14:44:44Z) - RewardBench: Evaluating Reward Models for Language Modeling [100.28366840977966]
We present RewardBench, a benchmark dataset and code-base for evaluation of reward models.
The dataset is a collection of prompt-chosen-rejected trios spanning chat, reasoning, and safety.
On the RewardBench leaderboard, we evaluate reward models trained with a variety of methods.
arXiv Detail & Related papers (2024-03-20T17:49:54Z) - Improving Discriminative Multi-Modal Learning with Large-Scale Pre-Trained Models [51.5543321122664]
This paper investigates how to better leverage large-scale pre-trained uni-modal models to enhance discriminative multi-modal learning.
We introduce Multi-Modal Low-Rank Adaptation learning (MMLoRA).
arXiv Detail & Related papers (2023-10-08T15:01:54Z) - Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z) - Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.