Learning to Rank when Grades Matter
- URL: http://arxiv.org/abs/2306.08650v2
- Date: Tue, 20 Jun 2023 16:47:08 GMT
- Title: Learning to Rank when Grades Matter
- Authors: Le Yan, Zhen Qin, Gil Shamir, Dong Lin, Xuanhui Wang, Mike Bendersky
- Abstract summary: Graded labels are ubiquitous in real-world learning-to-rank applications.
Traditional learning-to-rank techniques typically ignore predicting actual grades.
We propose a multiobjective formulation to jointly optimize both ranking and grade predictions.
- Score: 11.981942948477236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graded labels are ubiquitous in real-world learning-to-rank applications,
especially in human rated relevance data. Traditional learning-to-rank
techniques aim to optimize the ranked order of documents. They typically,
however, ignore predicting actual grades. This prevents them from being adopted
in applications where grades matter, such as filtering out "poor" documents.
Achieving both good ranking performance and good grade prediction performance
is still an under-explored problem. Existing research either focuses only on
ranking performance by not calibrating model outputs, or treats grades as
numerical values, assuming labels are on a linear scale and failing to leverage
the ordinal grade information. In this paper, we conduct a rigorous study of
learning to rank with grades, where both ranking performance and grade
prediction performance are important. We provide a formal discussion on how to
perform ranking with non-scalar predictions for grades, and propose a
multiobjective formulation to jointly optimize both ranking and grade
predictions. In experiments, we verify on several public datasets that our
methods are able to push the Pareto frontier of the tradeoff between ranking
and grade prediction performance, showing the benefit of leveraging ordinal
grade information.
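As a rough illustration of the joint objective described in the abstract (a minimal sketch, not the paper's actual implementation), the PyTorch snippet below combines a listwise softmax ranking loss with a grade-prediction cross-entropy loss for a model that outputs per-document logits over ordered grades, using the expected grade as the ranking score. The function name multiobjective_loss, the tradeoff weight alpha, and the specific choice of losses are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multiobjective_loss(grade_logits: torch.Tensor,
                        labels: torch.Tensor,
                        alpha: float = 0.5) -> torch.Tensor:
    """Sketch of a joint ranking + grade-prediction objective (assumed names).

    grade_logits: [n_docs, n_grades] logits for one query's documents.
    labels:       [n_docs] integer grades in {0, ..., n_grades - 1}.
    alpha:        tradeoff weight between the ranking and grade-prediction terms.
    """
    probs = F.softmax(grade_logits, dim=-1)                         # [n_docs, n_grades]
    grade_values = torch.arange(grade_logits.size(-1),
                                dtype=probs.dtype,
                                device=grade_logits.device)
    scores = (probs * grade_values).sum(dim=-1)                     # expected grade per document

    # Listwise softmax (ListNet-style) ranking loss over the query's documents,
    # using the graded labels to define the target distribution.
    target = F.softmax(labels.to(probs.dtype), dim=-1)
    ranking_loss = -(target * F.log_softmax(scores, dim=-1)).sum()

    # Grade-prediction loss: plain cross-entropy over grade classes
    # (a simple stand-in; the paper studies richer ordinal formulations).
    grade_loss = F.cross_entropy(grade_logits, labels)

    return alpha * ranking_loss + (1.0 - alpha) * grade_loss

# Example usage: 4 documents with grades in {0, 1, 2, 3}.
logits = torch.randn(4, 4, requires_grad=True)
labels = torch.tensor([3, 0, 1, 2])
loss = multiobjective_loss(logits, labels, alpha=0.5)
loss.backward()
```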
Related papers
- Improving the classification of extreme classes by means of loss regularisation and generalised beta distributions [8.640930010669042]
We propose a unimodal regularisation approach to improve the classification performance of the first and last classes.
Performance in the extreme classes is compared using a new metric that takes into account their sensitivities.
The results for the proposed metric show that the generalised beta distribution generally improves classification performance in the extreme classes.
arXiv Detail & Related papers (2024-07-17T08:57:42Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- Memory Consistency Guided Divide-and-Conquer Learning for Generalized Category Discovery [56.172872410834664]
Generalized category discovery (GCD) aims at addressing a more realistic and challenging setting of semi-supervised learning.
We propose a Memory Consistency guided Divide-and-conquer Learning framework (MCDL).
Our method outperforms state-of-the-art models by a large margin on both seen and unseen classes in generic image recognition.
arXiv Detail & Related papers (2024-01-24T09:39:45Z)
- GaussianMLR: Learning Implicit Class Significance via Calibrated Multi-Label Ranking [0.0]
We propose a novel multi-label ranking method: GaussianMLR.
It aims to learn implicit class significance values that determine the positive label ranks.
We show that our method is able to accurately learn a representation of the incorporated positive rank order.
arXiv Detail & Related papers (2023-03-07T14:09:08Z)
- Learning List-Level Domain-Invariant Representations for Ranking [59.3544317373004]
We propose list-level alignment -- learning domain-invariant representations at the higher level of lists.
This leads to the first domain adaptation generalization bound for ranking, in turn providing theoretical support for the proposed method.
arXiv Detail & Related papers (2022-12-21T04:49:55Z)
- Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers.
To mitigate the limitations of scores alone, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, however, and Area Chairs may use it in different ways.
We take a principled approach to integrate the ranking information into the scores.
arXiv Detail & Related papers (2022-04-05T19:39:13Z)
- Data-Centric Machine Learning in the Legal Domain [0.2624902795082451]
This paper explores how changes in a data set influence the measured performance of a model.
Using three publicly available data sets from the legal domain, we investigate how changes to their size, the train/test splits, and the human labelling accuracy impact the performance.
The observed effects are surprisingly pronounced, especially when the per-class performance is considered.
arXiv Detail & Related papers (2022-01-17T23:05:14Z)
- Data Driven and Visualization based Strategization for University Rank Improvement using Decision Trees [1.933681537640272]
We present a novel idea of classifying the rankings data using Decision Tree (DT) based algorithms and retrieving decision paths for rank improvement using data visualization techniques.
The proposed methodology can help higher education institutions (HEIs) quantitatively assess their scope for improvement, outline a fine-grained long-term action plan, and prepare a suitable roadmap.
arXiv Detail & Related papers (2021-10-18T06:41:45Z)
- PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired ranking metrics in the limit of zero temperature (a simplified soft-rank sketch illustrating this idea follows the list below).
arXiv Detail & Related papers (2020-12-12T05:07:36Z)
- Rank over Class: The Untapped Potential of Ranking in Natural Language Processing [8.637110868126546]
We argue that many tasks currently addressed using classification are in fact being shoehorned into a classification mould and are better cast as ranking problems.
We propose a novel end-to-end ranking approach consisting of a Transformer network responsible for producing representations for a pair of text sequences.
In an experiment on a heavily-skewed sentiment analysis dataset, converting ranking results to classification labels yields an approximately 22% improvement over state-of-the-art text classification.
arXiv Detail & Related papers (2020-09-10T22:18:57Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
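Related to the PiRank entry above, the following is a minimal sketch (not PiRank's NeuralSort-based construction) of a temperature-controlled pairwise soft rank and an NDCG-style surrogate built on it; as tau approaches 0 the sigmoid comparisons become hard and the soft ranks recover the exact integer ranks, mirroring the zero-temperature property. The names soft_rank, soft_ndcg_loss, and tau are illustrative assumptions.

```python
import torch

def soft_rank(scores: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Differentiable approximate ranks (1 = best) for a vector of scores.
    rank_i ≈ 1 + sum_j sigmoid((s_j - s_i) / tau); the 0.5 offset cancels the
    j == i term. As tau -> 0 this recovers the exact integer ranks (ignoring ties)."""
    diff = scores.unsqueeze(0) - scores.unsqueeze(1)   # diff[i, j] = s_j - s_i
    return 0.5 + torch.sigmoid(diff / tau).sum(dim=1)

def soft_ndcg_loss(scores: torch.Tensor, gains: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """NDCG-style surrogate: discount each document's gain by its soft rank.
    Assumes at least one nonzero gain."""
    dcg = (gains / torch.log2(soft_rank(scores, tau) + 1.0)).sum()
    ideal, _ = torch.sort(gains, descending=True)      # ideal DCG is constant w.r.t. scores
    positions = torch.arange(2.0, gains.numel() + 2.0, device=gains.device)
    idcg = (ideal / torch.log2(positions)).sum()
    return 1.0 - dcg / idcg

# Example usage: lower tau sharpens the approximation toward the true NDCG.
scores = torch.randn(5, requires_grad=True)
gains = torch.tensor([3.0, 0.0, 1.0, 2.0, 0.0])
loss = soft_ndcg_loss(scores, gains, tau=0.1)
loss.backward()
```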
This list is automatically generated from the titles and abstracts of the papers on this site.