RankList -- A Listwise Preference Learning Framework for Predicting Subjective Preferences
- URL: http://arxiv.org/abs/2508.09826v1
- Date: Wed, 13 Aug 2025 13:59:41 GMT
- Title: RankList -- A Listwise Preference Learning Framework for Predicting Subjective Preferences
- Authors: Abinay Reddy Naini, Fernando Diaz, Carlos Busso
- Abstract summary: We propose RankList, a listwise preference learning framework that generalizes RankNet to structured list-level supervision. Our formulation explicitly models local and non-local ranking constraints within a probabilistic framework. Extensive experiments demonstrate the superiority of our method across diverse modalities.
- Score: 66.76322360727809
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Preference learning has gained significant attention in tasks involving subjective human judgments, such as speech emotion recognition (SER) and image aesthetic assessment. While pairwise frameworks such as RankNet offer robust modeling of relative preferences, they are inherently limited to local comparisons and struggle to capture global ranking consistency. To address these limitations, we propose RankList, a novel listwise preference learning framework that generalizes RankNet to structured list-level supervision. Our formulation explicitly models local and non-local ranking constraints within a probabilistic framework. The paper introduces a log-sum-exp approximation to improve training efficiency. We further extend RankList with skip-wise comparisons, enabling progressive exposure to complex list structures and enhancing global ranking fidelity. Extensive experiments demonstrate the superiority of our method across diverse modalities. On benchmark SER datasets (MSP-Podcast, IEMOCAP, BIIC Podcast), RankList achieves consistent improvements in Kendall's Tau and ranking accuracy compared to standard listwise baselines. We also validate our approach on aesthetic image ranking using the Artistic Image Aesthetics dataset, highlighting its broad applicability. Through ablation and cross-domain studies, we show that RankList not only improves in-domain ranking but also generalizes better across datasets. Our framework offers a unified, extensible approach for modeling ordered preferences in subjective learning scenarios.
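The abstract does not spell out RankList's exact objective, but it names its two key ingredients: a listwise generalization of RankNet's pairwise probability model, and a log-sum-exp approximation for efficient training, evaluated with Kendall's Tau. As an illustrative assumption (not the paper's actual loss), a Plackett-Luce-style listwise negative log-likelihood, which also reduces pairwise RankNet-style comparisons to a log-sum-exp over list suffixes, together with a pairwise-counting Kendall's Tau, can be sketched as:

```python
import math

def listwise_loss(scores):
    """Plackett-Luce style listwise negative log-likelihood (an
    illustrative stand-in for RankList's objective, which is not
    specified in this abstract).

    `scores` are model outputs for items already sorted by
    ground-truth preference, best first. Each term compares one
    item against its whole suffix via log-sum-exp, computed with
    the max-shift trick for numerical stability.
    """
    loss = 0.0
    for k in range(len(scores)):
        tail = scores[k:]
        m = max(tail)  # shift by the max before exponentiating
        lse = m + math.log(sum(math.exp(s - m) for s in tail))
        loss += lse - scores[k]
    return loss

def kendall_tau(x, y):
    """Kendall's Tau between two score lists over the same items:
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Scores that agree with the ground-truth order (descending)
# should incur a lower loss than the same scores reversed.
good = listwise_loss([3.0, 2.0, 1.0])
bad = listwise_loss([1.0, 2.0, 3.0])
```

Under this formulation a single-item list contributes zero loss, and Kendall's Tau is 1 for identical orderings and -1 for fully reversed ones, matching the metric reported in the abstract.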
Related papers
- RewardRank: Optimizing True Learning-to-Rank Utility [28.662272762911325]
We introduce RewardRank, a data-driven learning-to-rank framework for counterfactual utility. Our results show that learning-to-rank can be reformulated as direct optimization of counterfactual utility.
arXiv Detail & Related papers (2025-08-19T18:08:35Z) - In-context Ranking Preference Optimization [65.5489745857577]
We propose an In-context Ranking Preference Optimization (IRPO) framework to optimize large language models (LLMs) based on ranking lists constructed during inference. We show IRPO outperforms standard DPO approaches in ranking performance, highlighting its effectiveness in aligning LLMs with direct in-context ranking preferences.
arXiv Detail & Related papers (2025-04-21T23:06:12Z) - Rankformer: A Graph Transformer for Recommendation based on Ranking Objective [27.953113185360174]
We propose Rankformer, a ranking-inspired recommendation model. The architecture is inspired by the gradient of the ranking objective, embodying a unique (graph) transformer architecture. Extensive experimental results demonstrate that Rankformer outperforms state-of-the-art methods.
arXiv Detail & Related papers (2025-03-21T07:53:06Z) - RankPO: Preference Optimization for Job-Talent Matching [7.385902340910447]
We propose a two-stage training framework for large language models (LLMs). In the first stage, a contrastive learning approach is used to train the model on a dataset constructed from real-world matching rules. In the second stage, we introduce a novel preference-based fine-tuning method inspired by Direct Preference Optimization (DPO) to align the model with AI-curated pairwise preferences.
arXiv Detail & Related papers (2025-03-13T10:14:37Z) - TSPRank: Bridging Pairwise and Listwise Methods with a Bilinear Travelling Salesman Model [19.7255072094322]
Travelling Salesman Problem Rank (TSPRank) is a hybrid pairwise-listwise ranking method. TSPRank's main advantage over existing methods is its ability to harness global information better while ranking.
arXiv Detail & Related papers (2024-11-18T21:10:14Z) - Self-Calibrated Listwise Reranking with Large Language Models [137.6557607279876]
Large language models (LLMs) have been employed in reranking tasks through a sequence-to-sequence approach.
This reranking paradigm requires a sliding window strategy to iteratively handle larger candidate sets.
We propose a novel self-calibrated listwise reranking method, which aims to leverage LLMs to produce global relevance scores for ranking.
arXiv Detail & Related papers (2024-11-07T10:31:31Z) - RankingSHAP -- Listwise Feature Attribution Explanations for Ranking Models [48.895510739010355]
We present three key contributions to address this gap. First, we rigorously define listwise feature attribution for ranking models. Second, we introduce RankingSHAP, extending the popular SHAP framework to accommodate listwise ranking attribution. Third, we propose two novel evaluation paradigms for assessing the faithfulness of attributions in learning-to-rank models.
arXiv Detail & Related papers (2024-03-24T10:45:55Z) - Replace Scoring with Arrangement: A Contextual Set-to-Arrangement Framework for Learning-to-Rank [40.81502990315285]
Learning-to-rank is a core technique in the top-N recommendation task, where an ideal ranker would be a mapping from an item set to an arrangement.
Most existing solutions fall in the paradigm of probabilistic ranking principle (PRP), i.e., first score each item in the candidate set and then perform a sort operation to generate the top ranking list.
We propose Set-To-Arrangement Ranking (STARank), a new framework that directly generates the permutations of the candidate items without the need for individual scoring and sort operations.
arXiv Detail & Related papers (2023-08-05T12:22:26Z) - Learning List-Level Domain-Invariant Representations for Ranking [59.3544317373004]
We propose list-level alignment -- learning domain-invariant representations at the higher level of lists.
The benefits are twofold: it leads to the first domain adaptation generalization bound for ranking, in turn providing theoretical support for the proposed method.
arXiv Detail & Related papers (2022-12-21T04:49:55Z) - SetRank: A Setwise Bayesian Approach for Collaborative Ranking from Implicit Feedback [50.13745601531148]
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender system.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present the theoretical analysis of SetRank to show that the bound of excess risk can be proportional to $\sqrt{M/N}$.
arXiv Detail & Related papers (2020-02-23T06:40:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.