Multi-Label Learning to Rank through Multi-Objective Optimization
- URL: http://arxiv.org/abs/2207.03060v2
- Date: Fri, 8 Jul 2022 16:30:43 GMT
- Title: Multi-Label Learning to Rank through Multi-Objective Optimization
- Authors: Debabrata Mahapatra, Chaosheng Dong, Yetian Chen, Deqiang Meng,
Michinari Momma
- Abstract summary: The Learning to Rank technique is ubiquitous in Information Retrieval systems nowadays.
To resolve ambiguity, it is desirable to train a model using many relevance criteria.
We propose a general framework where the information from labels can be combined in a variety of ways to characterize the trade-off among the goals.
- Score: 9.099663022952496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Learning to Rank (LTR) technique is ubiquitous in Information Retrieval
systems nowadays, especially in the Search Ranking application. The query-item
relevance labels typically used to train the ranking model are often noisy
measurements of human behavior, e.g., product rating for product search. The
coarse measurements make the ground truth ranking non-unique with respect to a
single relevance criterion. To resolve ambiguity, it is desirable to train a
model using many relevance criteria, giving rise to Multi-Label LTR (MLLTR).
Moreover, MLLTR formulates multiple goals that may be conflicting yet important to
optimize for simultaneously, e.g., in product search, a ranking model can be
trained based on product quality and purchase likelihood to increase revenue.
In this research, we leverage the Multi-Objective Optimization (MOO) aspect of
the MLLTR problem and employ recently developed MOO algorithms to solve it.
Specifically, we propose a general framework where the information from labels
can be combined in a variety of ways to meaningfully characterize the trade-off
among the goals. Our framework allows for any gradient based MOO algorithm to
be used for solving the MLLTR problem. We test the proposed framework on two
publicly available LTR datasets and one e-commerce dataset to show its
efficacy.
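The abstract describes combining several relevance labels into one ranking objective and solving it with a gradient-based MOO algorithm. Below is a minimal, hypothetical sketch of that idea using NumPy: each label induces its own pairwise hinge ranking loss, and the per-label gradients are combined by linear scalarization, the simplest gradient-based MOO scheme. This is an illustration of the general setup, not the paper's specific algorithm; the loss, weights, and item attributes are assumptions.

```python
import numpy as np

def pairwise_hinge_loss_grad(scores, labels, margin=1.0):
    """Gradient of a pairwise hinge ranking loss w.r.t. item scores for a
    single relevance label (a stand-in for any differentiable LTR loss)."""
    n = len(scores)
    grad = np.zeros(n)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if labels[i] > labels[j]:  # item i should rank above item j
                violation = margin - (scores[i] - scores[j])
                if violation > 0:
                    loss += violation
                    grad[i] -= 1.0  # push i up
                    grad[j] += 1.0  # push j down
    return loss, grad

def mlltr_step(scores, label_lists, weights, lr=0.01):
    """One gradient step over several relevance criteria. Linear
    scalarization combines the per-label gradients; the paper's framework
    admits any gradient-based MOO algorithm at this combination step."""
    total_grad = np.zeros_like(scores)
    for labels, w in zip(label_lists, weights):
        _, g = pairwise_hinge_loss_grad(scores, labels)
        total_grad += w * g
    return scores - lr * total_grad

# Toy example: 4 items scored under two labels, e.g. product quality
# and purchase likelihood (values are illustrative).
scores = np.zeros(4)
quality = np.array([2, 1, 0, 0])
purchase = np.array([0, 2, 1, 0])
for _ in range(100):
    scores = mlltr_step(scores, [quality, purchase], weights=[0.5, 0.5])
```

In a real MLLTR model the scores would come from a neural ranker and the gradients would flow into its parameters; here the scores themselves are the parameters to keep the sketch self-contained.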
Related papers
- REAL-MM-RAG: A Real-World Multi-Modal Retrieval Benchmark [16.55516587540082]
We introduce REAL-MM-RAG, an automatically generated benchmark designed to address four key properties essential for real-world retrieval.
We propose a multi-difficulty-level scheme based on query rephrasing to evaluate models' semantic understanding beyond keyword matching.
Our benchmark reveals significant model weaknesses, particularly in handling table-heavy documents and robustness to query rephrasing.
arXiv Detail & Related papers (2025-02-17T22:10:47Z)
- Ranked from Within: Ranking Large Multimodal Models for Visual Question Answering Without Labels [64.94853276821992]
Large multimodal models (LMMs) are increasingly deployed across diverse applications.
Traditional evaluation methods are largely dataset-centric, relying on fixed, labeled datasets and supervised metrics.
We explore unsupervised model ranking for LMMs by leveraging their uncertainty signals, such as softmax probabilities.
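The summary above mentions ranking models without labels by their uncertainty signals, such as softmax probabilities. A minimal sketch of one such proxy, assuming mean maximum softmax probability over an unlabeled batch as the confidence signal (the models and logits below are fabricated for illustration):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence_score(logits):
    """Mean max-softmax probability over a batch of unlabeled questions:
    a hypothetical uncertainty-based proxy for ranking models."""
    return softmax(logits).max(axis=-1).mean()

# Toy example: two "models" answering 3 questions with 4 answer options.
model_a = np.array([[4.0, 0.1, 0.1, 0.1],   # sharply peaked -> confident
                    [3.5, 0.2, 0.1, 0.0],
                    [5.0, 0.0, 0.0, 0.0]])
model_b = np.array([[1.0, 0.9, 0.8, 0.7],   # nearly flat -> uncertain
                    [0.5, 0.4, 0.6, 0.5],
                    [1.1, 1.0, 0.9, 1.2]])
logits = {'a': model_a, 'b': model_b}
ranking = sorted(logits, key=lambda m: -confidence_score(logits[m]))
```

Here the more confident model ranks first; whether confidence tracks actual accuracy is exactly the question such unsupervised ranking methods must validate.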
arXiv Detail & Related papers (2024-12-09T13:05:43Z)
- LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning [56.273799410256075]
The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path.
The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability.
arXiv Detail & Related papers (2024-10-03T18:12:29Z)
- LLMEmb: Large Language Model Can Be a Good Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Model (LLM) has the ability to capture semantic relationships between items, independent of their popularity.
We introduce LLMEmb, a novel method leveraging LLM to generate item embeddings that enhance Sequential Recommender Systems (SRS) performance.
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
- CROSS-JEM: Accurate and Efficient Cross-encoders for Short-text Ranking Tasks [12.045202648316678]
Transformer-based ranking models are the state-of-the-art approaches for such tasks.
We propose Cross-encoders with Joint Efficient Modeling (CROSS-JEM)
CROSS-JEM enables transformer-based models to jointly score multiple items for a query.
It achieves state-of-the-art accuracy and over 4x lower ranking latency over standard cross-encoders.
arXiv Detail & Related papers (2024-09-15T17:05:35Z)
- Large Language Models for Relevance Judgment in Product Search [48.56992980315751]
High relevance of retrieved and re-ranked items to the search query is the cornerstone of successful product search.
We present an array of techniques for leveraging Large Language Models (LLMs) for automating the relevance judgment of query-item pairs (QIPs) at scale.
Our findings have immediate implications for the growing field of relevance judgment automation in product search.
arXiv Detail & Related papers (2024-06-01T00:52:41Z)
- Large Language Models are Zero-Shot Rankers for Recommender Systems [76.02500186203929]
This work aims to investigate the capacity of large language models (LLMs) to act as the ranking model for recommender systems.
We show that LLMs have promising zero-shot ranking abilities but struggle to perceive the order of historical interactions.
We demonstrate that these issues can be alleviated using specially designed prompting and bootstrapping strategies.
arXiv Detail & Related papers (2023-05-15T17:57:39Z)
- Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning [56.17603785248675]
Model-agnostic meta-learning (MAML) has become a popular research area.
Existing MAML algorithms rely on the 'episode' idea by sampling a few tasks and data points to update the meta-model at each iteration.
This paper proposes memory-based algorithms for MAML that converge with vanishing error.
arXiv Detail & Related papers (2021-06-09T08:47:58Z)
- Sample-Rank: Weak Multi-Objective Recommendations Using Rejection Sampling [0.5156484100374059]
We introduce a method involving multi-goal sampling followed by ranking for user-relevance (Sample-Rank) to nudge recommendations towards multi-objective goals of the marketplace.
The proposed method's novelty is that it reduces the MO recommendation problem to sampling from a desired multi-goal distribution and then using the samples to build a production-friendly learning-to-rank model.
arXiv Detail & Related papers (2020-08-24T09:17:18Z)
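The Sample-Rank summary describes sampling candidates from a desired multi-goal distribution before ranking. A minimal, hypothetical sketch of that sampling step via rejection sampling, where the acceptance probability is proportional to a blended multi-goal score (the item attributes, blend weights, and `goal_score` function are assumptions for illustration):

```python
import random

def rejection_sample(candidates, goal_score, max_score, n, rng=random):
    """Accept each drawn candidate with probability proportional to its
    multi-goal score, so the accepted set follows the desired multi-goal
    distribution. The accepted samples would then feed a learning-to-rank
    model in a Sample-Rank-style pipeline."""
    accepted = []
    while len(accepted) < n:
        item = rng.choice(candidates)
        if rng.random() < goal_score(item) / max_score:
            accepted.append(item)
    return accepted

# Toy example: items with relevance and marketplace-margin attributes;
# the multi-goal score blends the two objectives.
random.seed(0)
items = [{'id': i, 'rel': random.random(), 'margin': random.random()}
         for i in range(50)]
score = lambda it: 0.7 * it['rel'] + 0.3 * it['margin']
sample = rejection_sample(items, score, max_score=1.0, n=10)
```

Rejection sampling keeps the ranking model itself unchanged, which is what makes the approach production-friendly: the multi-objective trade-off lives entirely in the sampling distribution.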
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.