Optimizing Ranking Systems Online as Bandits
- URL: http://arxiv.org/abs/2110.05807v1
- Date: Tue, 12 Oct 2021 08:07:46 GMT
- Title: Optimizing Ranking Systems Online as Bandits
- Authors: Chang Li
- Abstract summary: We study and propose solutions for four challenges in optimizing ranking systems online.
First, effectiveness concerns how fast the algorithm learns from interactions.
Second, the deployed algorithm should be safe, meaning it only displays reasonable content in response to user requests.
Third, as users change their preferences constantly, the algorithm should handle nonstationarity.
Fourth, the content of ranked lists should be diverse.
- Score: 2.282430531490685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ranking systems are the core part of modern retrieval and recommender
systems, where the goal is to rank candidate items given user contexts. Optimizing
ranking systems online means that the deployed system can serve user requests,
e.g., queries in web search, and optimize the ranking policy by learning from user
interactions, e.g., clicks. Bandits are a general online learning framework and can
be used for this optimization task. However, due to the unique features of ranking,
there are several challenges in designing bandit algorithms for ranking system
optimization. In this dissertation, we study and propose solutions for four
challenges in optimizing ranking systems online: effectiveness, safety,
nonstationarity, and diversification. First, effectiveness concerns how fast the
algorithm learns from interactions. We study the online ranker evaluation task and
propose the MergeDTS algorithm to solve it effectively. Second, the deployed
algorithm should be safe, meaning it only displays reasonable content in response
to user requests. To solve the safe online learning to rank problem, we propose the
BubbleRank algorithm. Third, as users change their preferences constantly, the
algorithm should handle nonstationarity. We formulate this nonstationary online
learning to rank problem as cascade non-stationary bandits and propose the
CascadeDUCB and CascadeSWUCB algorithms to solve it. Finally, the content of ranked
lists should be diverse. We consider the result diversification task and propose
the CascadeHybrid algorithm, which considers both item relevance and result
diversification when learning from user interactions.
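To make the cascade non-stationary bandit formulation concrete, the sketch below simulates a sliding-window UCB rule under the cascade click model, in the spirit of CascadeSWUCB. The attraction probabilities, window length, confidence constant, and the abrupt preference shift are illustrative assumptions, not the dissertation's exact setup.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

L, K, T = 20, 5, 20_000              # candidate items, list length, rounds
window = 2_000                        # sliding-window length (assumed)
attraction = rng.uniform(0.05, 0.4, size=L)   # hypothetical attraction probabilities

history = deque()                     # (item, reward) observations inside the window
pulls = np.zeros(L)                   # windowed observation counts per item
wins = np.zeros(L)                    # windowed click counts per item

def sw_ucb_index(t):
    """Sliding-window UCB index: windowed click rate plus an exploration bonus."""
    mean = wins / np.maximum(pulls, 1)
    bonus = np.sqrt(1.5 * np.log(min(t + 2, window)) / np.maximum(pulls, 1))
    return np.where(pulls > 0, mean + bonus, np.inf)   # unseen items are tried first

for t in range(T):
    if t == T // 2:                   # abrupt preference shift (nonstationarity)
        attraction = rng.permutation(attraction)
    ranked = np.argsort(-sw_ucb_index(t))[:K]          # show the K highest-index items
    clicks = rng.random(K) < attraction[ranked]
    first_click = int(np.argmax(clicks)) if clicks.any() else K
    # Cascade feedback: items above the first click were examined but not clicked;
    # items below the first click give no feedback.
    for pos in range(min(first_click + 1, K)):
        item, reward = ranked[pos], float(pos == first_click)
        history.append((item, reward))
        pulls[item] += 1
        wins[item] += reward
    while len(history) > window:       # forget observations outside the window
        old_item, old_reward = history.popleft()
        pulls[old_item] -= 1
        wins[old_item] -= old_reward

print("estimated top items:", np.argsort(-(wins / np.maximum(pulls, 1)))[:K])
print("true top items:     ", np.argsort(-attraction)[:K])
```

The sliding window is what lets the index track the preference shift at round T // 2; a stationary UCB would keep averaging over stale clicks.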
Related papers
- Adaptively Learning to Select-Rank in Online Platforms [34.258659206323664]
This research addresses the challenge of adaptively ranking items from a candidate pool for heterogeneous users.
We develop a user response model that considers diverse user preferences and the varying effects of item positions.
Experiments conducted on both simulated and real-world datasets demonstrate our algorithm outperforms the baseline.
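The paper's user response model is not spelled out in this snippet; one common way to combine user-specific preferences with position effects is a position-based click model, sketched below with assumed relevance and examination values.

```python
import numpy as np

rng = np.random.default_rng(1)

n_users, n_items, list_len = 3, 8, 4
# Hypothetical user-specific relevance and a shared examination curve over positions;
# both are illustrative assumptions, not the paper's fitted response model.
relevance = rng.uniform(0.1, 0.9, size=(n_users, n_items))
examination = 1.0 / np.arange(1, list_len + 1)        # top positions are examined more often

def simulate_clicks(user, ranking):
    """Position-based response: P(click) = examination(position) * relevance(user, item)."""
    probs = examination * relevance[user, ranking]
    return rng.random(list_len) < probs

ranking = np.argsort(-relevance[0])[:list_len]         # rank items for user 0 by relevance
print("ranking:", ranking)
print("clicks: ", simulate_clicks(0, ranking))
```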
arXiv Detail & Related papers (2024-06-07T15:33:48Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection [75.84584400866254]
We propose a new algorithm selector leveraging special forests, combining the strengths of both approaches while alleviating their weaknesses.
HARRIS' decisions are based on a forest model whose trees are built by optimizing a hybrid ranking and regression loss function.
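The exact hybrid objective is not given in the snippet; the sketch below shows one plausible blend of a pairwise ranking loss and a regression loss of the kind such a forest could optimize. The discordant-pair term, the MSE term, and the weight `lam` are assumptions for illustration, not HARRIS' exact loss.

```python
import numpy as np

def hybrid_loss(pred, runtime, lam=0.5):
    """Weighted blend of a pairwise ranking loss and a regression loss.

    `lam` trades off ranking quality (discordant-pair rate over algorithm pairs)
    against squared error on predicted runtimes.
    """
    mse = np.mean((pred - runtime) ** 2)
    n = len(pred)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    discordant = sum(
        1 for i, j in pairs
        if (runtime[i] - runtime[j]) * (pred[i] - pred[j]) < 0
    )
    ranking = discordant / max(len(pairs), 1)
    return lam * ranking + (1 - lam) * mse

# Example: predicted vs. true runtimes of four candidate algorithms on one instance.
print(hybrid_loss(np.array([1.0, 2.0, 3.0, 4.0]), np.array([1.2, 1.9, 3.5, 3.9])))
```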
arXiv Detail & Related papers (2022-10-31T14:06:11Z)
- Fast online ranking with fairness of exposure [29.134493256287072]
We show that our algorithm is computationally fast, generating rankings on the fly with computation cost dominated by the sort operation; it is also memory efficient and has strong theoretical guarantees.
Compared to baseline policies that only maximize user-side performance, our algorithm can incorporate complex fairness-of-exposure criteria into the recommendations with negligible computational overhead.
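As a rough illustration of sort-dominated online ranking with exposure fairness, the sketch below boosts the score of items that lag behind an equal exposure target and then simply sorts. The target, the penalty weight `alpha`, and the DCG-style position weights are assumptions; this is not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

n_items, k, n_requests = 50, 10, 1_000
relevance = rng.uniform(size=n_items)                 # hypothetical static relevance scores
position_weight = 1.0 / np.log2(np.arange(2, k + 2))  # exposure gained at each displayed rank
target = np.full(n_items, position_weight.sum() / n_items)  # equal exposure target per request
alpha = 0.1                                           # fairness strength (assumed)

exposure = np.zeros(n_items)                          # cumulative exposure per item

for t in range(1, n_requests + 1):
    # Score = relevance plus a bonus for items behind their exposure target;
    # producing the ranking is just a sort, so per-request cost is dominated by O(n log n).
    score = relevance + alpha * (t * target - exposure)
    ranking = np.argsort(-score)[:k]
    exposure[ranking] += position_weight

print("least exposed item got:", exposure.min())
print("most exposed item got: ", exposure.max())
```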
arXiv Detail & Related papers (2022-09-13T12:35:36Z)
- Which Tricks Are Important for Learning to Rank? [32.38701971636441]
State-of-the-art learning-to-rank methods are based on gradient-boosted decision trees (GBDT).
In this paper, we thoroughly analyze several GBDT-based ranking algorithms in a unified setup.
As a result, we gain insights into learning-to-rank techniques and obtain a new state-of-the-art algorithm.
arXiv Detail & Related papers (2022-04-04T13:59:04Z)
- Heuristic Search for Rank Aggregation with Application to Label Ranking [16.275063634853584]
We propose an effective hybrid evolutionary ranking algorithm to solve the rank aggregation problem.
The algorithm features a semantic crossover based on concordant pairs and a late acceptance local search reinforced by an efficient incremental evaluation technique.
Experiments are conducted to assess the algorithm, indicating highly competitive performance on benchmark instances.
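One plausible reading of a crossover "based on concordant pairs" is an operator that preserves every pairwise order both parent rankings agree on and randomizes the rest; the sketch below implements that reading and is not the paper's exact operator.

```python
import random

def concordant_crossover(p1, p2, rng=random.Random(3)):
    """Crossover that keeps every pairwise order both parents agree on.

    p1, p2 are rankings given as lists of items (best first). Agreed pairs form an
    acyclic relation (a subset of p1's total order), so a randomized topological
    sort yields a valid child ranking.
    """
    pos1 = {item: i for i, item in enumerate(p1)}
    pos2 = {item: i for i, item in enumerate(p2)}
    items = list(p1)
    successors = {i: set() for i in items}   # items that must come after key item
    indegree = {i: 0 for i in items}
    for a in items:
        for b in items:
            if a != b and pos1[a] < pos1[b] and pos2[a] < pos2[b]:
                successors[a].add(b)
                indegree[b] += 1
    child, ready = [], [i for i in items if indegree[i] == 0]
    while ready:
        nxt = ready.pop(rng.randrange(len(ready)))   # random tie-breaking
        child.append(nxt)
        for b in successors[nxt]:
            indegree[b] -= 1
            if indegree[b] == 0:
                ready.append(b)
    return child

print(concordant_crossover(["a", "b", "c", "d"], ["b", "a", "c", "d"]))
```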
arXiv Detail & Related papers (2022-01-11T11:43:17Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping space and time complexity independent of the time horizon.
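A minimal sketch of the bandit view of OAS with censored runtimes: each candidate algorithm is an arm, observed runtimes are capped at a cutoff, and a lower-confidence-bound rule picks the algorithm with the smallest optimistic capped runtime. The runtime distributions, cutoff, and confidence width are assumptions, not the paper's adapted losses.

```python
import numpy as np

rng = np.random.default_rng(4)

n_algos, cutoff, horizon = 4, 10.0, 5_000
true_means = np.array([3.0, 5.0, 2.0, 8.0])   # hypothetical mean runtimes, unknown to the agent

counts = np.zeros(n_algos)
capped_sum = np.zeros(n_algos)                # sum of runtimes, censored at the cutoff

for t in range(1, horizon + 1):
    # Lower-confidence-bound rule on the capped runtime (we are minimizing).
    means = capped_sum / np.maximum(counts, 1)
    width = cutoff * np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
    index = np.where(counts > 0, means - width, -np.inf)   # untried algorithms go first
    arm = int(np.argmin(index))
    runtime = rng.exponential(true_means[arm])
    observed = min(runtime, cutoff)           # censored feedback: only the capped runtime is seen
    counts[arm] += 1
    capped_sum[arm] += observed

print("pulls per algorithm:", counts.astype(int))
```

Only per-arm counts and sums are stored, so memory does not grow with the horizon.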
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature.
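PiRank builds its surrogates on a temperature-controlled relaxation of the sorting operator; the sketch below shows a NeuralSort-style relaxation of this kind, where each row of the soft permutation matrix collapses to a one-hot vector as the temperature goes to zero. It illustrates only the relaxation, not PiRank's loss itself.

```python
import numpy as np

def soft_sort_matrix(scores, tau):
    """NeuralSort-style relaxation of the descending-sort permutation matrix.

    Row i is a softmax that, as tau -> 0, concentrates on the index of the
    i-th largest score.
    """
    s = np.asarray(scores, dtype=float)
    n = s.shape[0]
    pairwise = np.abs(s[:, None] - s[None, :])            # |s_j - s_k|
    logits = ((n + 1 - 2 * np.arange(1, n + 1))[:, None] * s[None, :]
              - pairwise.sum(axis=1)[None, :]) / tau
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)

scores = [0.3, 2.0, 1.1]
for tau in (1.0, 0.1, 0.01):
    print(f"tau={tau}:\n{np.round(soft_sort_matrix(scores, tau), 3)}")
# As tau shrinks, each row approaches a one-hot vector picking the next-largest score.
```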
arXiv Detail & Related papers (2020-12-12T05:07:36Z)
- Exploration in two-stage recommender systems [79.50534282841618]
Two-stage recommender systems are widely adopted in industry due to their scalability and maintainability.
A key challenge of this setup is that optimal performance of each stage in isolation does not imply optimal global performance.
We propose a method of synchronising the exploration strategies between the ranker and the nominators.
arXiv Detail & Related papers (2020-09-01T16:52:51Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
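The dyadic idea can be sketched as follows: each (instance, algorithm) pair is represented by a joint feature vector, a single regressor is trained over observed dyads, and selection scores all candidate algorithms for a new instance. The synthetic data, the concatenation-style dyad features, and the scikit-learn regressor are assumptions for illustration, not the paper's models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

n_instances, n_algorithms = 200, 1_000     # "extreme" setting: a very large candidate set
d_inst, d_algo = 10, 6
X_inst = rng.normal(size=(n_instances, d_inst))
X_algo = rng.normal(size=(n_algorithms, d_algo))

# Synthetic performance with an instance-algorithm interaction (unknown to the learner).
W = rng.normal(size=(d_inst, d_algo))
perf = X_inst @ W @ X_algo.T + 0.1 * rng.normal(size=(n_instances, n_algorithms))

# Dyadic representation: one training row per observed (instance, algorithm) pair,
# here a plain concatenation of the two feature vectors.
pairs = [(i, int(rng.integers(n_algorithms))) for i in range(n_instances) for _ in range(20)]
X_dyad = np.array([np.concatenate([X_inst[i], X_algo[a]]) for i, a in pairs])
y_dyad = np.array([perf[i, a] for i, a in pairs])

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_dyad, y_dyad)

def select_algorithm(instance_features):
    """Score every candidate algorithm for one instance and return the best predicted one."""
    dyads = np.hstack([np.tile(instance_features, (n_algorithms, 1)), X_algo])
    return int(np.argmax(model.predict(dyads)))

print("selected algorithm for instance 0:", select_algorithm(X_inst[0]))
```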
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.