Sample-Rank: Weak Multi-Objective Recommendations Using Rejection Sampling
- URL: http://arxiv.org/abs/2008.10277v1
- Date: Mon, 24 Aug 2020 09:17:18 GMT
- Title: Sample-Rank: Weak Multi-Objective Recommendations Using Rejection Sampling
- Authors: Abhay Shukla, Jairaj Sathyanarayana, Dipyaman Banerjee
- Abstract summary: We introduce a method involving multi-goal sampling followed by ranking for user-relevance (Sample-Rank) to nudge recommendations towards multi-objective goals of the marketplace.
The proposed method's novelty is that it reduces the MO recommendation problem to sampling from a desired multi-goal distribution then using it to build a production-friendly learning-to-rank model.
- Score: 0.5156484100374059
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online food ordering marketplaces are multi-stakeholder systems where
recommendations impact the experience and growth of each participant in the
system. A recommender system in this setting has to encapsulate the objectives
and constraints of different stakeholders in order to find utility of an item
for recommendation. Constrained-optimization based approaches to this problem
typically involve complex formulations and have high computational complexity
in production settings involving millions of entities. Simplifications and
relaxation techniques (for example, scalarization) help but introduce
sub-optimality and can be time-consuming due to the amount of tuning needed. In
this paper, we introduce a method involving multi-goal sampling followed by
ranking for user-relevance (Sample-Rank), to nudge recommendations towards
multi-objective (MO) goals of the marketplace. The proposed method's novelty is
that it reduces the MO recommendation problem to sampling from a desired
multi-goal distribution then using it to build a production-friendly
learning-to-rank (LTR) model. In offline experiments we show that we are able
to bias recommendations towards MO criteria with acceptable trade-offs in
metrics like AUC and NDCG. We also show results from a large-scale online A/B
experiment where this approach gave a statistically significant lift of 2.64%
in average revenue per order (RPO) (objective #1) with no drop in conversion
rate (CR) (objective #2) while holding the average last-mile traversed flat
(objective #3), vs. the baseline ranking method. This method also significantly
reduces time to model development and deployment in MO settings and allows for
trivial extensions to more objectives and other types of LTR models.
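The abstract does not come with code, so the snippet below is only a minimal sketch of the two-stage idea it describes, run on synthetic data: first tilt logged impressions toward a desired multi-goal distribution via rejection sampling, then fit an ordinary relevance model on the accepted sample. The price buckets, the linear goal weights, and the use of a pointwise logistic model as a stand-in for the production learning-to-rank model are all illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic logged impressions: features = [price, last_mile_km], label = converted.
n = 20_000
price = rng.gamma(shape=2.0, scale=150.0, size=n)          # skewed toward cheaper items
last_mile = rng.uniform(0.5, 8.0, size=n)
converted = (rng.random(n) < 1.0 / (1.0 + np.exp(0.004 * (price - 300.0)))).astype(int)

# Stage 1 -- rejection sampling toward a desired multi-goal distribution:
# bucket impressions by price decile and accept each one with a probability
# given by a per-bucket goal weight that tilts the training sample toward
# higher-revenue items (the weights here are purely illustrative).
edges = np.quantile(price, np.linspace(0.0, 1.0, 11))
bucket = np.clip(np.digitize(price, edges[1:-1]), 0, 9)
goal_weight = np.linspace(0.4, 1.0, 10)                    # assumed linear tilt
accepted = rng.random(n) < goal_weight[bucket]

X = np.column_stack([price, last_mile])
X_sample, y_sample = X[accepted], converted[accepted]

# Stage 2 -- fit an ordinary relevance model on the re-sampled data; a pointwise
# logistic model stands in for the production LTR model. At serving time only
# this single model is scored, with no per-request constrained optimization.
ltr = LogisticRegression(max_iter=1000).fit(X_sample, y_sample)
scores = ltr.predict_proba(X)[:, 1]

top100 = np.argsort(-scores)[:100]
print("mean price of top-ranked items:", round(float(price[top100].mean()), 1))
```

In this framing, changing the marketplace objectives only changes how the training sample is drawn; the ranking model itself, and its deployment path, stay the same, which is the production simplification the abstract emphasizes.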
Related papers
- Large Language Model Empowered Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Models (LLMs) have the potential to understand the semantic connections between items, regardless of their popularity.
We present LLMEmb, an innovative technique that harnesses LLMs to create item embeddings that bolster the performance of Sequential Recommender Systems.
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
- Improved Diversity-Promoting Collaborative Metric Learning for Recommendation [127.08043409083687]
Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems.
This paper focuses on a challenging scenario where a user has multiple categories of interests.
We propose a novel method called Diversity-Promoting Collaborative Metric Learning (DPCML).
arXiv Detail & Related papers (2024-09-02T07:44:48Z)
- DLCRec: A Novel Approach for Managing Diversity in LLM-Based Recommender Systems [9.433227503973077]
We propose DLCRec, a novel framework designed to enable fine-grained control over diversity in LLM-based recommendations.
Unlike traditional methods, DLCRec adopts a fine-grained task decomposition strategy, breaking down the recommendation process into three sub-tasks.
We introduce two data augmentation techniques that enhance the model's robustness to noisy and out-of-distribution data.
arXiv Detail & Related papers (2024-08-22T15:10:56Z)
- DimeRec: A Unified Framework for Enhanced Sequential Recommendation via Generative Diffusion Models [39.49215596285211]
Sequential Recommendation (SR) plays a pivotal role in recommender systems by tailoring recommendations to user preferences based on their non-stationary historical interactions.
We propose a novel framework called DimeRec that combines a guidance extraction module (GEM) and a generative diffusion aggregation module (DAM).
Our numerical experiments demonstrate that DimeRec significantly outperforms established baseline methods across three publicly available datasets.
arXiv Detail & Related papers (2024-08-22T06:42:09Z)
- Regression-aware Inference with LLMs [52.764328080398805]
We show that a standard inference strategy can be sub-optimal for common regression and scoring evaluation metrics.
We propose alternate inference strategies that estimate the Bayes-optimal solution for regression and scoring metrics in closed form from sampled responses (a minimal aggregation sketch appears after this list).
arXiv Detail & Related papers (2024-03-07T03:24:34Z)
- Adaptive Neural Ranking Framework: Toward Maximized Business Goal for Cascade Ranking Systems [33.46891569350896]
Cascade ranking is widely used for large-scale top-k selection problems in online advertising and recommendation systems.
Previous works on learning-to-rank usually focus on letting the model learn the complete order or top-k order.
We name this method the Adaptive Neural Ranking Framework (abbreviated as ARF).
arXiv Detail & Related papers (2023-10-16T14:43:02Z)
- Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates estimation and planning components while automatically balancing exploration and exploitation.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z)
- Choosing the Best of Both Worlds: Diverse and Novel Recommendations through Multi-Objective Reinforcement Learning [68.45370492516531]
We introduce Scalarized Multi-Objective Reinforcement Learning (SMORL) for the Recommender Systems (RS) setting.
The SMORL agent augments standard recommendation models with additional RL layers that push it to simultaneously satisfy three principal objectives: accuracy, diversity, and novelty of recommendations.
Our experimental results on two real-world datasets reveal a substantial increase in aggregate diversity, a moderate increase in accuracy, reduced repetitiveness of recommendations, and demonstrate the importance of reinforcing diversity and novelty as complementary objectives.
arXiv Detail & Related papers (2021-10-28T13:22:45Z)
- Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z)
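As noted next to the "Regression-aware Inference with LLMs" entry above, the following is a minimal sketch of aggregating several sampled responses with the statistic that is Bayes-optimal for the target metric (the mean for squared error, the median for absolute error). It assumes the model's numeric answers have already been parsed into floats; the sample values are placeholders, not outputs of any real model, and this is not the paper's exact procedure.

```python
import statistics

# Placeholder for several decoded numeric responses to the same scoring prompt.
sampled_scores = [3.0, 4.0, 4.0, 5.0, 9.0]

single_decode = sampled_scores[0]                # naive: keep one decoded answer
mse_estimate = statistics.fmean(sampled_scores)  # mean minimizes expected squared error
mae_estimate = statistics.median(sampled_scores) # median minimizes expected absolute error

print(single_decode, mse_estimate, mae_estimate)  # 3.0 5.0 4.0
```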
This list is automatically generated from the titles and abstracts of the papers on this site.