PASTO: Strategic Parameter Optimization in Recommendation Systems --
Probabilistic is Better than Deterministic
- URL: http://arxiv.org/abs/2108.09076v1
- Date: Fri, 20 Aug 2021 09:02:58 GMT
- Title: PASTO: Strategic Parameter Optimization in Recommendation Systems --
Probabilistic is Better than Deterministic
- Authors: Weicong Ding, Hanlin Tang, Jingshuo Feng, Lei Yuan, Sen Yang, Guangxu
Yang, Jie Zheng, Jing Wang, Qiang Su, Dong Zheng, Xuezhong Qiu, Yongqi Liu,
Yuxuan Chen, Yang Liu, Chao Song, Dongying Kong, Kai Ren, Peng Jiang, Qiao
Lian, Ji Liu
- Abstract summary: We show that a probabilistic strategic parameter regime can achieve better value compared to the standard regime of finding a single deterministic parameter.
Our approach is applied in a popular social network platform with hundreds of millions of daily users.
- Score: 33.174973495620215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world recommendation systems often consist of two phases. In the first
phase, multiple predictive models produce the probability of different
immediate user actions. In the second phase, these predictions are aggregated
according to a set of 'strategic parameters' to meet a diverse set of business
goals, such as longer user engagement, higher revenue potential, or more
community/network interactions. In addition to building accurate predictive
models, it is also crucial to optimize this set of 'strategic parameters' so
that primary goals are optimized while secondary guardrails are not hurt. In
this setting with multiple and constrained goals, this paper discovers that a
probabilistic strategic parameter regime can achieve better value compared to
the standard regime of finding a single deterministic parameter. The new
probabilistic regime is to learn the best distribution over strategic parameter
choices and sample one strategic parameter from the distribution when each user
visits the platform. To pursue the optimal probabilistic solution, we formulate
the problem into a stochastic compositional optimization problem, in which the
unbiased stochastic gradient is unavailable. Our approach is applied in a
popular social network platform with hundreds of millions of daily users and
achieves +0.22% lift of user engagement in a recommendation task and +1.7% lift
in revenue in an advertising optimization scenario compared to using the best
deterministic parameter strategy.
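The probabilistic regime described above can be sketched in a few lines: keep a learnable distribution over a discrete grid of strategic parameter choices, sample one choice per user visit, and update the distribution from the observed outcome. The sketch below is illustrative only, with a hypothetical two-goal parameter grid and a simulated reward; it uses a simple REINFORCE-style score-function update with a running baseline, not the paper's stochastic compositional optimization algorithm (for which an unbiased stochastic gradient is unavailable).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical grid of strategic parameters: weights that aggregate two
# predicted action probabilities (e.g. click, follow) into a ranking score.
param_grid = np.array([[1.0, 0.0], [0.7, 0.3], [0.5, 0.5], [0.3, 0.7]])

logits = np.zeros(len(param_grid))  # learnable distribution over choices

def choice_probs(logits):
    """Softmax over the discrete strategic parameter choices."""
    p = np.exp(logits - logits.max())
    return p / p.sum()

def simulated_reward(theta):
    """Stand-in for the primary business goal observed after serving a user."""
    return theta @ np.array([0.2, 0.8]) + 0.05 * rng.standard_normal()

lr, baseline = 0.5, 0.0
for step in range(2000):
    p = choice_probs(logits)
    # Probabilistic regime: sample ONE strategic parameter per user visit.
    idx = rng.choice(len(param_grid), p=p)
    r = simulated_reward(param_grid[idx])
    # Score-function gradient of E[r] w.r.t. the logits, with a baseline.
    grad = -p.copy()
    grad[idx] += 1.0            # d log p(idx) / d logits
    logits += lr * (r - baseline) * grad
    baseline = 0.99 * baseline + 0.01 * r  # running average as baseline

learned_dist = choice_probs(logits)
best = param_grid[np.argmax(logits)]
```

In this toy setup the distribution concentrates on the highest-reward choice; in the constrained multi-goal setting of the paper, the optimal distribution can instead mix several parameter choices, which is why the probabilistic regime can dominate any single deterministic parameter.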
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to model potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z) - There is No Silver Bullet: Benchmarking Methods in Predictive Combinatorial Optimization [59.27851754647913]
Predictive optimization precisely models many real-world applications, including energy-cost-aware scheduling and budget allocation in advertising.
There is no systematic benchmark of both approaches, including the specific design choices at the module level.
Our study shows that PnO approaches are better than PtO on 7 out of 8 benchmarks, but there is no silver bullet found for the specific design choices of PnO.
arXiv Detail & Related papers (2023-11-13T13:19:34Z) - Learning Regions of Interest for Bayesian Optimization with Adaptive
Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z) - Opportunistic Qualitative Planning in Stochastic Systems with Incomplete
Preferences over Reachability Objectives [24.11353445650682]
Preferences play a key role in determining what goals/constraints to satisfy when not all constraints can be satisfied simultaneously.
We present an algorithm to synthesize the SPI and SASI strategies that induce multiple sequential improvements.
arXiv Detail & Related papers (2022-10-04T19:53:08Z) - Understanding the Effect of Stochasticity in Policy Optimization [86.7574122154668]
We show that the preferability of optimization methods depends critically on whether exact gradients are used.
Second, to explain these findings we introduce the concept of committal rate for policy optimization.
Third, we show that in the absence of external oracle information, there is an inherent trade-off between exploiting geometry to accelerate convergence versus achieving optimality almost surely.
arXiv Detail & Related papers (2021-10-29T06:35:44Z) - Improving Hyperparameter Optimization by Planning Ahead [3.8673630752805432]
We propose a novel transfer learning approach, defined within the context of model-based reinforcement learning.
We propose a new variant of model predictive control which employs a simple look-ahead strategy as a policy.
Our experiments on three meta-datasets comparing to state-of-the-art HPO algorithms show that the proposed method can outperform all baselines.
arXiv Detail & Related papers (2021-10-15T11:46:14Z) - Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has been done to automate machine learning algorithms, highlighting the importance of model choice.
Addressing analytical tractability and computational feasibility in an idealized fashion helps ensure both efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z) - Mixed Strategies for Robust Optimization of Unknown Objectives [93.8672371143881]
We consider robust optimization problems, where the goal is to optimize an unknown objective function against the worst-case realization of an uncertain parameter.
We design a novel sample-efficient algorithm GP-MRO, which sequentially learns about the unknown objective from noisy point evaluations.
GP-MRO seeks to discover a robust and randomized mixed strategy, that maximizes the worst-case expected objective value.
arXiv Detail & Related papers (2020-02-28T09:28:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.