Optimizing Offer Sets in Sub-Linear Time
- URL: http://arxiv.org/abs/2011.08606v1
- Date: Tue, 17 Nov 2020 13:02:56 GMT
- Title: Optimizing Offer Sets in Sub-Linear Time
- Authors: Vivek F. Farias, Andrew A. Li, and Deeksha Sinha
- Abstract summary: We propose an algorithm for personalized offer set optimization that runs in time sub-linear in the number of items.
Our algorithm can be entirely data-driven, relying on samples of the user, where a 'sample' refers to the user interaction data typically collected by firms.
- Score: 5.027714423258537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalization and recommendations are now accepted as core competencies in
just about every online setting, ranging from media platforms to e-commerce to
social networks. While the challenge of estimating user preferences has
garnered significant attention, the operational problem of using such
preferences to construct personalized offer sets to users is still a challenge,
particularly in modern settings where a massive number of items and a
millisecond response time requirement mean that even enumerating all of the
items is impossible. Faced with such settings, existing techniques are either
(a) entirely heuristic with no principled justification, or (b) theoretically
sound, but simply too slow to work.
Thus motivated, we propose an algorithm for personalized offer set
optimization that runs in time sub-linear in the number of items while enjoying
a uniform performance guarantee. Our algorithm works for an extremely general
class of problems and models of user choice that includes the mixed multinomial
logit model as a special case. We achieve a sub-linear runtime by leveraging
the dimensionality reduction from learning an accurate latent factor model,
along with existing sub-linear time approximate near neighbor algorithms. Our
algorithm can be entirely data-driven, relying on samples of the user, where a
'sample' refers to the user interaction data typically collected by firms. We
evaluate our approach on a massive content discovery dataset from Outbrain that
includes millions of advertisements. Results show that our implementation
indeed runs fast and outperforms existing fast heuristics.
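The core mechanism described in the abstract — score items by a learned latent-factor (inner-product) utility and use a sub-linear approximate near neighbor structure to avoid enumerating all items — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it uses random-hyperplane LSH as the approximate near neighbor index, and ranking by inner product as a stand-in for the full offer-set objective; all function names and parameters are hypothetical.

```python
import numpy as np

def build_lsh_index(item_vecs, n_planes=16, seed=0):
    """Random-hyperplane LSH: hash each item embedding into a bucket
    keyed by the sign pattern of its projections onto random planes."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, item_vecs.shape[1]))
    codes = item_vecs @ planes.T > 0          # (n_items, n_planes) booleans
    buckets = {}
    for i, code in enumerate(codes):
        buckets.setdefault(code.tobytes(), []).append(i)
    return planes, buckets

def candidate_offer_set(user_vec, item_vecs, planes, buckets, k=5):
    """Probe only the user's bucket (sub-linear in the number of items
    when buckets are balanced); fall back to a full scan if the bucket
    is too small. Rank candidates by inner product, a proxy for latent
    MNL utility."""
    code = (planes @ user_vec > 0).tobytes()
    cand = buckets.get(code, [])
    if len(cand) < k:
        cand = range(len(item_vecs))
    scores = {i: float(item_vecs[i] @ user_vec) for i in cand}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

In practice one would replace the toy LSH with a production approximate near neighbor library and use choice-model-aware scoring, but the shape of the computation — embed once, probe a small bucket per user request — is what makes per-request time sub-linear in the item count.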
Related papers
- Stop Relying on No-Choice and Do not Repeat the Moves: Optimal,
Efficient and Practical Algorithms for Assortment Optimization [38.57171985309975]
We develop efficient algorithms for regret minimization in assortment selection with Plackett-Luce (PL) based user choices.
Our methods are practical, provably optimal, and devoid of the aforementioned limitations of the existing methods.
arXiv Detail & Related papers (2024-02-29T07:17:04Z)
- Large-scale Fully-Unsupervised Re-Identification [78.47108158030213]
We propose two strategies to learn from large-scale unlabeled data.
The first strategy performs local neighborhood sampling to reduce the dataset size in each iteration without violating neighborhood relationships.
The second strategy leverages a novel Re-Ranking technique, which has a lower time upper-bound complexity and reduces the memory complexity from O(n^2) to O(kn) with k ≪ n.
arXiv Detail & Related papers (2023-07-26T16:19:19Z)
- Sample-Efficient Personalization: Modeling User Parameters as Low Rank Plus Sparse Components [30.32486162748558]
Personalization of machine learning (ML) predictions for individual users/domains/enterprises is critical for practical recommendation systems.
We propose a novel meta-learning style approach that models network weights as a sum of low-rank and sparse components.
We show that AMHT-LRS solves the problem efficiently with nearly optimal sample complexity.
arXiv Detail & Related papers (2022-10-07T12:50:34Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Fast Feature Selection with Fairness Constraints [49.142308856826396]
We study the fundamental problem of selecting optimal features for model construction.
This problem is computationally challenging on large datasets, even with the use of greedy algorithm variants.
We extend the adaptive query model, recently proposed for the greedy forward selection for submodular functions, to the faster paradigm of Orthogonal Matching Pursuit for non-submodular functions.
The proposed algorithm achieves exponentially fast parallel run time in the adaptive query model, scaling much better than prior work.
arXiv Detail & Related papers (2022-02-28T12:26:47Z)
- FAIRLEARN: Configurable and Interpretable Algorithmic Fairness [1.2183405753834557]
There is a need to mitigate any bias arising from either training samples or implicit assumptions made about the data samples.
Many approaches have been proposed to make learning algorithms fair by detecting and mitigating bias in different stages of optimization.
We propose the FAIRLEARN procedure that produces a fair algorithm by incorporating user constraints into the optimization procedure.
arXiv Detail & Related papers (2021-11-17T03:07:18Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Differentially Private Query Release Through Adaptive Projection [19.449593001368193]
We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like $k$-way marginals.
Our algorithm makes adaptive use of a continuous relaxation of the Projection Mechanism, which answers queries on the private dataset using simple perturbation.
We find that our method outperforms existing algorithms in many cases, especially when the privacy budget is small or the query class is large.
arXiv Detail & Related papers (2021-03-11T12:43:18Z)
- Learning User Preferences in Non-Stationary Environments [42.785926822853746]
We introduce a novel model for online non-stationary recommendation systems.
We show that our algorithm outperforms other static algorithms even when preferences do not change over time.
arXiv Detail & Related papers (2021-01-29T10:26:16Z)
- Fast Rates for Contextual Linear Optimization [52.39202699484225]
We show that a naive plug-in approach achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance.
Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
arXiv Detail & Related papers (2020-11-05T18:43:59Z)
- Optimal Clustering from Noisy Binary Feedback [75.17453757892152]
We study the problem of clustering a set of items from binary user feedback.
We devise an algorithm with a minimal cluster recovery error rate.
For adaptive selection, we develop an algorithm inspired by the derivation of the information-theoretical error lower bounds.
arXiv Detail & Related papers (2019-10-14T09:18:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.