Stop Relying on No-Choice and Do not Repeat the Moves: Optimal,
Efficient and Practical Algorithms for Assortment Optimization
- URL: http://arxiv.org/abs/2402.18917v1
- Date: Thu, 29 Feb 2024 07:17:04 GMT
- Title: Stop Relying on No-Choice and Do not Repeat the Moves: Optimal,
Efficient and Practical Algorithms for Assortment Optimization
- Authors: Aadirupa Saha, Pierre Gaillard
- Abstract summary: We develop efficient algorithms for regret minimization in assortment selection with Plackett-Luce (PL) based user choices.
Our methods are practical, provably optimal, and devoid of the aforementioned limitations of the existing methods.
- Score: 38.57171985309975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of active online assortment optimization with
preference feedback, which is a framework for modeling user choices and
subsetwise utility maximization. The framework is useful in various real-world
applications including ad placement, online retail, recommender systems, and
fine-tuning language models, among many others. Although the problem has been
studied in the past, it lacks an intuitive and practical solution approach that
is simultaneously computationally efficient and optimal in regret. For example,
popularly used assortment selection algorithms often require the presence of a
`strong reference' item that is always included in the choice sets; further,
they are designed to offer the same assortment repeatedly until the reference
item gets selected. Such requirements are quite unrealistic for practical
applications. In this paper, we design efficient algorithms for regret
minimization in assortment selection with \emph{Plackett-Luce} (PL) based user
choices. We derive a novel concentration guarantee for estimating the score
parameters of the PL model using `\emph{Pairwise Rank-Breaking}', which builds
the foundation of our proposed algorithms. Moreover, our methods are practical,
provably optimal, and free of the aforementioned limitations of existing
methods. Empirical evaluations corroborate our findings, and our algorithms
outperform the existing baselines.
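To make the setting concrete, below is a minimal sketch of the Plackett-Luce choice model the abstract refers to, together with the idea behind pairwise rank-breaking: listwise feedback over an assortment is decomposed into pairwise win counts, from which relative PL scores can be estimated. This is not the paper's algorithm; the score vector, assortment sizes, and sample counts are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 6
# Latent PL scores (unknown to the learner); illustrative values only.
theta = rng.uniform(0.5, 2.0, size=n_items)

def pl_choice_probs(assortment, scores):
    """P(item i is picked from assortment S) = scores[i] / sum_{j in S} scores[j]."""
    s = np.array([scores[i] for i in assortment])
    return s / s.sum()

def sample_pl_ranking(assortment, scores):
    """Sample a full PL ranking of an assortment by repeatedly drawing the top item."""
    remaining = list(assortment)
    ranking = []
    while remaining:
        p = pl_choice_probs(remaining, scores)
        pick = rng.choice(len(remaining), p=p)
        ranking.append(remaining.pop(pick))
    return ranking

# Pairwise rank-breaking: turn each observed ranking into pairwise wins.
wins = np.zeros((n_items, n_items))
for _ in range(2000):
    assortment = rng.choice(n_items, size=4, replace=False)
    ranking = sample_pl_ranking(list(assortment), theta)
    for pos, i in enumerate(ranking):
        for j in ranking[pos + 1:]:
            wins[i, j] += 1  # i was ranked above j

# Empirical P(i beats j); under PL this concentrates around theta_i / (theta_i + theta_j).
totals = wins + wins.T
with np.errstate(invalid="ignore", divide="ignore"):
    p_hat = np.where(totals > 0, wins / totals, np.nan)

i, j = 0, 1
print(f"empirical P({i} beats {j}) = {p_hat[i, j]:.3f}, "
      f"PL prediction = {theta[i] / (theta[i] + theta[j]):.3f}")
```

Note that when only the single top choice from each offered assortment is observed, the PL model reduces to the familiar multinomial-logit (MNL) choice model used by several of the related papers below.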
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving it.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - RIGA: A Regret-Based Interactive Genetic Algorithm [14.388696798649658]
We propose an interactive genetic algorithm for solving multi-objective optimization problems under preference imprecision.
Our algorithm, called RIGA, can be applied to any multi-objective optimization problem provided that the aggregation function is linear in its parameters.
For several performance indicators (computation times, gap to optimality and number of queries), RIGA obtains better results than state-of-the-art algorithms.
arXiv Detail & Related papers (2023-11-10T13:56:15Z) - Online Joint Assortment-Inventory Optimization under MNL Choices [14.530542487845732]
We study an online joint assortment-inventory optimization problem, in which we assume that the choice behavior of each customer follows the Multinomial Logit (MNL) choice model.
We propose a novel algorithm that can effectively balance the exploration and exploitation in the online decision-making of assortment and inventory.
arXiv Detail & Related papers (2023-04-04T09:25:34Z) - PASTA: Pessimistic Assortment Optimization [25.51792135903357]
We consider a class of assortment optimization problems in an offline data-driven setting.
We propose an algorithm referred to as Pessimistic ASsortment opTimizAtion (PASTA) based on the principle of pessimism.
arXiv Detail & Related papers (2023-02-08T01:11:51Z) - Uncertainty-Aware Search Framework for Multi-Objective Bayesian
Optimization [40.40632890861706]
We consider the problem of multi-objective (MO) blackbox optimization using expensive function evaluations.
We propose a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluation.
arXiv Detail & Related papers (2022-04-12T16:50:48Z) - Fast Feature Selection with Fairness Constraints [49.142308856826396]
We study the fundamental problem of selecting optimal features for model construction.
This problem is computationally challenging on large datasets, even with the use of greedy algorithm variants.
We extend the adaptive query model, recently proposed for the greedy forward selection for submodular functions, to the faster paradigm of Orthogonal Matching Pursuit for non-submodular functions.
The proposed algorithm achieves exponentially fast parallel run time in the adaptive query model, scaling much better than prior work.
arXiv Detail & Related papers (2022-02-28T12:26:47Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z) - Opytimizer: A Nature-Inspired Python Optimizer [0.0]
It aims at selecting a feasible set of parameters in an attempt to solve a particular problem.
We propose a Python-based meta-heuristic optimization framework called Opytimizer.
arXiv Detail & Related papers (2019-12-30T16:50:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.