Combinatorial Inference on the Optimal Assortment in Multinomial Logit
Models
- URL: http://arxiv.org/abs/2301.12254v4
- Date: Thu, 4 May 2023 00:29:37 GMT
- Authors: Shuting Shen, Xi Chen, Ethan X. Fang, Junwei Lu
- Abstract summary: Decision-makers may only be interested in testing whether a given property holds true for the optimal assortment.
This paper proposes a novel inferential framework for testing such properties.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Assortment optimization has been actively explored over the past few
decades due to its practical importance. Despite the extensive literature on
optimization algorithms and latent score estimation, uncertainty quantification
for the optimal assortment remains underexplored and is of
great practical significance. Instead of estimating and recovering the complete
optimal offer set, decision-makers may only be interested in testing whether a
given property holds true for the optimal assortment, such as whether they
should include several products of interest in the optimal set, or how many
categories of products the optimal set should include. This paper proposes a
novel inferential framework for testing such properties. We consider the widely
adopted multinomial logit (MNL) model, where we assume that each customer will
purchase an item within the offered products with a probability proportional to
the underlying preference score associated with the product. We reduce
inferring a general optimal assortment property to quantifying the uncertainty
associated with the sign change point detection of the marginal revenue gaps.
We show the asymptotic normality of the marginal revenue gap estimator, and
construct a maximum statistic via the gap estimators to detect the sign change
point. By approximating the distribution of the maximum statistic with
multiplier bootstrap techniques, we propose a valid testing procedure. We also
conduct numerical experiments to assess the performance of our method.
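To make the setup concrete, here is a minimal numerical sketch of the MNL choice probabilities, the expected revenue of an offer set, and a generic Gaussian-multiplier bootstrap approximation to the distribution of a maximum statistic. This is not the authors' implementation: the preference scores, revenues, and the influence-value construction are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mnl_choice_probs(scores, offer_set):
    """MNL choice probabilities: P(buy i | S) = v_i / (1 + sum_{j in S} v_j),
    where the outside (no-purchase) option carries weight 1."""
    v = np.zeros_like(scores)
    v[offer_set] = scores[offer_set]
    return v / (1.0 + v.sum())

# Hypothetical preference scores and per-item revenues for 5 products.
scores = np.array([0.8, 1.2, 0.5, 2.0, 0.3])
revenues = np.array([4.0, 3.0, 5.0, 2.0, 6.0])
offer_set = np.array([0, 1, 3])

probs = mnl_choice_probs(scores, offer_set)
expected_revenue = float(revenues @ probs)

def multiplier_bootstrap_max(influence, n_boot=2000, rng=rng):
    """Gaussian-multiplier bootstrap for a maximum statistic.

    influence: (n, K) array of per-observation influence values for K
    estimators; returns n_boot draws of max_k |n^{-1/2} sum_i e_i x_{ik}|.
    """
    n = influence.shape[0]
    e = rng.standard_normal((n_boot, n))   # i.i.d. N(0, 1) multipliers
    draws = e @ influence / np.sqrt(n)     # shape (n_boot, K)
    return np.abs(draws).max(axis=1)

influence = rng.standard_normal((500, 4))  # toy influence values
critical_value = float(np.quantile(multiplier_bootstrap_max(influence), 0.95))
```

Comparing each gap estimator's maximum deviation against such a bootstrap critical value is the standard way a multiplier bootstrap calibrates a max-type test.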
Related papers
- Deep Bayesian Active Learning for Preference Modeling in Large Language Models
We propose the Bayesian Active Learner for Preference Modeling (BAL-PM).
Our experiments demonstrate that BAL-PM requires 33% to 68% fewer preference labels on two popular human preference datasets and exceeds previous Bayesian acquisition policies.
arXiv Detail & Related papers (2024-06-14T13:32:43Z) - Distributed Fractional Bayesian Learning for Adaptive Optimization
This paper considers a distributed adaptive optimization problem, where all agents only have access to their local cost functions with a common unknown parameter.
We aim to provide valuable insights for addressing parameter uncertainty in distributed optimization problems and simultaneously find the optimal solution.
arXiv Detail & Related papers (2024-04-17T13:09:33Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Optimize-via-Predict: Realizing out-of-sample optimality in data-driven
optimization
We examine a formulation for data-driven optimization wherein the decision-maker is not privy to the true distribution.
We define a prescriptive solution as a decision rule mapping such a data set to decisions.
We present an optimization problem that solves for such an out-of-sample optimal solution, and does so efficiently via a combination of sampling and bisection search algorithms.
arXiv Detail & Related papers (2023-09-20T08:48:50Z) - Density Ratio Estimation-based Bayesian Optimization with
Semi-Supervised Learning
We propose density ratio estimation-based Bayesian optimization with semi-supervised learning.
We demonstrate the experimental results of our methods and several baseline methods in two distinct scenarios with unlabeled point sampling and a fixed-size pool.
arXiv Detail & Related papers (2023-05-24T23:01:56Z) - Bayesian Optimization with Conformal Prediction Sets
Conformal prediction is an uncertainty quantification method with coverage guarantees even for misspecified models.
We propose conformal Bayesian optimization, which directs queries towards regions of search space where the model predictions have guaranteed validity.
In many cases we find that query coverage can be significantly improved without harming sample-efficiency.
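For intuition on the coverage guarantee mentioned above, here is a minimal split-conformal sketch. This is generic conformal regression, not the paper's conformal Bayesian optimization procedure; the data and the deliberately misspecified predictor are made up to illustrate that coverage holds even for a bad model.

```python
import numpy as np

rng = np.random.default_rng(1)

def split_conformal_interval(predict, X_cal, y_cal, X_new, alpha=0.1):
    """Split conformal prediction: calibrate a residual quantile on held-out
    data to get intervals with finite-sample marginal coverage >= 1 - alpha,
    regardless of how well the underlying model is specified."""
    residuals = np.abs(y_cal - predict(X_cal))
    n = len(residuals)
    # Conformal quantile with the (n + 1) finite-sample correction.
    q = np.quantile(residuals, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    preds = predict(X_new)
    return preds - q, preds + q

# Toy data with a crude, deliberately misspecified (constant) predictor.
X = rng.uniform(-1, 1, size=1000)
y = X**2 + 0.1 * rng.standard_normal(1000)
predict = lambda X: np.full(len(X), 0.5)

lo, hi = split_conformal_interval(predict, X[:500], y[:500], X[500:])
coverage = np.mean((y[500:] >= lo) & (y[500:] <= hi))
```

Despite the constant predictor, the empirical coverage on the test half lands near the nominal 90% level; the price of misspecification is wider intervals, not lost coverage.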
arXiv Detail & Related papers (2022-10-22T17:01:05Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Local policy search with Bayesian optimization
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z) - Test Score Algorithms for Budgeted Stochastic Utility Maximization
We extend an existing scoring mechanism, namely the replication test scores, to incorporate heterogeneous item costs as well as item values.
Our algorithms and approximation guarantees assume that test scores are noisy estimates of certain expected values.
We show how our algorithm can be adapted to the setting where items arrive sequentially while maintaining the same approximation guarantee.
arXiv Detail & Related papers (2020-12-30T15:28:41Z) - SetRank: A Setwise Bayesian Approach for Collaborative Ranking from
Implicit Feedback
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender system.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present the theoretical analysis of SetRank to show that the bound of excess risk can be proportional to $\sqrt{M/N}$.
arXiv Detail & Related papers (2020-02-23T06:40:48Z) - Distributionally Robust Bayesian Quadrature Optimization
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
arXiv Detail & Related papers (2020-01-19T12:00:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.