A Neural Network Based Choice Model for Assortment Optimization
- URL: http://arxiv.org/abs/2308.05617v1
- Date: Thu, 10 Aug 2023 15:01:52 GMT
- Title: A Neural Network Based Choice Model for Assortment Optimization
- Authors: Hanzhao Wang, Zhongze Cai, Xiaocheng Li, Kalyan Talluri
- Abstract summary: We investigate whether a single neural network architecture can predict purchase probabilities for datasets from various contexts.
Next, we develop an assortment optimization formulation that is solvable by off-the-shelf integer programming solvers.
- Score: 5.173001988341294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discrete-choice models are used in economics, marketing and revenue
management to predict customer purchase probabilities, say as a function of
prices and other features of the offered assortment. While they have been shown
to be expressive, capturing customer heterogeneity and behaviour, they are also
hard to estimate, often based on many unobservables like utilities; and
moreover, they still fail to capture many salient features of customer
behaviour. A natural question then, given their success in other contexts, is
whether neural networks can eliminate the need to carefully build a
context-dependent customer behaviour model and to hand-code and tune the
estimation. It is unclear, however, how one would incorporate assortment effects
into such a neural network, and also how one would optimize the assortment with
such a black-box generative model of choice probabilities. In this paper we
investigate first whether a single neural network architecture can predict
purchase probabilities for datasets from various contexts and generated under
various models and assumptions. Next, we develop an assortment optimization
formulation that is solvable by off-the-shelf integer programming solvers. We
compare against a variety of benchmark discrete-choice models on simulated as
well as real-world datasets, developing training tricks along the way to make
the neural network prediction and subsequent optimization robust and comparable
in performance to the alternatives.
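To make the two ingredients concrete, here is a minimal sketch of the first one: a network that maps an assortment to purchase probabilities. The shared-utility MLP, the masking scheme, and the fixed-zero no-purchase option are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class NeuralChoiceModel(nn.Module):
    """Scores each item with a shared utility MLP, masks items absent from
    the assortment, and returns softmax purchase probabilities including a
    no-purchase option. A sketch, not the paper's architecture."""

    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.utility = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, item_feats: torch.Tensor, offered: torch.Tensor) -> torch.Tensor:
        # item_feats: (batch, n_items, n_features); offered: (batch, n_items) in {0, 1}
        scores = self.utility(item_feats).squeeze(-1)
        scores = scores.masked_fill(offered == 0, float("-inf"))
        no_buy = torch.zeros(scores.size(0), 1)  # fixed zero utility for not purchasing
        return torch.softmax(torch.cat([scores, no_buy], dim=1), dim=1)

# Toy usage: 2 customers, 5 candidate items, 3 features each.
model = NeuralChoiceModel(n_features=3)
feats = torch.randn(2, 5, 3)
offered = torch.tensor([[1, 1, 0, 1, 0], [1, 0, 1, 1, 1]])
probs = model(feats, offered)   # shape (2, 6): 5 items plus no-purchase
print(probs.sum(dim=1))         # each row sums to 1
```

For the second ingredient, a trained network of this form can in principle be embedded in a mixed-integer program (e.g., ReLU units encoded with big-M constraints) so that an off-the-shelf solver searches over assortments; the paper's formulation should be consulted for the exact encoding.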
Related papers
- Transformer Choice Net: A Transformer Neural Network for Choice Prediction [6.6543199581017625]
We develop a neural network architecture, the Transformer Choice Net, that is suitable for predicting multiple choices.
Transformer networks turn out to be especially suitable for this task as they take into account not only the features of the customer and the items but also the context.
Our architecture shows uniformly superior out-of-sample prediction performance compared to the leading models in the literature.
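As a rough illustration of why attention helps (all architectural details below are assumptions, not the Transformer Choice Net itself): self-attention over the offered items lets each item's score depend on the rest of the assortment, i.e. on the context.

```python
import torch
import torch.nn as nn

class TransformerChoiceSketch(nn.Module):
    def __init__(self, n_features: int, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.score = nn.Linear(d_model, 1)

    def forward(self, item_feats: torch.Tensor, offered: torch.Tensor) -> torch.Tensor:
        # item_feats: (batch, n_items, n_features); offered: (batch, n_items) bool mask
        h = self.encoder(self.embed(item_feats),
                         src_key_padding_mask=~offered)  # attention ignores absent items
        s = self.score(h).squeeze(-1).masked_fill(~offered, float("-inf"))
        return torch.softmax(s, dim=1)  # choice probabilities over the offered items

model = TransformerChoiceSketch(n_features=3)
offered = torch.tensor([[1, 1, 0, 1, 0], [1, 0, 1, 1, 1]]).bool()
probs = model(torch.randn(2, 5, 3), offered)  # (2, 5); zero where items are absent
```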
arXiv Detail & Related papers (2023-10-12T20:54:10Z)
- Adaptive Conditional Quantile Neural Processes [9.066817971329899]
Conditional Quantile Neural Processes (CQNPs) are a new member of the neural processes family.
We introduce an extension of quantile regression where the model learns to focus on estimating informative quantiles.
Experiments with real and synthetic datasets demonstrate substantial improvements in predictive performance.
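For background, the pinball loss below is the standard objective that makes a regressor estimate a chosen quantile; CQNP's contribution, learning which quantiles are informative, builds on top of this and is not reproduced here.

```python
import torch

def pinball_loss(pred: torch.Tensor, target: torch.Tensor, q: float) -> torch.Tensor:
    # Under-prediction is weighted by q and over-prediction by (1 - q), so the
    # minimizer of this loss is the q-th conditional quantile of the target.
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1) * err))

target = torch.randn(1000)
print(pinball_loss(torch.zeros(1000), target, q=0.9))  # 90th-percentile objective
```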
arXiv Detail & Related papers (2023-05-30T06:19:19Z)
- The Contextual Lasso: Sparse Linear Models via Deep Neural Networks [5.607237982617641]
We develop a new statistical estimator that fits a sparse linear model to the explanatory features such that the sparsity pattern and coefficients vary as a function of the contextual features.
An extensive suite of experiments on real and synthetic data suggests that the learned models, which remain highly transparent, can be sparser than the regular lasso.
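A minimal sketch of the idea (a simplification, not the paper's estimator): a small network maps the contextual features to per-feature coefficients, the prediction is linear in the explanatory features, and an L1 penalty on the emitted coefficients encourages context-dependent shrinkage. Exact zeros would need an extra mechanism such as soft-thresholding.

```python
import torch
import torch.nn as nn

class ContextualSparseLinear(nn.Module):
    def __init__(self, n_context: int, n_explanatory: int, hidden: int = 32):
        super().__init__()
        # The network outputs one coefficient per explanatory feature plus an intercept.
        self.coef_net = nn.Sequential(
            nn.Linear(n_context, hidden), nn.ReLU(),
            nn.Linear(hidden, n_explanatory + 1),
        )

    def forward(self, context: torch.Tensor, x: torch.Tensor):
        out = self.coef_net(context)
        beta, intercept = out[:, :-1], out[:, -1]
        return (beta * x).sum(dim=1) + intercept, beta

model = ContextualSparseLinear(n_context=4, n_explanatory=10)
ctx, x, y = torch.randn(64, 4), torch.randn(64, 10), torch.randn(64)
yhat, beta = model(ctx, x)
loss = ((yhat - y) ** 2).mean() + 0.1 * beta.abs().mean()  # L1 term shrinks coefficients
loss.backward()
```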
arXiv Detail & Related papers (2023-02-02T05:00:29Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Demystifying Randomly Initialized Networks for Evaluating Generative Models [28.8899914083501]
Evaluation of generative models is mostly based on the comparison between the estimated distribution and the ground truth distribution in a certain feature space.
To embed samples into informative features, previous works often use convolutional neural networks optimized for classification.
In this paper, we rigorously investigate the feature space of models with random weights in comparison to that of trained models.
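As a toy illustration of evaluating with random-weight features (a minimal stand-in, not the paper's analysis), one can embed real and generated samples with an untrained CNN and compare feature statistics.

```python
import torch
from torchvision.models import resnet18

# Untrained (randomly initialized) feature extractor; the classifier head is
# replaced by an identity so the network returns 512-dim features.
net = resnet18(weights=None)
net.fc = torch.nn.Identity()
net.eval()

with torch.no_grad():
    feats_real = net(torch.randn(16, 3, 224, 224))  # stand-ins for real images
    feats_fake = net(torch.randn(16, 3, 224, 224))  # stand-ins for generated images

# Crude distribution comparison: distance between feature means
# (FID-style metrics additionally compare covariances).
print(torch.linalg.norm(feats_real.mean(0) - feats_fake.mean(0)))
```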
arXiv Detail & Related papers (2022-08-19T08:43:53Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
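For flavor, here is the generic iterative, column-wise imputation pattern using scikit-learn's IterativeImputer; HyperImpute's contribution, automatically selecting the model for each column, is not shown.

```python
import numpy as np
# IterativeImputer is still experimental and must be enabled explicitly.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0, 7.0],
              [3.0, np.nan, 9.0],
              [np.nan, 6.0, 8.0]])
# Each column with missing values is modeled on the others, round-robin,
# until the imputations stabilize.
print(IterativeImputer(max_iter=10, random_state=0).fit_transform(X))
```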
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has sought to automate machine-learning algorithms, highlighting the importance of model choice.
Reconciling analytical tractability with computational feasibility in a principled fashion is what ensures the method's efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned on different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity-inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
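A toy version of layer-wise fusion for a single linear layer, using a hard assignment via the Hungarian algorithm as a stand-in for the paper's optimal-transport coupling:

```python
import torch
from scipy.optimize import linear_sum_assignment

def fuse_linear(w1: torch.Tensor, w2: torch.Tensor) -> torch.Tensor:
    # Match neurons of w2 to those of w1 by cosine similarity, permute w2's
    # rows accordingly, then average the aligned weight matrices.
    c1 = torch.nn.functional.normalize(w1, dim=1)
    c2 = torch.nn.functional.normalize(w2, dim=1)
    cost = -(c1 @ c2.T)                       # negative similarity = assignment cost
    rows, cols = linear_sum_assignment(cost.numpy())
    return 0.5 * (w1 + w2[cols])

w1, w2 = torch.randn(8, 4), torch.randn(8, 4)
print(fuse_linear(w1, w2).shape)              # fused (8, 4) weight matrix
```

A full multi-layer fusion would also have to permute each layer's input dimension to match the previous layer's alignment, which is omitted here.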
arXiv Detail & Related papers (2019-10-12T22:07:15Z)