Deep Learning for Choice Modeling
- URL: http://arxiv.org/abs/2208.09325v1
- Date: Fri, 19 Aug 2022 13:10:17 GMT
- Title: Deep Learning for Choice Modeling
- Authors: Zhongze Cai, Hanzhao Wang, Kalyan Talluri, Xiaocheng Li
- Abstract summary: We develop deep learning-based choice models under two settings of choice modeling: feature-free and feature-based.
Our model captures both the intrinsic utility for each candidate choice and the effect that the assortment has on the choice probability.
- Score: 5.173001988341294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Choice modeling has been a central topic in the study of individual
preference or utility across many fields including economics, marketing,
operations research, and psychology. While the vast majority of the literature
on choice models has been devoted to the analytical properties that lead to
managerial and policy-making insights, the existing methods to learn a choice
model from empirical data are often either computationally intractable or
sample inefficient. In this paper, we develop deep learning-based choice models
under two settings of choice modeling: (i) feature-free and (ii) feature-based.
Our model captures both the intrinsic utility for each candidate choice and the
effect that the assortment has on the choice probability. Synthetic and real
data experiments demonstrate the performance of the proposed models in terms of
the recovery of the existing choice models, sample complexity, assortment
effect, architecture design, and model interpretation.
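The assortment effect described in the abstract can be illustrated with a minimal feature-based sketch (this is not the authors' architecture; the network shape and all names here are illustrative assumptions): a small network maps each item's features to a scalar utility, and choice probabilities come from a softmax restricted to the offered assortment, so items outside the assortment receive zero probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network mapping item features to scalar utilities.
W1 = rng.normal(size=(4, 8))   # feature_dim=4 -> hidden_dim=8
W2 = rng.normal(size=(8, 1))   # hidden_dim=8 -> scalar utility

def utilities(item_features):
    """Feature-based utility: u_i = MLP(x_i)."""
    hidden = np.tanh(item_features @ W1)
    return (hidden @ W2).squeeze(-1)

def choice_probabilities(item_features, offered):
    """Softmax over the offered assortment only (multinomial-logit-style)."""
    u = utilities(item_features)
    u = np.where(offered, u, -np.inf)   # items not offered get zero probability
    u = u - u[offered].max()            # subtract max for numerical stability
    e = np.exp(u)                       # exp(-inf) = 0 for excluded items
    return e / e.sum()

# Example: 5 candidate items, assortment offers items 0, 2, and 3.
X = rng.normal(size=(5, 4))
offered = np.array([True, False, True, True, False])
p = choice_probabilities(X, offered)
```

Changing `offered` changes the probabilities of the remaining items, which is the assortment effect in its simplest form; a learned model would fit `W1` and `W2` (and possibly an assortment-dependent term) from observed choices.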
Related papers
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Expert (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique UNCURL to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z)
- Subjectivity in Unsupervised Machine Learning Model Selection [2.9370710299422598]
This study uses the Hidden Markov Model as an example to investigate the subjectivity involved in model selection.
Sources of subjectivity include differing opinions on the importance of different criteria and metrics, differing views on how parsimonious a model should be, and how the size of a dataset should influence model selection.
arXiv Detail & Related papers (2023-09-01T01:40:58Z)
- Online simulator-based experimental design for cognitive model selection [74.76661199843284]
We propose BOSMOS: an approach to experimental design that can select between computational models without tractable likelihoods.
In simulated experiments, we demonstrate that the proposed BOSMOS technique can accurately select models in up to 2 orders of magnitude less time than existing LFI alternatives.
arXiv Detail & Related papers (2023-03-03T21:41:01Z)
- Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers [66.36045164286854]
We analyze a set of existing bias features and demonstrate there is no single model that works best for all the cases.
By choosing an appropriate bias model, we can obtain a better robustness result than baselines with a more sophisticated model design.
arXiv Detail & Related papers (2022-10-28T17:52:10Z)
- A Statistical-Modelling Approach to Feedforward Neural Network Model Selection [0.8287206589886881]
Feedforward neural networks (FNNs) can be viewed as non-linear regression models.
A novel model selection method is proposed using the Bayesian information criterion (BIC) for FNNs.
The choice of BIC over out-of-sample performance leads to an increased probability of recovering the true model.
arXiv Detail & Related papers (2022-07-09T11:07:04Z)
- Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z)
- Learning Dynamics Models for Model Predictive Agents [28.063080817465934]
Model-Based Reinforcement Learning involves learning a dynamics model from data, and then using this model to optimise behaviour.
This paper sets out to disambiguate the role of different design choices for learning dynamics models, by comparing their performance to planning with a ground-truth model.
arXiv Detail & Related papers (2021-09-29T09:50:25Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model, and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Model-specific Data Subsampling with Influence Functions [37.64859614131316]
We develop a model-specific data subsampling strategy that improves over random sampling whenever training points have varying influence.
Specifically, we leverage influence functions to guide our selection strategy, proving theoretically, and demonstrating empirically that our approach quickly selects high-quality models.
arXiv Detail & Related papers (2020-10-20T12:10:28Z)
- Feature Selection Methods for Uplift Modeling and Heterogeneous Treatment Effect [1.349645012479288]
Uplift modeling is a causal learning technique that estimates subgroup-level treatment effects.
Traditional feature selection methods are not suited to this task.
We introduce a set of feature selection methods explicitly designed for uplift modeling.
arXiv Detail & Related papers (2020-05-05T00:28:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.