A Neural-embedded Choice Model: TasteNet-MNL Modeling Taste
Heterogeneity with Flexibility and Interpretability
- URL: http://arxiv.org/abs/2002.00922v2
- Date: Fri, 1 Jul 2022 17:42:06 GMT
- Title: A Neural-embedded Choice Model: TasteNet-MNL Modeling Taste
Heterogeneity with Flexibility and Interpretability
- Authors: Yafei Han, Francisco Camara Pereira, Moshe Ben-Akiva, Christopher
Zegras
- Abstract summary: Discrete choice models (DCMs) require a priori knowledge of the utility functions, especially how tastes vary across individuals.
In this paper, we utilize a neural network to learn taste representation.
We show that TasteNet-MNL reaches the ground-truth model's predictability and recovers the nonlinear taste functions on synthetic data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discrete choice models (DCMs) require a priori knowledge of the utility
functions, especially how tastes vary across individuals. Utility
misspecification may lead to biased estimates, inaccurate interpretations and
limited predictability. In this paper, we utilize a neural network to learn
taste representation. Our formulation consists of two modules: a neural network
(TasteNet) that learns taste parameters (e.g., time coefficient) as flexible
functions of individual characteristics; and a multinomial logit (MNL) model
with utility functions defined with expert knowledge. Taste parameters learned
by the neural network are fed into the choice model and link the two modules.
Our approach extends the L-MNL model (Sifringer et al., 2020) by allowing the
neural network to learn the interactions between individual characteristics and
alternative attributes. Moreover, we formalize and strengthen the
interpretability condition, requiring realistic estimates of behavior
indicators (e.g., value-of-time, elasticity) at the disaggregated level, which
is crucial for a model to be suitable for scenario analysis and policy
decisions. Through a unique network architecture and parameter transformation,
we incorporate prior knowledge and guide the neural network to output realistic
behavior indicators at the disaggregated level. We show that TasteNet-MNL
reaches the ground-truth model's predictability and recovers the nonlinear
taste functions on synthetic data. Its estimated value-of-time and choice
elasticities at the individual level are close to the ground truth. On a
publicly available Swissmetro dataset, TasteNet-MNL outperforms benchmark
MNL and Mixed Logit models in predictive accuracy. It learns a broader spectrum of
taste variations within the population and suggests a higher average
value-of-time.
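To make the two-module design concrete, a minimal PyTorch sketch of the idea described above follows: a small network maps individual characteristics to taste parameters, which then enter a hand-specified MNL utility. The utility specification (a single individual-varying time coefficient, a shared cost coefficient, and alternative-specific constants) and the softplus sign constraint are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TasteNet(nn.Module):
    # Maps individual characteristics z to taste parameters (e.g., a time coefficient).
    def __init__(self, n_chars, n_tastes, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_chars, hidden), nn.ReLU(),
            nn.Linear(hidden, n_tastes),
        )

    def forward(self, z):
        # A sign-constraining transformation (here -softplus) keeps the learned
        # coefficients behaviorally plausible (negative sensitivity to time),
        # in the spirit of the parameter transformation the abstract mentions.
        return -F.softplus(self.net(z))

class TasteNetMNL(nn.Module):
    # Hypothetical utility: V_j = beta_time(z) * time_j + beta_cost * cost_j + ASC_j.
    def __init__(self, n_chars, n_alts):
        super().__init__()
        self.tastenet = TasteNet(n_chars, n_tastes=1)      # beta_time(z), individual-specific
        self.beta_cost = nn.Parameter(torch.tensor(-1.0))  # shared cost coefficient
        self.asc = nn.Parameter(torch.zeros(n_alts))       # alternative-specific constants

    def forward(self, z, time, cost):
        beta_time = self.tastenet(z)                              # (batch, 1)
        v = beta_time * time + self.beta_cost * cost + self.asc   # systematic utilities (batch, n_alts)
        return F.log_softmax(v, dim=-1)                           # MNL choice log-probabilities

    def value_of_time(self, z):
        # Disaggregate value-of-time as the ratio of time and cost coefficients.
        return self.tastenet(z) / self.beta_cost

Training would maximize the likelihood of observed choices (e.g., nn.NLLLoss applied to the returned log-probabilities); the value_of_time method illustrates how disaggregate behavior indicators can be read off the learned taste parameters.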
Related papers
- WaLiN-GUI: a graphical and auditory tool for neuron-based encoding [73.88751967207419]
Neuromorphic computing relies on spike-based, energy-efficient communication.
We develop a tool to identify suitable configurations for neuron-based encoding of sample-based data into spike trains.
The WaLiN-GUI is provided open source and with documentation.
arXiv Detail & Related papers (2023-10-25T20:34:08Z)
- Discrete-Choice Model with Generalized Additive Utility Network [0.0]
Multinomial logit models (MNLs) with linear utility functions have been used in practice because they are easy to use and interpretable.
We developed utility functions with a novel neural-network architecture based on generalized additive models.
Our models were comparable to ASU-DNN in accuracy and exhibited improved interpretability compared to previous models.
arXiv Detail & Related papers (2023-09-29T04:40:01Z)
- The Contextual Lasso: Sparse Linear Models via Deep Neural Networks [5.607237982617641]
We develop a new statistical estimator that fits a sparse linear model to the explanatory features such that the sparsity pattern and coefficients vary as a function of the contextual features.
An extensive suite of experiments on real and synthetic data suggests that the learned models, which remain highly transparent, can be sparser than the regular lasso.
arXiv Detail & Related papers (2023-02-02T05:00:29Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose, yet modular, neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Dependency-based Mixture Language Models [53.152011258252315]
We introduce the Dependency-based Mixture Language Models.
In detail, we first train neural language models with a novel dependency modeling objective.
We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention.
arXiv Detail & Related papers (2022-03-19T06:28:30Z)
- Combining Discrete Choice Models and Neural Networks through Embeddings: Formulation, Interpretability and Performance [10.57079240576682]
This study proposes a novel approach that combines theory and data-driven choice models using Artificial Neural Networks (ANNs).
In particular, we use continuous vector representations, called embeddings, for encoding categorical or discrete explanatory variables.
Our models deliver state-of-the-art predictive performance, outperforming existing ANN-based models while drastically reducing the number of required network parameters.
arXiv Detail & Related papers (2021-09-24T15:55:31Z)
- It's FLAN time! Summing feature-wise latent representations for interpretability [0.0]
We propose a novel class of structurally-constrained neural networks, which we call FLANs (Feature-wise Latent Additive Networks).
FLANs process each input feature separately, computing for each of them a representation in a common latent space.
These feature-wise latent representations are then simply summed, and the aggregated representation is used for prediction.
arXiv Detail & Related papers (2021-06-18T12:19:33Z)
- Flexible, Non-parametric Modeling Using Regularized Neural Networks [0.0]
PrAda-net is a one hidden layer neural network, trained with proximal gradient descent and adaptive lasso.
It automatically adjusts the size and architecture of the neural network to capture the structure of the underlying data generative model.
We demonstrate PrAda-net on simulated data, where we compare the test error performance, variable importance and variable subset identification properties.
We also apply PrAda-net to the massive U.K. black smoke data set to demonstrate its capability as an alternative to GAMs.
arXiv Detail & Related papers (2020-12-18T08:49:04Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature (a minimal sketch of this additive structure follows this list).
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP (McCulloch-Pitts) model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
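The Neural Additive Models entry above describes learning a linear combination of neural networks that each attend to a single input feature. A minimal sketch of that additive structure (layer sizes and activation choices here are assumptions, not the NAM paper's configuration):

import torch
import torch.nn as nn

class NeuralAdditiveModel(nn.Module):
    # Additive prediction: bias + sum_i f_i(x_i), with one small network per feature.
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch, n_features); each column is passed through its own network.
        contributions = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        # Summing the per-feature contributions keeps the model interpretable:
        # each learned f_i(x_i) can be plotted directly against x_i.
        return self.bias + torch.cat(contributions, dim=1).sum(dim=1, keepdim=True)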