Semi-nonparametric Latent Class Choice Model with a Flexible Class
Membership Component: A Mixture Model Approach
- URL: http://arxiv.org/abs/2007.02739v1
- Date: Mon, 6 Jul 2020 13:19:26 GMT
- Title: Semi-nonparametric Latent Class Choice Model with a Flexible Class
Membership Component: A Mixture Model Approach
- Authors: Georges Sfeir, Maya Abou-Zeid, Filipe Rodrigues, Francisco Camara
Pereira, Isam Kaysi
- Abstract summary: The proposed model formulates the latent classes using mixture models as an alternative approach to the traditional random utility specification.
Results show that mixture models improve the overall performance of latent class choice models.
- Score: 6.509758931804479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study presents a semi-nonparametric Latent Class Choice Model (LCCM)
with a flexible class membership component. The proposed model formulates the
latent classes using mixture models as an alternative approach to the
traditional random utility specification with the aim of comparing the two
approaches on various measures including prediction accuracy and representation
of heterogeneity in the choice process. Mixture models are parametric
model-based clustering techniques that have been widely used in areas such as
machine learning, data mining and pattern recognition for clustering and
classification problems. An Expectation-Maximization (EM) algorithm is derived
for the estimation of the proposed model. Using two different case studies on
travel mode choice behavior, the proposed model is compared to traditional
discrete choice models on the basis of parameter estimates' signs, value of
time, statistical goodness-of-fit measures, and cross-validation tests. Results
show that mixture models improve the overall performance of latent class choice
models by providing better out-of-sample prediction accuracy in addition to
better representations of heterogeneity without weakening the behavioral and
economic interpretability of the choice models.
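To make the estimation approach concrete, here is a minimal, hypothetical sketch of the EM algorithm for a one-dimensional Gaussian mixture, the kind of parametric model-based clustering the paper uses for the class membership component. The paper jointly estimates the mixture with class-specific choice models; this standalone sketch only illustrates the E-step/M-step mechanics, and all names are illustrative:

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100):
    """Fit a 1-D Gaussian mixture by Expectation-Maximization (illustrative)."""
    n = len(x)
    # Deterministic initialization: means from interior quantiles,
    # shared sample variance, equal mixing weights.
    mu = np.quantile(x, (np.arange(k) + 1) / (k + 1))
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior class probabilities (responsibilities) per point.
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates of weights, means, variances.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var, resp
```

In the paper's setting the responsibilities play the role of class membership probabilities, and the M-step would additionally update the class-specific choice model parameters.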
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
- Stabilizing black-box model selection with the inflated argmax [8.52745154080651]
This paper presents a new approach to stabilizing model selection that leverages a combination of bagging and an "inflated" argmax operation.
Our method selects a small collection of models that all fit the data, and it is stable in that, with high probability, the removal of any training point will result in a collection of selected models that overlaps with the original collection.
In both settings, the proposed method yields stable and compact collections of selected models, outperforming a variety of benchmarks.
arXiv Detail & Related papers (2024-10-23T20:39:07Z)
- The Interpolating Information Criterion for Overparameterized Models [49.283527214211446]
We show that the Interpolating Information Criterion is a measure of model quality that naturally incorporates the choice of prior into the model selection.
Our new information criterion accounts for prior misspecification, geometric and spectral properties of the model, and is numerically consistent with known empirical and theoretical behavior.
arXiv Detail & Related papers (2023-07-15T12:09:54Z)
- Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation [24.65301562548798]
We study the problem of model selection in causal inference, specifically for conditional average treatment effect (CATE) estimation.
We conduct an empirical analysis to benchmark the surrogate model selection metrics introduced in the literature, as well as the novel ones introduced in this work.
arXiv Detail & Related papers (2022-11-03T16:26:06Z)
- fETSmcs: Feature-based ETS model component selection [8.99236558175168]
We propose an efficient approach for ETS model selection by training classifiers on simulated data to predict appropriate model component forms for a given time series.
We evaluate our approach on the widely used forecasting competition data set M4 in terms of both point forecasts and prediction intervals.
arXiv Detail & Related papers (2022-06-26T13:52:43Z)
- Normalizing Flow based Hidden Markov Models for Classification of Speech Phones with Explainability [25.543231171094384]
In pursuit of explainability, we develop generative models for sequential data.
We combine modern neural networks (normalizing flows) with traditional generative models (hidden Markov models - HMMs).
The proposed generative models can compute the likelihood of a data sequence and are hence directly suitable for a maximum-likelihood (ML) classification approach.
arXiv Detail & Related papers (2021-07-01T20:10:55Z)
- Community Detection in the Stochastic Block Model by Mixed Integer Programming [3.8073142980733]
The Degree-Corrected Stochastic Block Model (DCSBM) is a popular model to generate random graphs with community structure given an expected degree sequence.
The standard approach to community detection based on the DCSBM is to search for the model parameters that are the most likely to have produced the observed network data through maximum likelihood estimation (MLE).
We present mathematical programming formulations and exact solution methods that can provably find the model parameters and community assignments of maximum likelihood given an observed graph.
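The maximum-likelihood objective involved can be illustrated with a short sketch. For brevity this hypothetical example scores the plain (non-degree-corrected) Bernoulli SBM: given a candidate community assignment, it estimates each block's edge probability from the graph and evaluates the resulting profile log-likelihood. The paper's exact mathematical-programming methods optimize such an objective over assignments; all names here are illustrative:

```python
import numpy as np

def sbm_loglik(adj, labels):
    """Profile log-likelihood of a plain Bernoulli SBM for a given community
    assignment: estimate block edge probabilities from the (symmetric)
    adjacency matrix, then score the observed edges under them."""
    labels = np.asarray(labels)
    k = labels.max() + 1
    ll = 0.0
    for a in range(k):
        for b in range(a, k):
            mask_a, mask_b = labels == a, labels == b
            block = adj[np.ix_(mask_a, mask_b)]
            if a == b:
                # Count each unordered within-block pair once; skip the diagonal.
                m = np.triu(block, 1).sum()
                n_pairs = mask_a.sum() * (mask_a.sum() - 1) / 2
            else:
                m = block.sum()
                n_pairs = mask_a.sum() * mask_b.sum()
            if n_pairs == 0:
                continue
            p = m / n_pairs  # MLE of this block's edge probability
            if 0 < p < 1:    # p in {0, 1} contributes log-likelihood 0
                ll += m * np.log(p) + (n_pairs - m) * np.log(1 - p)
    return ll
```

A correct assignment yields homogeneous blocks (edge probabilities near 0 or 1) and thus a higher log-likelihood than a shuffled one.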
arXiv Detail & Related papers (2021-01-26T22:04:40Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.