Attitudes and Latent Class Choice Models using Machine learning
- URL: http://arxiv.org/abs/2302.09871v1
- Date: Mon, 20 Feb 2023 10:03:01 GMT
- Title: Attitudes and Latent Class Choice Models using Machine learning
- Authors: Lorena Torres Lahoz (1), Francisco Camara Pereira (1), Georges Sfeir
(1), Ioanna Arkoudi (1), Mayara Moraes Monteiro (1), Carlos Lima Azevedo (1)
((1) DTU Management, Technical University of Denmark)
- Abstract summary: We present a method for efficiently incorporating attitudinal indicators into the specification of Latent Class Choice Models (LCCM).
This formulation surpasses structural equations in its capacity to explore the relationship between the attitudinal indicators and the decision choice.
We test our proposed framework for estimating a Car-Sharing (CS) service subscription choice with stated preference data from Copenhagen, Denmark.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Latent Class Choice Models (LCCM) are extensions of discrete choice models
(DCMs) that capture unobserved heterogeneity in the choice process by
segmenting the population based on the assumption of preference similarities.
We present a method for efficiently incorporating attitudinal indicators into the
specification of LCCM by introducing Artificial Neural Networks (ANNs) to
formulate latent variable constructs. This formulation surpasses structural
equations in its capacity to explore the relationship between the attitudinal
indicators and the decision choice, thanks to the flexibility and power of
Machine Learning (ML) in capturing unobserved and complex behavioural features,
such as attitudes and beliefs, while still maintaining consistency with the
theoretical assumptions of the Generalized Random Utility model and the
interpretability of the estimated parameters. We test our proposed framework by
estimating a Car-Sharing (CS) service subscription choice with stated
preference data from Copenhagen, Denmark. The results show that our proposed
approach yields a complete and realistic segmentation, which helps design
better policies.
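The framework described above can be sketched as a mixture of class-specific logit models whose mixing weights come from a small feed-forward network over the attitudinal indicators. The dimensions, network shape, and random parameters below are illustrative assumptions, not the paper's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions: N respondents, 5 attitudinal indicators,
# K = 2 latent classes, J = 3 alternatives with 2 attributes each.
N, D, K, J, A = 100, 5, 2, 3, 2
indicators = rng.normal(size=(N, D))     # attitudinal indicators
attributes = rng.normal(size=(N, J, A))  # alternative attributes

# Class-membership model: a one-hidden-layer ANN mapping indicators
# to latent-class probabilities (in place of structural equations).
W1, b1 = rng.normal(size=(D, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, K)), np.zeros(K)
hidden = np.tanh(indicators @ W1 + b1)
class_probs = softmax(hidden @ W2 + b2)  # shape (N, K)

# Class-specific choice model: one multinomial logit per latent class,
# which keeps the Generalized Random Utility interpretation.
betas = rng.normal(size=(K, A))          # taste parameters per class
utilities = np.einsum('nja,ka->nkj', attributes, betas)
choice_probs_by_class = softmax(utilities, axis=-1)  # (N, K, J)

# Unconditional choice probability: mixture over latent classes.
choice_probs = np.einsum('nk,nkj->nj', class_probs, choice_probs_by_class)
assert np.allclose(choice_probs.sum(axis=1), 1.0)
```

In estimation, the network weights and the class-specific taste parameters would be fitted jointly by maximum likelihood; here they are random draws purely to show the structure of the probability computation.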
Related papers
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem in two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Revisiting Demonstration Selection Strategies in In-Context Learning [66.11652803887284]
Large language models (LLMs) have shown an impressive ability to perform a wide range of tasks using in-context learning (ICL).
In this work, we first revisit the factors contributing to this variance from both data and model aspects, and find that the choice of demonstration is both data- and model-dependent.
We propose a data- and model-dependent demonstration selection method, TopK + ConE, based on the assumption that the performance of a demonstration positively correlates with its contribution to the model's understanding of the test samples.
arXiv Detail & Related papers (2024-01-22T16:25:27Z)
- Incorporating Domain Knowledge in Deep Neural Networks for Discrete Choice Models [0.5801044612920815]
This paper proposes a framework that expands the potential of data-driven approaches for DCM.
It includes pseudo data samples that represent required relationships and a loss function that measures their fulfillment.
A case study demonstrates the potential of this framework for discrete choice analysis.
arXiv Detail & Related papers (2023-05-30T12:53:55Z)
- Variable Importance Matching for Causal Inference [73.25504313552516]
We describe a general framework called Model-to-Match that achieves these goals.
Model-to-Match uses variable importance measurements to construct a distance metric.
We operationalize the Model-to-Match framework with LASSO.
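The idea of a variable-importance distance metric can be sketched in a few lines. Here, absolute least-squares coefficients stand in for the paper's LASSO-based importance measure, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: the outcome depends mainly on the first two covariates.
n, p = 200, 6
X = rng.normal(size=(n, p))
y = X[:, 0] * 2.0 + X[:, 1] * 1.0 + rng.normal(scale=0.1, size=n)

# Stand-in for LASSO variable importance: absolute coefficients from a
# plain least-squares fit (a real implementation would use an L1 penalty).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
importance = np.abs(coef)

# Importance-weighted distance between units: covariates that matter more
# for the outcome contribute more to the matching distance.
def weighted_distance(a, b, w=importance):
    return np.sqrt(np.sum(w * (a - b) ** 2))

# Nearest match for unit 0 among the remaining units.
d = np.array([weighted_distance(X[0], X[i]) for i in range(1, n)])
match = 1 + int(np.argmin(d))
```

The point of the construction is that irrelevant covariates receive near-zero weight, so matched units agree closely on the covariates that actually drive the outcome.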
arXiv Detail & Related papers (2023-02-23T00:43:03Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Revisiting GANs by Best-Response Constraint: Perspective, Methodology, and Application [49.66088514485446]
Best-Response Constraint (BRC) is a general learning framework to explicitly formulate the potential dependency of the generator on the discriminator.
We show that, even with different motivations and formulations, a variety of existing GANs can all be uniformly improved by our flexible BRC methodology.
arXiv Detail & Related papers (2022-05-20T12:42:41Z)
- Inference-InfoGAN: Inference Independence via Embedding Orthogonal Basis Expansion [2.198430261120653]
Disentanglement learning aims to construct independent and interpretable latent variables, for which generative models are a popular strategy.
We propose a novel GAN-based disentanglement framework via embedding Orthogonal Basis Expansion (OBE) into InfoGAN network.
Our Inference-InfoGAN achieves higher disentanglement scores in terms of the FactorVAE, Separated Attribute Predictability (SAP), Mutual Information Gap (MIG) and Variation Predictability (VP) metrics without model fine-tuning.
arXiv Detail & Related papers (2021-10-02T11:54:23Z)
- Combining Discrete Choice Models and Neural Networks through Embeddings: Formulation, Interpretability and Performance [10.57079240576682]
This study proposes a novel approach that combines theory- and data-driven choice models using Artificial Neural Networks (ANNs).
In particular, we use continuous vector representations, called embeddings, for encoding categorical or discrete explanatory variables.
Our models deliver state-of-the-art predictive performance, outperforming existing ANN-based models while drastically reducing the number of required network parameters.
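A minimal sketch of the embedding idea, with hypothetical dimensions: a 10-level categorical variable enters the utility function through a 3-dimensional embedding table instead of 9 one-hot dummies (in practice the table would be learned jointly with the choice model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: a categorical explanatory variable with 10 levels
# (e.g. occupation) is encoded by a learned 3-dimensional embedding.
n_levels, emb_dim = 10, 3
embedding = rng.normal(size=(n_levels, emb_dim))  # learned jointly in practice

categories = rng.integers(0, n_levels, size=5)  # observed category per person
dense = embedding[categories]                   # (5, 3) continuous codes

# The embedding feeds the utility function like any continuous attribute.
beta = rng.normal(size=emb_dim)
utility_contrib = dense @ beta
```

The parameter saving is the point: a 10-level one-hot encoding needs 9 utility coefficients per alternative, while the embedding needs only 3 plus a table that is shared across the whole model.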
arXiv Detail & Related papers (2021-09-24T15:55:31Z)
- Gaussian Process Latent Class Choice Models [7.992550355579791]
We present a non-parametric class of probabilistic machine learning models within discrete choice models (DCMs).
The proposed model would assign individuals probabilistically to behaviorally homogeneous clusters (latent classes) using GPs.
The model is tested on two different mode choice applications and compared against different LCCM benchmarks.
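The probabilistic class assignment can be illustrated by drawing one latent function per class from a GP prior and passing the draws through a softmax; this is a toy prior sample over synthetic features, not the paper's estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf_kernel(X, lengthscale=1.0):
    # Squared-exponential Gram matrix over the feature rows of X.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

# Hypothetical setup: N individuals with 3 socio-economic features,
# K = 2 latent classes.
N, K = 50, 2
X = rng.normal(size=(N, 3))
Kmat = rbf_kernel(X) + 1e-6 * np.eye(N)  # jitter for numerical stability
L = np.linalg.cholesky(Kmat)

# One latent GP function per class; a softmax over the function values
# yields each individual's probabilistic class membership.
f = L @ rng.normal(size=(N, K))
probs = np.exp(f) / np.exp(f).sum(axis=1, keepdims=True)
assert np.allclose(probs.sum(axis=1), 1.0)
```

Because the GP is non-parametric, similar individuals (under the kernel) receive correlated latent function values and hence similar class memberships, without a fixed functional form for the membership model.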
arXiv Detail & Related papers (2021-01-28T19:56:42Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of Control as Hybrid Inference (CHI), which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Semi-nonparametric Latent Class Choice Model with a Flexible Class Membership Component: A Mixture Model Approach [6.509758931804479]
The proposed model formulates the latent classes using mixture models as an alternative approach to the traditional random utility specification.
Results show that mixture models improve the overall performance of latent class choice models.
arXiv Detail & Related papers (2020-07-06T13:19:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.