A prediction and behavioural analysis of machine learning methods for
modelling travel mode choice
- URL: http://arxiv.org/abs/2301.04404v3
- Date: Tue, 12 Sep 2023 14:34:49 GMT
- Title: A prediction and behavioural analysis of machine learning methods for
modelling travel mode choice
- Authors: José Ángel Martín-Baos, Julio Alberto López-Gómez, Luis Rodriguez-Benitez, Tim Hillel and Ricardo García-Ródenas
- Abstract summary: We conduct a systematic comparison of different modelling approaches, across multiple modelling problems, in terms of the key factors likely to affect model choice.
Results indicate that the models with the highest disaggregate predictive performance provide poorer estimates of behavioural indicators and aggregate mode shares.
It is also observed that the MNL model performs robustly in a variety of situations, though ML techniques can improve the estimates of behavioural indices such as Willingness to Pay.
- Score: 0.26249027950824505
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The emergence of a variety of Machine Learning (ML) approaches for travel
mode choice prediction poses an interesting question to transport modellers:
which models should be used for which applications? The answer to this question
goes beyond simple predictive performance, and is instead a balance of many
factors, including behavioural interpretability and explainability,
computational complexity, and data efficiency. There is a growing body of
research which attempts to compare the predictive performance of different ML
classifiers with classical random utility models. However, existing studies
typically analyse only the disaggregate predictive performance, ignoring other
aspects affecting model choice. Furthermore, many studies are affected by
technical limitations, such as the use of inappropriate validation schemes,
incorrect sampling for hierarchical data, lack of external validation, and the
exclusive use of discrete metrics. We address these limitations by conducting a
systematic comparison of different modelling approaches, across multiple
modelling problems, in terms of the key factors likely to affect model choice
(out-of-sample predictive performance, accuracy of predicted market shares,
extraction of behavioural indicators, and computational efficiency). We combine
several real-world datasets with synthetic datasets, where the data generation
function is known. The results indicate that the models with the highest
disaggregate predictive performance (namely extreme gradient boosting and
random forests) provide poorer estimates of behavioural indicators and
aggregate mode shares, and are more expensive to estimate, than other models,
including deep neural networks and Multinomial Logit (MNL). It is further
observed that the MNL model performs robustly in a variety of situations,
though ML techniques can improve the estimates of behavioural indices such as
Willingness to Pay.
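As an illustration of the evaluation criteria described above (out-of-sample predictive performance, aggregate mode shares, and behavioural indicators such as Willingness to Pay), the sketch below shows one way these quantities could be computed. This is a minimal, hypothetical example and not the authors' code: the column names ("mode", "person_id"), the use of scikit-learn's LogisticRegression as a stand-in for an MNL estimator, and the GroupKFold split are all assumptions made for illustration.

```python
# Minimal, hypothetical sketch (assumed columns: "mode", "person_id", numeric features).
# Not the paper's implementation.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss


def evaluate_out_of_sample(df, features, target="mode", group_col="person_id"):
    """Grouped 5-fold CV: observations from the same individual never appear in
    both training and test folds, respecting the hierarchical (panel) structure."""
    X = df[features].to_numpy()
    y = df[target].to_numpy()
    groups = df[group_col].to_numpy()
    losses, share_errors = [], []
    for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
        # Multinomial logistic regression as a stand-in for an MNL estimator.
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        proba = clf.predict_proba(X[test_idx])
        # Disaggregate performance with a continuous metric (log-loss), not accuracy alone.
        losses.append(log_loss(y[test_idx], proba, labels=clf.classes_))
        # Aggregate mode shares: mean predicted probability vs. observed share per mode.
        predicted_share = proba.mean(axis=0)
        observed_share = np.array([(y[test_idx] == c).mean() for c in clf.classes_])
        share_errors.append(np.abs(predicted_share - observed_share).sum())
    return float(np.mean(losses)), float(np.mean(share_errors))


def value_of_time(beta_time, beta_cost):
    """Willingness to Pay for travel time savings from a linear-in-parameters
    utility V = ... + beta_time * time + beta_cost * cost: the ratio of the two
    marginal utilities, expressed in cost units per unit of time."""
    return beta_time / beta_cost
```

For example, with estimated coefficients beta_time = -0.04 and beta_cost = -0.02, value_of_time(-0.04, -0.02) returns 2.0, i.e. two cost units per unit of time. Grouping the folds by individual avoids the optimistic bias that purely random splits introduce when the same respondent contributes several observations, which is one of the validation issues the abstract highlights.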
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z) - Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z) - An Experimental Study on the Rashomon Effect of Balancing Methods in Imbalanced Classification [0.0]
This paper examines the impact of balancing methods on predictive multiplicity using the Rashomon effect.
This is crucial because, in data-centric AI, blindly selecting a model from a set of approximately equally accurate models is risky.
arXiv Detail & Related papers (2024-03-22T13:08:22Z) - On Least Square Estimation in Softmax Gating Mixture of Experts [78.3687645289918]
We investigate the performance of the least squares estimators (LSE) under a deterministic MoE model.
We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions.
Our findings have important practical implications for expert selection.
arXiv Detail & Related papers (2024-02-05T12:31:18Z) - Towards Better Modeling with Missing Data: A Contrastive Learning-based
Visual Analytics Perspective [7.577040836988683]
Missing data can pose a challenge for machine learning (ML) modeling.
Current approaches are categorized into feature imputation and label prediction.
This study proposes a Contrastive Learning framework to model observed data with missing values.
arXiv Detail & Related papers (2023-09-18T13:16:24Z) - Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation [24.65301562548798]
We study the problem of model selection in causal inference, specifically for conditional average treatment effect (CATE) estimation.
We conduct an empirical analysis to benchmark the surrogate model selection metrics introduced in the literature, as well as the novel ones introduced in this work.
arXiv Detail & Related papers (2022-11-03T16:26:06Z) - On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes factorizing the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero-shot and few-shot adaptation in low-data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z) - A non-asymptotic penalization criterion for model selection in mixture
of experts models [1.491109220586182]
We consider the Gaussian-gated localized MoE (GLoME) regression model for modeling heterogeneous data.
This model poses challenging questions with respect to the statistical estimation and model selection problems.
We study the problem of estimating the number of components of the GLoME model, in a penalized maximum likelihood estimation framework.
arXiv Detail & Related papers (2021-04-06T16:24:55Z) - Characterizing Fairness Over the Set of Good Models Under Selective
Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z) - On Statistical Efficiency in Learning [37.08000833961712]
We address the challenge of model selection to strike a balance between model fitting and model complexity.
We propose an online algorithm that sequentially expands the model complexity to enhance selection stability and reduce cost.
Experimental studies show that the proposed method has desirable predictive power and significantly less computational cost than some popular methods.
arXiv Detail & Related papers (2020-12-24T16:08:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.