Modeling Censored Mobility Demand through Quantile Regression Neural
Networks
- URL: http://arxiv.org/abs/2104.01214v1
- Date: Fri, 2 Apr 2021 19:24:15 GMT
- Authors: Inon Peled, Filipe Rodrigues, Francisco C. Pereira
- Abstract summary: We show that CQRNN can estimate the intended distributions better than both censorship-unaware models and parametric censored models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shared mobility services require accurate demand models for effective service
planning. On one hand, modeling the full probability distribution of demand is
advantageous, because the full uncertainty structure preserves valuable
information for decision making. On the other hand, demand is often observed
through usage of the service itself, so that the observations are censored, as
they are inherently limited by available supply. Since the 1980s, various works
on Censored Quantile Regression models have shown them to perform well under
such conditions, and in the last two decades, several works have proposed to
implement them flexibly through Neural Networks (CQRNN). However, to our
knowledge, no works have yet applied CQRNN in the transport domain. We address this gap by
applying CQRNN to datasets from two shared mobility providers in the Copenhagen
metropolitan area in Denmark, as well as common synthetic baseline datasets.
The results show that CQRNN can estimate the intended distributions better than
both censorship-unaware models and parametric censored models.
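The key ingredient of CQRNN is a quantile ("pinball") loss adapted to censored observations. A minimal sketch in Python, using the classic Powell-style construction for right-censored data (the function names and toy numbers are illustrative, not taken from the paper):

```python
import numpy as np

def pinball(u, tau):
    # standard check (pinball) loss: rho_tau(u) = u * (tau - 1[u < 0])
    return np.maximum(tau * u, (tau - 1.0) * u)

def censored_pinball(y_obs, y_pred, c, tau):
    # Powell-style censored quantile loss for right-censored data:
    # the prediction is clipped at the censoring threshold c before
    # the check loss is computed, so predicting demand above the
    # supply cap on a censored observation is not penalized
    return pinball(y_obs - np.minimum(y_pred, c), tau)

# toy example: true demand censored at supply c = 5 vehicles
y_obs  = np.array([3.0, 5.0, 5.0])  # observed usage (2nd and 3rd hit the cap)
y_pred = np.array([3.0, 6.0, 4.0])  # a model's median (tau = 0.5) predictions
loss = censored_pinball(y_obs, y_pred, c=5.0, tau=0.5)
# loss[1] is 0: predicting latent demand above the cap is fully
# consistent with an observation stuck at the cap
```

In a CQRNN, `y_pred` would be the output of a neural network trained by minimizing this loss at one or several quantile levels `tau`.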
Related papers
- Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation [56.79064699832383]
We establish a Cloud-Edge Elastic Model Adaptation (CEMA) paradigm in which the edge models only need to perform forward propagation.
In our CEMA, to reduce the communication burden, we devise two criteria to exclude unnecessary samples from uploading to the cloud.
arXiv Detail & Related papers (2024-02-27T08:47:19Z)
- Neural Additive Models for Location Scale and Shape: A Framework for Interpretable Neural Regression Beyond the Mean [1.0923877073891446]
Deep neural networks (DNNs) have proven to be highly effective in a variety of tasks.
Despite this success, the inner workings of DNNs are often not transparent.
This lack of interpretability has led to increased research on inherently interpretable neural networks.
arXiv Detail & Related papers (2023-01-27T17:06:13Z)
- Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment [32.01355605506855]
Quantization-aware training can produce more stable models than standard, adversarial, and Mixup training.
Disagreements often have closer top-1 and top-2 output probabilities, and $Margin$ is a better indicator than the other uncertainty metrics to distinguish disagreements.
We open-source our code and models as a new benchmark for further study of quantized models.
arXiv Detail & Related papers (2022-04-08T11:19:16Z)
- Combining Discrete Choice Models and Neural Networks through Embeddings: Formulation, Interpretability and Performance [10.57079240576682]
This study proposes a novel approach that combines theory- and data-driven choice models using Artificial Neural Networks (ANNs).
In particular, we use continuous vector representations, called embeddings, for encoding categorical or discrete explanatory variables.
Our models deliver state-of-the-art predictive performance, outperforming existing ANN-based models while drastically reducing the number of required network parameters.
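The embedding idea can be sketched in a few lines: a categorical explanatory variable is mapped to a learned dense vector instead of a sparse one-hot indicator. The category names, table values, and dimensions below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical categorical variable (e.g. travel mode in a choice model)
modes = ["car", "bus", "bike", "walk"]
embedding_dim = 3  # illustrative size; in practice learned jointly with the network

# the embedding table is a trainable parameter matrix,
# one row of continuous values per category
embedding_table = rng.normal(size=(len(modes), embedding_dim))

def embed(category):
    # lookup: replace the category with its continuous representation,
    # which then feeds into the downstream choice model / network
    return embedding_table[modes.index(category)]

vec = embed("bus")  # a dense 3-dim vector instead of a 4-dim one-hot
```

Because the table rows are trained end-to-end, categories with similar effects on choices end up with similar vectors, which is one source of the parameter savings the paper reports.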
arXiv Detail & Related papers (2021-09-24T15:55:31Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNN) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly-designed reward function that introduces some degree of bias designed to reduce variance and avoid unstable, possibly-unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
- RethinkCWS: Is Chinese Word Segmentation a Solved Task? [81.11161697133095]
The performance of Chinese Word Segmentation (CWS) systems has gradually reached a plateau with the rapid development of deep neural networks.
In this paper, we take stock of what we have achieved and rethink what's left in the CWS task.
arXiv Detail & Related papers (2020-11-13T11:07:08Z)
- Interpretable Data-Driven Demand Modelling for On-Demand Transit Services [6.982614422666432]
We developed trip and distribution models for on-demand transit (ODT) services at the dissemination area (DA) level.
The results revealed that higher trip distribution levels are expected between dissemination areas with commercial/industrial land-use type and areas with high-density residential land-use.
arXiv Detail & Related papers (2020-10-27T20:48:10Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
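The forward pass of such an additive model can be sketched as follows; this is a generic illustration of the architecture, with made-up shapes and untrained random weights, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def shape_fn(x, w1, b1, w2):
    # one small MLP per input feature: scalar in -> scalar contribution out
    h = np.maximum(0.0, np.outer(x, w1) + b1)  # ReLU hidden layer
    return h @ w2

def nam_forward(X, params, bias):
    # NAM prediction: a bias plus the sum of independent per-feature
    # shape functions, so each feature's effect can be plotted in isolation
    return bias + sum(shape_fn(X[:, j], *params[j]) for j in range(X.shape[1]))

X = rng.normal(size=(4, 2))  # 4 samples, 2 features
params = [(rng.normal(size=8), rng.normal(size=8), rng.normal(size=8))
          for _ in range(X.shape[1])]
pred = nam_forward(X, params, bias=0.1)  # one prediction per sample
```

Interpretability comes from the additive structure: each `shape_fn` depends on a single feature, so its learned curve can be inspected directly.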
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- Estimating Latent Demand of Shared Mobility through Censored Gaussian Processes [11.695095006311176]
Transport demand is highly dependent on supply, especially for shared transport services where availability is often limited.
As observed demand cannot be higher than available supply, historical transport data typically represents a biased, or censored, version of the true underlying demand pattern.
We propose a general method for censorship-aware demand modeling, for which we devise a censored likelihood function.
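A censored likelihood of the kind described can be sketched with the standard Tobit construction for a Gaussian observation model; this is a textbook sketch under that assumption, not the paper's exact formulation:

```python
import math

def censored_gaussian_loglik(y, mu, sigma, censored):
    # Tobit-style log-likelihood for right-censored observations:
    # uncensored points contribute the Gaussian density at y,
    # censored points contribute P(true demand >= observed cap)
    ll = 0.0
    for yi, mi, ci in zip(y, mu, censored):
        z = (yi - mi) / sigma
        if ci:
            # survival probability P(Z >= z) = 0.5 * erfc(z / sqrt(2))
            ll += math.log(0.5 * math.erfc(z / math.sqrt(2.0)))
        else:
            ll += -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))
    return ll

# a censored point well below the model mean is almost certain under
# the model, so its log-likelihood contribution is close to zero
ll = censored_gaussian_loglik([2.0], [5.0], 1.0, [True])
```

Maximizing such a likelihood lets the latent (uncensored) demand be inferred even though the data only record usage up to available supply.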
arXiv Detail & Related papers (2020-01-21T09:26:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.