CGAN-EB: A Non-parametric Empirical Bayes Method for Crash Hotspot
Identification Using Conditional Generative Adversarial Networks: A Simulated
Crash Data Study
- URL: http://arxiv.org/abs/2112.06925v1
- Date: Mon, 13 Dec 2021 16:02:47 GMT
- Authors: Mohammad Zarei, Bruce Hellinga, Pedram Izadpanah
- Abstract summary: A new non-parametric empirical Bayes approach called CGAN-EB is proposed for approximating empirical Bayes (EB) estimates in traffic locations.
Its performance is compared in a simulation study with the traditional approach based on negative binomial model (NB-EB)
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, a new non-parametric empirical Bayes approach called CGAN-EB
is proposed for approximating empirical Bayes (EB) estimates at traffic
locations (e.g., road segments). It benefits from the modeling advantages of
deep neural networks, and its performance is compared in a simulation study
with the traditional approach based on the negative binomial model (NB-EB).
NB-EB uses a negative binomial model to fit the crash data and is the most
common approach in practice. The proposed CGAN-EB instead models the crash data
with a conditional generative adversarial network (CGAN), a powerful deep
neural network based method that can model any type of distribution. A number
of simulation experiments are designed and conducted to evaluate the
performance of CGAN-EB under different conditions and compare it with NB-EB.
The results show that CGAN-EB performs as well as NB-EB when conditions favor
the NB-EB model (i.e., when the data conform to the assumptions of the NB
model) and outperforms NB-EB under conditions frequently encountered in
practice, specifically low sample means and crash frequencies that do not
follow a log-linear relationship with the covariates.
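For context on the NB-EB baseline: the classical empirical Bayes estimate blends the model-predicted mean with the observed count, with a weight derived from the negative binomial dispersion parameter. The sketch below shows this standard (Hauer-style) EB formula; the function name and the example numbers are illustrative, not values from the paper.

```python
def nb_eb_estimate(observed: float, mu: float, phi: float) -> float:
    """Classical empirical Bayes estimate under a negative binomial model.

    observed: crash count recorded at the site
    mu:       model-predicted mean crash frequency for similar sites
    phi:      NB inverse-dispersion parameter (Var = mu + mu**2 / phi)
    """
    # Weight on the model prediction: the EB estimate shrinks toward the
    # prediction when the prior is tight (large phi) or the mean is small.
    w = phi / (phi + mu)
    return w * mu + (1.0 - w) * observed


# A site with 5 observed crashes, predicted mean 2.0, dispersion 4.0:
print(nb_eb_estimate(5, 2.0, 4.0))  # -> approximately 3.0 (weight w = 2/3)
```

With a low predicted mean and a high observed count, the estimate lands between the two, which is exactly the regularization behavior the simulation study stresses under low-sample-mean conditions.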
Related papers
- Effortless, Simulation-Efficient Bayesian Inference using Tabular Foundation Models [5.952993835541411]
We show how TabPFN can be used as pre-trained autoregressive conditional density estimators for simulation-based inference.
NPE-PF eliminates the need for inference network selection, training, and hyperparameter tuning.
It exhibits superior robustness to model misspecification and can be scaled to simulation budgets that exceed the context size limit of TabPFN.
arXiv Detail & Related papers (2025-04-24T15:29:39Z)
- Efficient Membership Inference Attacks by Bayesian Neural Network [12.404604217229101]
Membership Inference Attacks (MIAs) aim to estimate whether a specific data point was used in the training of a given model.
We propose a novel approach - Bayesian Membership Inference Attack (BMIA), which performs conditional attack through Bayesian inference.
arXiv Detail & Related papers (2025-03-10T15:58:43Z)
- Rethinking Relation Extraction: Beyond Shortcuts to Generalization with a Debiased Benchmark [53.876493664396506]
Benchmarks are crucial for evaluating machine learning algorithm performance, facilitating comparison and identifying superior solutions.
This paper addresses the issue of entity bias in relation extraction tasks, where models tend to rely on entity mentions rather than context.
We propose a debiased relation extraction benchmark DREB that breaks the pseudo-correlation between entity mentions and relation types through entity replacement.
To establish a new baseline on DREB, we introduce MixDebias, a debiasing method combining data-level and model training-level techniques.
arXiv Detail & Related papers (2025-01-02T17:01:06Z)
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
- Investigating the Robustness of Counterfactual Learning to Rank Models: A Reproducibility Study [61.64685376882383]
Counterfactual learning to rank (CLTR) has attracted extensive attention in the IR community for its ability to leverage massive logged user interaction data to train ranking models.
This paper investigates the robustness of existing CLTR models in complex and diverse situations.
We find that the DLA models and IPS-DCM show better robustness under various simulation settings than IPS-PBM and PRS with offline propensity estimation.
arXiv Detail & Related papers (2024-04-04T10:54:38Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Bayesian Cramér-Rao Bound Estimation with Score-Based Models [3.4480437706804503]
The Bayesian Cramér-Rao bound (CRB) provides a lower bound on the mean square error of any Bayesian estimator under mild regularity conditions.
This work introduces a new data-driven estimator for the CRB using score matching.
arXiv Detail & Related papers (2023-09-28T00:22:21Z)
- Bi-Drop: Enhancing Fine-tuning Generalization via Synchronous sub-net Estimation and Optimization [58.90989478049686]
Bi-Drop is a fine-tuning strategy that selectively updates model parameters using gradients from various sub-nets.
Experiments on the GLUE benchmark demonstrate that Bi-Drop consistently outperforms previous fine-tuning methods.
arXiv Detail & Related papers (2023-05-24T06:09:26Z)
- Principled Pruning of Bayesian Neural Networks through Variational Free Energy Minimization [2.3999111269325266]
We formulate and apply Bayesian model reduction to perform principled pruning of Bayesian neural networks.
A novel iterative pruning algorithm is presented to alleviate the problems arising with naive Bayesian model reduction.
Our experiments indicate better model performance in comparison to state-of-the-art pruning schemes.
arXiv Detail & Related papers (2022-10-17T14:34:42Z)
- Robust Neural Posterior Estimation and Statistical Model Criticism [1.5749416770494706]
We argue that modellers must treat simulators as idealistic representations of the true data generating process.
In this work we revisit neural posterior estimation (NPE), a class of algorithms that enable black-box parameter inference in simulation models.
We find that the presence of misspecification, in contrast, leads to unreliable inference when NPE is used naively.
arXiv Detail & Related papers (2022-10-12T20:06:55Z)
- CGAN-EB: A Non-parametric Empirical Bayes Method for Crash Hotspot Identification Using Conditional Generative Adversarial Networks: A Real-world Crash Data Study [2.3204178451683264]
This paper is the continuation of the authors' previous research, in which a novel non-parametric EB method for modelling crash frequency data was proposed and evaluated.
Unlike parametric approaches, there is no need for a pre-specified underlying relationship between dependent and independent variables in the proposed CGAN-EB.
The proposed methodology is now applied to a real-world data set collected for road segments from 2012 to 2017 in Washington State.
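To make the non-parametric idea concrete, the sketch below shows the shape of a conditional generator: noise and site covariates are concatenated and mapped to a positive synthetic crash frequency, so no functional form (such as log-linear) is imposed on the mean-covariate relationship. The layer sizes, weights, and activation here are illustrative assumptions, not the paper's architecture.

```python
import math
import random

random.seed(0)

def conditional_generator(z, x, W1, b1, W2, b2):
    """One forward pass of a toy conditional generator (CGAN-style).

    z: noise values; x: site covariates (e.g., traffic volume, length).
    Noise and covariates are concatenated, so the output distribution is
    conditioned on x without a pre-specified (e.g., log-linear) form.
    """
    h_in = z + x  # list concatenation: condition the generator on x
    hidden = [max(sum(w * v for w, v in zip(row, h_in)) + b, 0.0)  # ReLU
              for row, b in zip(W1, b1)]
    out = sum(w * v for w, v in zip(W2, hidden)) + b2
    return math.exp(out)  # exp keeps the synthetic crash frequency positive

# Illustrative sizes: 4 noise dims, 3 covariates, 8 hidden units.
W1 = [[random.gauss(0, 0.5) for _ in range(7)] for _ in range(8)]
b1 = [0.0] * 8
W2 = [random.gauss(0, 0.5) for _ in range(8)]
b2 = 0.0
z = [random.gauss(0, 1) for _ in range(4)]
x = [0.5, 1.2, 0.3]  # hypothetical standardized covariates
sample = conditional_generator(z, x, W1, b1, W2, b2)
```

Drawing many such samples for a fixed x approximates the conditional crash-count distribution at that site, which is what the EB machinery then combines with the observed count.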
arXiv Detail & Related papers (2021-12-16T21:22:56Z)
- Sample-Efficient Reinforcement Learning via Conservative Model-Based Actor-Critic [67.00475077281212]
Model-based reinforcement learning algorithms are more sample efficient than their model-free counterparts.
We propose a novel approach that achieves high sample efficiency without the strong reliance on accurate learned models.
We show that CMBAC significantly outperforms state-of-the-art approaches in terms of sample efficiency on several challenging tasks.
arXiv Detail & Related papers (2021-12-16T15:33:11Z)
- Calibration and Uncertainty Quantification of Bayesian Convolutional Neural Networks for Geophysical Applications [0.0]
Subsurface models should provide calibrated probabilities and the associated uncertainties in their predictions.
It has been shown that popular Deep Learning-based models are often miscalibrated, and due to their deterministic nature, provide no means to interpret the uncertainty of their predictions.
We compare three different approaches obtaining probabilistic models based on convolutional neural networks in a Bayesian formalism.
arXiv Detail & Related papers (2021-05-25T17:54:23Z)
- COMBO: Conservative Offline Model-Based Policy Optimization [120.55713363569845]
Uncertainty estimation with complex models, such as deep neural networks, can be difficult and unreliable.
We develop a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-actions.
We find that COMBO consistently performs as well or better as compared to prior offline model-free and model-based methods.
arXiv Detail & Related papers (2021-02-16T18:50:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.