STUaNet: Understanding uncertainty in spatiotemporal collective human
mobility
- URL: http://arxiv.org/abs/2102.06027v1
- Date: Tue, 9 Feb 2021 01:43:27 GMT
- Title: STUaNet: Understanding uncertainty in spatiotemporal collective human
mobility
- Authors: Zhengyang Zhou, Yang Wang, Xike Xie, Lei Qiao, Yuantao Li
- Abstract summary: We propose an uncertainty learning mechanism to simultaneously estimate internal data quality and external uncertainty regarding various contextual interactions.
We show that our proposed model is superior in terms of both forecasting and uncertainty quantification.
- Score: 11.436035608461966
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The high dynamics and heterogeneous interactions of complicated urban
systems raise the issue of uncertainty quantification in spatiotemporal human
mobility, in support of critical decision-making in risk-aware web applications
such as urban event prediction, where fluctuations are of significant interest.
Although uncertainty quantifies the potential variation around prediction
results, traditional learning schemes lack uncertainty labels, and conventional
uncertainty quantification approaches mostly rely on statistical estimation
with Bayesian neural networks or ensemble methods. However, these approaches
neither capture the spatiotemporal evolution of uncertainty under varying
contexts nor avoid the poor efficiency of statistical estimation, which
requires training models multiple times. To provide high-quality uncertainty
quantification for spatiotemporal forecasting, we propose an uncertainty
learning mechanism that simultaneously estimates internal data quality and
quantifies external uncertainty arising from various contextual interactions.
To address the lack of uncertainty labels, we propose a hierarchical data
turbulence scheme that actively injects controllable uncertainty for guidance,
offering insights for both uncertainty quantification and weakly supervised
learning. Finally, we re-calibrate and boost prediction performance by devising
a gate-based bridge that adaptively incorporates the learned uncertainty into
predictions. Extensive experiments on three real-world spatiotemporal mobility
datasets corroborate the superiority of our model in both forecasting and
uncertainty quantification.
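The two mechanisms the abstract names, controllable noise injection as weak supervision and a gate that blends learned uncertainty into predictions, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names (`inject_turbulence`, `gated_bridge`), the noise levels, and the exponential gate form are all our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_turbulence(series, levels=(0.05, 0.10, 0.20)):
    """Illustrative 'data turbulence': perturb a clean series with Gaussian
    noise at several known magnitudes. Because the injected scale is known,
    it can serve as a weak-supervision label for uncertainty learning."""
    samples, labels = [], []
    scale = np.abs(series).mean()
    for sigma in levels:
        samples.append(series + rng.normal(0.0, sigma * scale, size=series.shape))
        labels.append(np.full(series.shape, sigma))
    return np.stack(samples), np.stack(labels)

def gated_bridge(prediction, uncertainty, baseline, gate_weight=2.0):
    """Illustrative gate-based bridge: trust the point prediction when learned
    uncertainty is low, and fall back toward a baseline when it is high."""
    gate = np.exp(-gate_weight * uncertainty)  # 1 at zero uncertainty, -> 0 as it grows
    return gate * prediction + (1.0 - gate) * baseline

series = rng.normal(100.0, 10.0, size=48)        # hypothetical hourly inflow counts
noisy, sigma_labels = inject_turbulence(series)  # 3 turbulence levels x 48 steps
calibrated = gated_bridge(np.array([10.0, 20.0]),
                          np.array([0.0, 5.0]),
                          baseline=15.0)
```

With zero uncertainty the first prediction passes through unchanged; with high uncertainty the second is pulled almost entirely to the baseline.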
Related papers
- Uncertainty Quantification for Forward and Inverse Problems of PDEs via
Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Integrating Uncertainty into Neural Network-based Speech Enhancement [27.868722093985006]
Supervised masking approaches in the time-frequency domain aim to employ deep neural networks to estimate a multiplicative mask to extract clean speech.
This leads to a single estimate for each input without any guarantees or measures of reliability.
We study the benefits of modeling uncertainty in clean speech estimation.
arXiv Detail & Related papers (2023-05-15T15:55:12Z)
- Gradient-based Uncertainty Attribution for Explainable Bayesian Deep
Learning [38.34033824352067]
Predictions made by deep learning models are prone to data perturbations, adversarial attacks, and out-of-distribution inputs.
We propose to develop explainable and actionable Bayesian deep learning methods to perform accurate uncertainty quantification.
arXiv Detail & Related papers (2023-04-10T19:14:15Z)
- On Attacking Out-Domain Uncertainty Estimation in Deep Neural Networks [11.929914721626849]
We show that state-of-the-art uncertainty estimation algorithms could fail catastrophically under our proposed adversarial attack.
In particular, we aim at attacking the out-domain uncertainty estimation.
arXiv Detail & Related papers (2022-10-03T23:33:38Z)
- Uncertainty Quantification for Traffic Forecasting: A Unified Approach [21.556559649467328]
Uncertainty is an essential consideration for time series forecasting tasks.
In this work, we focus on quantifying the uncertainty of traffic forecasting.
We develop Deep Spatio-Temporal Uncertainty Quantification (DeepSTUQ), which can estimate both aleatoric and epistemic uncertainty.
arXiv Detail & Related papers (2022-08-11T15:21:53Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent
Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
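The subtraction DEUP describes, epistemic uncertainty as predicted generalization error minus an aleatoric estimate, can be written down directly. A minimal sketch, with function and variable names of our own choosing:

```python
import numpy as np

def deup_epistemic(predicted_generalization_error, aleatoric_estimate):
    """DEUP-style decomposition (illustrative): epistemic uncertainty is what
    remains of the predicted out-of-sample error after removing the estimated
    irreducible (aleatoric) part, floored at zero since uncertainty is
    non-negative."""
    return np.maximum(predicted_generalization_error - aleatoric_estimate, 0.0)

# Hypothetical per-point estimates from an error predictor and a noise model.
total = np.array([1.0, 0.4, 0.2])
aleatoric = np.array([0.25, 0.4, 0.5])
epistemic = deup_epistemic(total, aleatoric)  # -> [0.75, 0.0, 0.0]
```

Where the aleatoric estimate exceeds the predicted error (as in the last entry), the epistemic component is clipped to zero rather than reported as negative.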
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.