Measuring the Confidence of Traffic Forecasting Models: Techniques,
Experimental Comparison and Guidelines towards Their Actionability
- URL: http://arxiv.org/abs/2210.16049v1
- Date: Fri, 28 Oct 2022 10:49:55 GMT
- Title: Measuring the Confidence of Traffic Forecasting Models: Techniques,
Experimental Comparison and Guidelines towards Their Actionability
- Authors: Ibai Laña, Ignacio (Iñaki) Olabarrieta, Javier Del Ser
- Abstract summary: Uncertainty estimation provides the user with augmented information about the model's confidence in its predicted outcome.
There is little consensus around the different types of uncertainty that one can gauge in machine learning models.
This work aims to fill this gap by reviewing the different techniques and metrics of uncertainty available in the literature.
- Score: 7.489793155793319
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The estimation of the amount of uncertainty featured by predictive
machine learning models has gained great momentum in recent years. Uncertainty
estimation provides the user with augmented information about the model's
confidence in its predicted outcome. Despite the inherent utility of this
information for building the user's trust in the model, there is little
consensus around the different types of uncertainty that one can gauge in
machine learning models, or around the suitability of the different techniques
that can be used to quantify the uncertainty of a specific model. This subject
remains largely unexplored within the traffic modeling domain, even though
measuring the confidence associated with traffic forecasts can significantly
improve their actionability in practical traffic management systems. This work
aims to fill this gap by reviewing the different techniques and metrics of
uncertainty available in the literature, and by critically discussing how the
confidence levels computed for traffic forecasting models can be helpful for
researchers and practitioners working in this area. To ground this critical
discussion in empirical evidence, it is further informed by experimental
results produced by different uncertainty estimation techniques over real
traffic data collected in Madrid (Spain), rendering a general overview of the
benefits and caveats of every technique, how they can be compared to each
other, and how the measured uncertainty decreases with the amount, quality and
diversity of the data used to produce the forecasts.
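
To make the reviewed ideas concrete, the following is a minimal, self-contained sketch of one common uncertainty estimation technique (quantile regression yielding prediction intervals) together with two metrics frequently used to compare such techniques: prediction interval coverage probability (PICP) and mean prediction interval width (MPIW). The synthetic lagged-speed data, the scikit-learn gradient boosting models and the 90% nominal coverage are illustrative assumptions that stand in for the Madrid measurements and the specific techniques benchmarked in the paper.

```python
# Hedged sketch: prediction intervals for a traffic speed forecaster via
# quantile regression, plus PICP/MPIW metrics for comparing techniques.
# Data, models and settings are illustrative, not those used in the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for loop-detector readings: a daily cycle plus noise,
# sampled in 5-minute slots (288 per day); features are 3 lagged speeds.
n = 2000
t = np.arange(n)
speed = 60 + 15 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, n)
X = np.column_stack([speed[i:n - 3 + i] for i in range(3)])
y = speed[3:]
X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

# One quantile model per bound of a nominal 90% prediction interval.
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X_train, y_train)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X_train, y_train)
lower, upper = lo.predict(X_test), hi.predict(X_test)

# PICP: fraction of observations falling inside the interval (target 0.90).
# MPIW: average interval width; narrower is better at equal coverage.
picp = np.mean((y_test >= lower) & (y_test <= upper))
mpiw = np.mean(upper - lower)
print(f"PICP = {picp:.2f} (nominal 0.90), MPIW = {mpiw:.1f} km/h")
```

Comparing techniques then amounts to checking which one reaches the nominal coverage with the narrowest intervals, and how both quantities evolve as more, better or more diverse data are used for training.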
Related papers
- Error-Driven Uncertainty Aware Training [7.702016079410588]
Error-Driven Uncertainty Aware Training aims to enhance the ability of neural classifiers to estimate their uncertainty correctly.
The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted (a hedged sketch of this idea appears after this related-papers list).
We evaluate EUAT using diverse neural models and datasets in the image recognition domain, considering both non-adversarial and adversarial settings.
arXiv Detail & Related papers (2024-05-02T11:48:14Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation [50.920911532133154]
The intrinsic ill-posedness and ordinal-sensitive nature of monocular depth estimation (MDE) models pose major challenges to the estimation of uncertainty degree.
We propose to model the uncertainty of MDE models from the perspective of the inherent probability distributions.
By simply introducing additional training regularization terms, our model, with surprisingly simple formations and without requiring extra modules or multiple inferences, can provide uncertainty estimations with state-of-the-art reliability.
arXiv Detail & Related papers (2023-07-19T12:11:15Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Architectural patterns for handling runtime uncertainty of data-driven models in safety-critical perception [1.7616042687330642]
We present additional architectural patterns for handling uncertainty estimation.
We evaluate the four patterns qualitatively and quantitatively with respect to safety and performance gains.
We conclude that the consideration of context information of the driving situation makes it possible to accept more or less uncertainty depending on the inherent risk of the situation.
arXiv Detail & Related papers (2022-06-14T13:31:36Z)
- How certain are your uncertainties? [0.3655021726150368]
Measures of uncertainty in the output of a deep learning method are useful in several ways.
This work investigates the stability of these uncertainty measurements, in terms of both magnitude and spatial pattern.
arXiv Detail & Related papers (2022-03-01T05:25:02Z)
- Probabilistic Deep Learning to Quantify Uncertainty in Air Quality Forecasting [5.007231239800297]
This work applies state-of-the-art techniques of uncertainty quantification in a real-world setting of air quality forecasts.
We describe training probabilistic models and evaluate their predictive uncertainties based on empirical performance, reliability of confidence estimate, and practical applicability.
Our experiments demonstrate that the proposed models perform better than previous works in quantifying uncertainty in data-driven air quality forecasts.
arXiv Detail & Related papers (2021-12-05T17:01:18Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
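
As a companion to the Error-Driven Uncertainty Aware Training summary above, here is a hedged sketch of the selective-loss idea it describes: correctly and incorrectly predicted training examples are routed to different loss terms. The PyTorch implementation, the choice of cross-entropy for wrong predictions and an entropy penalty for correct ones, and the weighting factor are our own illustrative assumptions, not the EUAT authors' exact formulation.

```python
# Hedged sketch of error-driven, uncertainty-aware training: wrong
# predictions get a standard fit loss, correct ones get a confidence-shaping
# term. Loss choices and weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def error_driven_loss(logits, targets, uncertainty_weight=0.5):
    """Route examples to one of two loss terms based on prediction correctness."""
    correct = logits.argmax(dim=1).eq(targets)

    # Misclassified examples: push the model toward the true label as usual.
    if (~correct).any():
        fit_loss = F.cross_entropy(logits[~correct], targets[~correct])
    else:
        fit_loss = logits.new_zeros(())

    # Correctly classified examples: penalise predictive entropy so that the
    # model reports low uncertainty where it is actually right.
    if correct.any():
        probs = F.softmax(logits[correct], dim=1)
        conf_loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    else:
        conf_loss = logits.new_zeros(())

    return fit_loss + uncertainty_weight * conf_loss

# Usage inside an ordinary training loop (model, optimizer, batch assumed):
#   loss = error_driven_loss(model(x_batch), y_batch)
#   loss.backward(); optimizer.step()
```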