Optimal Uncertainty-guided Neural Network Training
- URL: http://arxiv.org/abs/1912.12761v1
- Date: Mon, 30 Dec 2019 00:03:28 GMT
- Title: Optimal Uncertainty-guided Neural Network Training
- Authors: H M Dipu Kabir, Abbas Khosravi, Abdollah Kavousi-Fard, Saeid
Nahavandi, Dipti Srinivasan
- Abstract summary: We propose a highly customizable smooth cost function for developing NNs to construct optimal PIs.
Results show that the proposed method reduces variation in the quality of PIs, accelerates the training, and improves convergence probability from 99.2% to 99.8%.
- Score: 14.768115786212187
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Neural network (NN)-based direct uncertainty quantification (UQ) methods
have achieved state-of-the-art performance since the introduction of the first
such method, known as the lower-upper-bound estimation (LUBE) method. However,
currently available cost functions for uncertainty-guided NN training do not
always converge, and converged NNs do not always generate optimal prediction
intervals (PIs). Moreover, several groups have proposed different quality
criteria for PIs, which raises questions about their relative effectiveness.
Most existing cost functions for uncertainty-guided NN training are not
customizable, and the convergence of training is uncertain. Therefore, in this
paper, we propose a highly customizable smooth cost function for developing NNs
to construct optimal PIs. The optimized average width of PIs, PI-failure
distances, and the PI coverage probability (PICP) are computed for the test
dataset. The performance of the proposed method is examined on wind power
generation and electricity demand data. Results show that the proposed method
reduces variation in the quality of PIs, accelerates training, and improves the
convergence probability from 99.2% to 99.8%.
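To make the idea concrete, the sketch below shows one way such a smooth, customizable PI cost could look in Python/NumPy: a sigmoid-based soft estimate of the PI coverage probability (PICP) traded off against the normalized average PI width. The soft coverage indicator, the one-sided coverage penalty, and the weights are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def smooth_pi_loss(y, lower, upper, target_picp=0.95, softness=50.0,
                   width_weight=1.0, coverage_weight=10.0):
    """Illustrative smooth cost for prediction-interval (PI) training.

    Combines a soft (sigmoid-based) estimate of PICP with the normalized
    average PI width. The exact terms and weights in the paper may differ;
    this only shows the general shape of a smooth, customizable
    LUBE-style objective.
    """
    y, lower, upper = map(np.asarray, (y, lower, upper))
    y_range = y.max() - y.min() + 1e-12

    # Soft indicator that y falls inside [lower, upper]: product of two sigmoids.
    inside = 1.0 / (1.0 + np.exp(-softness * (y - lower)))
    inside *= 1.0 / (1.0 + np.exp(-softness * (upper - y)))
    soft_picp = inside.mean()

    # Normalized average PI width (narrower intervals are preferred).
    pinaw = np.mean(upper - lower) / y_range

    # Penalize coverage only when it falls below the nominal target (one-sided).
    coverage_penalty = max(0.0, target_picp - soft_picp) ** 2

    return width_weight * pinaw + coverage_weight * coverage_penalty

# Toy usage: 95% target coverage on noisy point forecasts of a test series.
rng = np.random.default_rng(0)
y = rng.normal(size=200)
pred = y + rng.normal(scale=0.3, size=200)
print(smooth_pi_loss(y, pred - 1.0, pred + 1.0))
```

Because every term here is differentiable, NN-produced bounds could in principle be trained with gradient-based optimizers; knobs such as `width_weight`, `coverage_weight`, and `softness` are the kind of parameters that make a cost of this form customizable.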
Related papers
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
- Conformalized Physics-Informed Neural Networks [0.8437187555622164]
We introduce Conformalized PINNs (C-PINNs) to quantify the uncertainty of PINNs.
C-PINNs utilize the framework of conformal prediction to quantify the uncertainty of PINNs.
arXiv Detail & Related papers (2024-05-13T18:45:25Z)
- A Benchmark on Uncertainty Quantification for Deep Learning Prognostics [0.0]
We assess some of the latest developments in uncertainty quantification for deep learning prognostics.
This includes state-of-the-art variational inference algorithms for Bayesian neural networks (BNNs) as well as popular alternatives such as Monte Carlo Dropout (MCD), deep ensembles (DE), and heteroscedastic neural networks (HNN).
The performance of the methods is evaluated on a subset of the large NASA NCMAPSS dataset for aircraft engines.
arXiv Detail & Related papers (2023-02-09T16:12:47Z)
- Failure-informed adaptive sampling for PINNs [5.723850818203907]
Physics-informed neural networks (PINNs) have emerged as an effective technique for solving PDEs in a wide range of domains.
Recent research has demonstrated, however, that the performance of PINNs can vary dramatically with different sampling procedures.
We present an adaptive approach termed failure-informed PINNs, which is inspired by the viewpoint of reliability analysis.
arXiv Detail & Related papers (2022-10-01T13:34:41Z)
- Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial physics-informed neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired by the weighting strategy of the Adaboost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
arXiv Detail & Related papers (2022-05-18T06:50:44Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We present a principled way to measure the uncertainty of a classifier's predictions, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- The Benefit of the Doubt: Uncertainty Aware Sensing for Edge Computing Platforms [10.86298377998459]
We propose an efficient framework for predictive uncertainty estimation in NNs deployed on embedded edge systems.
The framework is built from the ground up to provide predictive uncertainty based only on one forward pass.
Our approach not only obtains robust and accurate uncertainty estimations but also outperforms state-of-the-art methods in terms of systems performance.
arXiv Detail & Related papers (2021-02-11T11:44:32Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks [22.34227625637843]
We investigate how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates.
We show that one-vs-all formulations can improve calibration on image classification tasks.
arXiv Detail & Related papers (2020-07-10T01:55:02Z) - Frequentist Uncertainty in Recurrent Neural Networks via Blockwise
Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.