Cycle Consistency-based Uncertainty Quantification of Neural Networks in
Inverse Imaging Problems
- URL: http://arxiv.org/abs/2305.12852v1
- Date: Mon, 22 May 2023 09:23:18 GMT
- Title: Cycle Consistency-based Uncertainty Quantification of Neural Networks in
Inverse Imaging Problems
- Authors: Luzhe Huang, Jianing Li, Xiaofu Ding, Yijie Zhang, Hanlong Chen,
Aydogan Ozcan
- Abstract summary: Uncertainty estimation is critical for numerous applications of deep neural networks.
We present a cycle consistency-based uncertainty quantification approach for deep neural networks used in inverse problems.
- Score: 10.992084413881592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty estimation is critical for numerous applications of deep neural
networks and draws growing attention from researchers. Here, we demonstrate an
uncertainty quantification approach for deep neural networks used in inverse
problems based on cycle consistency. We build forward-backward cycles using the
physical forward model available and a trained deep neural network solving the
inverse problem at hand, and accordingly derive uncertainty estimators through
regression analysis on the consistency of these forward-backward cycles. We
theoretically analyze cycle consistency metrics and derive their relationship
with respect to uncertainty, bias, and robustness of the neural network
inference. To demonstrate the effectiveness of these cycle consistency-based
uncertainty estimators, we classified corrupted and out-of-distribution input
image data using widely used image deblurring and super-resolution neural
networks as testbeds. In blind testing, our method outperformed other models
in identifying unseen input data corruption and distribution shifts.
This work provides a simple-to-implement and rapid uncertainty quantification
method that can be universally applied to various neural networks used for
solving inverse problems.
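The core idea above can be sketched in a few lines: given a known physical forward model and a trained inverse network, re-simulate the measurement from the network's reconstruction and use the cycle residual as an uncertainty proxy. The following is a minimal toy sketch, not the paper's implementation; the operator `A`, the idealized "network" `inverse`, and all function names are hypothetical stand-ins.

```python
import numpy as np

def cycle_consistency_score(y, inverse_net, forward_model, n_cycles=3):
    """Uncertainty proxy from forward-backward cycles.

    y: measured input; inverse_net: trained inverse solver x_hat = f(y);
    forward_model: known physical operator y_hat = A(x).
    Returns per-cycle residuals; large or growing residuals flag inputs
    the network is likely uncertain about (e.g., OOD or corrupted).
    """
    residuals = []
    y_k = y
    for _ in range(n_cycles):
        x_hat = inverse_net(y_k)      # backward: solve the inverse problem
        y_k = forward_model(x_hat)    # forward: re-simulate the measurement
        residuals.append(np.linalg.norm(y_k - y))
    return residuals

# Toy example: a known 2x2 averaging forward operator and its exact inverse
# standing in for a trained deblurring network.
A = np.array([[0.6, 0.4], [0.4, 0.6]])   # hypothetical forward operator
A_inv = np.linalg.inv(A)                  # idealized inverse "network"
forward = lambda x: A @ x
inverse = lambda y: A_inv @ y

y_clean = forward(np.array([1.0, 2.0]))
scores = cycle_consistency_score(y_clean, inverse, forward)
# With an exact inverse the cycle residuals are ~0; a biased or imperfect
# network, or a corrupted input, yields noticeably larger residuals.
```

The paper derives regression-based estimators from these residuals; the sketch only shows the forward-backward cycle that those estimators are built on.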
Related papers
- An Analytic Solution to Covariance Propagation in Neural Networks [10.013553984400488]
This paper presents a sample-free moment propagation technique to accurately characterize the input-output distributions of neural networks.
A key enabler of our technique is an analytic solution for the covariance of random variables passed through nonlinear activation functions.
The wide applicability and merits of the proposed technique are shown in experiments analyzing the input-output distributions of trained neural networks and training Bayesian neural networks.
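Sample-free moment propagation rests on closed-form moments of Gaussians pushed through nonlinear activations. For ReLU these are classical results (shown here as a univariate sketch for illustration, not code from the paper): if X ~ N(mu, sigma^2), then E[ReLU(X)] = mu*Phi(mu/sigma) + sigma*phi(mu/sigma), with Phi and phi the standard normal CDF and PDF.

```python
import math

def relu_moments(mu, sigma):
    """Closed-form mean and variance of ReLU(X) for X ~ N(mu, sigma^2)."""
    if sigma == 0:
        m = max(mu, 0.0)
        return m, 0.0
    a = mu / sigma
    phi = math.exp(-0.5 * a * a) / math.sqrt(2 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1 + math.erf(a / math.sqrt(2)))           # standard normal cdf
    mean = mu * Phi + sigma * phi
    second = (mu * mu + sigma * sigma) * Phi + mu * sigma * phi
    return mean, second - mean * mean
```

Propagating means and (co)variances layer by layer with such formulas characterizes a network's input-output distribution without Monte Carlo sampling.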
arXiv Detail & Related papers (2024-03-24T14:08:24Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - To be or not to be stable, that is the question: understanding neural
networks for inverse problems [0.0]
In this paper, we theoretically analyze the trade-off between stability and accuracy of neural networks.
We propose different supervised and unsupervised solutions to increase the network stability and maintain a good accuracy.
arXiv Detail & Related papers (2022-11-24T16:16:40Z) - Global quantitative robustness of regression feed-forward neural
networks [0.0]
We adapt the notion of the regression breakdown point to regression neural networks.
We compare the performance, measured by the out-of-sample loss, with a proxy for the breakdown rate.
The results motivate the use of robust loss functions for neural network training.
arXiv Detail & Related papers (2022-11-18T09:57:53Z) - Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Networks (VNNs).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
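The mechanism described above (sub-layers that map a layer's input to the parameters of its output distribution) can be illustrated with a minimal NumPy sketch. This is a simplified stand-in, not the paper's architecture: the class name, the Gaussian parameterization, and the untrained random weights are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class VariationalLayer:
    """Minimal sketch of a variational layer: two sub-layers map the input
    to the mean and log-variance of a Gaussian over the layer's output, and
    the forward pass samples from that distribution."""

    def __init__(self, d_in, d_out):
        # In a real VNN these weights are learned; here they are random.
        self.W_mu = rng.normal(0, 0.1, (d_out, d_in))
        self.W_logvar = rng.normal(0, 0.1, (d_out, d_in))

    def forward(self, x, n_samples=1):
        mu = self.W_mu @ x                       # predicted output mean
        sigma = np.exp(0.5 * (self.W_logvar @ x))  # predicted output std
        return mu + sigma * rng.normal(size=(n_samples, mu.size))

layer = VariationalLayer(4, 3)
samples = layer.forward(np.ones(4), n_samples=100)
# The spread of samples per output unit serves as an uncertainty estimate.
uncertainty = samples.std(axis=0)
```

Repeated sampling through such layers yields a predictive distribution, playing a role analogous to the multiple stochastic passes of Monte Carlo Dropout.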
arXiv Detail & Related papers (2022-07-04T15:41:02Z) - Stable, accurate and efficient deep neural networks for inverse problems
with analysis-sparse models [2.969705152497174]
We present a novel construction of an accurate, stable and efficient neural network for inverse problems with general analysis-sparse models.
To construct the network, we unroll NESTA, an accelerated first-order method for convex optimization.
A restart scheme is employed to enable exponential decay of the required network depth, yielding a shallower, and consequently more efficient, network.
arXiv Detail & Related papers (2022-03-02T00:44:25Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning is its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z) - Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We identify three issues with this approach and propose a solution for extracting aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.