Solving Inverse Problems With Deep Neural Networks -- Robustness Included?
- URL: http://arxiv.org/abs/2011.04268v1
- Date: Mon, 9 Nov 2020 09:33:07 GMT
- Title: Solving Inverse Problems With Deep Neural Networks -- Robustness Included?
- Authors: Martin Genzel, Jan Macdonald, and Maximilian März
- Abstract summary: Recent works have pointed out instabilities of deep neural networks for several image reconstruction tasks.
In analogy to adversarial attacks in classification, it was shown that slight distortions in the input domain may cause severe artifacts.
This article sheds new light on this concern, by conducting an extensive study of the robustness of deep-learning-based algorithms for solving underdetermined inverse problems.
- Score: 3.867363075280544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the past five years, deep learning methods have become state-of-the-art in
solving various inverse problems. Before such approaches can find application
in safety-critical fields, a verification of their reliability appears
mandatory. Recent works have pointed out instabilities of deep neural networks
for several image reconstruction tasks. In analogy to adversarial attacks in
classification, it was shown that slight distortions in the input domain may
cause severe artifacts. The present article sheds new light on this concern, by
conducting an extensive study of the robustness of deep-learning-based
algorithms for solving underdetermined inverse problems. This covers compressed
sensing with Gaussian measurements as well as image recovery from Fourier and
Radon measurements, including a real-world scenario for magnetic resonance
imaging (using the NYU-fastMRI dataset). Our main focus is on computing
adversarial perturbations of the measurements that maximize the reconstruction
error. A distinctive feature of our approach is the quantitative and
qualitative comparison with total-variation minimization, which serves as a
provably robust reference method. In contrast to previous findings, our results
reveal that standard end-to-end network architectures are not only resilient
against statistical noise, but also against adversarial perturbations. All
considered networks are trained by common deep learning techniques, without
sophisticated defense strategies.
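The attack at the heart of this study can be made concrete with a short sketch. Below is a minimal PGD-style search, in the spirit of the abstract's description, for a measurement perturbation that maximizes the reconstruction error; `recon_net` (a trained reconstruction network) and the l2 budget `eps` are placeholders, not the authors' actual implementation.

```python
import torch

def measurement_attack(recon_net, y, x_true, eps, steps=50, lr=0.05):
    """PGD-style search for a perturbation e with ||e||_2 <= eps that
    maximizes the reconstruction error ||recon_net(y + e) - x_true||_2."""
    e = torch.zeros_like(y, requires_grad=True)
    optimizer = torch.optim.Adam([e], lr=lr)
    for _ in range(steps):
        loss = -torch.norm(recon_net(y + e) - x_true)  # ascend on the error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():  # project e back onto the l2-ball of radius eps
            norm = e.norm()
            if norm > eps:
                e.mul_(eps / norm)
    return e.detach()
```

The resulting worst-case error can then be compared against the same attack run on a provably robust baseline such as total-variation minimization, as the paper does.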
Related papers
- Cycle Consistency-based Uncertainty Quantification of Neural Networks in
Inverse Imaging Problems [10.992084413881592]
Uncertainty estimation is critical for numerous applications of deep neural networks.
We show an uncertainty quantification approach for deep neural networks used in inverse problems based on cycle consistency (see the first sketch after this list).
arXiv Detail & Related papers (2023-05-22T09:23:18Z) - A Neural-Network-Based Convex Regularizer for Inverse Problems [14.571246114579468]
Deep-learning methods to solve image-reconstruction problems have enabled a significant increase in reconstruction quality.
These new methods often lack reliability and explainability, and there is growing interest in addressing these shortcomings.
In this work, we tackle this issue by revisiting regularizers that are the sum of convex-ridge functions.
The gradient of such regularizers is parameterized by a neural network with a single hidden layer and increasing, learnable activation functions (sketched after this list).
arXiv Detail & Related papers (2022-11-22T18:19:10Z) - Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP): after being refined through only a one-step gradient ascent update, it causes natural images to be misclassified with high probability.
We show that these perturbations are not only image-agnostic but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures (sketched after this list).
arXiv Detail & Related papers (2021-11-19T16:01:45Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem (see the unrolling sketch after this list).
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep neural networks has been their fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete convolutional recurrent neural network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep-learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - A Compact Deep Learning Model for Face Spoofing Detection [4.250231861415827]
Presentation attack detection (PAD) has received significant attention from research communities.
We address the problem via fusing both wide and deep features in a unified neural architecture.
The method is evaluated on several spoofing datasets, such as ROSE-Youtu, SiW, and NUAA Imposter.
arXiv Detail & Related papers (2021-01-12T21:20:09Z) - Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - Denoising Score-Matching for Uncertainty Quantification in Inverse
Problems [1.521936393554569]
We propose a generic Bayesian framework for solving inverse problems, in which we limit the use of deep neural networks to learning a prior distribution on the signals to recover (see the Langevin-sampling sketch after this list).
We apply this framework to Magnetic Resonance Image (MRI) reconstruction and illustrate how this approach can also be used to assess the uncertainty on particular features of a reconstructed image.
arXiv Detail & Related papers (2020-11-16T18:33:06Z) - Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z) - Improving Robustness of Deep-Learning-Based Image Reconstruction [24.882806652224854]
We show that for inverse problem solvers, one should analyze the effect of adversaries in the measurement space.
We introduce an auxiliary network to generate adversarial examples, which is used in a min-max formulation to build robust image reconstruction networks (see the final sketch after this list).
We find that a linear network using the proposed min-max learning scheme indeed converges to the same solution.
arXiv Detail & Related papers (2020-02-26T22:12:36Z)
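A minimal illustration of the cycle-consistency idea from the uncertainty quantification paper above: re-apply the forward operator to the reconstruction and use the discrepancy with the measured data as an uncertainty proxy. The function names are hypothetical, and the paper's actual construction is more involved.

```python
import torch

def cycle_consistency_score(recon_net, forward_op, y):
    """Uncertainty proxy: how far does the re-measured reconstruction
    A(f(y)) land from the observed data y? Large values flag reconstructions
    that the forward model cannot explain."""
    x_hat = recon_net(y)
    return (torch.norm(forward_op(x_hat) - y) / torch.norm(y)).item()
```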
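For the convex-ridge regularizer paper, the learned object is the gradient of R(x) = Σᵢ ψᵢ(wᵢᵀx), which takes the form Wᵀσ(Wx) with componentwise increasing activations σ. A rough sketch; the monotone activation chosen here (a positively scaled tanh) is an assumption, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexRidgeGradient(nn.Module):
    """Gradient field of R(x) = sum_i psi_i(w_i^T x), i.e. W^T sigma(W x).
    As long as each sigma_i is increasing, every ridge psi_i is convex,
    and hence so is R."""
    def __init__(self, dim, n_ridges):
        super().__init__()
        self.W = nn.Linear(dim, n_ridges, bias=False)
        self.scale = nn.Parameter(torch.ones(n_ridges))  # learnable slopes

    def forward(self, x):
        # softplus keeps the scaling positive, so sigma stays increasing
        sigma = F.softplus(self.scale) * torch.tanh(self.W(x))
        return sigma @ self.W.weight  # multiply by W^T; weight is (ridges, dim)
```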
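The meta adversarial perturbation entry describes a universal perturbation that becomes adversarial after a single gradient-ascent step. A hedged sketch of that one-step refinement, with `model`, `loss_fn`, and the l_inf budget `eps` as placeholders:

```python
import torch

def refine_map(model, loss_fn, images, labels, v, alpha=0.01, eps=0.03):
    """One-step gradient ascent on the batch loss for a shared (universal)
    perturbation v, followed by projection onto the l_inf-ball of radius eps."""
    v = v.detach().clone().requires_grad_(True)
    loss = loss_fn(model(images + v), labels)
    loss.backward()
    with torch.no_grad():
        v = (v + alpha * v.grad.sign()).clamp(-eps, eps)
    return v.detach()
```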
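REST builds on algorithm unfolding. The sketch below shows plain unrolled ISTA for min ||Ax - y||² + λ||x||₁ with learnable step sizes and thresholds; REST's robustified variant, which accounts for forward-model mismatch, is not reproduced here.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """Each layer = gradient step on 0.5*||A x - y||^2, then soft-thresholding
    (the proximal map of the l1 penalty). Steps and thresholds are learned."""
    def __init__(self, A, n_layers=10):
        super().__init__()
        self.A = A  # fixed (m, n) measurement matrix
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))
        self.thresholds = nn.Parameter(torch.full((n_layers,), 0.01))

    def forward(self, y):
        x = y.new_zeros(y.shape[0], self.A.shape[1])
        for step, lam in zip(self.steps, self.thresholds):
            grad = (x @ self.A.t() - y) @ self.A  # gradient of the data term
            z = x - step * grad
            x = torch.sign(z) * torch.relu(z.abs() - lam)  # soft threshold
        return x
```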
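For the denoising score-matching entry, the learned prior enters posterior sampling through its score. A simplified unadjusted-Langevin sketch with a single fixed noise level (the paper uses a more careful annealed scheme); `score_net` and `forward_op` are placeholders.

```python
import torch

def langevin_posterior_sample(score_net, forward_op, y, x0,
                              n_steps=200, tau=1e-4, sigma=0.1):
    """Unadjusted Langevin dynamics targeting p(x|y): the drift is the prior
    score from the network plus the gradient of a Gaussian data-fit term."""
    x = x0
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        data_fit = -((forward_op(x) - y) ** 2).sum() / (2 * sigma ** 2)
        data_grad = torch.autograd.grad(data_fit, x)[0]
        with torch.no_grad():
            x = x + tau * (score_net(x) + data_grad) \
                  + (2 * tau) ** 0.5 * torch.randn_like(x)
    return x.detach()
```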
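Finally, the min-max scheme from "Improving Robustness of Deep-Learning-Based Image Reconstruction" alternates attacker and reconstructor updates. A rough sketch, assuming the auxiliary attacker network outputs a suitably norm-constrained measurement perturbation:

```python
import torch

def minmax_step(recon_net, attacker, opt_recon, opt_att, y, x_true):
    """One round of the game: the attacker maximizes reconstruction error,
    then the reconstruction network minimizes it on the attacked input."""
    # Inner maximization: update the attacker only.
    err = torch.norm(recon_net(y + attacker(y)) - x_true)
    opt_att.zero_grad()
    (-err).backward()
    opt_att.step()
    # Outer minimization: update the reconstructor on the (frozen) attack.
    err = torch.norm(recon_net(y + attacker(y).detach()) - x_true)
    opt_recon.zero_grad()
    err.backward()
    opt_recon.step()
    return err.item()
```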
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.