Improving Robustness of Deep-Learning-Based Image Reconstruction
- URL: http://arxiv.org/abs/2002.11821v1
- Date: Wed, 26 Feb 2020 22:12:36 GMT
- Title: Improving Robustness of Deep-Learning-Based Image Reconstruction
- Authors: Ankit Raj, Yoram Bresler, Bo Li
- Abstract summary: We show that for inverse problem solvers, one should analyze the effect of adversaries in the measurement space rather than the signal space.
We introduce an auxiliary network to generate adversarial examples, which is used in a min-max formulation to build robust image reconstruction networks.
We find that a linear network trained with the proposed min-max scheme indeed converges to the theoretically predicted singular-value-filtered solution.
- Score: 24.882806652224854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep-learning-based methods for different applications have been shown
vulnerable to adversarial examples. These examples make deployment of such
models in safety-critical tasks questionable. Use of deep neural networks as
inverse problem solvers has generated much excitement for medical imaging
including CT and MRI, but recently a similar vulnerability has also been
demonstrated for these tasks. We show that for such inverse problem solvers, one
should analyze the effect of adversaries in the measurement space rather than in
the signal space considered in previous work. In this
paper, we propose to modify the training strategy of end-to-end
deep-learning-based inverse problem solvers to improve robustness. We introduce
an auxiliary network to generate adversarial examples, which is used in a
min-max formulation to build robust image reconstruction networks.
Theoretically, we show that for a linear reconstruction scheme the min-max
formulation results in a singular-value-filter regularized solution, which
suppresses the effect of adversarial examples arising from ill-conditioning
of the measurement matrix. We find that a linear network trained with
the proposed min-max learning scheme indeed converges to the same solution. In
addition, for non-linear Compressed Sensing (CS) reconstruction with deep
networks, the proposed approach yields a significant improvement in robustness
over other methods. We complement the theory with experiments for CS on
two different datasets and evaluate the effect of increasing perturbations on
trained networks. We find the behavior for ill-conditioned and well-conditioned
measurement matrices to be qualitatively different.
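To make the singular-value-filtering claim concrete, the following is a schematic sketch (not the paper's actual derivation) of what a singular-value-filter regularized linear reconstruction looks like for a forward model y = Ax with SVD A = U Σ V^T; the specific filter induced by the min-max training is derived in the paper and is not reproduced here.

\[
  A = U \Sigma V^{\top}, \qquad \Sigma = \operatorname{diag}(\sigma_1, \dots, \sigma_m),
\]
\[
  \hat{x}(y) = V \operatorname{diag}\!\left(\frac{f(\sigma_i)}{\sigma_i}\right) U^{\top} y, \qquad 0 \le f(\sigma_i) \le 1.
\]

A filter with f(σ) ≈ 1 on well-conditioned directions and f(σ) → 0 as σ → 0 (for example, the Tikhonov filter f(σ) = σ²/(σ² + λ)) damps measurement-space perturbations aligned with the ill-conditioned directions of A, which is the suppression mechanism the abstract refers to.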
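As a rough illustration of the proposed training strategy, below is a minimal PyTorch-style sketch of an alternating min-max loop in which an auxiliary generator G produces bounded measurement-space perturbations and the reconstruction network R is trained against them. All names and details here (R, G, the forward matrix A, the eps/tanh bound, the clean-plus-perturbed loss) are illustrative assumptions, not the authors' released implementation.

    # Minimal sketch of min-max adversarial training in the measurement space.
    # R: reconstruction network (measurements -> image); G: auxiliary generator
    # of perturbations. A, eps, and the tanh bound are assumptions for illustration.
    import torch
    import torch.nn.functional as F

    def minmax_step(R, G, A, x, eps, opt_R, opt_G):
        """One alternating update on a batch of ground-truth images x."""
        y = x @ A.T                              # clean measurements, y = A x

        # Max step: G seeks a bounded perturbation that hurts reconstruction.
        delta = eps * torch.tanh(G(y))           # keeps |delta| <= eps elementwise
        loss_G = -F.mse_loss(R(y + delta), x)    # maximize reconstruction error
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()

        # Min step: R is trained on clean and perturbed measurements.
        delta = (eps * torch.tanh(G(y))).detach()
        loss_R = F.mse_loss(R(y), x) + F.mse_loss(R(y + delta), x)
        opt_R.zero_grad(); loss_R.backward(); opt_R.step()
        return loss_R.item()

In the linear setting discussed above, R would be a single linear layer, which is the case in which the learned reconstruction can be compared against a singular-value-filtered solution.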
Related papers
- Towards Robust Out-of-Distribution Generalization: Data Augmentation and Neural Architecture Search Approaches [4.577842191730992]
We study ways toward robust OoD generalization for deep learning.
We first propose a novel and effective approach to disentangle the spurious correlation between features that are not essential for recognition.
We then study the problem of strengthening neural architecture search in OoD scenarios.
arXiv Detail & Related papers (2024-10-25T20:50:32Z) - Adaptive Anomaly Detection in Network Flows with Low-Rank Tensor Decompositions and Deep Unrolling [9.20186865054847]
Anomaly detection (AD) is increasingly recognized as a key component for ensuring the resilience of future communication systems.
This work considers AD in network flows using incomplete measurements.
We propose a novel block-successive convex approximation algorithm based on a regularized model-fitting objective.
Inspired by Bayesian approaches, we extend the model architecture to perform online adaptation to per-flow and per-time-step statistics.
arXiv Detail & Related papers (2024-09-17T19:59:57Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recover [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - An End-To-End-Trainable Iterative Network Architecture for Accelerated
Radial Multi-Coil 2D Cine MR Image Reconstruction [4.233498905999929]
We propose a CNN-architecture for image reconstruction of accelerated 2D radial cine MRI with multiple receiver coils.
We investigate the proposed training-strategy and compare our method to other well-known reconstruction techniques with learned and non-learned regularization methods.
arXiv Detail & Related papers (2021-02-01T11:42:04Z) - Solving Inverse Problems With Deep Neural Networks -- Robustness
Included? [3.867363075280544]
Recent works have pointed out instabilities of deep neural networks for several image reconstruction tasks.
In analogy to adversarial attacks in classification, it was shown that slight distortions in the input domain may cause severe artifacts.
This article sheds new light on this concern, by conducting an extensive study of the robustness of deep-learning-based algorithms for solving underdetermined inverse problems.
arXiv Detail & Related papers (2020-11-09T09:33:07Z) - Solving Sparse Linear Inverse Problems in Communication Systems: A Deep
Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
arXiv Detail & Related papers (2020-10-29T06:32:53Z) - A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack
and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z) - Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with the non-convexity of the underlying optimization problem renders parameter learning sensitive to initialization.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)