Adversarial Robustness of MR Image Reconstruction under Realistic
Perturbations
- URL: http://arxiv.org/abs/2208.03161v1
- Date: Fri, 5 Aug 2022 13:39:40 GMT
- Title: Adversarial Robustness of MR Image Reconstruction under Realistic
Perturbations
- Authors: Jan Nikolas Morshuis and Sergios Gatidis and Matthias Hein and
Christian F. Baumgartner
- Abstract summary: Adversarial attacks offer a valuable tool to understand possible failure modes and worst-case performance of DL-based reconstruction algorithms.
We show that current state-of-the-art DL-based reconstruction algorithms are indeed sensitive to such perturbations to a degree where relevant diagnostic information may be lost.
- Score: 40.35796592557175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Learning (DL) methods have shown promising results for solving ill-posed
inverse problems such as MR image reconstruction from undersampled $k$-space
data. However, these approaches currently offer no guarantees on
reconstruction quality, and the reliability of such algorithms remains
poorly understood.
Adversarial attacks offer a valuable tool to understand possible failure modes
and worst-case performance of DL-based reconstruction algorithms. In this paper
we describe adversarial attacks on multi-coil $k$-space measurements and
evaluate them on the recently proposed E2E-VarNet and a simpler UNet-based
model. In contrast to prior work, the attacks are targeted to specifically
alter diagnostically relevant regions. Using two realistic attack models
(adversarial $k$-space noise and adversarial rotations) we are able to show
that current state-of-the-art DL-based reconstruction algorithms are indeed
sensitive to such perturbations to a degree where relevant diagnostic
information may be lost. Surprisingly, in our experiments the UNet and the more
sophisticated E2E-VarNet were similarly sensitive to such attacks. Our findings
add to the growing evidence that caution must be exercised as DL-based methods
move closer to clinical practice.
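To make the first attack model concrete, here is a minimal PGD-style sketch of adversarial k-space noise. The names `recon_model` and `roi_mask`, and the use of a masked loss to target a diagnostically relevant region, are our assumptions about the setup, not the authors' released code.

```python
import torch

def kspace_pgd_attack(recon_model, kspace, roi_mask, eps=0.05, alpha=0.01, steps=10):
    # kspace: real-valued tensor with real/imag parts split into a
    # separate dimension (as in fastMRI); recon_model maps it to an image.
    clean = recon_model(kspace).detach()           # unperturbed reconstruction
    delta = torch.zeros_like(kspace, requires_grad=True)
    for _ in range(steps):
        recon = recon_model(kspace + delta)
        # Maximize reconstruction error inside the diagnostic ROI only.
        loss = ((recon - clean) * roi_mask).pow(2).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()     # gradient-ascent step
            delta.clamp_(-eps, eps)                # keep the perturbation small
            delta.grad.zero_()
    return (kspace + delta).detach()
```

The adversarial-rotation attack could be phrased analogously, with a (differentiable) rotation angle taking the place of `delta`.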
Related papers
- Detecting and Mitigating Adversarial Attacks on Deep Learning-Based MRI Reconstruction Without Any Retraining [2.5943586090617377]
We propose a novel approach for detecting and mitigating adversarial attacks on MRI reconstruction models without any retraining.
Our detection strategy is based on the idea of cyclic measurement consistency.
We show that our method substantially reduces the impact of adversarial perturbations across different datasets.
arXiv Detail & Related papers (2025-01-03T17:23:52Z)
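A rough sketch of the cyclic measurement consistency idea as we read it; `forward_op` is an assumed helper that re-simulates undersampled k-space from an image (coil sensitivities, FFT, sampling mask).

```python
import torch

def cycle_inconsistency(recon_model, forward_op, kspace):
    x1 = recon_model(kspace)            # first-pass reconstruction
    k2 = forward_op(x1)                 # re-measure the reconstruction
    x2 = recon_model(k2)                # second-pass reconstruction
    # Clean inputs should be nearly cycle-consistent; attacked inputs
    # typically are not, so a large score flags a suspicious input.
    return (torch.norm(x2 - x1) / torch.norm(x1)).item()
```

A score above a calibrated threshold would flag the measurement as potentially adversarial, with no retraining of `recon_model` required.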
- Perturb, Attend, Detect and Localize (PADL): Robust Proactive Image Defense [5.150608040339816]
We introduce PADL, a new defense that generates image-specific perturbations using a symmetric scheme of encoding and decoding based on cross-attention.
Our method generalizes to a range of unseen models with diverse architectural designs, such as StarGANv2, BlendGAN, DiffAE, StableDiffusion and StableDiffusionXL.
arXiv Detail & Related papers (2024-09-26T15:16:32Z)
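A toy stand-in for the cross-attention encoding idea: learned tokens and image patches interact through cross-attention to produce an image-specific perturbation. All sizes, names, and the output scaling are illustrative assumptions, not PADL's actual architecture.

```python
import torch
import torch.nn as nn

class CrossAttnPerturber(nn.Module):
    def __init__(self, patch=8, dim=64, n_tokens=16, channels=3):
        super().__init__()
        self.p, self.c = patch, channels
        self.embed = nn.Linear(channels * patch * patch, dim)
        self.tokens = nn.Parameter(torch.randn(n_tokens, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(dim, channels * patch * patch)

    def forward(self, img):                        # img: (B, C, H, W)
        B, C, H, W = img.shape
        p = self.p
        # Split the image into non-overlapping patches and embed them.
        x = img.unfold(2, p, p).unfold(3, p, p)    # (B, C, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        q = self.embed(x)                          # patch queries
        kv = self.tokens.expand(B, -1, -1)         # learned key/value tokens
        out, _ = self.attn(q, kv, kv)              # cross-attention
        pert = self.proj(out).reshape(B, H // p, W // p, C, p, p)
        pert = pert.permute(0, 3, 1, 4, 2, 5).reshape(B, C, H, W)
        return 0.03 * torch.tanh(pert)             # small, bounded perturbation
```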
- Evaluating Adversarial Robustness of Low dose CT Recovery [15.436044993406966]
We evaluate the robustness of different deep learning approaches and classical methods for low dose CT recovery.
We show that deep networks, including model-based networks encouraging data consistency, are more susceptible to untargeted attacks.
As the resulting reconstructions have high data consistency with the original measurements, these localized attacks can be used to explore the solution space of the CT recovery problem.
arXiv Detail & Related papers (2024-02-18T11:57:01Z)
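One way to read the "localized attacks with high data consistency" claim is an attack that changes a chosen image region while penalizing measurement mismatch. The sketch below follows that reading; `forward_op` (an assumed differentiable Radon transform) and all names are ours, not the paper's algorithm.

```python
import torch

def dc_preserving_attack(recon_model, forward_op, y, region_mask,
                         eps=0.01, alpha=2e-3, steps=20, lam=10.0):
    # y: measured sinogram; region_mask selects the image region to corrupt.
    x0 = recon_model(y).detach()                   # clean reconstruction
    delta = torch.zeros_like(y, requires_grad=True)
    for _ in range(steps):
        x = recon_model(y + delta)
        local_change = ((x - x0) * region_mask).pow(2).mean()
        dc_penalty = (forward_op(x) - y).pow(2).mean()
        # Change the chosen region while keeping the reconstruction
        # consistent with the ORIGINAL measurements.
        loss = local_change - lam * dc_penalty
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (y + delta).detach()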
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
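The paper computes its Adversarial Rate with formal verification; the random-sampling version below is only an illustrative Monte-Carlo approximation of the idea, with `policy` assumed to return action logits.

```python
import torch

@torch.no_grad()
def adversarial_rate(policy, states, eps=0.05, n_samples=100):
    # Fraction of random eps-bounded perturbations that flip the
    # greedy action of the DRL policy.
    flips, total = 0, 0
    for s in states:
        a_clean = policy(s).argmax()
        for _ in range(n_samples):
            noise = (torch.rand_like(s) * 2 - 1) * eps
            flips += int(policy(s + noise).argmax() != a_clean)
            total += 1
    return flips / total
```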
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms by applying adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
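Adversarial training generically means training on attacked inputs. A minimal FGSM-based step is sketched below; the paper's attack model for jet-tagging inputs may differ, and the names here are ours.

```python
import torch

def adversarial_training_step(model, loss_fn, optimizer, x, y, eps=0.01):
    # Craft a one-step (FGSM) adversarial example.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x + eps * x_adv.grad.sign()).detach()

    optimizer.zero_grad()                          # clear attack gradients
    # Train on a mix of clean and adversarial examples.
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```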
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
MAP causes natural images to be misclassified with high probability after being updated through only a one-step gradient ascent update.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
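A MAML-style sketch of how such a perturbation could be learned: find an initialization `v` such that a single gradient-ascent step on a new batch already yields a strong attack. The update rule and hyperparameters are illustrative assumptions, not the paper's procedure.

```python
import torch

def train_map(model, loss_fn, loader, eps=0.04, alpha=1e-2, meta_lr=1e-2):
    x0, _ = next(iter(loader))
    v = torch.zeros_like(x0[0])                    # one shared perturbation
    for x, y in loader:
        v_inner = v.clone().requires_grad_(True)
        # Inner step: one differentiable gradient-ascent update.
        inner_loss = loss_fn(model(x + v_inner), y)
        (g,) = torch.autograd.grad(inner_loss, v_inner, create_graph=True)
        v_adapted = v_inner + alpha * g
        # Outer step: make the ADAPTED perturbation maximally harmful.
        outer_loss = loss_fn(model(x + v_adapted), y)
        (meta_g,) = torch.autograd.grad(outer_loss, v_inner)
        v = (v + meta_lr * meta_g.sign()).clamp(-eps, eps)
    return v
```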
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
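For context, algorithm unfolding turns the iterations of a sparse solver into network layers with learned weights. Below is a minimal learned-ISTA (LISTA) unrolling; REST unrolls a robust variant of the recovery problem, so this sketch shows only the basic unfolding idea it builds on.

```python
import torch
import torch.nn as nn

def soft(x, theta):
    # Soft-thresholding: the proximal operator of the l1 norm.
    return torch.sign(x) * torch.relu(torch.abs(x) - theta)

class LISTA(nn.Module):
    def __init__(self, m, n, n_layers=10):
        super().__init__()
        self.encode = nn.Linear(m, n, bias=False)   # plays the role of A^T
        self.step = nn.ModuleList(
            [nn.Linear(n, n, bias=False) for _ in range(n_layers)]
        )
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))

    def forward(self, y):                           # y: (B, m)
        x = torch.zeros(y.shape[0], self.encode.out_features, device=y.device)
        b = self.encode(y)
        for k, S in enumerate(self.step):
            x = soft(b + S(x), self.theta[k])       # one unrolled iteration
        return x
```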
- Towards a Theoretical Understanding of the Robustness of Variational Autoencoders [82.68133908421792]
We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations.
We develop a novel criterion for robustness in probabilistic models: $r$-robustness.
We show that VAEs trained using disentangling methods score well under our robustness metrics.
arXiv Detail & Related papers (2020-07-14T21:22:29Z)
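An empirical probe loosely inspired by the $r$-robustness idea: estimate how often the stochastic reconstruction of a perturbed input stays within distance $r$ of the clean one. This is not the paper's formal definition, only an illustrative proxy; `vae(x)` is assumed to sample a reconstruction.

```python
import torch

@torch.no_grad()
def robustness_probe(vae, x, delta, r, n_samples=200):
    ref = vae(x)                                   # clean reconstruction
    hits = sum(
        int(torch.norm(vae(x + delta) - ref) <= r) for _ in range(n_samples)
    )
    return hits / n_samples                        # estimated probability
```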
- Improving Robustness of Deep-Learning-Based Image Reconstruction [24.882806652224854]
We show that for inverse problem solvers, one should analyze the effect of adversaries in the measurement space.
We introduce an auxiliary network to generate adversarial examples, which is used in a min-max formulation to build robust image reconstruction networks.
We find that a linear network trained with the proposed min-max scheme converges to a robust, regularized solution.
arXiv Detail & Related papers (2020-02-26T22:12:36Z)
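One round of the described min-max game might look as follows; the network and variable names are ours, and `adv_net` is assumed to map measurements to a bounded perturbation. This is a sketch of the formulation, not the paper's implementation.

```python
import torch

def minmax_step(recon_net, adv_net, opt_recon, opt_adv, y, x_true, eps=0.01):
    # Max step: the adversary tries to increase reconstruction error.
    delta = eps * torch.tanh(adv_net(y))
    adv_loss = -torch.mean((recon_net(y + delta) - x_true) ** 2)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Min step: the reconstruction network resists the current adversary.
    delta = eps * torch.tanh(adv_net(y)).detach()
    rec_loss = torch.mean((recon_net(y + delta) - x_true) ** 2)
    opt_recon.zero_grad()
    rec_loss.backward()
    opt_recon.step()
    return rec_loss.item()
```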