Training-Free Mitigation of Adversarial Attacks on Deep Learning-Based MRI Reconstruction
- URL: http://arxiv.org/abs/2501.01908v2
- Date: Sat, 15 Mar 2025 19:50:19 GMT
- Title: Training-Free Mitigation of Adversarial Attacks on Deep Learning-Based MRI Reconstruction
- Authors: Mahdi Saberi, Chi Zhang, Mehmet Akcakaya
- Abstract summary: We propose a novel approach for mitigating adversarial attacks on MRI reconstruction models without any retraining. We show that our method substantially reduces the impact of adversarial perturbations across different datasets. We extend our mitigation method to two important practical scenarios: a blind setup and an adaptive attack setup.
- Score: 2.5943586090617377
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning (DL) methods, especially those based on physics-driven DL, have become the state-of-the-art for reconstructing sub-sampled magnetic resonance imaging (MRI) data. However, studies have shown that these methods are susceptible to small adversarial input perturbations, or attacks, resulting in major distortions in the output images. Various strategies have been proposed to reduce the effects of these attacks, but they require retraining and may lower reconstruction quality for non-perturbed/clean inputs. In this work, we propose a novel approach for mitigating adversarial attacks on MRI reconstruction models without any retraining. Our framework is based on the idea of cyclic measurement consistency. The output of the model is mapped to another set of MRI measurements for a different sub-sampling pattern, and this synthesized data is reconstructed with the same model. Intuitively, without an attack, the second reconstruction is expected to be consistent with the first, while with an attack, disruptions are present. A novel objective function is devised based on this idea, which is minimized within a small ball around the attack input for mitigation. Experimental results show that our method substantially reduces the impact of adversarial perturbations across different datasets, attack types/strengths and PD-DL networks, and qualitatively and quantitatively outperforms conventional mitigation methods that involve retraining. Finally, we extend our mitigation method to two important practical scenarios: a blind setup, where the attack strength or algorithm is not known to the end user; and an adaptive attack setup, where the attacker has full knowledge of the defense strategy. Our approach remains effective in both cases.
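To make the cyclic measurement consistency idea concrete, here is a minimal PyTorch sketch assuming a hypothetical single-coil setup: `recon_net` stands in for a pre-trained reconstruction network, `mask_a`/`mask_b` are two binary k-space sampling masks, and the ball radius, step count, and step size are illustrative. The paper's actual PD-DL networks, multi-coil forward operator, and objective details are not reproduced here.

```python
import torch

def undersample(image, mask):
    """Forward model sketch: FFT the image and keep only the sampled k-space points."""
    return torch.fft.fft2(image) * mask

def cycle_loss(measurements, recon_net, mask_a, mask_b):
    """Cyclic measurement consistency: reconstruct, re-measure with a second
    mask, reconstruct again, and penalize disagreement between the two outputs."""
    recon1 = recon_net(measurements, mask_a)   # first reconstruction
    synth = undersample(recon1, mask_b)        # synthesized measurements
    recon2 = recon_net(synth, mask_b)          # second reconstruction
    return (recon1 - recon2).abs().pow(2).mean()

def mitigate(y_attacked, recon_net, mask_a, mask_b, eps=0.05, steps=20, lr=0.01):
    """Minimize the cycle objective within a small ball around the (possibly attacked) input."""
    delta = torch.zeros_like(y_attacked, requires_grad=True)
    for _ in range(steps):
        loss = cycle_loss(y_attacked + delta, recon_net, mask_a, mask_b)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= lr * grad
            norm = delta.norm()
            if norm > eps:                     # project back into the eps-ball
                delta *= eps / norm
    return recon_net((y_attacked + delta).detach(), mask_a)

# Toy usage: a zero-filled IFFT stands in for a trained PD-DL network (assumption).
recon_net = lambda meas, mask: torch.fft.ifft2(meas).real
mask_a = (torch.rand(64, 64) < 0.4).float()
mask_b = (torch.rand(64, 64) < 0.4).float()
y_attacked = undersample(torch.rand(64, 64), mask_a)
mitigated_image = mitigate(y_attacked, recon_net, mask_a, mask_b)
```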
Related papers
- Investigating Privacy Leakage in Dimensionality Reduction Methods via Reconstruction Attack [0.0]
We develop a neural network capable of reconstructing high-dimensional data from low-dimensional embeddings. We evaluate six popular dimensionality reduction techniques: PCA, sparse random projection (SRP), multidimensional scaling (MDS), Isomap, t-SNE, and UMAP.
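A toy sketch of this style of reconstruction attack, assuming the attacker can fit a small decoder on (embedding, record) pairs: for brevity it trains and evaluates on the same synthetic data, whereas a real attack would train on auxiliary data and then invert unseen embeddings; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Synthetic records standing in for the victim's high-dimensional data (assumption).
x = torch.randn(2048, 64)

# The "released" low-dimensional embedding the attacker observes.
z = torch.tensor(PCA(n_components=8).fit_transform(x.numpy()), dtype=torch.float32)

# Attacker fits a decoder mapping embeddings back to records.
decoder = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(z), x)   # reconstruction objective
    loss.backward()
    opt.step()

x_hat = decoder(z)   # reconstructed records; privacy leakage ~ closeness of x_hat to x
```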
arXiv Detail & Related papers (2024-08-30T09:40:52Z)
- Data Reconstruction Attacks and Defenses: A Systematic Evaluation [27.34562026045369]
Reconstruction attacks and defenses are essential in understanding the data leakage problem in machine learning.
We propose to view the problem as an inverse problem, enabling us to theoretically and systematically evaluate the data reconstruction attack.
We propose a strong reconstruction attack that, under our proposed evaluation metric, revises previous understanding of the strength of existing defense methods.
arXiv Detail & Related papers (2024-02-13T05:06:34Z)
- Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method [115.29382166356478]
We introduce the adversarial retrieval attack (AREA) task.
It is meant to trick DR models into retrieving a target document that is outside the initial set of candidate documents retrieved by the DR model.
We find that the promising results previously reported for attacks on NRMs do not generalize to DR models.
We propose to formalize attacks on DR models as a contrastive learning problem in a multi-view representation space.
arXiv Detail & Related papers (2023-08-19T00:24:59Z)
- Adversarial Robustness of MR Image Reconstruction under Realistic Perturbations [40.35796592557175]
Adversarial attacks offer a valuable tool to understand possible failure modes and worst case performance of DL-based reconstruction algorithms.
We show that current state-of-the-art DL-based reconstruction algorithms are indeed sensitive to such perturbations to a degree where relevant diagnostic information may be lost.
arXiv Detail & Related papers (2022-08-05T13:39:40Z)
- Learning to Learn Transferable Attack [77.67399621530052]
A transfer adversarial attack is a non-trivial black-box attack that crafts adversarial perturbations on a surrogate model and then applies them to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of the attack, with a 12.85% higher transfer-attack success rate than state-of-the-art methods.
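For context, a minimal sketch of the plain transfer setting this work builds on (not the LLTA method itself, which adds data and model augmentation): craft an FGSM perturbation on a surrogate classifier and apply it unchanged to a separate victim model. Both models and the data batch below are placeholders.

```python
import torch
import torch.nn as nn

def fgsm_on_surrogate(surrogate, x, y, eps=8 / 255):
    """Craft a perturbation using gradients of the surrogate model only."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(surrogate(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Placeholder surrogate/victim classifiers and a toy batch (assumptions).
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))

x_adv = fgsm_on_surrogate(surrogate, x, y)
transfer_success_rate = (victim(x_adv).argmax(dim=1) != y).float().mean()
```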
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
A MAP causes natural images to be misclassified with high probability after the perturbation is updated through only a single gradient-ascent step.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
- Balancing detectability and performance of attacks on the control channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the recent interest of the research community for adversarial and poisoning attacks applied to MDPs, and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training [106.34722726264522]
A range of adversarial defense techniques have been proposed to mitigate the interference of adversarial noise.
Pre-processing methods may suffer from the robustness degradation effect.
A potential cause of this negative effect is that adversarial training examples are static and independent of the pre-processing model.
We propose a method called Joint Adversarial Training based Pre-processing (JATP) defense.
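A rough sketch of the joint idea described above, not the authors' exact JATP objective: adversarial examples are regenerated against the current pre-processing model plus a frozen classifier at every training step, so they are never static or independent of the defense. All models and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

# Placeholder frozen classifier and a toy pre-processing model (assumptions).
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
for p in classifier.parameters():
    p.requires_grad_(False)
preproc = nn.Conv2d(3, 3, 3, padding=1)
opt = torch.optim.Adam(preproc.parameters(), lr=1e-3)

def pgd_through_pipeline(x, y, eps=8 / 255, steps=5, step_size=2 / 255):
    """Craft adversarial examples against the current preproc + classifier,
    so the training examples track the evolving pre-processing model."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(classifier(preproc(x + delta)), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()
            delta.clamp_(-eps, eps)
    return (x + delta).detach()

for _ in range(100):   # joint training loop sketch
    x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
    x_adv = pgd_through_pipeline(x, y)
    opt.zero_grad()
    loss = nn.functional.cross_entropy(classifier(preproc(x_adv)), y)
    loss.backward()
    opt.step()
```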
arXiv Detail & Related papers (2021-06-10T01:45:32Z)
- Zero-Shot Self-Supervised Learning for MRI Reconstruction [4.542616945567623]
We propose a zero-shot self-supervised learning approach to perform subject-specific accelerated DL MRI reconstruction.
The proposed approach partitions the available measurements from a single scan into three disjoint sets.
In the presence of models pre-trained on a database with different image characteristics, we show that the proposed approach can be combined with transfer learning for faster convergence time and reduced computational complexity.
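A small NumPy sketch of the measurement partitioning, using a 1D toy sampling mask and illustrative split ratios; the assumed roles of the three sets (data consistency, training loss, self-validation for early stopping) follow the zero-shot self-supervised learning idea and are not quoted from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Indices of acquired k-space samples from a single scan (toy 1D example).
acquired = np.flatnonzero(rng.random(256) < 0.3)
rng.shuffle(acquired)

# Split the acquired samples into three disjoint sets: one enforced in the
# network's data-consistency units, one defining the training loss, and one
# held out for self-validation / early stopping.
n = len(acquired)
dc_set = acquired[: int(0.4 * n)]
loss_set = acquired[int(0.4 * n): int(0.8 * n)]
val_set = acquired[int(0.8 * n):]

assert set(dc_set) | set(loss_set) | set(val_set) == set(acquired)
```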
arXiv Detail & Related papers (2021-02-15T18:34:38Z)
- Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples [0.7734726150561088]
"Adversarial Machine Learning" aims to devise new adversarial attacks and to defend against these attacks with more robust architectures.
This study explores the usage of quantified epistemic uncertainty obtained from Monte-Carlo Dropout Sampling for adversarial attack purposes.
Our results show that the proposed hybrid attack approach increases attack success rates from 82.59% to 85.40%, from 82.86% to 89.92%, and from 88.06% to 90.03% on the evaluated datasets.
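A short sketch of the Monte-Carlo Dropout uncertainty estimate the attack relies on, using a placeholder classifier; the paper's hybrid attack combines this epistemic-uncertainty signal with gradient-based perturbation steps, which are omitted here.

```python
import torch
import torch.nn as nn

# Toy classifier with dropout, standing in for the victim model (assumption).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(128, 10))

def mc_dropout_uncertainty(x, passes=20):
    """Epistemic uncertainty via Monte-Carlo Dropout: keep dropout active at
    inference and measure the spread of the predicted class probabilities."""
    model.train()                        # keeps dropout stochastic
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=1) for _ in range(passes)])
    return probs.var(dim=0).sum(dim=1)   # per-sample uncertainty score

x = torch.rand(8, 1, 28, 28)
uncertainty = mc_dropout_uncertainty(x)  # high values flag inputs the hybrid attack can target
```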
arXiv Detail & Related papers (2021-02-08T11:59:27Z)
- Solving Inverse Problems With Deep Neural Networks -- Robustness Included? [3.867363075280544]
Recent works have pointed out instabilities of deep neural networks for several image reconstruction tasks.
In analogy to adversarial attacks in classification, it was shown that slight distortions in the input domain may cause severe artifacts.
This article sheds new light on this concern, by conducting an extensive study of the robustness of deep-learning-based algorithms for solving underdetermined inverse problems.
arXiv Detail & Related papers (2020-11-09T09:33:07Z)
- Adversarial Robust Training of Deep Learning MRI Reconstruction Models [0.0]
We employ adversarial attacks to generate small synthetic perturbations that are difficult for a trained deep learning reconstruction network to reconstruct.
We then use robust training to increase the network's sensitivity to these small features and encourage their reconstruction.
Experimental results show that by introducing robust training to a reconstruction network, the rate of false negative features can be reduced.
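A simplified sketch of this kind of adversarial robust training for a reconstruction network, with a placeholder network, real-valued toy inputs, and illustrative hyperparameters; the authors' exact perturbation model and loss weighting may differ.

```python
import torch

def worst_case_perturbation(net, y, target, eps=0.01, steps=5, step_size=0.005):
    """Find a small input perturbation that maximizes reconstruction error,
    mimicking small features that are hard to reconstruct."""
    delta = torch.zeros_like(y, requires_grad=True)
    for _ in range(steps):
        loss = torch.mean((net(y + delta) - target) ** 2)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()   # ascend on reconstruction error
            delta.clamp_(-eps, eps)
    return delta.detach()

def robust_training_step(net, opt, y, target):
    """Train the network on both clean and perturbed inputs so it learns to
    reconstruct the small, hard features as well."""
    delta = worst_case_perturbation(net, y, target)
    opt.zero_grad()
    loss = torch.mean((net(y) - target) ** 2) + torch.mean((net(y + delta) - target) ** 2)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with a placeholder "reconstruction" network (assumption).
net = torch.nn.Conv2d(1, 1, 3, padding=1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
y, target = torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32)
robust_training_step(net, opt, y, target)
```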
arXiv Detail & Related papers (2020-10-30T19:26:14Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.