GRILL: Gradient Signal Restoration in Ill-Conditioned Layers to Enhance Adversarial Attacks on Autoencoders
- URL: http://arxiv.org/abs/2505.03646v3
- Date: Wed, 06 Aug 2025 10:10:21 GMT
- Title: GRILL: Gradient Signal Restoration in Ill-Conditioned Layers to Enhance Adversarial Attacks on Autoencoders
- Authors: Chethan Krishnamurthy Ramanaik, Arjun Roy, Tobias Callies, Eirini Ntoutsi
- Abstract summary: We introduce GRILL, a technique that restores gradient signals in ill-conditioned layers, enabling more effective norm-bounded attacks. We show that our method significantly increases the effectiveness of our adversarial attacks, enabling a more rigorous evaluation of AE robustness.
- Score: 4.046100165562807
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial robustness of deep autoencoders (AEs) remains relatively unexplored, even though their non-invertible nature poses distinct challenges. Existing attack algorithms, which optimize imperceptible, norm-bounded adversarial perturbations to maximize output damage in AEs, often stop at sub-optimal attacks. We observe that the adversarial loss gradient vanishes when backpropagated through ill-conditioned layers. This issue arises from near-zero singular values in the Jacobians of these layers, which weaken the gradient signal during optimization. We introduce GRILL, a technique that locally restores gradient signals in ill-conditioned layers, enabling more effective norm-bounded attacks. Through extensive experiments on different architectures of popular AEs, under both sample-specific and universal attack setups, and across standard and adaptive attack settings, we show that our method significantly increases the effectiveness of our adversarial attacks, enabling a more rigorous evaluation of AE robustness.
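The abstract does not come with a reference implementation, but the mechanism it describes can be illustrated. Below is a minimal PyTorch sketch, written against a plain `nn.Linear` layer, of the general idea: treat the layer's weight matrix as its Jacobian, clamp its near-zero singular values, and route the backward pass through the conditioned Jacobian so the attack gradient is not annihilated. The hook name, the clamping threshold, and the per-backward SVD are illustrative assumptions, not GRILL's actual algorithm.

```python
import torch
import torch.nn as nn

def gradient_restoring_hook(layer: nn.Linear, sigma_min: float = 1e-3):
    """Backward hook that rescales the input-gradient of a linear layer so
    components along near-zero singular directions of its Jacobian (for a
    linear layer, just the weight matrix W) keep a usable magnitude."""
    def hook(module, grad_input, grad_output):
        U, S, Vh = torch.linalg.svd(module.weight, full_matrices=False)
        S_cond = torch.clamp(S, min=sigma_min)   # lift near-zero singular values
        g = grad_output[0]                       # dL/dy, shape (batch, out)
        g_in = g @ (U * S_cond) @ Vh             # dL/dx through conditioned W
        return (g_in,)
    return layer.register_full_backward_hook(hook)

# Usage: attach to a suspected ill-conditioned layer while optimizing a
# norm-bounded perturbation, then remove the handle afterwards.
layer = nn.Linear(64, 64)
handle = gradient_restoring_hook(layer)
x = torch.randn(8, 64, requires_grad=True)
layer(x).pow(2).sum().backward()                 # x.grad uses the restored signal
handle.remove()
```

In GRILL proper the restoration is applied locally to ill-conditioned layers identified during the attack; this sketch only shows a single-layer building block.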
Related papers
- Evolution-based Region Adversarial Prompt Learning for Robustness Enhancement in Vision-Language Models [52.8949080772873]
We propose an evolution-based region adversarial prompt tuning method called ER-APT. In each training iteration, we first generate AEs using traditional gradient-based methods. Subsequently, a genetic evolution mechanism incorporating selection, mutation, and crossover is applied to optimize the AEs. The final evolved AEs are used for prompt tuning, achieving region-based adversarial optimization instead of conventional single-point adversarial prompt tuning.
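As a rough illustration of the selection/mutation/crossover step named above, here is a generic genetic refinement of a population of norm-bounded perturbations. The operators, rates, and L_inf projection are assumptions, not ER-APT's actual configuration (which is a prompt-tuning method for vision-language models).

```python
import torch

def evolve_perturbations(deltas, fitness_fn, eps=8/255, n_iters=10, mut_std=1/255):
    """Genetic refinement of a population of perturbations `deltas`
    (shape: pop x C x H x W); `fitness_fn` returns a (pop,) score tensor,
    higher meaning a stronger attack."""
    for _ in range(n_iters):
        scores = fitness_fn(deltas)
        parents = deltas[scores.topk(deltas.size(0) // 2).indices]   # selection
        perm = torch.randperm(parents.size(0))
        mask = (torch.rand_like(parents) < 0.5).float()
        children = mask * parents + (1 - mask) * parents[perm]       # crossover
        children = children + mut_std * torch.randn_like(children)   # mutation
        deltas = torch.cat([parents, children]).clamp(-eps, eps)     # L_inf budget
    return deltas[fitness_fn(deltas).argmax()]
```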
arXiv Detail & Related papers (2025-03-17T07:08:47Z)
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the segment anything model (SAM). To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Enhancing Adversarial Transferability with Adversarial Weight Tuning [36.09966860069978]
Adversarial examples (AEs) mislead the model while appearing benign to human observers. AWT is a data-free tuning method that combines gradient-based and model-based attack methods to enhance the transferability of AEs.
arXiv Detail & Related papers (2024-08-18T13:31:26Z)
- On Minimizing Adversarial Counterfactual Error in Adversarial RL [18.044879441434432]
Adversarial noise poses significant risks in safety-critical scenarios. We introduce a novel objective called Adversarial Counterfactual Error (ACoE). Our method significantly outperforms current state-of-the-art approaches for addressing adversarial RL challenges.
arXiv Detail & Related papers (2024-06-07T08:14:24Z)
- Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation [49.480978190805125]
Transfer attacks generate significant interest for black-box applications.
Existing works essentially optimize a single-level objective directly w.r.t. the surrogate model.
We propose a bilevel optimization paradigm, which explicitly reforms the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
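To make the upper-level/lower-level split concrete, here is a schematic alternating loop: the lower level runs a few PGD steps against the surrogate, and the upper level takes a step against a pseudo-victim to steer the lower-level solution. This is a plain alternating scheme for illustration; the paper's actual contribution (initialization-derived bilevel optimization with dynamic sequence truncation) is more involved.

```python
import torch

def bilevel_transfer_attack(x, y, surrogate, pseudo_victim, loss_fn,
                            eps=8/255, alpha=2/255, ll_steps=5, ul_steps=20):
    """Alternating upper-level (UL) / lower-level (LL) attack loop."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(ul_steps):
        for _ in range(ll_steps):   # LL: PGD steps against the surrogate
            loss = loss_fn(surrogate(x + delta), y)
            g, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * g.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        # UL: one step against the pseudo-victim to steer the LL solution
        loss = loss_fn(pseudo_victim(x + delta), y)
        g, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * g.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()
```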
arXiv Detail & Related papers (2024-06-04T07:45:27Z)
- Explaining the role of Intrinsic Dimensionality in Adversarial Training [31.495803865226158]
We show that off-manifold adversarial examples (AEs) enhance robustness, while on-manifold AEs improve generalization. We introduce SMAAT, which improves the scalability of AT for encoder-based models by perturbing the layer with the lowest intrinsic dimensionality. We validate SMAAT across multiple tasks, including text generation, sentiment classification, safety filtering, and retrieval-augmented generation setups.
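Since the summary hinges on ranking layers by intrinsic dimensionality, here is a compact TwoNN-style maximum-likelihood ID estimator one could run over each layer's activations. Using TwoNN at all is our assumption; the summary does not say which estimator SMAAT employs.

```python
import torch

def twonn_intrinsic_dim(acts: torch.Tensor) -> float:
    """TwoNN maximum-likelihood estimate of intrinsic dimension for a
    batch of activations of shape (N, D)."""
    dist = torch.cdist(acts, acts)
    dist.fill_diagonal_(float("inf"))            # ignore self-distance
    r = dist.topk(2, largest=False).values       # two nearest neighbours per point
    mu = r[:, 1] / r[:, 0].clamp_min(1e-12)      # ratio r2 / r1 >= 1
    return acts.size(0) / torch.log(mu).sum().item()
```

One would estimate the ID of each encoder layer's activations on a batch and perturb the argmin layer.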
arXiv Detail & Related papers (2024-05-27T12:48:30Z)
- Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization [50.43319961935526]
Single-step adversarial training (SSAT) has demonstrated the potential to achieve both efficiency and robustness.
SSAT suffers from catastrophic overfitting (CO), a phenomenon that leads to a severely distorted classifier.
In this work, we observe that some adversarial examples generated on the SSAT-trained network exhibit anomalous behaviour.
arXiv Detail & Related papers (2024-04-11T22:43:44Z)
- SSTA: Salient Spatially Transformed Attack [18.998300969035885]
Deep neural networks (DNNs) are vulnerable to adversarial attacks.
In this paper, we propose the Salient Spatially Transformed Attack (SSTA) to craft imperceptible adversarial examples (AEs).
Compared to state-of-the-art baselines, experiments indicated that SSTA could effectively improve the imperceptibility of the AEs while maintaining a 100% attack success rate.
arXiv Detail & Related papers (2023-12-12T13:38:00Z)
- LEAT: Towards Robust Deepfake Disruption in Real-World Scenarios via Latent Ensemble Attack [11.764601181046496]
Deepfakes, malicious visual contents created by generative models, pose an increasingly harmful threat to society.
To proactively mitigate deepfake damages, recent studies have employed adversarial perturbation to disrupt deepfake model outputs.
We propose a simple yet effective disruption method called Latent Ensemble ATtack (LEAT), which attacks the independent latent encoding process.
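A rough sketch of what attacking the latent encoding process can look like: push the latent codes of a protected image away from their clean values across an ensemble of encoders, under an L_inf budget. The ensembling-by-summation and the distance loss here are assumptions, not LEAT's exact formulation.

```python
import torch

def latent_disruption(x, encoders, eps=8/255, alpha=1/255, steps=40):
    """Pushes the latent codes of `x` away from their clean values across an
    ensemble of encoders, so downstream deepfake generators degrade."""
    with torch.no_grad():
        clean = [enc(x) for enc in encoders]
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = sum(torch.norm(enc(x + delta) - z) for enc, z in zip(encoders, clean))
        g, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * g.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()
```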
arXiv Detail & Related papers (2023-07-04T07:00:37Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
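The summary does not spell out the guidance mechanism; one simple reading of "guidance through surrogate" is sketched below, where a surrogate model's loss keeps a usable ascent direction when the defended model masks its gradients. Treat the mixing weight and the loop itself as illustrative assumptions, not G-PGA's recipe.

```python
import torch

def surrogate_guided_pgd(x, y, model, surrogate, loss_fn,
                         eps=8/255, alpha=2/255, steps=20, lam=1.0):
    """PGD whose objective mixes the defended model's loss with a surrogate's,
    so gradient masking in `model` does not stall the attack."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y) + lam * loss_fn(surrogate(x + delta), y)
        g, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * g.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()
```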
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Towards Efficient and Domain-Agnostic Evasion Attack with High-dimensional Categorical Inputs [33.36532022853583]
Our work targets searching for feasible adversarial perturbations to attack classifiers with high-dimensional categorical inputs in a domain-agnostic setting.
Our proposed method, namely FEAT, treats modifying each categorical feature as pulling an arm in multi-armed bandit programming.
Our work further hints at the applicability of FEAT for assessing the adversarial vulnerability of classification systems with high-dimensional categorical inputs.
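The arm-pulling framing maps naturally onto a bandit loop over (feature, value) substitutions. The sketch below uses an epsilon-greedy policy, which is our assumption; the abstract only commits to the multi-armed bandit framing. `attack_score` is a hypothetical black-box objective (higher means closer to evasion).

```python
import random

def bandit_categorical_attack(x, candidates, attack_score, budget=100, explore=0.2):
    """Epsilon-greedy bandit over (feature index, replacement value) arms.
    `x` is a list of categorical values; `candidates[i]` lists legal values
    for feature i; `attack_score` is a black-box objective to maximize."""
    arms = [(i, v) for i in range(len(x)) for v in candidates[i] if v != x[i]]
    est = {a: 0.0 for a in arms}
    pulls = {a: 0 for a in arms}
    best, best_score = list(x), attack_score(x)
    for _ in range(budget):
        arm = (random.choice(arms) if random.random() < explore
               else max(arms, key=est.get))
        i, v = arm
        trial = list(best)
        trial[i] = v
        reward = attack_score(trial) - best_score      # improvement as reward
        pulls[arm] += 1
        est[arm] += (reward - est[arm]) / pulls[arm]   # running-mean value update
        if reward > 0:
            best, best_score = trial, best_score + reward
    return best
```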
arXiv Detail & Related papers (2022-12-13T18:45:00Z)
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- Unreasonable Effectiveness of Last Hidden Layer Activations [0.5156484100374058]
We show that using some widely known activation functions in the output layer of the model with high temperature values has the effect of zeroing out the gradients for both targeted and untargeted attack cases.
We experimentally verified the efficacy of our approach on the MNIST (Digit) and CIFAR10 datasets.
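The gradient-zeroing effect is easy to reproduce: saturate the output softmax by scaling the logits (a temperature-style knob; the exact scaling the paper uses is an assumption here) and watch the input gradient that FGSM/PGD-style attacks rely on collapse for the confidently predicted class.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 10, requires_grad=True)
w = torch.randn(10, 5)
for scale in (1.0, 100.0):
    logits = (x @ w) * scale      # larger scale -> saturated softmax
    y = logits.argmax(dim=1)      # attack the confidently predicted class
    loss = F.cross_entropy(logits, y)
    g, = torch.autograd.grad(loss, x)
    print(f"scale={scale:>5}: |dL/dx|_1 = {g.abs().sum():.2e}")
```

The second iteration prints a gradient norm many orders of magnitude smaller, which is exactly the masking an attacker must overcome.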
arXiv Detail & Related papers (2022-02-15T12:02:59Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity with an additional l_infty constraint on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity and the perturbation constraints in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- Combating Adversaries with Anti-Adversaries [118.70141983415445]
In particular, our layer generates an input perturbation in the opposite direction of the adversarial one.
We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models.
Our anti-adversary layer significantly enhances model robustness while coming at no cost on clean accuracy.
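A minimal sketch of such an anti-adversary layer: given the model's own prediction, take a few signed-gradient steps that decrease the loss, i.e. the opposite direction an attacker would move, and classify the nudged input. Step count and size are illustrative choices, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def anti_adversary_layer(x, model, k=2, alpha=2/255):
    """Prepends an 'anti-adversary' step to classification: move the input a
    few signed-gradient steps that DECREASE the loss toward the model's own
    prediction, i.e. opposite to an attacker's ascent direction."""
    with torch.no_grad():
        y_hat = model(x).argmax(dim=1)     # model's own prediction
    gamma = torch.zeros_like(x, requires_grad=True)
    for _ in range(k):
        loss = F.cross_entropy(model(x + gamma), y_hat)
        g, = torch.autograd.grad(loss, gamma)
        gamma = (gamma - alpha * g.sign()).detach().requires_grad_(True)  # descend
    return model(x + gamma.detach())
```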
arXiv Detail & Related papers (2021-03-26T09:36:59Z)
- SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations [19.14079118174123]
Short-Lived Adversarial Perturbations (SLAP) is a novel technique that allows adversaries to realize physically robust real-world AEs by using a light projector.
SLAP allows the adversary greater control over the attack compared to adversarial patches.
We study the feasibility of SLAP in the self-driving scenario, targeting both object detector and traffic sign recognition tasks.
arXiv Detail & Related papers (2020-07-08T14:11:21Z)