Localizing Adversarial Attacks To Produce More Imperceptible Noise
- URL: http://arxiv.org/abs/2509.22710v1
- Date: Tue, 23 Sep 2025 22:33:02 GMT
- Title: Localizing Adversarial Attacks To Produce More Imperceptible Noise
- Authors: Pavan Reddy, Aditya Sanjay Gujral
- Abstract summary: This study systematically evaluates localized adversarial attacks across widely-used methods, including FGSM, PGD, and C&W. By introducing a binary mask to constrain noise to specific regions, localized attacks achieve significantly lower mean pixel perturbations, higher Peak Signal-to-Noise Ratios (PSNR), and improved Structural Similarity Index (SSIM) compared to global attacks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attacks in machine learning traditionally focus on global perturbations to input data, yet the potential of localized adversarial noise remains underexplored. This study systematically evaluates localized adversarial attacks across widely-used methods, including FGSM, PGD, and C&W, to quantify their effectiveness, imperceptibility, and computational efficiency. By introducing a binary mask to constrain noise to specific regions, localized attacks achieve significantly lower mean pixel perturbations, higher Peak Signal-to-Noise Ratios (PSNR), and improved Structural Similarity Index (SSIM) compared to global attacks. However, these benefits come at the cost of increased computational effort and a modest reduction in Attack Success Rate (ASR). Our results highlight that iterative methods, such as PGD and C&W, are more robust to localization constraints than single-step methods like FGSM, maintaining higher ASR and imperceptibility metrics. This work provides a comprehensive analysis of localized adversarial attacks, offering practical insights for advancing attack strategies and designing robust defensive systems.
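The abstract's core mechanism, constraining perturbations to a region with a binary mask, is straightforward to illustrate. Below is a minimal PyTorch sketch of a mask-constrained (localized) FGSM step plus a PSNR check on the result; the classifier `model`, the inputs `x`/`y`, the epsilon value, and the 16x16 patch location are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a localized FGSM attack with a binary mask,
# following the masking idea described in the abstract.
# `model`, `x`, `y`, epsilon, and the patch location are assumptions.
import torch
import torch.nn.functional as F

def localized_fgsm(model, x, y, mask, epsilon=8 / 255):
    """One FGSM step with noise confined to the mask == 1 region."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take the usual global sign step, then zero out noise outside
    # the masked region; this is the binary-mask constraint.
    noise = epsilon * x_adv.grad.sign() * mask
    return (x_adv.detach() + noise).clamp(0.0, 1.0)

def psnr(x, x_adv, max_val=1.0):
    """Peak Signal-to-Noise Ratio between clean and perturbed images."""
    mse = F.mse_loss(x_adv, x)
    return 10.0 * torch.log10(max_val ** 2 / mse)

# Example mask: restrict noise to a 16x16 patch of a 3x32x32 image.
mask = torch.zeros(1, 3, 32, 32)
mask[..., 8:24, 8:24] = 1.0
```

Because the mask zeroes most of the perturbation, the mean pixel change drops and PSNR rises relative to a global attack, which matches the trade-off the abstract reports (better imperceptibility at some cost in attack success rate). The same masking step can be applied per iteration to PGD or within the C&W objective.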
Related papers
- SRAW-Attack: Space-Reweighted Adversarial Warping Attack for SAR Target Recognition [4.643429435927802]
Space-Reweighted Adversarial Warping (SRAW) is proposed, which generates adversarial examples through optimized spatial deformation. Experiments demonstrate that SRAW significantly degrades the performance of state-of-the-art SAR-ATR models.
arXiv Detail & Related papers (2026-01-15T12:09:49Z) - Generating Transferrable Adversarial Examples via Local Mixing and Logits Optimization for Remote Sensing Object Recognition [5.6536225368328274]
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. In this paper, we propose a novel framework via local mixing and logits optimization. Our method achieves a 17.28% average improvement in black-box attack success rate.
arXiv Detail & Related papers (2025-09-09T08:20:19Z) - GRILL: Gradient Signal Restoration in Ill-Conditioned Layers to Enhance Adversarial Attacks on Autoencoders [4.046100165562807]
We introduce GRILL, a technique that restores gradient signals in ill-conditioned layers, enabling more effective norm-bounded attacks. We show that our method significantly increases the effectiveness of our adversarial attacks, enabling a more rigorous evaluation of AE robustness.
arXiv Detail & Related papers (2025-05-06T15:52:14Z) - Cannot See the Forest for the Trees: Invoking Heuristics and Biases to Elicit Irrational Choices of LLMs [83.11815479874447]
We propose a novel jailbreak attack framework, inspired by cognitive decomposition and biases in human cognition. We employ cognitive decomposition to reduce the complexity of malicious prompts and relevance bias to reorganize prompts. We also introduce a ranking-based harmfulness evaluation metric that surpasses the traditional binary success-or-failure paradigm.
arXiv Detail & Related papers (2025-05-03T05:28:11Z) - Hierarchical Local-Global Feature Learning for Few-shot Malicious Traffic Detection [6.118242543398087]
Malicious network attacks have become increasingly frequent and sophisticated. Traditional detection methods, including rule-based and machine learning-based approaches, struggle to accurately identify emerging threats. We propose HLoG, a novel hierarchical few-shot malicious traffic detection framework.
arXiv Detail & Related papers (2025-04-01T14:56:44Z) - Correlation Analysis of Adversarial Attack in Time Series Classification [6.117704456424016]
This study investigates the vulnerability of time series classification models to adversarial attacks.
Regularization techniques and noise introduction are shown to enhance the effectiveness of attacks.
Models designed to prioritize global information are revealed to possess greater resistance to adversarial manipulations.
arXiv Detail & Related papers (2024-08-21T01:11:32Z) - Rethinking Targeted Adversarial Attacks For Neural Machine Translation [56.10484905098989]
This paper presents a new setting for NMT targeted adversarial attacks that could lead to reliable attacking results.
Under the new setting, it then proposes a Targeted Word Gradient adversarial Attack (TWGA) method to craft adversarial examples.
Experimental results demonstrate that our proposed setting could provide faithful attacking results for targeted adversarial attacks on NMT systems.
arXiv Detail & Related papers (2024-07-07T10:16:06Z) - DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z) - SWAP: Exploiting Second-Ranked Logits for Adversarial Attacks on Time Series [11.356275885051442]
Time series classification (TSC) has emerged as a critical task in various domains.
Deep neural models have shown superior performance in TSC tasks.
TSC models are vulnerable to adversarial attacks.
We propose SWAP, a novel attacking method for TSC models.
arXiv Detail & Related papers (2023-09-06T06:17:35Z) - Balancing detectability and performance of attacks on the control channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the research community's recent interest in adversarial and poisoning attacks applied to MDPs and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity constraint with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm that tackles the sparsity constraint and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Towards Robust Speech-to-Text Adversarial Attack [78.5097679815944]
This paper introduces a novel adversarial algorithm for attacking the state-of-the-art speech-to-text systems, namely DeepSpeech, Kaldi, and Lingvo.
Our approach extends the conventional distortion condition of the adversarial optimization formulation.
Minimizing over this metric, which measures the discrepancies between original and adversarial samples' distributions, contributes to crafting signals very close to the subspace of legitimate speech recordings.
arXiv Detail & Related papers (2021-03-15T01:51:41Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed that utilizes the Bayesian optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)