Boosting Adversarial Transferability for Hyperspectral Image Classification Using 3D Structure-invariant Transformation and Intermediate Feature Distance
- URL: http://arxiv.org/abs/2506.10459v1
- Date: Thu, 12 Jun 2025 08:08:52 GMT
- Title: Boosting Adversarial Transferability for Hyperspectral Image Classification Using 3D Structure-invariant Transformation and Intermediate Feature Distance
- Authors: Chun Liu, Bingqian Zhu, Tao Xu, Zheng Zheng, Zheng Li, Wei Yang, Zhigang Han, Jiayao Wang
- Abstract summary: Hyperspectral image (HSI) classification technologies based on Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. This paper proposes a novel method to enhance the transferability of the adversarial examples for HSI classification models. The proposed method maintains robust attack performance even under defense strategies.
- Score: 12.577452125758368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) are vulnerable to adversarial attacks, which pose security challenges to hyperspectral image (HSI) classification technologies based on DNNs. In the domain of natural images, numerous transfer-based adversarial attack methods have been studied. However, HSIs differ from natural images due to their high-dimensional and rich spectral information. Current research on HSI adversarial examples remains limited and faces challenges in fully utilizing the structural and feature information of images. To address these issues, this paper proposes a novel method to enhance the transferability of adversarial examples for HSI classification models. First, while keeping the image structure unchanged, the proposed method randomly divides the image into blocks along both the spatial and spectral dimensions. Then, various transformations are applied block by block to increase input diversity and mitigate overfitting. Second, a feature-distancing loss targeting intermediate layers is designed: the distance between the amplified features of the original examples and the features of the adversarial examples serves as the primary loss, while the output-layer prediction serves as the auxiliary loss. This guides the perturbation to disrupt the features of the true class in the adversarial examples, effectively enhancing transferability. Extensive experiments demonstrate that the adversarial examples generated by the proposed method transfer effectively to black-box models on two public HSI datasets. Furthermore, the method maintains robust attack performance even under defense strategies.
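To make the two components more concrete, the following is a minimal, hypothetical PyTorch sketch. The block counts, the set of per-block transformations, the feature-amplification factor, the hooked intermediate layer, and the loss weights are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch (assumptions, not the paper's exact implementation):
# (1) split an HSI cube into spatial-spectral blocks and transform each block
#     independently, keeping the overall 3D structure unchanged;
# (2) a feature-distancing loss on an intermediate layer, with the output-layer
#     prediction as an auxiliary term.
import torch
import torch.nn.functional as F


def blockwise_transform(x, spatial_blocks=3, spectral_blocks=4):
    """x: (B, C, H, W) HSI cube; apply a random transform to every 3D block."""
    B, C, H, W = x.shape
    out = x.clone()
    c_edges = torch.linspace(0, C, spectral_blocks + 1).long().tolist()
    h_edges = torch.linspace(0, H, spatial_blocks + 1).long().tolist()
    w_edges = torch.linspace(0, W, spatial_blocks + 1).long().tolist()
    transforms = [
        lambda b: b,                                            # identity
        lambda b: torch.flip(b, dims=[-1]),                     # horizontal flip
        lambda b: torch.flip(b, dims=[-2]),                     # vertical flip
        lambda b: b * torch.empty_like(b).uniform_(0.8, 1.2),   # random scaling
        lambda b: b + 0.03 * torch.randn_like(b),               # additive noise
    ]
    for ci in range(spectral_blocks):
        for hi in range(spatial_blocks):
            for wi in range(spatial_blocks):
                c0, c1 = c_edges[ci], c_edges[ci + 1]
                h0, h1 = h_edges[hi], h_edges[hi + 1]
                w0, w1 = w_edges[wi], w_edges[wi + 1]
                t = transforms[int(torch.randint(len(transforms), (1,)))]
                out[:, c0:c1, h0:h1, w0:w1] = t(x[:, c0:c1, h0:h1, w0:w1])
    return out


def attack_objective(model, feat_layer, x_adv, x_clean, y, amp=1.5, aux_w=0.1):
    """Objective to MAXIMIZE when crafting x_adv: distance between adversarial
    features and amplified clean features at an intermediate layer (primary),
    plus the cross-entropy on the true label (auxiliary)."""
    feats = {}
    handle = feat_layer.register_forward_hook(
        lambda m, i, o: feats.__setitem__("f", o))
    with torch.no_grad():
        model(x_clean)                      # record clean intermediate features
    f_clean = feats["f"].detach()
    logits = model(x_adv)                   # record adversarial features
    f_adv = feats["f"]
    handle.remove()
    feat_dist = F.mse_loss(f_adv, amp * f_clean)  # push away from true-class features
    aux = F.cross_entropy(logits, y)              # degrade the true-class prediction
    return feat_dist + aux_w * aux
```

In a typical iterative attack, one would apply `blockwise_transform` to the current adversarial example before each forward pass and ascend the gradient of this objective with respect to the input.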
Related papers
- Boosting Adversarial Transferability Against Defenses via Multi-Scale Transformation [0.8388591755871736]
The transferability of adversarial examples poses a significant security challenge for deep neural networks. We propose a new Segmented Gaussian Pyramid (SGP) attack method to enhance the transferability. In contrast to the state-of-the-art methods, SGP significantly enhances attack success rates against black-box defense models.
arXiv Detail & Related papers (2025-07-02T15:16:30Z)
- Active Adversarial Noise Suppression for Image Forgery Localization [56.98050814363447]
We introduce an Adversarial Noise Suppression Module (ANSM) that generates a defensive perturbation to suppress the attack effect of adversarial noise. To the best of our knowledge, this is the first report of adversarial defense in image forgery localization tasks.
arXiv Detail & Related papers (2025-06-15T14:53:27Z)
- Enhancing Adversarial Transferability via Component-Wise Transformation [28.209214055953844]
This paper proposes a novel input-based attack method, termed Component-Wise Transformation (CWT). CWT applies selective rotation to individual image blocks, ensuring that each transformed image highlights different target regions. Experiments on the standard ImageNet dataset show that CWT consistently outperforms state-of-the-art methods in both attack success rates and stability.
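As a rough illustration of the block-wise rotation described above, the hypothetical sketch below rotates randomly selected square blocks by multiples of 90 degrees; the grid size, rotation probability, and angles are assumptions rather than the paper's settings.

```python
# Hypothetical sketch of a component-wise rotation: rotate randomly chosen
# square image blocks so each transformed copy emphasizes different regions.
# Assumes H == W so blocks stay square under 90-degree rotation.
import torch


def component_wise_rotation(x, grid=4, p_rotate=0.5):
    """x: (B, C, H, W); rotate each block by a random multiple of 90 degrees."""
    B, C, H, W = x.shape
    out = x.clone()
    bh, bw = H // grid, W // grid
    for i in range(grid):
        for j in range(grid):
            if torch.rand(1).item() < p_rotate:
                h0, w0 = i * bh, j * bw
                k = int(torch.randint(1, 4, (1,)))   # 90, 180, or 270 degrees
                block = x[:, :, h0:h0 + bh, w0:w0 + bw]
                out[:, :, h0:h0 + bh, w0:w0 + bw] = torch.rot90(block, k, dims=(2, 3))
    return out
```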
arXiv Detail & Related papers (2025-01-21T05:41:09Z)
- Boosting Adversarial Transferability with Spatial Adversarial Alignment [30.343721474168635]
Deep neural networks are vulnerable to adversarial examples that exhibit transferability across various models. We propose a technique that employs an alignment loss and leverages a witness model to fine-tune the surrogate model. Experiments on various architectures on ImageNet show that surrogate models aligned with SAA can provide adversarial examples with higher transferability.
arXiv Detail & Related papers (2025-01-02T02:35:47Z)
- Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack [51.16384207202798]
Vision-language pre-training models are vulnerable to multimodal adversarial examples (AEs).
Previous approaches augment image-text pairs to enhance diversity within the adversarial example generation process.
We propose sampling from adversarial evolution triangles composed of clean, historical, and current adversarial examples to enhance adversarial diversity.
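One hedged reading of the sampling step described above is drawing convex combinations of the clean, a historical adversarial, and the current adversarial example; the Dirichlet weighting in the sketch below is an illustrative assumption, not the paper's exact scheme.

```python
# Hypothetical sketch: sample a point inside the "evolution triangle" spanned
# by the clean image, a historical adversarial example, and the current
# adversarial example. The Dirichlet prior over the weights is an assumption.
import torch


def sample_from_evolution_triangle(x_clean, x_hist, x_cur, concentration=1.0):
    """Return a random convex combination of the three examples (same shape)."""
    w = torch.distributions.Dirichlet(torch.full((3,), concentration)).sample()
    return w[0] * x_clean + w[1] * x_hist + w[2] * x_cur
```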
arXiv Detail & Related papers (2024-11-04T23:07:51Z)
- Cross-Modality Perturbation Synergy Attack for Person Re-identification [66.48494594909123]
Cross-modality person re-identification (ReID) systems are based on RGB images. The main challenge in cross-modality ReID lies in effectively dealing with the visual differences between modalities. Existing attack methods have primarily focused on the characteristics of the visible image modality. This study proposes a universal perturbation attack specifically designed for cross-modality ReID.
arXiv Detail & Related papers (2024-01-18T15:56:23Z)
- TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation [62.954089681629206]
We propose an effective two-stage adversarial attack strategy to improve the transferability of adversarial examples on semantic segmentation.
The proposed adversarial attack method can achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-12-03T00:48:33Z)
- A Novel Cross-Perturbation for Single Domain Generalization [54.612933105967606]
Single domain generalization aims to enhance the ability of the model to generalize to unknown domains when trained on a single source domain.
The limited diversity in the training data hampers the learning of domain-invariant features, resulting in compromised generalization performance.
We propose CPerb, a simple yet effective cross-perturbation method to enhance the diversity of the training data.
arXiv Detail & Related papers (2023-08-02T03:16:12Z)
- Enhancing the Self-Universality for Transferable Targeted Attacks [88.6081640779354]
Our new attack method is based on the observation that highly universal adversarial perturbations tend to be more transferable for targeted attacks.
Instead of optimizing the perturbation on different images, optimizing it on different regions of the same image to achieve self-universality removes the need for extra data.
With the feature similarity loss, our method makes the features from adversarial perturbations more dominant than those of benign images.
arXiv Detail & Related papers (2022-09-08T11:21:26Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method applied in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
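A hypothetical sketch of the dropout idea described above: dropout is applied after every convolutional layer of a surrogate model so that each forward pass behaves like a slightly different model; the dropout rate and layer choice are assumptions, not DFANet's exact design.

```python
# Hypothetical sketch of dropout-based surrogate diversification: forward hooks
# drop activations after every Conv2d layer, so each forward pass through the
# surrogate behaves like a slightly different model. The rate and the choice of
# layers are illustrative assumptions.
import torch.nn as nn
import torch.nn.functional as F


def add_conv_dropout(model, p=0.1):
    """Register hooks that apply dropout to the output of every Conv2d layer."""
    handles = []
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            handles.append(module.register_forward_hook(
                lambda m, inp, out: F.dropout(out, p=p, training=True)))
    return handles  # call handle.remove() on each to restore the original model
```

Gradients averaged over several such stochastic forward passes could then be used when crafting the adversarial examples.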
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.