SegTrans: Transferable Adversarial Examples for Segmentation Models
- URL: http://arxiv.org/abs/2510.08922v1
- Date: Fri, 10 Oct 2025 02:11:29 GMT
- Title: SegTrans: Transferable Adversarial Examples for Segmentation Models
- Authors: Yufei Song, Ziqi Zhou, Qi Lu, Hangtao Zhang, Yifan Hu, Lulu Xue, Shengshan Hu, Minghui Li, Leo Yu Zhang
- Abstract summary: Existing adversarial attack methods show poor transferability across different segmentation models. We propose SegTrans, a novel transfer attack framework that divides the input sample into multiple local regions. SegTrans only retains local semantic information from the original input, rather than using global semantic information to optimize perturbations.
- Score: 47.89859018981832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segmentation models exhibit significant vulnerability to adversarial examples in white-box settings, but existing adversarial attack methods often show poor transferability across different segmentation models. While some researchers have explored transfer-based adversarial attack (i.e., transfer attack) methods for segmentation models, the complex contextual dependencies within these models and the feature distribution gaps between surrogate and target models result in unsatisfactory transfer success rates. To address these issues, we propose SegTrans, a novel transfer attack framework that divides the input sample into multiple local regions and remaps their semantic information to generate diverse enhanced samples. These enhanced samples replace the original ones for perturbation optimization, thereby improving the transferability of adversarial examples across different segmentation models. Unlike existing methods, SegTrans only retains local semantic information from the original input, rather than using global semantic information to optimize perturbations. Extensive experiments on two benchmark datasets, PASCAL VOC and Cityscapes, four different segmentation models, and three backbone networks show that SegTrans significantly improves adversarial transfer success rates without introducing additional computational overhead. Compared to the current state-of-the-art methods, SegTrans achieves an average increase of 8.55% in transfer attack success rate and improves computational efficiency by more than 100%.
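The core idea described in the abstract, dividing the input into local regions and remapping their semantic information to build enhanced samples, can be illustrated with a minimal sketch. This is an assumption-laden toy version: `local_region_remap`, the grid size, and the use of a random patch permutation as the "remapping" are all illustrative choices, not the paper's actual algorithm.

```python
import numpy as np

def local_region_remap(image, grid=4, rng=None):
    """Divide an image into grid x grid local regions and remap (here:
    randomly permute) their positions to build an 'enhanced sample' that
    preserves only local semantic information. Illustrative sketch of the
    SegTrans idea; the paper's actual remapping scheme may differ.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    # Collect the local regions (patches) in row-major order.
    patches = [image[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(grid) for j in range(grid)]
    order = rng.permutation(len(patches))
    out = image.copy()
    # Paste each patch back at a remapped grid location.
    for k, idx in enumerate(order):
        i, j = divmod(k, grid)
        out[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = patches[idx]
    return out
```

In a transfer attack, such enhanced samples would replace the original input during perturbation optimization on the surrogate model, discouraging the perturbation from overfitting to the surrogate's global context.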
Related papers
- X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP [32.85582585781569]
We introduce X-Transfer, a novel attack method that exposes a universal adversarial vulnerability in CLIP. X-Transfer generates a Universal Adversarial Perturbation capable of deceiving various CLIP encoders and downstream VLMs across different samples, tasks, and domains.
arXiv Detail & Related papers (2025-05-08T11:59:13Z)
- A Simple DropConnect Approach to Transfer-based Targeted Attack [43.039945949426546]
We study the problem of transfer-based black-box attack, where adversarial samples generated using a single surrogate model are directly applied to target models. We propose to Mitigate perturbation Co-adaptation by DropConnect (MCD) to enhance transferability. In the challenging scenario of transferring from a CNN-based model to Transformer-based models, MCD achieves 13% higher average attack success rates than state-of-the-art baselines.
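The co-adaptation mitigation above can be caricatured with a toy step that randomly masks perturbation entries, DropConnect-style, after each sign-gradient update. This is only one plausible reading for illustration: the function name, step sizes, and masking of the perturbation itself (rather than inside the surrogate model) are assumptions, not the paper's exact MCD procedure.

```python
import numpy as np

def dropconnect_mask_step(delta, grad, alpha=2/255, eps=8/255,
                          drop_prob=0.3, rng=None):
    """One sign-gradient ascent step on the perturbation, followed by
    random element-wise masking to discourage co-adapted perturbation
    patterns. Illustrative sketch only, not the paper's exact method.
    """
    rng = np.random.default_rng(rng)
    # FGSM-style update, projected into the L_inf ball of radius eps.
    delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    # Randomly zero out ~drop_prob of the perturbation entries.
    keep = rng.random(delta.shape) >= drop_prob
    return delta * keep
```

Repeating such masked updates over many iterations forces the perturbation to remain effective even when parts of it are dropped, which is the intuition behind mitigating co-adaptation.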
arXiv Detail & Related papers (2025-04-24T12:29:23Z)
- Boosting the Local Invariance for Better Adversarial Transferability [4.75067406339309]
Transfer-based attacks pose a significant threat to real-world applications. We propose a general adversarial transferability boosting technique called the Local Invariance Boosting approach (LI-Boost). Experiments on the standard ImageNet dataset demonstrate that LI-Boost can significantly boost various types of transfer-based attacks.
arXiv Detail & Related papers (2025-03-08T09:44:45Z)
- Transfer Learning of Surrogate Models: Integrating Domain Warping and Affine Transformations [4.515998639772672]
Surrogate models provide efficient alternatives to computationally demanding real-world processes. Previous studies have investigated the transfer of differentiable and non-differentiable surrogate models. This paper extends previous research by addressing a broader range of transformations.
arXiv Detail & Related papers (2025-01-30T13:46:48Z)
- Boosting Adversarial Transferability with Spatial Adversarial Alignment [56.97809949196889]
Deep neural networks are vulnerable to adversarial examples that exhibit transferability across various models. We propose a technique that employs an alignment loss and leverages a witness model to fine-tune the surrogate model. Experiments on various architectures on ImageNet show that surrogate models aligned via SAA can provide more transferable adversarial examples.
arXiv Detail & Related papers (2025-01-02T02:35:47Z)
- GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data. In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation [62.954089681629206]
We propose an effective two-stage adversarial attack strategy to improve the transferability of adversarial examples on semantic segmentation. The proposed adversarial attack method achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-12-03T00:48:33Z)
- Transferable Attack for Semantic Segmentation [59.17710830038692]
We study adversarial attacks on semantic segmentation and observe that adversarial examples generated from a source model fail to attack the target models. We propose an ensemble attack for semantic segmentation to achieve more effective attacks with higher transferability.
arXiv Detail & Related papers (2023-07-31T11:05:55Z)
- Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples [89.85593878754571]
The transferability of adversarial examples across deep neural networks is the crux of many black-box attacks. We advocate attacking a Bayesian model to achieve desirable transferability. Our method outperforms recent state-of-the-art methods by large margins.
arXiv Detail & Related papers (2023-02-10T07:08:13Z)
- Improving Adversarial Transferability with Scheduled Step Size and Dual Example [33.00528131208799]
We show that the transferability of adversarial examples generated by the iterative fast gradient sign method decreases as the number of iterations increases. We propose a novel strategy that uses a Scheduled step size and a Dual example (SD) to fully utilize the adversarial information near the benign sample. Our proposed strategy can be easily integrated with existing adversarial attack methods for better adversarial transferability.
arXiv Detail & Related papers (2023-01-30T15:13:46Z)
- Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks [56.96241557830253]
Transfer-based adversarial attacks can effectively evaluate model robustness in the black-box setting. We propose a conditional generative attacking model that can generate adversarial examples targeted at different classes. Our method improves the success rates of targeted black-box attacks by a significant margin over existing methods.
arXiv Detail & Related papers (2021-07-05T06:17:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.