NoPain: No-box Point Cloud Attack via Optimal Transport Singular Boundary
- URL: http://arxiv.org/abs/2503.00063v4
- Date: Tue, 29 Apr 2025 16:41:00 GMT
- Title: NoPain: No-box Point Cloud Attack via Optimal Transport Singular Boundary
- Authors: Zezeng Li, Xiaoyu Du, Na Lei, Liming Chen, Weimin Wang
- Abstract summary: Adversarial attacks exploit the vulnerability of deep models against adversarial samples. Existing point cloud attackers are tailored to specific models, iteratively optimizing perturbations based on gradients in either a white-box or black-box setting. We introduce NoPain, which employs optimal transport (OT) to identify the inherent singular boundaries of the data manifold for point cloud attacks.
- Score: 13.46515999093558
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks exploit the vulnerability of deep models against adversarial samples. Existing point cloud attackers are tailored to specific models, iteratively optimizing perturbations based on gradients in either a white-box or black-box setting. Despite their promising attack performance, they often struggle to produce transferable adversarial samples because they overfit the specific parameters of surrogate models. To overcome this issue, we shift our focus to the data distribution itself and introduce a novel approach named NoPain, which employs optimal transport (OT) to identify the inherent singular boundaries of the data manifold for cross-network point cloud attacks. Specifically, we first compute the OT mapping from noise to the target feature space, then identify singular boundaries by locating non-differentiable positions. Finally, we sample along the singular boundaries to generate adversarial point clouds. Once the singular boundaries are determined, NoPain can efficiently produce adversarial samples without the need for iterative updates or guidance from surrogate classifiers. Extensive experiments demonstrate that the proposed end-to-end method outperforms baseline approaches in terms of both transferability and efficiency, while also maintaining notable advantages even against defense strategies. Code and model are available at https://github.com/cognaclee/nopain
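To make the pipeline concrete, below is a minimal NumPy sketch of the singular-boundary idea under a semi-discrete OT formulation. It is an illustration, not the authors' implementation: the target features `feats` and Brenier heights `h` are assumed to come from an OT solver run beforehand, and the decoder from feature space back to point clouds is left out.

```python
# Minimal sketch (not the official NoPain code). Assumes feats/h come from a
# semi-discrete OT solver; decoding features back to point clouds is omitted.
import numpy as np

def singular_boundary_samples(noise, feats, h, tau=1e-2, t=0.5):
    """Map noise through the OT map and keep samples near a singular boundary.

    Brenier potential: u(z) = max_i (<z, y_i> + h_i). The OT map sends z to the
    maximizing y_i, so u is non-differentiable where the top two scores tie;
    a near-tie therefore flags a point close to a singular boundary.
    """
    scores = noise @ feats.T + h                      # (n, m) cell scores
    order = np.argsort(scores, axis=1)
    best, second = order[:, -1], order[:, -2]
    idx = np.arange(len(noise))
    gap = scores[idx, best] - scores[idx, second]
    on_boundary = gap < tau                           # near the non-smooth set
    # "sample along the boundary" by blending the two adjacent targets
    blended = (1 - t) * feats[best] + t * feats[second]
    return blended[on_boundary]

# Toy usage: 256-d features, 1024 targets, 4096 noise draws.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1024, 256))
h = np.zeros(1024)            # in practice, heights solved by the OT step
noise = rng.normal(size=(4096, 256))
adv_feats = singular_boundary_samples(noise, feats, h)
print(adv_feats.shape)        # candidate features to decode into point clouds
```

Blending the two tied targets moves a sample across the discontinuity of the transport map, which is the intuition behind attacking from the singular boundary; no surrogate gradients are needed once `feats` and `h` are fixed.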
Related papers
- Hard-Label Black-Box Attacks on 3D Point Clouds [66.52447238776482]
We introduce a novel 3D attack method based on a new spectrum-aware decision boundary algorithm to generate high-quality adversarial samples. Experiments demonstrate that our attack competitively outperforms existing white/black-box attackers in terms of attack performance and adversary quality.
arXiv Detail & Related papers (2024-11-30T09:05:02Z)
- A Privacy-Preserving Walk in the Latent Space of Generative Models for Medical Applications [11.39717289910264]
Generative Adversarial Networks (GANs) have demonstrated their ability to generate synthetic samples that match a target distribution.
However, GANs tend to embed near-duplicates of real samples in the latent space.
We propose a latent space navigation strategy able to generate diverse synthetic samples that may support effective training of deep models.
arXiv Detail & Related papers (2023-07-06T13:35:48Z)
- Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z)
- Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations [54.1807206010136]
Transferable adversarial attacks optimize adversaries using a pretrained surrogate model and a known label space to fool unknown black-box models.
We propose Adversarial Pixel Restoration as a self-supervised alternative to train an effective surrogate model from scratch.
Our training approach is based on a min-max objective which reduces overfitting via an adversarial objective.
arXiv Detail & Related papers (2022-07-18T17:59:58Z)
- Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization [10.246596695310176]
We focus on the problem of adversarial attacks against models on discrete sequential data in the black-box setting.
We propose a query-efficient black-box attack using Bayesian optimization, which dynamically computes important positions (a rough sketch follows this entry).
We develop a post-optimization algorithm that finds adversarial examples with smaller perturbation size.
arXiv Detail & Related papers (2022-06-17T06:11:36Z)
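As a loose illustration of the query-efficient idea above (not the paper's exact algorithm), this sketch runs Bayesian optimization over binary masks of token positions; the black-box `query_loss` is a hypothetical stand-in for the victim model's response.

```python
# Hedged sketch: GP-based Bayesian optimization over which sequence positions
# to perturb. `query_loss` is a toy stand-in for one black-box victim query.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

SEQ_LEN, BUDGET, N_CAND = 32, 40, 200
rng = np.random.default_rng(0)
SECRET = np.abs(rng.standard_normal(SEQ_LEN))   # pretend importance per position

def query_loss(mask):
    return float(mask @ SECRET)                 # higher = more damaging choice

def random_masks(n, k=3):                       # candidates touching k positions
    masks = np.zeros((n, SEQ_LEN))
    for m in masks:
        m[rng.choice(SEQ_LEN, size=k, replace=False)] = 1.0
    return masks

X = random_masks(5)                             # initial random queries
y = np.array([query_loss(m) for m in X])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), normalize_y=True)

for _ in range(BUDGET):
    gp.fit(X, y)
    cand = random_masks(N_CAND)
    mu, sd = gp.predict(cand, return_std=True)
    nxt = cand[np.argmax(mu + 1.5 * sd)]        # UCB acquisition picks next query
    X = np.vstack([X, nxt])
    y = np.append(y, query_loss(nxt))

print("most damaging positions found:", np.flatnonzero(X[np.argmax(y)]))
```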
- RamBoAttack: A Robust Query Efficient Deep Neural Network Decision Exploit [9.93052896330371]
We develop a robust, query-efficient attack capable of avoiding entrapment in a local minimum and misdirection from noisy gradients.
RamBoAttack is more robust across the sample inputs available to an adversary and across targeted classes.
arXiv Detail & Related papers (2021-12-10T01:25:24Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can get rid of the per-sample iterative optimization that limits gradient-based attacks.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Transferable Sparse Adversarial Attack [62.134905824604104]
We introduce a generator architecture to alleviate the overfitting issue and thus efficiently craft transferable sparse adversarial examples.
Our method achieves superior inference speed, 700× faster than other optimization-based methods.
arXiv Detail & Related papers (2021-05-31T06:44:58Z)
- Local Aggressive Adversarial Attacks on 3D Point Cloud [12.121901103987712]
Deep neural networks are prone to adversarial examples that can deliberately fool a model into making mistakes.
In this paper, we propose local aggressive adversarial attacks (L3A) to address this vulnerability.
Experiments on PointNet, PointNet++ and DGCNN demonstrate the state-of-the-art performance of our method.
arXiv Detail & Related papers (2021-05-19T12:22:56Z)
- Patch-wise++ Perturbation for Adversarial Targeted Attacks [132.58673733817838]
We propose a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability.
Specifically, we introduce an amplification factor to the step size in each iteration, and one pixel's overall gradient overflowing the ε-constraint is properly assigned to its surrounding regions (see the sketch after this entry).
Compared with the current state-of-the-art attack methods, we significantly improve the success rate by 35.9% for defense models and 32.7% for normally trained models.
arXiv Detail & Related papers (2020-12-31T08:40:42Z)
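A minimal PyTorch sketch of the amplification-and-redistribution step described in this entry; the uniform "project" kernel, the final re-projection, and the hyperparameters are illustrative assumptions rather than the paper's exact recipe.

```python
# Hedged sketch of one patch-wise iteration: amplify the step, then spread the
# gradient "overflow" beyond the eps-ball onto neighboring pixels.
import torch
import torch.nn.functional as F

def patchwise_step(x, x_adv, grad, eps=16 / 255, alpha=2 / 255, amp=10.0, k=3):
    c = x.size(1)
    step = alpha * amp * grad.sign()               # amplified FGSM-style step
    moved = x_adv + step - x                       # offset from the clean image
    clipped = moved.clamp(-eps, eps)               # part that stays in the ball
    overflow = moved - clipped                     # part that overflowed
    kernel = torch.ones(c, 1, k, k) / (k * k)      # uniform "project" kernel
    spread = F.conv2d(overflow, kernel, padding=k // 2, groups=c)
    out = x + (clipped + spread).clamp(-eps, eps)  # re-project after spreading
    return out.clamp(0.0, 1.0)

# Dummy usage; in a real attack, grad = d(loss)/d(x_adv) from the surrogate.
x = torch.rand(1, 3, 32, 32)
x_adv = patchwise_step(x, x.clone(), torch.randn_like(x))
```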
- BERT-ATTACK: Adversarial Attack Against BERT Using BERT [77.82947768158132]
Adversarial attacks for discrete data (such as text) are more challenging than those for continuous data (such as images).
We propose BERT-Attack, a high-quality and effective method to generate adversarial samples (a bare-bones sketch follows this entry).
Our method outperforms state-of-the-art attack strategies in both success rate and perturbation percentage.
arXiv Detail & Related papers (2020-04-21T13:30:02Z)
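For flavor, a bare-bones sketch of the masked-LM substitution idea: mask one word, let BERT propose replacements, and keep the first that flips a hypothetical `victim` classifier. The actual BERT-Attack additionally ranks word importance and handles sub-word pieces.

```python
# Hedged sketch of masked-LM word substitution; `victim` is a hypothetical
# callable mapping a sentence to a class label.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def substitutes(words, i, k=10):
    """Top-k BERT suggestions for the word at position i."""
    masked = words[:i] + [tok.mask_token] + words[i + 1:]
    enc = tok(" ".join(masked), return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**enc).logits
    pos = (enc.input_ids[0] == tok.mask_token_id).nonzero()[0, 0]
    top = logits[0, pos].topk(k).indices
    return [tok.decode([int(t)]).strip() for t in top]

def attack(words, victim):
    orig = victim(" ".join(words))
    for i in range(len(words)):        # the real method ranks i by importance
        for cand in substitutes(words, i):
            trial = words[:i] + [cand] + words[i + 1:]
            if victim(" ".join(trial)) != orig:
                return trial           # adversarial sentence found
    return None
```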