Improving Adversarial Robustness for 3D Point Cloud Recognition at Test-Time through Purified Self-Training
- URL: http://arxiv.org/abs/2409.14940v1
- Date: Mon, 23 Sep 2024 11:46:38 GMT
- Title: Improving Adversarial Robustness for 3D Point Cloud Recognition at Test-Time through Purified Self-Training
- Authors: Jinpeng Lin, Xulei Yang, Tianrui Li, Xun Xu
- Abstract summary: 3D point cloud deep learning models are vulnerable to adversarial attacks.
Adversarial purification employs a generative model to mitigate the impact of adversarial attacks.
We propose a test-time purified self-training strategy that dynamically updates the model on streaming test samples.
- Score: 9.072521170921712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recognizing 3D point clouds plays a pivotal role in many real-world applications. However, deployed 3D point cloud deep learning models are vulnerable to adversarial attacks. Despite many efforts to develop robust models through adversarial training, such models may become less effective against emerging attacks. This limitation motivates the development of adversarial purification, which employs a generative model to mitigate the impact of adversarial attacks. In this work, we highlight the remaining challenges from two perspectives. First, purification-based methods require retraining the classifier on purified samples, which introduces additional computational overhead. Moreover, in a more realistic scenario, testing samples arrive in a streaming fashion and adversarial samples are not isolated from clean samples. These challenges motivate us to dynamically update the model upon observing testing samples. We propose a test-time purified self-training strategy to achieve this objective. Adaptive thresholding and feature distribution alignment are introduced to improve the robustness of self-training. Extensive results on different adversarial attacks suggest the proposed method is complementary to purification-based methods in handling continually changing adversarial attacks on the testing data stream.
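The abstract describes the mechanism only at a high level. Purely as an illustration, the following is a minimal PyTorch-style sketch of one possible test-time purified self-training step combining pseudo-labeling with an adaptive per-class threshold and a feature distribution alignment term; the `purifier` model, the `classifier.features`/`classifier.head` split, and the moving-average threshold update are assumptions on my part, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def purified_self_training_step(classifier, purifier, batch, optimizer,
                                source_feat_mean, thresholds, momentum=0.9):
    """One hypothetical test-time adaptation step on a streaming batch.

    Assumed interfaces (not from the paper's code): `purifier` is a pretrained
    generative purification model, `classifier.features`/`classifier.head`
    split the point cloud classifier into a feature extractor and a linear
    head, `thresholds` is a per-class confidence-threshold tensor, and
    `source_feat_mean` is the mean feature vector of clean training data.
    """
    purified = purifier(batch)                 # mitigate adversarial perturbations
    feats = classifier.features(purified)      # per-sample feature vectors
    logits = classifier.head(feats)
    probs = F.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)

    # Self-training with adaptive thresholding: only pseudo-labels whose
    # confidence exceeds the per-class threshold contribute to the loss.
    keep = conf > thresholds[pseudo]
    loss_st = F.cross_entropy(logits[keep], pseudo[keep]) if keep.any() else logits.sum() * 0.0

    # Feature distribution alignment: pull the test batch's feature mean
    # toward the clean (source) feature mean.
    loss_align = F.mse_loss(feats.mean(dim=0), source_feat_mean)

    loss = loss_st + loss_align
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Update the per-class thresholds with an exponential moving average.
    with torch.no_grad():
        for c in pseudo.unique():
            mask = pseudo == c
            thresholds[c] = momentum * thresholds[c] + (1 - momentum) * conf[mask].mean()
    return loss.item()
```

In this sketch, each streaming batch is first purified, only confident pseudo-labels drive the self-training loss, and the alignment term keeps test-time features close to clean training statistics; the exact losses and schedules used in the paper may differ.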
Related papers
- Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
Backdoor attacks subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z) - Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks [69.54774045493227]
A drawback of adversarial training is the computational overhead introduced by the generation of adversarial examples.
We propose to exploit the interior building blocks of the model to improve efficiency.
Compared with previous methods, our method not only reduces the training cost but also achieves better model robustness.
arXiv Detail & Related papers (2023-10-24T01:36:20Z) - Confidence-driven Sampling for Backdoor Attacks [49.72680157684523]
Backdoor attacks aim to surreptitiously insert malicious triggers into DNN models, granting unauthorized control during testing scenarios.
Existing methods lack robustness against defense strategies and predominantly focus on enhancing trigger stealthiness while randomly selecting poisoned samples.
We introduce a straightforward yet highly effective sampling methodology that leverages confidence scores. Specifically, it selects samples with lower confidence scores, significantly increasing the challenge for defenders in identifying and countering these attacks.
arXiv Detail & Related papers (2023-10-08T18:57:36Z) - Diffusion Models for Adversarial Purification [69.1882221038846]
Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model.
We propose DiffPure that uses diffusion models for adversarial purification.
Our method achieves state-of-the-art results, outperforming current adversarial training and adversarial purification methods.
arXiv Detail & Related papers (2022-05-16T06:03:00Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - 3D Point Cloud Completion with Geometric-Aware Adversarial Augmentation [11.198650616143219]
We show that training with adversarial samples can improve the performance of neural networks on 3D point cloud completion tasks.
We propose a novel approach to generate adversarial samples that benefits performance on both clean and adversarial samples.
Experimental results show that training with the adversarial samples crafted by our method effectively enhances the performance of PCN on the ShapeNet dataset.
arXiv Detail & Related papers (2021-09-21T13:16:46Z) - Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder [18.375585982984845]
We focus on enhancing the model's ability to defend against gradient-based adversarial attacks during the training process.
We propose two novel adversarial training approaches: CARL and RAR.
Experiments show that the proposed two approaches outperform strong baselines on various text classification datasets.
arXiv Detail & Related papers (2021-09-14T09:08:58Z) - Achieving Model Robustness through Discrete Adversarial Training [30.845326360305677]
We leverage discrete adversarial attacks for online augmentation, where adversarial examples are generated at every step.
We find that random sampling leads to impressive gains in robustness, outperforming the commonly-used offline augmentation.
Online augmentation with search-based attacks justifies the higher training cost, significantly improving robustness on three datasets.
arXiv Detail & Related papers (2021-04-11T17:49:21Z) - Robust Ensemble Model Training via Random Layer Sampling Against Adversarial Attack [38.1887818626171]
We propose an ensemble model training framework with random layer sampling to improve the robustness of deep neural networks.
In the proposed training framework, we generate various sampled models through random layer sampling and update the weights of the sampled models.
After the ensemble models are trained, they can hide gradients efficiently and avoid gradient-based attacks.
arXiv Detail & Related papers (2020-05-21T16:14:18Z)
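As a loose, self-contained illustration of the random layer sampling idea in the entry above (my own sketch under assumed details, not the authors' code), each forward pass can skip a random subset of residual blocks, so training updates many weight-sharing sub-models and inference can average several sampled passes:

```python
import random
import torch
import torch.nn as nn

class RandomLayerSampledNet(nn.Module):
    """Toy classifier whose residual blocks are randomly skipped per forward pass.

    Hypothetical sketch of random layer sampling: during training, each pass
    updates a different weight-sharing sub-model; at inference, averaging
    several sampled passes acts as an ensemble whose input gradients are
    harder for an attacker to exploit.
    """

    def __init__(self, dim=128, num_blocks=6, keep_prob=0.7, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_blocks)]
        )
        self.head = nn.Linear(dim, num_classes)
        self.keep_prob = keep_prob

    def forward(self, x, sample_layers=True):
        for block in self.blocks:
            if sample_layers and random.random() > self.keep_prob:
                continue                 # skip this block: one sampled sub-model
            x = x + block(x)             # residual connection keeps shapes consistent
        return self.head(x)

    @torch.no_grad()
    def ensemble_predict(self, x, num_samples=8):
        """Average softmax outputs over several randomly sampled sub-models."""
        probs = [torch.softmax(self(x, sample_layers=True), dim=1) for _ in range(num_samples)]
        return torch.stack(probs).mean(dim=0)
```

The toy MLP stands in for whatever backbone the paper actually uses; only the weight-sharing sub-model sampling and the averaged prediction are meant to mirror the summary above.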