PointACL: Adversarial Contrastive Learning for Robust Point Clouds
Representation under Adversarial Attack
- URL: http://arxiv.org/abs/2209.06971v1
- Date: Wed, 14 Sep 2022 22:58:31 GMT
- Title: PointACL: Adversarial Contrastive Learning for Robust Point Clouds
Representation under Adversarial Attack
- Authors: Junxuan Huang, Yatong An, Lu Cheng, Bai Chen, Junsong Yuan, Chunming
Qiao
- Abstract summary: Adversarial contrastive learning (ACL) is considered an effective way to improve the robustness of pre-trained models.
We present a robustness-aware loss function to adversarially train a self-supervised contrastive learning framework.
We validate our method, PointACL, on downstream tasks, including 3D classification and 3D segmentation, with multiple datasets.
- Score: 73.3371797787823
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Despite the recent success of self-supervised contrastive learning
models for 3D point cloud representation, the adversarial robustness of such
pre-trained models has raised concerns. Adversarial contrastive learning (ACL)
is considered an effective way to improve the robustness of pre-trained models.
In contrastive learning, the projector is considered an effective component for
removing unnecessary feature information during contrastive pretraining, and
most ACL works also use a contrastive loss on projected feature representations
to generate adversarial examples in pretraining, while "unprojected" feature
representations are used to generate adversarial inputs during inference.
Because of the distribution gap between projected and "unprojected" features,
such models are constrained from obtaining robust feature representations for
downstream tasks. We introduce a new method that generates high-quality 3D
adversarial examples for adversarial training by utilizing a virtual
adversarial loss with "unprojected" feature representations in the contrastive
learning framework. We present a robustness-aware loss function to adversarially
train the self-supervised contrastive learning framework. Furthermore, we find
that selecting high-difference points with the Difference of Normals (DoN)
operator as additional input for adversarial self-supervised contrastive
learning can significantly improve the adversarial robustness of the
pre-trained model. We validate our method, PointACL, on downstream tasks,
including 3D classification and 3D segmentation, with multiple datasets. It
obtains robust accuracy comparable to state-of-the-art contrastive adversarial
learning methods.
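As a rough illustration of the generation step described in the abstract (perturbing the input so that a loss computed on the encoder's "unprojected", pre-projector features grows), the following PyTorch sketch shows a PGD-style attack. The abstract speaks of a virtual adversarial loss; this sketch substitutes a plain one-directional NT-Xent contrastive objective for simplicity. The names `encoder` and `nt_xent` and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): craft an adversarial point cloud by
# maximizing a contrastive loss on the encoder's *unprojected* features,
# i.e. the representation taken before the projection head.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Simplified one-directional NT-Xent loss between two embedding batches."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # (B, B) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

def attack_unprojected(encoder, points, points_aug, eps=0.05, alpha=0.01, steps=7):
    """PGD-style perturbation of `points` (B, N, 3) that maximizes the contrastive
    loss against a clean augmented view, using unprojected encoder features."""
    delta = torch.zeros_like(points, requires_grad=True)
    with torch.no_grad():
        feat_clean = encoder(points_aug)          # unprojected features, clean view
    for _ in range(steps):
        feat_adv = encoder(points + delta)        # unprojected features, attacked view
        loss = nt_xent(feat_adv, feat_clean)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()    # gradient *ascent* on the loss
            delta.clamp_(-eps, eps)               # stay inside an L_inf ball of radius eps
        delta.grad.zero_()
    return (points + delta).detach()
```

The adversarial clouds produced this way would then be fed back into contrastive pretraining alongside the clean views, which is the general ACL recipe the abstract builds on.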
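The Difference of Normals (DoN) selection mentioned in the abstract can likewise be sketched: estimate surface normals at a small and a large support radius and keep the points where the two estimates disagree most. The Open3D-based implementation, radii, and keep ratio below are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch (illustrative parameters): pick "high difference" points with the
# Difference of Normals (DoN) operator -- normals estimated at two support
# radii differ most where the local geometry changes across scales.
import numpy as np
import open3d as o3d

def don_select(points, r_small=0.05, r_large=0.20, keep_ratio=0.25):
    """points: (N, 3) float array; returns the subset with the largest DoN magnitude."""
    def normals_at(radius):
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamRadius(radius))
        return np.asarray(pcd.normals)

    n_small, n_large = normals_at(r_small), normals_at(r_large)
    flip = np.sum(n_small * n_large, axis=1) < 0      # resolve normal sign ambiguity
    n_large[flip] *= -1
    don = 0.5 * (n_small - n_large)                   # classic DoN definition
    magnitude = np.linalg.norm(don, axis=1)
    k = max(1, int(keep_ratio * len(points)))
    idx = np.argsort(-magnitude)[:k]                  # most "different" points first
    return points[idx]
```

In the paper's setting, the selected high-difference points serve as an additional input view for adversarial self-supervised contrastive pretraining.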
Related papers
- MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
Deep neural networks (DNNs) are vulnerable to slight adversarial perturbations.
We show that strong feature representation learning during training can significantly enhance the original model's robustness.
We propose MOREL, a multi-objective feature representation learning approach, encouraging classification models to produce similar features for inputs within the same class, despite perturbations.
arXiv Detail & Related papers (2024-10-02T16:05:03Z) - Masked Scene Contrast: A Scalable Framework for Unsupervised 3D
Representation Learning [37.155772047656114]
The Masked Scene Contrast (MSC) framework is capable of extracting comprehensive 3D representations more efficiently and effectively.
MSC also enables large-scale 3D pre-training across multiple datasets.
arXiv Detail & Related papers (2023-03-24T17:59:58Z) - Guided Point Contrastive Learning for Semi-supervised Point Cloud
Semantic Segmentation [90.2445084743881]
We present a method for semi-supervised point cloud semantic segmentation that adopts unlabeled point clouds in training to boost model performance.
Inspired by the recent contrastive loss in self-supervised tasks, we propose the guided point contrastive loss to enhance the feature representation and model generalization ability.
arXiv Detail & Related papers (2021-10-15T16:38:54Z) - Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z) - Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z) - Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z) - Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)