PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models
Against Adversarial Examples
- URL: http://arxiv.org/abs/2211.12294v1
- Date: Tue, 22 Nov 2022 14:15:41 GMT
- Title: PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models
Against Adversarial Examples
- Authors: Shengshan Hu, Junwei Zhang, Wei Liu, Junhui Hou, Minghui Li, Leo Yu
Zhang, Hai Jin, Lichao Sun
- Abstract summary: We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
- Score: 63.84378007819262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud completion, as the upstream procedure of 3D recognition and
segmentation, has become an essential part of many tasks such as navigation and
scene understanding. While various point cloud completion models have
demonstrated their powerful capabilities, their robustness against adversarial
attacks, which have been shown to be severely damaging to deep neural
networks, remains unknown. In addition, existing attack approaches towards
point cloud classifiers cannot be applied to the completion models due to
different output forms and attack purposes. In order to evaluate the robustness
of the completion models, we propose PointCA, the first adversarial attack
against 3D point cloud completion models. PointCA can generate adversarial
point clouds that maintain high similarity with the original ones, while being
completed as another object with totally different semantic information.
Specifically, we minimize the representation discrepancy between the
adversarial example and the target point set to jointly explore the adversarial
point clouds in the geometry space and the feature space. Furthermore, to
launch a stealthier attack, we innovatively employ the neighbourhood density
information to tailor the perturbation constraint, leading to geometry-aware
and distribution-adaptive modifications for each point. Extensive experiments
against different premier point cloud completion networks show that PointCA can
cause a performance degradation from 77.9% to 16.7%, with the structure chamfer
distance kept below 0.01. We conclude that existing completion models are
severely vulnerable to adversarial examples, and state-of-the-art defenses for
point cloud classification will be partially invalid when applied to incomplete
and uneven point cloud data.
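As a concrete illustration of this attack recipe, the PyTorch-style sketch below jointly minimizes a geometry-space term (Chamfer distance between the completed adversarial input and a target shape) and a feature-space term (encoder discrepancy to the target), while projecting each point's perturbation into a neighbourhood-density-aware budget. The completion model `model`, its `encode` method, the k-NN budget, and all hyper-parameters are illustrative assumptions rather than the exact PointCA formulation.

```python
# Minimal PointCA-style attack sketch (assumptions: `model` maps a partial cloud
# to a completed cloud, `model.encode` exposes its latent feature; not the
# authors' exact losses or hyper-parameters).
import torch

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)                                    # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def knn_density_budget(x, k=8, eps=0.01):
    """Per-point perturbation budget scaled by mean k-NN distance, so dense
    regions get smaller budgets and sparse regions larger ones."""
    d = torch.cdist(x, x)
    knn = d.topk(k + 1, largest=False).values[:, 1:]         # drop the self-distance
    scale = knn.mean(dim=1, keepdim=True)                    # (N, 1) local sparsity proxy
    return eps * scale / scale.mean()

def pointca_attack(model, x_partial, x_target_partial, y_target,
                   steps=200, lr=0.01, lam=1.0):
    """x_partial: (N, 3) clean partial input; x_target_partial: (N', 3) partial
    cloud of the target object; y_target: (M, 3) complete target shape."""
    budget = knn_density_budget(x_partial)
    delta = torch.zeros_like(x_partial, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    feat_target = model.encode(x_target_partial).detach()
    for _ in range(steps):
        x_adv = x_partial + delta
        geo_loss = chamfer_distance(model(x_adv), y_target)            # geometry space
        feat_loss = (model.encode(x_adv) - feat_target).pow(2).mean()  # feature space
        loss = geo_loss + lam * feat_loss
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():                                 # density-aware projection
            norm = delta.norm(dim=1, keepdim=True).clamp(min=1e-12)
            delta.data = delta.data * torch.clamp(budget / norm, max=1.0)
    return (x_partial + delta).detach()
```

The density scaling is what makes the perturbation distribution-adaptive: points in sparse regions tolerate larger shifts, while dense regions stay nearly untouched.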
Related papers
- Bridging Domain Gap of Point Cloud Representations via Self-Supervised Geometric Augmentation [15.881442863961531]
We introduce a novel scheme for induced geometric invariance of point cloud representations across domains.
On one hand, a novel pretext task of predicting the translation distances of augmented samples is proposed to alleviate the centroid shift of point clouds.
On the other hand, we pioneer the integration of relational self-supervised learning on geometrically augmented point clouds.
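A minimal sketch of such a translation-distance pretext task might look as follows; `encoder`, `head`, and the shift range are hypothetical placeholders, not the paper's architecture.

```python
# Hedged sketch of a translation-distance pretext task: translate a point cloud
# by a random offset and regress the offset's magnitude from the encoded
# features. `encoder` and `head` are hypothetical modules, not the paper's.
import torch
import torch.nn.functional as F

def pretext_step(encoder, head, points, max_shift=0.5):
    """points: (B, N, 3). Returns the regression loss for one batch."""
    t = (torch.rand(points.shape[0], 1, 3, device=points.device) - 0.5) * 2 * max_shift
    shifted = points + t                          # translated (augmented) samples
    pred = head(encoder(shifted))                 # (B, 1) predicted translation distance
    target = t.norm(dim=-1)                       # (B, 1) true translation distance
    return F.mse_loss(pred, target)
```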
arXiv Detail & Related papers (2024-09-11T02:39:19Z)
- Transferable 3D Adversarial Shape Completion using Diffusion Models [8.323647730916635]
3D point cloud feature learning has significantly improved the performance of 3D deep-learning models.
Existing attack methods primarily focus on white-box scenarios and struggle to transfer to recently proposed 3D deep-learning models.
In this paper, we generate high-quality adversarial point clouds using diffusion models.
Our proposed attacks outperform state-of-the-art adversarial attack methods against both black-box models and defenses.
arXiv Detail & Related papers (2024-07-14T04:51:32Z)
- Zero-shot Point Cloud Completion Via 2D Priors [52.72867922938023]
3D point cloud completion is designed to recover complete shapes from partially observed point clouds.
We propose a zero-shot framework aimed at completing partially observed point clouds across any unseen categories.
arXiv Detail & Related papers (2024-04-10T08:02:17Z)
- PCV: A Point Cloud-Based Network Verifier [8.239631885389382]
We describe a point cloud-based network verifier that successfully handles the state-of-the-art 3D model PointNet.
We calculate the impact on model accuracy versus the property factor and can test the PointNet network's robustness against a small collection of perturbed input states.
arXiv Detail & Related papers (2023-01-27T15:58:54Z)
- PointCAT: Contrastive Adversarial Training for Robust Point Cloud Recognition [111.55944556661626]
We propose Point-Cloud Contrastive Adversarial Training (PointCAT) to boost the robustness of point cloud recognition models.
We leverage a supervised contrastive loss to facilitate the alignment and uniformity of the hypersphere features extracted by the recognition model.
To provide more challenging corrupted point clouds, we adversarially train a noise generator along with the recognition model from scratch.
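A generic supervised contrastive loss over L2-normalized (hypersphere) features, in the spirit of the alignment-and-uniformity objective described above; the temperature and batch layout are illustrative assumptions, not PointCAT's exact settings.

```python
# Illustrative supervised contrastive loss on unit-normalized features;
# temperature and shapes are assumptions, not PointCAT's exact configuration.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """features: (B, D) recognition features; labels: (B,) class ids."""
    z = F.normalize(features, dim=1)              # project onto the unit hypersphere
    logits = z @ z.t() / temperature              # (B, B) scaled cosine similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye   # same-class, non-self pairs
    logits = logits.masked_fill(eye, float('-inf'))            # exclude self-comparisons
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = pos.sum(dim=1).clamp(min=1)
    # average log-probability of positives per anchor (alignment), normalized
    # against all other samples in the batch (uniformity)
    return -(log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos_count).mean()
```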
arXiv Detail & Related papers (2022-09-16T08:33:04Z)
- Shape-invariant 3D Adversarial Point Clouds [111.72163188681807]
Adversarial effectiveness and invisibility are two fundamental but conflicting characteristics of adversarial perturbations.
Previous adversarial attacks on 3D point cloud recognition have often been criticized for their noticeable point outliers.
We propose a novel Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility of point perturbations.
arXiv Detail & Related papers (2022-03-08T12:21:35Z)
- PointGuard: Provably Robust 3D Point Cloud Classification [30.954481481297563]
3D point cloud classification has many safety-critical applications such as autonomous driving and robotic grasping.
In particular, an attacker can make a classifier predict an incorrect label for a 3D point cloud via carefully modifying, adding, and/or deleting a small number of its points.
We propose PointGuard, the first defense that has provable robustness guarantees against adversarially modified, added, and/or deleted points.
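PointGuard's guarantee is built around majority voting over randomly subsampled point clouds; the sketch below shows that prediction rule only (the base classifier, subsample size, and number of votes are placeholders, and the robustness certificate itself is derived analytically in the paper).

```python
# Sketch of PointGuard-style prediction: majority vote over random subsamples
# of the input cloud. `base_classifier`, subsample size and vote count are
# placeholders; the certified bound itself is computed analytically.
import torch

def pointguard_predict(base_classifier, points, num_votes=1000,
                       subsample_size=16, num_classes=40):
    """points: (N, 3). Returns the majority-vote label and the vote counts."""
    counts = torch.zeros(num_classes, dtype=torch.long)
    n = points.shape[0]
    for _ in range(num_votes):
        idx = torch.randperm(n)[:subsample_size]             # random point subset
        logits = base_classifier(points[idx].unsqueeze(0))    # (1, num_classes)
        counts[logits.argmax(dim=1)] += 1
    return counts.argmax().item(), counts
```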
arXiv Detail & Related papers (2021-03-04T14:09:37Z)
- IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
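A rough sketch of such coordinate optimization under a geometry-aware and a distribution-aware term; the implicit-function network `occ_net`, the loss forms, and the weights are assumptions for illustration rather than IF-Defense's exact objective.

```python
# Rough IF-Defense-style purification sketch: optimize input coordinates under a
# geometry-aware term (stay on the surface predicted by an assumed implicit
# network `occ_net`) and a distribution-aware term (keep points spread out).
import torch

def purify(points, occ_net, steps=200, lr=0.01, tau=0.5, lam=1.0, k=4):
    """points: (N, 3) possibly adversarial input; returns restored coordinates."""
    x = points.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        geo = (occ_net(x) - tau).pow(2).mean()               # pull points onto the level set
        d = torch.cdist(x, x)
        knn = d.topk(k + 1, largest=False).values[:, 1:]     # k nearest neighbours
        uni = (-knn).exp().mean()                            # repel points that clump together
        loss = geo + lam * uni
        opt.zero_grad(); loss.backward(); opt.step()
    return x.detach()
```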
arXiv Detail & Related papers (2020-10-11T15:36:40Z)
- ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds [78.25501874120489]
We develop shape-aware adversarial 3D point cloud attacks by leveraging the learned latent space of a point cloud auto-encoder.
Different from prior works, the resulting adversarial 3D point clouds reflect the shape variations in the 3D point cloud space while still being close to the original one.
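As an illustration of attacking through an auto-encoder's latent space, the sketch below perturbs the latent code so the decoded shape misleads a classifier while the code stays close to the original; `autoencoder`, `victim`, and all hyper-parameters are hypothetical placeholders.

```python
# Hedged sketch of a latent-space shape attack: perturb the latent code of a
# point cloud auto-encoder so the decoded cloud misleads a classifier while the
# code stays close to the original. `autoencoder` and `victim` are hypothetical.
import torch
import torch.nn.functional as F

def latent_shape_attack(autoencoder, victim, points, target_label,
                        steps=100, lr=0.01, lam=10.0):
    """points: (N, 3); target_label: LongTensor of shape (1,)."""
    z0 = autoencoder.encode(points.unsqueeze(0)).detach()    # original latent code
    delta = torch.zeros_like(z0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        recon = autoencoder.decode(z0 + delta)               # decoded adversarial shape
        cls_loss = F.cross_entropy(victim(recon), target_label)
        reg = delta.pow(2).mean()                            # keep the latent shift small
        loss = cls_loss + lam * reg
        opt.zero_grad(); loss.backward(); opt.step()
    return autoencoder.decode(z0 + delta).detach()
```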
arXiv Detail & Related papers (2020-05-24T00:03:27Z)