PointCert: Point Cloud Classification with Deterministic Certified
Robustness Guarantees
- URL: http://arxiv.org/abs/2303.01959v1
- Date: Fri, 3 Mar 2023 14:32:48 GMT
- Title: PointCert: Point Cloud Classification with Deterministic Certified
Robustness Guarantees
- Authors: Jinghuai Zhang and Jinyuan Jia and Hongbin Liu and Neil Zhenqiang Gong
- Abstract summary: Point cloud classification is an essential component in many security-critical applications such as autonomous driving and augmented reality.
Existing certified defenses against adversarial point clouds suffer from a key limitation: their certified robustness guarantees are probabilistic.
We propose a general framework, namely PointCert, that can transform an arbitrary point cloud classifier to be certifiably robust against adversarial point clouds.
- Score: 63.85677512968049
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud classification is an essential component in many
security-critical applications such as autonomous driving and augmented
reality. However, point cloud classifiers are vulnerable to adversarially
perturbed point clouds. Existing certified defenses against adversarial point
clouds suffer from a key limitation: their certified robustness guarantees are
probabilistic, i.e., they produce an incorrect certified robustness guarantee
with some probability. In this work, we propose a general framework, namely
PointCert, that can transform an arbitrary point cloud classifier to be
certifiably robust against adversarial point clouds with deterministic
guarantees. PointCert certifiably predicts the same label for a point cloud
when the number of arbitrarily added, deleted, and/or modified points is less
than a threshold. Moreover, we propose multiple methods to optimize the
certified robustness guarantees of PointCert in three application scenarios. We
systematically evaluate PointCert on ModelNet and ScanObjectNN benchmark
datasets. Our results show that PointCert substantially outperforms
state-of-the-art certified defenses even though their robustness guarantees are
probabilistic.
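The deterministic guarantee described above can be illustrated with a partition-and-vote sketch: hash each point into a sub-point-cloud, classify every sub-point-cloud, and take a majority vote, so that each perturbed point can influence only a bounded number of votes. This is a minimal illustration of the general idea, not PointCert's actual construction: the hash scheme, the grouping, and the certified-radius computation (which omits label tie-breaking details) are simplifications, and `base_classifier` is a hypothetical stand-in for an arbitrary point cloud classifier.

```python
import hashlib
from collections import Counter

def partition(points, num_groups):
    """Assign each point to a sub-point-cloud via a deterministic hash of its
    coordinates, so a single perturbed point can only move between groups."""
    groups = [[] for _ in range(num_groups)]
    for p in points:
        h = int(hashlib.md5(repr(p).encode()).hexdigest(), 16)
        groups[h % num_groups].append(p)
    return groups

def certified_predict(points, base_classifier, num_groups=16):
    """Majority vote over sub-point-cloud predictions, with a deterministic
    bound on how many sub-cloud flips the vote can tolerate."""
    votes = Counter(base_classifier(g) for g in partition(points, num_groups))
    (top, n1), *rest = votes.most_common()
    n2 = rest[0][1] if rest else 0
    # An added or deleted point changes at most one group's input; a modified
    # point changes at most two. The majority label is unchanged while the
    # number of flipped groups stays below half the vote lead (simplified:
    # label tie-breaking is ignored here).
    certified_radius = (n1 - n2) // 2
    return top, certified_radius
```

Because the hash is deterministic, the same input always yields the same partition, votes, and certified radius, which is what makes the guarantee deterministic rather than probabilistic.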
Related papers
- Boosting Certificate Robustness for Time Series Classification with Efficient Self-Ensemble [10.63844868166531]
Randomized Smoothing has emerged as a standout method due to its ability to certify a provable lower bound on the robustness radius under $\ell_p$-ball attacks.
We propose a self-ensemble method to enhance the lower bound of the probability confidence of predicted labels by reducing the variance of classification margins.
This approach also reduces the computational overhead of Deep Ensemble (DE) while remaining competitive and, in some cases, outperforming it in robustness.
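For context, the lower bound that such self-ensembling aims to tighten is, in the standard Gaussian randomized-smoothing formulation (Cohen et al., 2019):

```latex
R = \frac{\sigma}{2}\left(\Phi^{-1}\!\left(\underline{p_A}\right) - \Phi^{-1}\!\left(\overline{p_B}\right)\right)
```

where $\sigma$ is the smoothing noise standard deviation, $\Phi^{-1}$ is the inverse standard Gaussian CDF, $\underline{p_A}$ is a lower confidence bound on the top class probability, and $\overline{p_B}$ is an upper bound on the runner-up. Reducing the variance of classification margins tightens these bounds and thus enlarges $R$; the time-series paper above may use a variant of this form.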
arXiv Detail & Related papers (2024-09-04T15:22:08Z) - FreePoint: Unsupervised Point Cloud Instance Segmentation [72.64540130803687]
We propose FreePoint for the underexplored task of unsupervised class-agnostic instance segmentation on point clouds.
We represent point features by combining coordinates, colors, and self-supervised deep features.
Based on the point features, we segment point clouds into coarse instance masks as pseudo labels, which are used to train a point cloud instance segmentation model.
arXiv Detail & Related papers (2023-05-11T16:56:26Z) - Reliability-Adaptive Consistency Regularization for Weakly-Supervised
Point Cloud Segmentation [80.07161039753043]
Weakly-supervised point cloud segmentation with extremely limited labels is desirable to alleviate the expensive costs of collecting densely annotated 3D points.
This paper explores applying consistency regularization, which is commonly used in weakly-supervised learning, to point clouds with multiple data-specific augmentations.
We propose a novel Reliability-Adaptive Consistency Network (RAC-Net) to use both prediction confidence and model uncertainty to measure the reliability of pseudo labels.
arXiv Detail & Related papers (2023-03-09T10:41:57Z) - PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models
Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
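The chamfer distance used above to bound perturbation size can be sketched as follows. This is one common symmetric variant (squared distances, per-set averaging); definitions differ across papers in squaring and normalization, so PointCA's exact "structure chamfer distance" may not match this form.

```python
def chamfer_distance(a, b):
    """Symmetric chamfer distance between two point sets: for each point,
    find the squared distance to its nearest neighbor in the other set,
    then average each direction and sum the two averages."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    a_to_b = sum(min(sq_dist(p, q) for q in b) for p in a) / len(a)
    b_to_a = sum(min(sq_dist(q, p) for p in a) for q in b) / len(b)
    return a_to_b + b_to_a
```

A small chamfer distance (e.g. below 0.01) indicates the adversarial point cloud stays geometrically close to the original, which is why it serves as an imperceptibility constraint.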
arXiv Detail & Related papers (2022-11-22T14:15:41Z) - PointCAT: Contrastive Adversarial Training for Robust Point Cloud
Recognition [111.55944556661626]
We propose Point-Cloud Contrastive Adversarial Training (PointCAT) to boost the robustness of point cloud recognition models.
We leverage a supervised contrastive loss to facilitate the alignment and uniformity of the hypersphere features extracted by the recognition model.
To provide more challenging corrupted point clouds, we adversarially train a noise generator along with the recognition model from scratch.
arXiv Detail & Related papers (2022-09-16T08:33:04Z) - Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness [19.380453459873298]
Adversarial examples pose a security risk as they can alter decisions of a machine learning classifier through slight input perturbations.
We show that these guarantees can be invalidated due to limitations of floating-point representation that cause rounding errors.
We show that the attack can be carried out against linear classifiers that have exact certifiable guarantees and against neural networks that have conservative certifications.
arXiv Detail & Related papers (2022-05-20T13:07:36Z) - PointGuard: Provably Robust 3D Point Cloud Classification [30.954481481297563]
3D point cloud classification has many safety-critical applications such as autonomous driving and robotic grasping.
In particular, an attacker can make a classifier predict an incorrect label for a 3D point cloud via carefully modifying, adding, and/or deleting a small number of its points.
We propose PointGuard, the first defense that has provable robustness guarantees against adversarially modified, added, and/or deleted points.
arXiv Detail & Related papers (2021-03-04T14:09:37Z) - Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.