TPC: Transformation-Specific Smoothing for Point Cloud Models
- URL: http://arxiv.org/abs/2201.12733v5
- Date: Sat, 6 May 2023 09:43:47 GMT
- Title: TPC: Transformation-Specific Smoothing for Point Cloud Models
- Authors: Wenda Chu, Linyi Li, Bo Li
- Abstract summary: We propose a transformation-specific smoothing framework TPC, which provides robustness guarantees for point cloud models against semantic transformation attacks.
Experiments on several common 3D transformations show that TPC significantly outperforms the state of the art.
- Score: 9.289813586197882
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point cloud models with neural network architectures have achieved great
success and have been widely used in safety-critical applications, such as
Lidar-based recognition systems in autonomous vehicles. However, such models
are shown vulnerable to adversarial attacks which aim to apply stealthy
semantic transformations such as rotation and tapering to mislead model
predictions. In this paper, we propose a transformation-specific smoothing
framework TPC, which provides tight and scalable robustness guarantees for
point cloud models against semantic transformation attacks. We first categorize
common 3D transformations into three categories: additive (e.g., shearing),
composable (e.g., rotation), and indirectly composable (e.g., tapering), and we
present generic robustness certification strategies for all categories
respectively. We then specify unique certification protocols for a range of
specific semantic transformations and their compositions. Extensive experiments
on several common 3D transformations show that TPC significantly outperforms
the state of the art. For example, our framework boosts the certified accuracy
against the twisting transformation along the z-axis (within 20$^\circ$) from 20.3$\%$
to 83.8$\%$. Codes and models are available at
https://github.com/chuwd19/Point-Cloud-Smoothing.
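Below is a minimal sketch of what transformation-specific smoothing for a composable transformation such as z-axis rotation can look like in practice, assuming a pretrained point cloud classifier `base_classifier` that maps an (N, 3) array to a class label. The smoothing distribution, sample count, and function names are illustrative and are not taken from the released TPC code.

```python
import numpy as np

def rotate_z(points: np.ndarray, angle_rad: float) -> np.ndarray:
    """Rotate an (N, 3) point cloud about the z-axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T

def smoothed_predict(base_classifier, points, sigma_deg=20.0, n_samples=1000, seed=0):
    """Majority vote of the base classifier over randomly rotated copies.

    The resulting vote counts are what a certification procedure would feed
    into a statistical bound (e.g. Clopper-Pearson) on the top-class probability.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        angle = np.deg2rad(rng.normal(0.0, sigma_deg))  # Gaussian over the rotation angle
        label = base_classifier(rotate_z(points, angle))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get), votes
```

The same sampling-and-voting pattern extends to the other transformation categories named in the abstract; what changes is how the sampled parameters compose with the attacker's transformation during certification.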
Related papers
- Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers [59.0181939916084]
Traditional 3D networks mainly focus on local geometric details and ignore the topological structure between local geometries.
We propose a novel Relational Priors Distillation (RPD) method to extract relational priors from transformers well-trained on massive images.
Experiments on the PointDA-10 and the Sim-to-Real datasets verify that the proposed method consistently achieves the state-of-the-art performance of UDA for point cloud classification.
arXiv Detail & Related papers (2024-07-26T06:29:09Z)
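A generic cross-modal distillation sketch for the RPD entry above, assuming PyTorch: a 3D point cloud student is trained to match the softened predictions of a well-trained 2D transformer teacher. This is only the standard knowledge-distillation objective; RPD's specific relational-prior formulation is not reproduced here, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL to the teacher's soft targets."""
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # temperature scaling keeps gradient magnitudes comparable
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```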
- Efficient Point Transformer with Dynamic Token Aggregating for Point Cloud Processing [19.73918716354272]
We propose an efficient point Transformer with Dynamic Token Aggregating (DTA-Former) for point cloud representation and processing.
It achieves SOTA performance while being up to 30$\times$ faster than prior point Transformers on the ModelNet40, ShapeNet, and airborne MultiSpectral LiDAR (MS-LiDAR) datasets.
arXiv Detail & Related papers (2024-05-23T20:50:50Z)
- Learning Modulated Transformation in GANs [69.95217723100413]
We equip the generator in generative adversarial networks (GANs) with a plug-and-play module, termed the modulated transformation module (MTM).
MTM predicts spatial offsets under the control of latent codes, based on which the convolution operation can be applied at variable locations.
Notably, for human generation on the challenging TaiChi dataset, we improve the FID of StyleGAN3 from 21.36 to 13.60, demonstrating the efficacy of learning modulated geometry transformations.
arXiv Detail & Related papers (2023-08-29T17:51:22Z)
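As summarized above, MTM predicts spatial offsets under latent-code control and applies convolution at the resulting locations. The sketch below, assuming PyTorch, resamples a feature map at latent-modulated offsets with `grid_sample` before an ordinary convolution; layer sizes and names are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedOffsetConv(nn.Module):
    def __init__(self, channels: int, latent_dim: int):
        super().__init__()
        self.to_mod = nn.Linear(latent_dim, channels)                         # latent code -> channel modulation
        self.offset_head = nn.Conv2d(channels, 2, kernel_size=3, padding=1)   # per-pixel (dx, dy) in pixels
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        # Modulate features with the latent code before predicting offsets.
        mod = self.to_mod(latent).view(b, c, 1, 1)
        offsets = self.offset_head(feat * mod)                                # (B, 2, H, W)
        # Build a normalized base sampling grid in [-1, 1] for grid_sample.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=feat.device),
            torch.linspace(-1, 1, w, device=feat.device),
            indexing="ij",
        )
        base_grid = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
        # Convert pixel offsets to the normalized coordinate range.
        norm = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)], device=feat.device)
        grid = base_grid + offsets.permute(0, 2, 3, 1) * norm
        warped = F.grid_sample(feat, grid, align_corners=True)                # sample at variable locations
        return self.conv(warped)
```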
- SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation [65.4396959244269]
The paper tackles the challenge of combining SO(3) equivariance with network binarization by designing a general framework to construct 3D learning architectures.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a good trade-off between efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z)
- GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing [40.38555458216436]
We propose a unified theoretical framework for certifying robustness against general semantic transformations.
Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation.
arXiv Detail & Related papers (2022-06-09T07:12:17Z)
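The GSmooth summary above describes smoothing through a learned surrogate of the transformation. A rough sketch of that idea only: sample transformation parameters, push the input through the surrogate, and take a majority vote. The actual GSmooth construction and its certification analysis differ in detail, and the names below are illustrative.

```python
import numpy as np

def surrogate_smoothed_predict(base_classifier, surrogate, x, sigma=0.5, n_samples=500, seed=0):
    """Majority vote over Gaussian-sampled parameters of a surrogate transformation.

    `surrogate(x, beta)` is assumed to be a learned image-to-image approximation
    of the hard-to-analyze semantic transformation phi(x, beta).
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        beta = rng.normal(0.0, sigma)                 # sampled transformation parameter
        label = base_classifier(surrogate(x, beta))   # classify the surrogate-transformed input
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```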
- 3DeformRS: Certifying Spatial Deformations on Point Clouds [61.62846778591536]
3D computer vision models are commonly used in security-critical applications such as autonomous driving and surgical robotics.
Emerging concerns over the robustness of these models against real-world deformations must be addressed practically and reliably.
We propose 3DeformRS, a method to certify the robustness of point cloud Deep Neural Networks (DNNs) against real-world deformations.
arXiv Detail & Related papers (2022-04-12T10:24:31Z)
- 3DCTN: 3D Convolution-Transformer Network for Point Cloud Classification [23.0009969537045]
This paper presents a novel hierarchical framework that incorporates convolution with Transformer for point cloud classification.
Our method achieves state-of-the-art classification performance, in terms of both accuracy and efficiency.
arXiv Detail & Related papers (2022-03-02T02:42:14Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
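Frame averaging makes a backbone invariant by averaging it over a small, input-dependent set of group elements (a frame). A minimal sketch for Euclidean motions on point clouds, assuming a base network `f` that maps an (N, 3) array to a logit vector; PCA-derived frames are one standard choice, and the names below are illustrative.

```python
import itertools
import numpy as np

def pca_frames(points: np.ndarray):
    """Return a list of (R, t) candidate frames derived from PCA of the cloud."""
    t = points.mean(axis=0)
    cov = np.cov((points - t).T)
    _, eigvecs = np.linalg.eigh(cov)           # columns are principal axes
    frames = []
    for signs in itertools.product([1.0, -1.0], repeat=3):
        R = eigvecs * np.array(signs)           # flip axes to cover the sign ambiguity
        if np.linalg.det(R) > 0:                # keep proper rotations only
            frames.append((R, t))
    return frames

def frame_averaged_logits(f, points: np.ndarray) -> np.ndarray:
    """Average the base network over all frames -> rotation/translation-invariant output."""
    outputs = []
    for R, t in pca_frames(points):
        canonical = (points - t) @ R            # express the cloud in this frame
        outputs.append(f(canonical))
    return np.mean(outputs, axis=0)
```

Averaging over the (typically four) proper-rotation frames keeps the cost close to a single forward pass, which is the practical appeal over averaging over the whole group.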
- Dynamic Convolution for 3D Point Cloud Instance Segmentation [146.7971476424351]
We propose an approach to instance segmentation from 3D point clouds based on dynamic convolution.
We gather homogeneous points that have identical semantic categories and close votes for the geometric centroids.
The proposed approach is proposal-free, and instead exploits a convolution process that adapts to the spatial and semantic characteristics of each instance.
arXiv Detail & Related papers (2021-07-18T09:05:16Z)
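The entry above describes a convolution process that adapts to each instance. A minimal sketch of that idea, assuming PyTorch: a controller turns each instance's pooled feature into the weights of a tiny per-instance filter that scores every point for membership. Sizes and names are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DynamicMaskHead(nn.Module):
    def __init__(self, feat_dim: int = 32, hidden: int = 16):
        super().__init__()
        # Controller predicts all parameters of a tiny two-layer per-instance filter.
        self.n_params = feat_dim * hidden + hidden + hidden + 1
        self.controller = nn.Linear(feat_dim, self.n_params)
        self.feat_dim, self.hidden = feat_dim, hidden

    def forward(self, point_feats: torch.Tensor, inst_feats: torch.Tensor) -> torch.Tensor:
        # point_feats: (N, feat_dim); inst_feats: (K, feat_dim) pooled per candidate instance.
        params = self.controller(inst_feats)                        # (K, n_params)
        d, h = self.feat_dim, self.hidden
        w1 = params[:, : d * h].view(-1, d, h)                      # (K, d, h)
        b1 = params[:, d * h : d * h + h]                           # (K, h)
        w2 = params[:, d * h + h : d * h + 2 * h].view(-1, h, 1)    # (K, h, 1)
        b2 = params[:, -1:]                                         # (K, 1)
        # Apply each instance's generated filters to every point feature.
        x = torch.relu(torch.einsum("nd,kdh->knh", point_feats, w1) + b1[:, None, :])
        logits = torch.einsum("knh,kho->kno", x, w2) + b2[:, None, :]
        return logits.squeeze(-1)                                   # (K, N) per-instance mask logits
```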
- Training or Architecture? How to Incorporate Invariance in Neural Networks [14.162739081163444]
We propose a method for provably invariant network architectures with respect to group actions.
In a nutshell, we intend to 'undo' any possible transformation before feeding the data into the actual network.
We analyze properties of such approaches, extend them to equivariant networks, and demonstrate their advantages in terms of robustness as well as computational efficiency in several numerical examples.
arXiv Detail & Related papers (2021-06-18T10:31:00Z)
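The "undo the transformation first" idea above can be illustrated with a tiny canonicalization wrapper, assuming any point cloud classifier `net` over (N, 3) arrays. Only translation and uniform scaling are undone below; rotation would additionally need a canonical orientation (e.g. via PCA, as in the frame-averaging sketch above). Names are illustrative.

```python
import numpy as np

def canonicalize(points: np.ndarray) -> np.ndarray:
    """Map every translated/uniformly scaled copy of a cloud to the same representative."""
    centered = points - points.mean(axis=0)            # undo translation
    scale = np.max(np.linalg.norm(centered, axis=1))   # undo (uniform) scaling
    return centered / max(scale, 1e-12)

def invariant_predict(net, points: np.ndarray):
    # The network only ever sees canonicalized inputs, so its prediction is
    # invariant to translation and uniform scaling by construction.
    return net(canonicalize(points))
```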
- Robustness Certification for Point Cloud Models [10.843109238068982]
We introduce 3DCertify, the first verifier able to certify robustness of point cloud models.
3DCertify is based on two key insights: (i) a generic relaxation based on first-order Taylor approximations, and (ii) a precise relaxation for global feature pooling.
We demonstrate the effectiveness of 3DCertify by performing an extensive evaluation on a wide range of 3D transformations.
arXiv Detail & Related papers (2021-03-30T19:52:07Z)
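To convey what such certification means computationally, here is a toy interval-bound sketch for a PointNet-style network under per-point perturbations. 3DCertify itself uses much tighter first-order Taylor and pooling relaxations; the simplified interval propagation and the layer shapes below are illustrative only.

```python
import numpy as np

def linear_bounds(l, u, W, b):
    """Interval propagation through x -> x @ W.T + b, with W of shape (out, in)."""
    c, r = (l + u) / 2.0, (u - l) / 2.0
    c_out = c @ W.T + b
    r_out = r @ np.abs(W).T
    return c_out - r_out, c_out + r_out

def certify_pointnet(points, eps, W1, b1, W2, b2, true_class):
    """points: (N, 3); threat model: per-point L-infinity ball of radius eps."""
    l, u = points - eps, points + eps                  # input bounds, (N, 3)
    l, u = linear_bounds(l, u, W1, b1)                 # shared per-point linear layer
    l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)      # ReLU is monotone
    l, u = l.max(axis=0), u.max(axis=0)                # global max pooling over points
    l, u = linear_bounds(l, u, W2, b2)                 # classifier head -> logit bounds
    # Certified iff the true logit's lower bound beats every other logit's upper bound.
    others = np.delete(u, true_class)
    return l[true_class] > others.max()
```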