Isometric 3D Adversarial Examples in the Physical World
- URL: http://arxiv.org/abs/2210.15291v1
- Date: Thu, 27 Oct 2022 09:58:15 GMT
- Title: Isometric 3D Adversarial Examples in the Physical World
- Authors: Yibo Miao, Yinpeng Dong, Jun Zhu, Xiao-Shan Gao
- Abstract summary: 3D deep learning models are shown to be as vulnerable to adversarial examples as 2D models.
Existing attack methods are still far from stealthy and suffer from severe performance degradation in the physical world.
We propose a novel $\epsilon$-isometric ($\epsilon$-ISO) attack to generate natural and robust 3D adversarial examples.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D deep learning models are shown to be as vulnerable to adversarial examples
as 2D models. However, existing attack methods are still far from stealthy and
suffer from severe performance degradation in the physical world. Although 3D
data is highly structured, it is difficult to bound the perturbations with
simple metrics in the Euclidean space. In this paper, we propose a novel
$\epsilon$-isometric ($\epsilon$-ISO) attack to generate natural and robust 3D
adversarial examples in the physical world by considering the geometric
properties of 3D objects and the invariance to physical transformations. For
naturalness, we constrain the adversarial example to be $\epsilon$-isometric to
the original one by adopting the Gaussian curvature as a surrogate metric
guaranteed by a theoretical analysis. For invariance to physical
transformations, we propose a maxima over transformation (MaxOT) method that
actively searches for the most harmful transformations rather than random ones
to make the generated adversarial example more robust in the physical world.
Experiments on typical point cloud recognition models validate that our
approach significantly improves the attack success rate and naturalness of
the generated 3D adversarial examples over state-of-the-art attack methods.
Related papers
- Physically Compatible 3D Object Modeling from a Single Image [109.98124149566927]
We present a framework that transforms single images into 3D physical objects.
Our framework embeds physical compatibility into the reconstruction process.
It consistently enhances the physical realism of 3D models over existing methods.
arXiv Detail & Related papers (2024-05-30T21:59:29Z)
- Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We further show that benign resampling and benign rigid transformations can enhance physical adversarial strength with little sacrifice of imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z)
- 3D Adversarial Augmentations for Robust Out-of-Domain Predictions [115.74319739738571]
We focus on improving the generalization to out-of-domain data.
We learn a set of vectors that deform the objects in an adversarial fashion.
We perform adversarial augmentation by applying the learned sample-independent vectors to the available objects when training a model.
arXiv Detail & Related papers (2023-08-29T17:58:55Z)
- 3D shape reconstruction of semi-transparent worms [0.950214811819847]
3D shape reconstruction typically requires identifying object features or textures in multiple images of a subject.
Here we overcome these challenges by rendering a candidate shape with adaptive blurring and transparency for comparison with the images.
We model the slender Caenorhabditis elegans as a 3D curve using an intrinsic parametrisation that naturally admits biologically-informed constraints and regularisation.
arXiv Detail & Related papers (2023-04-28T13:29:36Z)
- Imperceptible and Robust Backdoor Attack in 3D Point Cloud [62.992167285646275]
We propose a novel imperceptible and robust backdoor attack (IRBA) to tackle this challenge.
We utilize a nonlinear and local transformation, called weighted local transformation (WLT), to construct poisoned samples with unique transformations.
Experiments on three benchmark datasets and four models show that IRBA achieves an attack success rate (ASR) above 80% in most cases, even with pre-processing techniques applied.
arXiv Detail & Related papers (2022-08-17T03:53:10Z)
- φ-SfT: Shape-from-Template with a Physics-Based Deformation Model [69.27632025495512]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera.
This paper proposes a new SfT approach explaining 2D observations through physical simulations.
arXiv Detail & Related papers (2022-03-22T17:59:57Z)
- 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share open source CrashD: a synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z)
- On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks [28.937800357992906]
We show that existing state-of-the-art deep 3D models are extremely vulnerable to isometry transformations.
We develop a black-box attack with success rate over 95% on ModelNet40 data set.
In contrast to previous works, our adversarial samples are experimentally shown to be strongly transferable.
arXiv Detail & Related papers (2020-02-27T16:11:22Z)
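The vulnerability reported above hinges on a simple geometric fact: an isometry preserves every pairwise distance in the cloud, so the shape itself is unchanged, yet model predictions can still flip. A minimal sketch, assuming a plain z-axis rotation as the isometry (the paper's actual black-box search over transformations is not reproduced here):

```python
import math
from itertools import combinations

def rotate_z(points, theta):
    """Rotate a point cloud about the z-axis; any rotation is an isometry."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

def pairwise_dists(points):
    """All pairwise Euclidean distances: the quantity an isometry preserves."""
    return [math.dist(p, q) for p, q in combinations(points, 2)]

cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.5), (0.2, -0.3, 1.0)]
rotated = rotate_z(cloud, math.pi / 3)

# Every pairwise distance is unchanged after rotation; the cited result is
# that such geometrically identical inputs can still fool 3D classifiers.
preserved = all(
    abs(a - b) < 1e-9
    for a, b in zip(pairwise_dists(cloud), pairwise_dists(rotated))
)
```

That such distance-preserving inputs succeed as attacks is what motivates the ε-isometric constraint of the main paper, which bounds how far an adversarial example may drift from a true isometry.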
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.