Passive Defense Against 3D Adversarial Point Clouds Through the Lens of
3D Steganalysis
- URL: http://arxiv.org/abs/2205.08738v1
- Date: Wed, 18 May 2022 06:19:15 GMT
- Title: Passive Defense Against 3D Adversarial Point Clouds Through the Lens of
3D Steganalysis
- Authors: Jiahao Zhu
- Abstract summary: A 3D adversarial point cloud detector is designed through the lens of 3D steganalysis.
To our knowledge, this work is the first to apply 3D steganalysis to 3D adversarial example defense.
- Score: 1.14219428942199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, 3D data plays an indelible role in the computer vision field.
However, extensive studies have proved that deep neural networks (DNNs) fed
with 3D data, such as point clouds, are susceptible to adversarial examples,
which aim to misguide DNNs and might bring immeasurable losses. Currently, 3D
adversarial point clouds are chiefly generated in three fashions, i.e., point
shifting, point adding, and point dropping. These point manipulations would
modify geometrical properties and local correlations of benign point clouds
more or less. Motivated by this basic fact, we propose to defend such
adversarial examples with the aid of 3D steganalysis techniques. Specifically,
we first introduce an adversarial attack and defense model adapted from the
celebrated Prisoners' Problem in steganography to help us comprehend 3D
adversarial attack and defense more generally. Then we rethink two significant
but vague concepts in the field of adversarial example, namely, active defense
and passive defense, from the perspective of steganalysis. Most importantly, we
design a 3D adversarial point cloud detector through the lens of 3D
steganalysis. Our detector is double-blind, that is to say, it does not rely on
the exact knowledge of the adversarial attack means and victim models. To
enable the detector to effectively detect malicious point clouds, we craft a
64-D discriminant feature set, including features related to first-order and
second-order local descriptions of point clouds. To our knowledge, this work is
the first to apply 3D steganalysis to 3D adversarial example defense. Extensive
experimental results demonstrate that the proposed 3D adversarial point cloud
detector can achieve good detection performance on multiple types of 3D
adversarial point clouds.
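The abstract's core idea, detecting adversarial point clouds via first-order and second-order local descriptors, can be sketched roughly as follows. The paper's exact 64-D feature set is not specified here, so the two per-point features below (mean k-NN distance as a first-order statistic, and the smallest eigenvalue of the neighborhood covariance as a second-order plane-fit residual) are illustrative assumptions, not the authors' actual features.

```python
import numpy as np

def local_features(points, k=8):
    """Illustrative first/second-order local descriptors for a point cloud.

    For each point: mean k-NN distance (first-order) and the smallest
    eigenvalue of the neighborhood covariance (second-order, a proxy for
    local surface roughness). The feature choices are hypothetical and
    stand in for the paper's unspecified 64-D discriminant set.
    """
    n = len(points)
    # Pairwise Euclidean distances (fine for small clouds; use a KD-tree at scale)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    feats = np.empty((n, 2))
    for i in range(n):
        idx = np.argsort(dists[i])[1:k + 1]       # k nearest neighbors (skip self)
        feats[i, 0] = dists[i, idx].mean()        # first-order: mean k-NN distance
        nbrs = points[idx] - points[idx].mean(0)  # centered neighborhood
        cov = nbrs.T @ nbrs / k
        feats[i, 1] = np.linalg.eigvalsh(cov)[0]  # second-order: plane-fit residual
    # Aggregate per-point descriptors into one global feature vector
    return np.concatenate([feats.mean(0), feats.std(0)])
```

Point shifting, adding, or dropping disturbs exactly these local statistics, so a classifier trained on such features can flag manipulated clouds without knowing the attack or the victim model (the "double-blind" property described above).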
Related papers
- Transferable 3D Adversarial Shape Completion using Diffusion Models [8.323647730916635]
3D point cloud feature learning has significantly improved the performance of 3D deep-learning models.
Existing attack methods primarily focus on white-box scenarios and struggle to transfer to recently proposed 3D deep-learning models.
In this paper, we generate high-quality adversarial point clouds using diffusion models.
Our proposed attacks outperform state-of-the-art adversarial attack methods against both black-box models and defenses.
arXiv Detail & Related papers (2024-07-14T04:51:32Z) - Hide in Thicket: Generating Imperceptible and Rational Adversarial
Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z) - Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks
for Defending Adversarial Examples [25.029854308139853]
Adversarial examples on 3D point clouds are more challenging to defend against than those on 2D images.
In this paper, we first establish a comprehensive, and rigorous point cloud adversarial robustness benchmark.
We then perform extensive and systematic experiments to identify an effective combination of these tricks.
We construct a more robust defense framework achieving an average accuracy of 83.45% against various attacks.
arXiv Detail & Related papers (2023-07-31T01:34:24Z) - PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models
Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
arXiv Detail & Related papers (2022-11-22T14:15:41Z) - PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D
Point Cloud Recognition [29.840946461846]
3D Point cloud is a critical data representation in many real-world applications like autonomous driving, robotics, and medical imaging.
Deep learning is notorious for its vulnerability to adversarial attacks.
We propose PointDP, a purification strategy that leverages diffusion models to defend against 3D adversarial attacks.
arXiv Detail & Related papers (2022-08-21T04:49:17Z) - Shape-invariant 3D Adversarial Point Clouds [111.72163188681807]
Adversarial strength and invisibility are two fundamental but conflicting characteristics of adversarial perturbations.
Previous adversarial attacks on 3D point cloud recognition have often been criticized for their noticeable point outliers.
We propose a novel Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility of point perturbations.
arXiv Detail & Related papers (2022-03-08T12:21:35Z) - 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D
Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share open source CrashD: a synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z) - Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds is still vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z) - IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function
based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z) - ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds [78.25501874120489]
We develop shape-aware adversarial 3D point cloud attacks by leveraging the learned latent space of a point cloud auto-encoder.
Different from prior works, the resulting adversarial 3D point clouds reflect the shape variations in the 3D point cloud space while still being close to the original one.
arXiv Detail & Related papers (2020-05-24T00:03:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.