PointBA: Towards Backdoor Attacks in 3D Point Cloud
- URL: http://arxiv.org/abs/2103.16074v1
- Date: Tue, 30 Mar 2021 04:49:25 GMT
- Title: PointBA: Towards Backdoor Attacks in 3D Point Cloud
- Authors: Xinke Li, Zhiru Chen, Yue Zhao, Zekun Tong, Yabang Zhao, Andrew Lim,
Joey Tianyi Zhou
- Abstract summary: We present backdoor attacks in 3D with a unified framework that exploits the unique properties of 3D data and networks.
Our proposed backdoor attack in 3D point clouds is expected to serve as a baseline for improving the robustness of 3D deep models.
- Score: 31.210502946247498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D deep learning has become increasingly popular for a variety of tasks,
including many safety-critical applications. However, several recent works have
raised security concerns about 3D deep nets. Although most of these works
consider adversarial attacks, we identify the backdoor attack as an even more
serious threat to 3D deep learning systems, one that remains unexplored. We
present backdoor attacks in 3D with a unified framework that exploits the unique
properties of 3D data and networks. In particular, we design two attack
approaches: the poison-label attack and the clean-label attack. The first is
straightforward and effective in practice, while the second is more
sophisticated, assuming that the training data is subject to inspection. The
attack algorithms are mainly motivated and developed by 1) the recent discovery
of 3D adversarial samples, which demonstrates the vulnerability of 3D deep nets
under spatial transformations; and 2) the proposed feature disentanglement
technique, which manipulates the features of the data through optimization and
offers the potential to embed a new task. Extensive experiments show the
efficacy of the poison-label attack, with an over 95% success rate across
several 3D datasets and models, and the ability of the clean-label attack to
evade data filtering, with an around 50% success rate. Our proposed backdoor
attack in 3D point clouds is expected to serve as a baseline for improving the
robustness of 3D deep models.
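To make the poison-label idea concrete, the sketch below shows one way such an attack could be set up: a rigid spatial transformation (here a z-axis rotation, consistent with the abstract's point about vulnerability under spatial transformations) is stamped onto a small fraction of training samples, which are then relabeled with the attacker's target class. The trigger choice, poison rate, and function names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def apply_rotation_trigger(points, angle_deg=30.0):
    """Rotate an (N, 3) point cloud about the z-axis; a rigid spatial
    transformation used here as a hypothetical backdoor trigger."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([
        [np.cos(theta), -np.sin(theta), 0.0],
        [np.sin(theta),  np.cos(theta), 0.0],
        [0.0,            0.0,           1.0],
    ])
    return points @ rot.T

def poison_dataset(clouds, labels, target_label, poison_rate=0.05, seed=0):
    """Poison-label attack sketch: apply the trigger to a small random
    fraction of training samples and relabel them with the target class.
    A model trained on this set may learn to associate the trigger with
    the target label while behaving normally on clean inputs."""
    rng = np.random.default_rng(seed)
    clouds, labels = clouds.copy(), labels.copy()
    n_poison = int(len(clouds) * poison_rate)
    idx = rng.choice(len(clouds), size=n_poison, replace=False)
    for i in idx:
        clouds[i] = apply_rotation_trigger(clouds[i])
        labels[i] = target_label
    return clouds, labels
```

Because the trigger is a rotation, each poisoned cloud keeps its exact geometry (all pairwise distances and point norms are preserved), which is what makes such triggers hard to spot by simple statistical inspection.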
Related papers
- Poison-splat: Computation Cost Attack on 3D Gaussian Splatting [90.88713193520917]
We reveal a significant security vulnerability that has been largely overlooked in 3DGS.
The adversary can poison the input images to drastically increase the memory and time needed for 3DGS training.
Such a computation cost attack is achieved by addressing a bi-level optimization problem.
arXiv Detail & Related papers (2024-10-10T17:57:29Z)
- Toward Availability Attacks in 3D Point Clouds [28.496421433836908]
We show that directly extending 2D availability attacks to 3D point clouds under distance regularization is susceptible to degeneracy.
We propose a novel Feature Collision Error-Minimization (FC-EM) method, which creates additional shortcuts in the feature space.
Experiments on typical point cloud datasets, 3D intracranial aneurysm medical dataset, and 3D face dataset verify the superiority and practicality of our approach.
arXiv Detail & Related papers (2024-06-26T08:13:30Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- 3DHacker: Spectrum-based Decision Boundary Generation for Hard-label 3D Point Cloud Attack [64.83391236611409]
We propose a novel 3D attack method to generate adversarial samples solely with the knowledge of class labels.
Even in the challenging hard-label setting, 3DHacker still competitively outperforms existing 3D attacks regarding the attack performance as well as adversary quality.
arXiv Detail & Related papers (2023-08-15T03:29:31Z)
- 3D-IDS: Doubly Disentangled Dynamic Intrusion Detection [24.293504468229678]
A network-based intrusion detection system (NIDS) monitors network traffic for malicious activity.
Existing methods perform inconsistently in declaring various unknown attacks or detecting diverse known attacks.
We propose 3D-IDS, a novel method that aims to tackle the above issues through two-step feature disentanglements and a dynamic graph diffusion scheme.
arXiv Detail & Related papers (2023-07-02T00:26:26Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D Point Cloud Recognition [29.840946461846]
3D point clouds are a critical data representation in many real-world applications such as autonomous driving, robotics, and medical imaging.
Deep learning is notorious for its vulnerability to adversarial attacks.
We propose PointDP, a purification strategy that leverages diffusion models to defend against 3D adversarial attacks.
arXiv Detail & Related papers (2022-08-21T04:49:17Z)
- Passive Defense Against 3D Adversarial Point Clouds Through the Lens of 3D Steganalysis [1.14219428942199]
A 3D adversarial point cloud detector is designed through the lens of 3D steganalysis.
To our knowledge, this work is the first to apply 3D steganalysis to 3D adversarial example defense.
arXiv Detail & Related papers (2022-05-18T06:19:15Z)
- Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds remains vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.