Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object
Detection Models
- URL: http://arxiv.org/abs/2101.10747v2
- Date: Sun, 31 Jan 2021 18:40:27 GMT
- Title: Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object
Detection Models
- Authors: Mazen Abdelfattah, Kaiwen Yuan, Z. Jane Wang, Rabab Ward
- Abstract summary: We propose a universal and physically realizable adversarial attack on a cascaded multi-modal deep neural network (DNN).
We show that the proposed universal multi-modal attack was successful in reducing the model's ability to detect a car by nearly 73%.
- Score: 16.7400223249581
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a universal and physically realizable adversarial attack on a
cascaded multi-modal deep learning network (DNN), in the context of
self-driving cars. DNNs have achieved high performance in 3D object detection,
but they are known to be vulnerable to adversarial attacks. These attacks have
been heavily investigated in the RGB image domain and more recently in the
point cloud domain, but rarely in both domains simultaneously - a gap to be
filled in this paper. We use a single 3D mesh and differentiable rendering to
explore how perturbing the mesh's geometry and texture can reduce the
robustness of DNNs to adversarial attacks. We attack a prominent cascaded
multi-modal DNN, the Frustum-Pointnet model. Using the popular KITTI benchmark,
we showed that the proposed universal multi-modal attack was successful in
reducing the model's ability to detect a car by nearly 73%. This work can aid
in the understanding of what the cascaded RGB-point cloud DNN learns and its
vulnerability to adversarial attacks.
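To make the recipe above concrete, the following is a minimal sketch of such a universal geometry-and-texture attack trained through a differentiable renderer. It is not the authors' implementation: `render_scene` (a differentiable renderer/LiDAR simulator that places the perturbed mesh into a scene) and `frustum_pointnet` (a pretrained cascaded detector returning car confidences) are hypothetical placeholders, and the loss is a generic confidence-suppression objective.

```python
# Minimal sketch of a universal adversarial-mesh attack on a cascaded
# camera-LiDAR detector. NOT the authors' code: `render_scene` and
# `frustum_pointnet` are hypothetical placeholders for a differentiable
# renderer / LiDAR simulator and a pretrained detector, respectively.
import torch


def universal_mesh_attack(scenes, mesh_verts, mesh_tex, render_scene,
                          frustum_pointnet, steps=100, lr=1e-2, eps=0.1):
    """Learn one shared geometry + texture perturbation that suppresses
    car detections across all training scenes."""
    d_verts = torch.zeros_like(mesh_verts, requires_grad=True)  # vertex offsets
    d_tex = torch.zeros_like(mesh_tex, requires_grad=True)      # texture offsets
    opt = torch.optim.Adam([d_verts, d_tex], lr=lr)

    for _ in range(steps):
        for scene in scenes:  # the same perturbation is reused in every scene
            # Differentiable rendering: the perturbed mesh is placed on a car
            # and projected into the RGB image and the LiDAR point cloud.
            image, points = render_scene(scene, mesh_verts + d_verts,
                                         mesh_tex + d_tex)
            scores = frustum_pointnet(image, points)  # "car" confidences
            loss = scores.sum()                       # minimize -> hide the car
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                # Keep the attack physically plausible: bounded geometry change
                # and a printable texture in [0, 1].
                d_verts.clamp_(-eps, eps)
                d_tex.copy_((mesh_tex + d_tex).clamp(0, 1) - mesh_tex)
    return d_verts.detach(), d_tex.detach()
```

The two points the sketch tries to capture are that a single perturbation is shared across all scenes (universality) and that gradients flow through both the rendered image and the point cloud, so the two modalities are perturbed consistently.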
Related papers
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
The Adversarial Converging Time Score (ACTS) measures the time an adversarial attack takes to converge and uses it as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization [57.87950229651958]
Quantized neural networks (QNNs) have received increasing attention in resource-constrained scenarios due to their exceptional generalizability.
Previous studies claim that transferability is difficult to achieve across QNNs with different bitwidths.
We propose Quantization Aware Attack (QAA), which fine-tunes a QNN substitute model with a multiple-bitwidth training objective.
arXiv Detail & Related papers (2023-05-10T03:46:53Z)
- Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks [0.5849513679510833]
A disadvantage of Deep Neural Networks (DNNs) is their vulnerability to adversarial attacks, as they can be fooled by adding slight perturbations to the inputs.
This paper reports the results of devising a tiny DNN model, robust to black-box and white-box adversarial attacks, trained with an automatic quantization-aware training framework.
arXiv Detail & Related papers (2023-04-25T13:56:35Z)
- Mind Your Heart: Stealthy Backdoor Attack on Dynamic Deep Neural Network in Edge Computing [8.69143545268788]
We propose a novel backdoor attack specifically targeting dynamic multi-exit DNN models.
Our backdoor is stealthy enough to evade multiple state-of-the-art backdoor detection and removal methods.
arXiv Detail & Related papers (2022-12-22T14:43:48Z)
- General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments [75.58342268895564]
We use Deep Generative Networks (DGNs) with a novel training mechanism to eliminate the distribution gap.
The trained DGNs align the distribution of adversarial samples with clean ones for the target DNNs by translating pixel values.
Our strategy demonstrates its unique effectiveness and generality against black-box attacks.
arXiv Detail & Related papers (2022-12-11T01:51:31Z)
- PointBA: Towards Backdoor Attacks in 3D Point Cloud [31.210502946247498]
We present backdoor attacks in 3D with a unified framework that exploits the unique properties of 3D data and networks.
Our proposed backdoor attack on 3D point clouds is expected to serve as a baseline for improving the robustness of 3D deep models.
arXiv Detail & Related papers (2021-03-30T04:49:25Z)
- Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection [15.323682536206574]
Most autonomous vehicles rely on LiDAR and RGB camera sensors for perception.
Deep neural nets (DNNs) have achieved state-of-the-art performance in 3D detection.
We propose a universal and physically realizable adversarial attack for each type of camera-LiDAR model, and study and contrast the models' respective vulnerabilities.
arXiv Detail & Related papers (2021-03-17T05:24:48Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints (a toy sketch of this optimization appears after this list).
Our results show that IF-Defense achieves state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative, image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
- A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference [6.320009081099895]
A slowdown attack reduces the efficacy of multi-exit DNNs by 90-100%, and it amplifies the latency by 1.5-5× in a typical IoT deployment.
We show that it is possible to craft universal, reusable perturbations and that the attack can be effective in realistic black-box scenarios.
arXiv Detail & Related papers (2020-10-06T02:06:52Z)
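As noted in the IF-Defense entry above, the core of that defense can be illustrated as a small optimization over the input point coordinates. This is a toy sketch under stated assumptions, not the official IF-Defense code: `geometry_loss` and `distribution_loss` are hypothetical stand-ins for the geometry-aware and distribution-aware constraints described in that paper.

```python
# Toy sketch of optimization-based point-cloud restoration in the spirit of the
# IF-Defense entry above (not the official implementation). The two loss
# callables are hypothetical stand-ins for the geometry-aware and
# distribution-aware constraints.
import torch


def restore_point_cloud(points, geometry_loss, distribution_loss,
                        steps=200, lr=1e-2, lam=1.0):
    """points: (N, 3) possibly-attacked point cloud; returns restored points."""
    restored = points.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([restored], lr=lr)
    for _ in range(steps):
        # Pull points back toward a clean surface (geometry term) while keeping
        # them evenly spread over that surface (distribution term).
        loss = geometry_loss(restored) + lam * distribution_loss(restored)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return restored.detach()  # feed the restored cloud to the 3D classifier
```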