ScAR: Scaling Adversarial Robustness for LiDAR Object Detection
- URL: http://arxiv.org/abs/2312.03085v2
- Date: Mon, 4 Mar 2024 19:20:24 GMT
- Title: ScAR: Scaling Adversarial Robustness for LiDAR Object Detection
- Authors: Xiaohu Lu and Hayder Radha
- Abstract summary: Adversarial robustness of a model is its ability to resist adversarial attacks.
We present a black-box scaling adversarial attack method for LiDAR object detection.
- Score: 6.472434306724611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adversarial robustness of a model is its ability to resist adversarial
attacks in the form of small perturbations to input data. Universal adversarial
attack methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient
Descent (PGD) are popular for LiDAR object detection, but they are often
deficient compared to task-specific adversarial attacks. Additionally, these
universal methods typically require unrestricted access to the model's
information, which is difficult to obtain in real-world applications. To
address these limitations, we present a black-box Scaling Adversarial
Robustness (ScAR) method for LiDAR object detection. By analyzing the
statistical characteristics of 3D object detection datasets such as KITTI,
Waymo, and nuScenes, we have found that the model's prediction is sensitive to
scaling of 3D instances. We propose three black-box scaling adversarial attack
methods based on the available information: model-aware attack,
distribution-aware attack, and blind attack. We also introduce a strategy for
generating scaling adversarial examples to improve the model's robustness
against these three scaling adversarial attacks. Comparison with other methods
on public datasets under different 3D object detection architectures
demonstrates the effectiveness of our proposed method. Our code is available at
https://github.com/xiaohulugo/ScAR-IROS2023.
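The attack itself is conceptually simple: the points belonging to each 3D instance are rescaled about the instance's box center, shifting the object's apparent size toward values the detector handles poorly. The sketch below is an illustrative approximation under our own assumptions, not the authors' released implementation; the function names (scale_instance, blind_attack), the axis-aligned box treatment (yaw ignored), and the ±20% factor range are hypothetical choices for exposition.

```python
# Minimal sketch of a per-instance scaling attack on a LiDAR point cloud.
# Assumptions (not from the paper's code): axis-aligned boxes (yaw ignored),
# boxes given as (cx, cy, cz, l, w, h), and an example +/-20% scale range.
import numpy as np

def scale_instance(points: np.ndarray, box: np.ndarray, factor: float) -> np.ndarray:
    """Rescale the points inside `box` about the box center by `factor`.

    points: (N, 3) array of xyz coordinates; box: (cx, cy, cz, l, w, h).
    """
    center, half = box[:3], box[3:6] / 2.0
    inside = np.all(np.abs(points - center) <= half, axis=1)
    scaled = points.copy()
    scaled[inside] = center + (points[inside] - center) * factor
    return scaled

def blind_attack(points, boxes, rng=np.random.default_rng(0), max_shift=0.2):
    """'Blind' variant: with no access to the model or dataset statistics,
    draw a random scale factor around 1.0 for every instance."""
    for box in boxes:
        points = scale_instance(points, box, 1.0 + rng.uniform(-max_shift, max_shift))
    return points
```

In the same spirit, a distribution-aware attack would choose factors that push instance sizes toward low-density regions of the dataset's size statistics, and a model-aware attack would search for the factor that most degrades the detector's predictions; reusing the same scaling operation as a training-time augmentation is what yields the defense described in the abstract.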
Related papers
- Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector [97.92369017531038]
We build a new laRge-scale Adversarial images dataset with Diverse hArmful Responses (RADAR).
We then develop a novel iN-time Embedding-based AdveRSarial Image DEtection (NEARSIDE) method, which exploits a single vector distilled from the hidden states of Visual Language Models (VLMs) to detect adversarial images among benign ones in the input.
arXiv Detail & Related papers (2024-10-30T10:33:10Z)
- Approaching Outside: Scaling Unsupervised 3D Object Detection from 2D Scene [22.297964850282177]
We propose LiDAR-2D Self-paced Learning (LiSe) for unsupervised 3D detection.
RGB images serve as a valuable complement to LiDAR data, offering precise 2D localization cues.
Our framework devises a self-paced learning pipeline that incorporates adaptive sampling and weak model aggregation strategies.
arXiv Detail & Related papers (2024-07-11T14:58:49Z)
- DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- 3D Adversarial Augmentations for Robust Out-of-Domain Predictions [115.74319739738571]
We focus on improving the generalization to out-of-domain data.
We learn a set of vectors that deform the objects in an adversarial fashion.
We perform adversarial augmentation by applying the learned sample-independent vectors to the available objects when training a model.
arXiv Detail & Related papers (2023-08-29T17:58:55Z)
- A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots [14.998272283348152]
It is well known that an adversary can leverage a target ML model's output to steal the model's information.
We propose a new side channel for model information stealing attacks, i.e., models' scientific plots.
arXiv Detail & Related papers (2023-02-23T12:57:34Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- RamBoAttack: A Robust Query Efficient Deep Neural Network Decision Exploit [9.93052896330371]
We develop a robust query efficient attack capable of avoiding entrapment in a local minimum and misdirection from noisy gradients.
The RamBoAttack is more robust to the different sample inputs available to an adversary and the targeted class.
arXiv Detail & Related papers (2021-12-10T01:25:24Z)
- SESS: Self-Ensembling Semi-Supervised 3D Object Detection [138.80825169240302]
We propose SESS, a self-ensembling semi-supervised 3D object detection framework.
Specifically, we design a thorough perturbation scheme to enhance generalization of the network on unlabeled and new unseen data.
Our SESS achieves competitive performance compared to the state-of-the-art fully-supervised method by using only 50% labeled data.
arXiv Detail & Related papers (2019-12-26T08:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.