Exploring the Physical World Adversarial Robustness of Vehicle Detection
- URL: http://arxiv.org/abs/2308.03476v1
- Date: Mon, 7 Aug 2023 11:09:12 GMT
- Title: Exploring the Physical World Adversarial Robustness of Vehicle Detection
- Authors: Wei Jiang, Tianyuan Zhang, Shuangcheng Liu, Weiyu Ji, Zichao Zhang,
Gang Xiao
- Abstract summary: Adversarial attacks can compromise the robustness of real-world detection models.
We propose an innovative instant-level data generation pipeline using the CARLA simulator.
Our findings highlight diverse model performances under adversarial conditions.
- Score: 13.588120545886229
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks can compromise the robustness of real-world detection
models. However, evaluating these models under real-world conditions poses
challenges due to resource-intensive experiments. Virtual simulations offer an
alternative, but the absence of standardized benchmarks hampers progress.
Addressing this, we propose an innovative instant-level data generation
pipeline using the CARLA simulator. Through this pipeline, we establish the
Discrete and Continuous Instant-level (DCI) dataset, enabling comprehensive
experiments involving three detection models and three physical adversarial
attacks. Our findings highlight diverse model performances under adversarial
conditions. Yolo v6 demonstrates remarkable resilience, experiencing just a
marginal 6.59% average drop in average precision (AP). In contrast, the ASA
attack yields a substantial 14.51% average AP reduction, twice the effect of
other algorithms. We also note that static scenes yield higher recognition AP
values, and outcomes remain relatively consistent across varying weather
conditions. Intriguingly, our study suggests that advancements in adversarial
attack algorithms may be approaching their "limit". In summary, our work
underscores the significance of adversarial attacks in real-world contexts and
introduces the DCI dataset as a versatile benchmark. Our findings provide
valuable insights for enhancing the robustness of detection models and offer
guidance for future research endeavors in the realm of adversarial attacks.
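The abstract does not describe the pipeline's implementation, but a minimal sketch of an instant-level capture loop written against the CARLA Python API gives a sense of how such data can be generated. This is an illustrative assumption, not the authors' released code: the host/port, camera placement, resolution, frame count, and the dci_sketch/ output directory are invented for the example.

```python
# Minimal sketch (assumed, not the authors' code): capture per-instant vehicle
# images in CARLA for a robustness benchmark. Requires a CARLA server running
# on localhost:2000 with a town already loaded.
import queue
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Synchronous mode with a fixed time step, so each captured frame corresponds
# to exactly one simulation "instant".
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)

blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# RGB camera behind and above the target vehicle; a stand-in for whatever
# viewpoints the actual DCI pipeline samples.
camera_bp = blueprints.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "1280")
camera_bp.set_attribute("image_size_y", "720")
camera_tf = carla.Transform(carla.Location(x=-6.0, z=3.0), carla.Rotation(pitch=-15.0))
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)

frames = queue.Queue()
camera.listen(frames.put)

# Autopilot drives the vehicle through the scene; the traffic manager must
# also run synchronously so ticks stay aligned with captured frames.
traffic_manager = client.get_trafficmanager()
traffic_manager.set_synchronous_mode(True)
vehicle.set_autopilot(True, traffic_manager.get_port())

try:
    for i in range(100):  # 100 discrete instants
        world.tick()
        image = frames.get(timeout=2.0)
        image.save_to_disk(f"dci_sketch/frame_{i:04d}.png")
finally:
    camera.stop()
    camera.destroy()
    vehicle.destroy()
```

Frames captured this way would then be run through the detectors (e.g., Yolo v6) with and without a physical adversarial texture applied to the vehicle, and the AP drop computed between the clean and attacked runs.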
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Improving Adversarial Robustness for 3D Point Cloud Recognition at Test-Time through Purified Self-Training [9.072521170921712]
3D point cloud deep learning models are vulnerable to adversarial attacks.
Adversarial purification employs a generative model to mitigate the impact of adversarial attacks.
We propose a test-time purified self-training strategy to achieve this objective.
arXiv Detail & Related papers (2024-09-23T11:46:38Z)
- Benchmarking and Improving Bird's Eye View Perception Robustness in Autonomous Driving [55.93813178692077]
We present RoboBEV, an extensive benchmark suite designed to evaluate the resilience of BEV algorithms.
We assess 33 state-of-the-art BEV-based perception models spanning tasks like detection, map segmentation, depth estimation, and occupancy prediction.
Our experimental results also underline the efficacy of strategies like pre-training and depth-free BEV transformations in enhancing robustness against out-of-distribution data.
arXiv Detail & Related papers (2024-05-27T17:59:39Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- LEAT: Towards Robust Deepfake Disruption in Real-World Scenarios via Latent Ensemble Attack [11.764601181046496]
Deepfakes, malicious visual contents created by generative models, pose an increasingly harmful threat to society.
To proactively mitigate deepfake damages, recent studies have employed adversarial perturbation to disrupt deepfake model outputs.
We propose a simple yet effective disruption method called Latent Ensemble ATtack (LEAT), which attacks the independent latent encoding process.
arXiv Detail & Related papers (2023-07-04T07:00:37Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Benchmarking the Physical-world Adversarial Robustness of Vehicle Detection [14.202833467294765]
Adversarial attacks in the physical world can harm the robustness of detection models.
Yolo v6 had the strongest resistance, with only a 6.59% average AP drop, and ASA was the most effective attack algorithm with a 14.51% average AP reduction.
arXiv Detail & Related papers (2023-04-11T09:48:25Z)
- Consistent Valid Physically-Realizable Adversarial Attack against Crowd-flow Prediction Models [4.286570387250455]
Deep learning (DL) models can effectively learn city-wide crowd-flow patterns.
However, DL models are known to perform poorly under inconspicuous adversarial perturbations.
arXiv Detail & Related papers (2023-03-05T13:30:25Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- From Sound Representation to Model Robustness [82.21746840893658]
We investigate the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
Averaged over various experiments on three environmental sound datasets, we found the ResNet-18 model outperforms other deep learning architectures.
arXiv Detail & Related papers (2020-07-27T17:30:49Z)