A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks
- URL: http://arxiv.org/abs/2310.00633v1
- Date: Sun, 1 Oct 2023 10:16:33 GMT
- Title: A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks
- Authors: Yanjie Li, Bin Xie, Songtao Guo, Yuanyuan Yang, Bin Xiao
- Abstract summary: Deep learning models are not trustworthy enough because of their limited robustness against adversarial attacks.
We first construct a general threat model from different perspectives and then comprehensively review the latest progress of both 2D and 3D adversarial attacks.
We are the first to systematically investigate adversarial attacks for 3D models, a flourishing field applied to many real-world applications.
- Score: 22.054275309336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Benefiting from the rapid development of deep learning, 2D and 3D computer
vision applications are deployed in many safety-critical systems, such as
autopilot and identity authentication. However, deep learning models are not
trustworthy enough because of their limited robustness against adversarial
attacks. Physically realizable adversarial attacks further pose fatal
threats to applications and human safety. Many papers have emerged to
investigate the robustness and safety of deep learning models against
adversarial attacks. To lead to trustworthy AI, we first construct a general
threat model from different perspectives and then comprehensively review the
latest progress of both 2D and 3D adversarial attacks. We extend the concept of
adversarial examples beyond imperceptible perturbations and collate over 170
papers to give an overview of deep learning model robustness against various
adversarial attacks. To the best of our knowledge, we are the first to
systematically investigate adversarial attacks for 3D models, a flourishing
field applied to many real-world applications. In addition, we examine physical
adversarial attacks that lead to safety violations. Last but not least, we
summarize currently popular topics, give insights into open challenges, and shed light
on future research on trustworthy AI.
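As a concrete illustration of the imperceptible-perturbation threat model discussed in the abstract, the minimal sketch below shows the classic one-step FGSM attack under an l_inf budget. It is only an illustrative example: the model, the inputs, and the 8/255 budget are assumptions, not details taken from the survey.

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: an l_inf-bounded, near-imperceptible perturbation.

    `model` (an image classifier), `x` (images in [0, 1]), `y` (labels), and
    the 8/255 budget are illustrative assumptions, not taken from the survey.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)      # loss the attacker wants to increase
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()      # step in the gradient-sign direction
    return x_adv.clamp(0.0, 1.0).detach()    # keep the result a valid image
```

The physical adversarial attacks examined in the survey drop this small-budget constraint and instead optimize patches or textures that remain effective under real-world viewing conditions.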
Related papers
- Safety at Scale: A Comprehensive Survey of Large Model Safety [299.801463557549]
We present a comprehensive taxonomy of safety threats to large models, including adversarial attacks, data poisoning, backdoor attacks, jailbreak and prompt injection attacks, energy-latency attacks, data and model extraction attacks, and emerging agent-specific threats.
We identify and discuss the open challenges in large model safety, emphasizing the need for comprehensive safety evaluations, scalable and effective defense mechanisms, and sustainable data practices.
arXiv Detail & Related papers (2025-02-02T05:14:22Z)
- Deep Learning Model Security: Threats and Defenses [25.074630770554105]
Deep learning has transformed AI applications but faces critical security challenges.
This survey examines these vulnerabilities, detailing their mechanisms and impact on model integrity and confidentiality.
The survey concludes with future directions, emphasizing automated defenses, zero-trust architectures, and the security challenges of large AI models.
arXiv Detail & Related papers (2024-12-12T06:04:20Z)
- Adversarial Attacks of Vision Tasks in the Past 10 Years: A Survey [21.4046846701173]
Adversarial attacks pose significant security threats during machine learning inference.
Existing reviews often focus on attack classifications and lack comprehensive, in-depth analysis.
This article addresses these gaps by offering a thorough summary of both traditional and large vision-language model (LVLM) adversarial attacks.
arXiv Detail & Related papers (2024-10-31T07:22:51Z)
- Taking off the Rose-Tinted Glasses: A Critical Look at Adversarial ML Through the Lens of Evasion Attacks [11.830908033835728]
We argue that overly permissive attack and overly restrictive defensive threat models have hampered defense development in the ML domain.
We analyze adversarial machine learning from a system security perspective rather than an AI perspective.
arXiv Detail & Related papers (2024-10-15T21:33:23Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey [28.21038594191455]
Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks.
This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes adversarial example generation methods.
It also provides an overview of defense strategies, organized into data-focused and model-focused methods.
arXiv Detail & Related papers (2023-07-01T11:46:36Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles [21.894836150974093]
In recent years, many deep learning models have been adopted in autonomous driving.
Recent studies have demonstrated that adversarial attacks can cause a significant decline in detection precision of deep learning-based 3D object detection models.
arXiv Detail & Related papers (2021-08-06T04:52:09Z)
- Perceptual Adversarial Robustness: Defense Against Unseen Threat Models [58.47179090632039]
A key challenge in adversarial robustness is the lack of a precise mathematical characterization of human perception.
Under the neural perceptual threat model, we develop novel perceptual adversarial attacks and defenses.
Because the NPTM is very broad, we find that Perceptual Adversarial Training (PAT) against a perceptual attack gives robustness against many other types of adversarial attacks (a rough illustrative sketch follows below).
arXiv Detail & Related papers (2020-06-22T22:40:46Z)
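To make the neural perceptual threat model above more concrete, here is a rough, non-authoritative sketch of a perceptually constrained attack: PGD-style steps that penalize a feature-space distance instead of enforcing an l_p ball. The ResNet-18 feature distance, the penalty weight, and all hyperparameters are stand-in assumptions; the paper itself uses a learned LPIPS-style perceptual metric.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Frozen feature extractor used as a stand-in "perceptual" distance.
# (The NPTM paper uses an LPIPS-style learned metric; this is only an assumption.)
_backbone = resnet18(weights=None)
_backbone.fc = torch.nn.Identity()
_backbone.eval()


def perceptual_distance(x, y):
    """Feature-space MSE between two image batches (stand-in perceptual metric)."""
    return F.mse_loss(_backbone(x), _backbone(y))


def perceptual_pgd(model, x, y, bound=0.05, step=0.01, iters=10, penalty=10.0):
    """PGD-style attack penalizing perceptual distance rather than an l_p norm.

    `bound`, `step`, `iters`, and `penalty` are illustrative choices only.
    """
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        # Maximize classification loss while softly keeping the perceptual
        # distance to the clean image below `bound`.
        loss = F.cross_entropy(model(x_adv), y) \
            - penalty * torch.relu(perceptual_distance(x_adv, x) - bound)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + step * grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv
```

In PAT, examples of this kind would be fed back into training, which is the mechanism behind the robustness transfer described in the entry above.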