Towards more Practical Threat Models in Artificial Intelligence Security
- URL: http://arxiv.org/abs/2311.09994v2
- Date: Tue, 26 Mar 2024 13:06:28 GMT
- Title: Towards more Practical Threat Models in Artificial Intelligence Security
- Authors: Kathrin Grosse, Lukas Bieringer, Tarek Richard Besold, Alexandre Alahi
- Abstract summary: Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
- Score: 66.67624011455423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have identified a gap between research and practice in artificial intelligence security: threats studied in academia do not always reflect the practical use and security risks of AI. For example, while models are often studied in isolation, they form part of larger ML pipelines in practice. Recent works also brought forward that adversarial manipulations introduced by academic attacks are impractical. We take a first step towards describing the full extent of this disparity. To this end, we revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice via a survey with 271 industrial practitioners. On the one hand, we find that all existing threat models are indeed applicable. On the other hand, there are significant mismatches: research is often too generous with the attacker, assuming access to information not frequently available in real-world settings. Our paper is thus a call for action to study more practical threat models in artificial intelligence security.
Related papers
- Safety at Scale: A Comprehensive Survey of Large Model Safety [299.801463557549]
We present a comprehensive taxonomy of safety threats to large models, including adversarial attacks, data poisoning, backdoor attacks, jailbreak and prompt injection attacks, energy-latency attacks, data and model extraction attacks, and emerging agent-specific threats.
We identify and discuss the open challenges in large model safety, emphasizing the need for comprehensive safety evaluations, scalable and effective defense mechanisms, and sustainable data practices.
arXiv Detail & Related papers (2025-02-02T05:14:22Z) - A Comprehensive Review of Adversarial Attacks on Machine Learning [0.5104264623877593]
This research provides a comprehensive overview of adversarial attacks on AI and ML models, exploring various attack types, techniques, and their potential harms.
To gain practical insights, we employ the Adversarial Robustness Toolbox (ART) library to simulate these attacks on real-world use cases, such as self-driving cars.
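A minimal sketch of the kind of ART-based attack simulation that paper describes is given below. It is hypothetical: it assumes a toy, untrained PyTorch image classifier and the Fast Gradient Sign Method, and the model, data, and epsilon are placeholders rather than the cited paper's actual setup.

    # Hypothetical sketch: crafting an FGSM evasion attack with the
    # Adversarial Robustness Toolbox (ART) against a toy PyTorch classifier.
    # Model, data, and eps are placeholders, not the cited paper's setup.
    import numpy as np
    import torch
    import torch.nn as nn

    from art.estimators.classification import PyTorchClassifier
    from art.attacks.evasion import FastGradientMethod

    # Toy classifier over 28x28 grayscale images (placeholder model).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    loss = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Wrap the model so ART can query predictions and gradients.
    classifier = PyTorchClassifier(
        model=model,
        loss=loss,
        optimizer=optimizer,
        input_shape=(1, 28, 28),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

    # Placeholder inputs; in practice these would be real test images.
    x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)

    # Fast Gradient Sign Method: one gradient step bounded by eps.
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_adv = attack.generate(x=x_test)

    # Compare predictions on clean vs. adversarial inputs.
    clean_preds = classifier.predict(x_test).argmax(axis=1)
    adv_preds = classifier.predict(x_adv).argmax(axis=1)
    print("changed predictions:", int((clean_preds != adv_preds).sum()))

Swapping FastGradientMethod for another ART attack class (for example, ProjectedGradientDescent) changes only the attack construction; the classifier wrapper and the generate() call stay the same.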
arXiv Detail & Related papers (2024-12-16T02:27:54Z) - Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of autonomous vehicles (AVs).
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks [22.054275309336]
Deep learning models are not trustworthy enough because of their limited robustness against adversarial attacks.
We first construct a general threat model from different perspectives and then comprehensively review the latest progress of both 2D and 3D adversarial attacks.
We are the first to systematically investigate adversarial attacks on 3D models, a flourishing field with many real-world applications.
arXiv Detail & Related papers (2023-10-01T10:16:33Z) - "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice [10.814642396601139]
Motivated by the apparent gap between researchers and practitioners, this paper aims to bridge the two domains.
We first present three real-world case studies from which we can glean practical insights unknown or neglected in research.
Next, we analyze all adversarial ML papers recently published in top security conferences, highlighting positive trends and blind spots.
arXiv Detail & Related papers (2022-12-29T14:11:07Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Security and Privacy for Artificial Intelligence: Opportunities and Challenges [11.368470074697747]
In recent years, most AI models have been shown to be vulnerable to advanced and sophisticated hacking techniques.
This challenge has motivated concerted research efforts into adversarial AI.
We present a holistic cyber security review that demonstrates adversarial attacks against AI applications.
arXiv Detail & Related papers (2021-02-09T06:06:13Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)