"Real Attackers Don't Compute Gradients": Bridging the Gap Between
Adversarial ML Research and Practice
- URL: http://arxiv.org/abs/2212.14315v1
- Date: Thu, 29 Dec 2022 14:11:07 GMT
- Title: "Real Attackers Don't Compute Gradients": Bridging the Gap Between
Adversarial ML Research and Practice
- Authors: Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, David Freeman,
Fabio Pierazzi, Kevin A. Roundy
- Abstract summary: Motivated by the apparent gap between researchers and practitioners, this paper aims to bridge the two domains.
We first present three real-world case studies from which we can glean practical insights unknown or neglected in research.
Next we analyze all adversarial ML papers recently published in top security conferences, highlighting positive trends and blind spots.
- Score: 10.814642396601139
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent years have seen a proliferation of research on adversarial machine
learning. Numerous papers demonstrate powerful algorithmic attacks against a
wide variety of machine learning (ML) models, and numerous other papers propose
defenses that can withstand most attacks. However, abundant real-world evidence
suggests that actual attackers use simple tactics to subvert ML-driven systems,
and as a result security practitioners have not prioritized adversarial ML
defenses.
Motivated by the apparent gap between researchers and practitioners, this
position paper aims to bridge the two domains. We first present three
real-world case studies from which we can glean practical insights unknown or
neglected in research. Next we analyze all adversarial ML papers recently
published in top security conferences, highlighting positive trends and blind
spots. Finally, we state positions on precise and cost-driven threat modeling,
collaboration between industry and academia, and reproducible research. We
believe that our positions, if adopted, will increase the real-world impact of
future endeavours in adversarial ML, bringing both researchers and
practitioners closer to their shared goal of improving the security of ML
systems.
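For context on the title: "computing gradients" refers to white-box, gradient-based evasion attacks such as the Fast Gradient Sign Method (FGSM), the kind of algorithmic attack the abstract contrasts with the simple tactics real attackers reportedly use. The sketch below is a minimal, hypothetical PyTorch example, not taken from the paper; the model, data, and epsilon are assumptions for illustration only.

```python
# Minimal sketch of a gradient-based evasion attack (FGSM).
# Assumes a classifier that outputs logits and inputs scaled to [0, 1].
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example with the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()  # this step is the "gradient computation" in the title
    # Step in the direction that increases the loss, then clamp to valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: given model and a batch (x, y),
# x_adv = fgsm_attack(model, x, y) may be misclassified by model(x_adv).
```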
Related papers
- The VLLM Safety Paradox: Dual Ease in Jailbreak Attack and Defense [56.32083100401117]
We investigate why Vision Large Language Models (VLLMs) are prone to jailbreak attacks.
We then make a key observation: existing defense mechanisms suffer from an over-prudence problem.
We find that two representative evaluation methods for jailbreak attacks often exhibit only chance-level agreement.
arXiv Detail & Related papers (2024-11-13T07:57:19Z)
- A Survey on Adversarial Machine Learning for Code Data: Realistic Threats, Countermeasures, and Interpretations [21.855757118482995]
Code Language Models (CLMs) have achieved tremendous progress in source code understanding and generation.
In realistic scenarios, CLMs are exposed to potential malicious adversaries, bringing risks to the confidentiality, integrity, and availability of CLM systems.
Despite these risks, a comprehensive analysis of the security vulnerabilities of CLMs in extremely adversarial environments has been lacking.
arXiv Detail & Related papers (2024-11-12T07:16:20Z)
- A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends [78.3201480023907]
Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities across a wide range of multimodal understanding and reasoning tasks.
The vulnerability of LVLMs is relatively underexplored, posing potential security risks in daily usage.
In this paper, we provide a comprehensive review of the various forms of existing LVLM attacks.
arXiv Detail & Related papers (2024-07-10T06:57:58Z)
- A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models [0.0]
This article explores two attack categories: attacks on models themselves and attacks on model applications.
The former requires expertise, access to model data, and significant implementation time.
The latter is more accessible to attackers and has seen increased attention.
arXiv Detail & Related papers (2023-12-18T07:07:32Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal will greatly affect the system's normal control and operation.
It is therefore imperative to conduct vulnerability assessments of MLsgAPPs applied in safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning [1.6574413179773757]
Adversarial attacks aim to trick ML models into producing faulty predictions and can compromise ML-based NIDSs.
Our experiments indicate that continuous re-training, even without adversarial training, can reduce the effectiveness of adversarial attacks.
arXiv Detail & Related papers (2023-06-08T18:32:08Z)
- Review on the Feasibility of Adversarial Evasion Attacks and Defenses for Network Intrusion Detection Systems [0.7829352305480285]
Recent research raises many concerns in the cybersecurity field.
An increasing number of researchers are studying the feasibility of such attacks on security systems based on machine learning algorithms.
arXiv Detail & Related papers (2023-03-13T11:00:05Z)
- Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective [69.25513235556635]
Adversarial machine learning (AML) studies the adversarial phenomenon of machine learning, which can cause models to make predictions that are inconsistent with or unexpected by humans.
Some paradigms have been recently developed to explore this adversarial phenomenon occurring at different stages of a machine learning system.
We propose a unified mathematical framework covering existing attack paradigms.
arXiv Detail & Related papers (2023-02-19T02:12:21Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.