"Why do so?" -- A Practical Perspective on Machine Learning Security
- URL: http://arxiv.org/abs/2207.05164v1
- Date: Mon, 11 Jul 2022 19:58:56 GMT
- Title: "Why do so?" -- A Practical Perspective on Machine Learning Security
- Authors: Kathrin Grosse, Lukas Bieringer, Tarek Richard Besold, Battista
Biggio, Katharina Krombholz
- Abstract summary: We analyze attack occurrence and concern in a survey of 139 industrial practitioners.
Our results shed light on real-world attacks on deployed machine learning.
Our work paves the way for more research about adversarial machine learning in practice.
- Score: 21.538956161215555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the large body of academic work on machine learning security, little
is known about the occurrence of attacks on machine learning systems in the
wild. In this paper, we report on a quantitative study with 139 industrial
practitioners. We analyze attack occurrence and concern and evaluate
statistical hypotheses on factors influencing threat perception and exposure.
Our results shed light on real-world attacks on deployed machine learning. On
the organizational level, while we find no predictors for threat exposure in
our sample, the number of implemented defenses depends on exposure to threats or
the expected likelihood of becoming a target. We also provide a detailed analysis of
practitioners' replies on the relevance of individual machine learning attacks,
unveiling complex concerns like unreliable decision making, business
information leakage, and bias introduction into models. Finally, we find that
on the individual level, prior knowledge about machine learning security
influences threat perception. Our work paves the way for more research about
adversarial machine learning in practice, but also yields insights for
regulation and auditing.
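As a rough illustration of how such a statistical hypothesis can be evaluated, the sketch below compares the number of implemented defenses between respondents who did and did not report attack exposure. All data, the variable coding, and the test choice are invented assumptions for demonstration, not the paper's actual analysis.

```python
# Hypothetical sketch: do practitioners who reported attack exposure
# implement more defenses? All numbers below are invented.
from scipy.stats import mannwhitneyu

# Number of implemented defenses per respondent (invented survey data).
defenses_exposed = [4, 5, 3, 6, 5, 4, 7, 5]      # reported prior attacks
defenses_unexposed = [2, 3, 1, 4, 2, 3, 2, 1]    # reported no attacks

# Small ordinal counts with no normality assumption, so use a
# non-parametric rank test.
stat, p = mannwhitneyu(defenses_exposed, defenses_unexposed,
                       alternative="greater")
print(f"U = {stat:.1f}, p = {p:.4f}")
if p < 0.05:
    print("Exposed respondents report significantly more defenses.")
```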
Related papers
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives [1.5373344688357016]
Fraudulent activities and adversarial attacks threaten machine learning models.
We describe how attacks against fraud detection systems differ from other applications of adversarial machine learning.
arXiv Detail & Related papers (2023-07-03T23:04:49Z)
- Adversarial Robustness in Unsupervised Machine Learning: A Systematic Review [0.0]
This paper conducts a systematic literature review on the robustness of unsupervised learning.
Based on the results, we formulate a model of the properties of attacks on unsupervised learning.
arXiv Detail & Related papers (2023-06-01T13:59:32Z)
- White-box Inference Attacks against Centralized Machine Learning and Federated Learning [0.0]
We evaluate the impact of different neural network layers, gradients, gradient norms, and fine-tuned models on membership inference attack performance with prior knowledge.
The results show that the centralized machine learning model exhibits more serious membership information leakage in all aspects.
arXiv Detail & Related papers (2022-12-15T07:07:19Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing; a minimal membership-inference baseline is sketched after this list.
Our analysis relies on modular, reusable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- Adversarial Machine Learning in Text Analysis and Generation [1.116812194101501]
This paper focuses on studying aspects and research trends in adversarial machine learning specifically in text analysis and generation.
The paper summarizes main research trends in the field such as GAN algorithms, models, types of attacks, and defense against those attacks.
arXiv Detail & Related papers (2021-01-14T04:37:52Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- A Survey of Privacy Attacks in Machine Learning [0.7614628596146599]
This survey analyzes more than 40 papers on privacy attacks against machine learning.
An initial exploration of the causes of privacy leaks is presented, as well as a detailed analysis of the different attacks.
We present an overview of the most commonly proposed defenses and a discussion of the open problems and future directions identified during our analysis.
arXiv Detail & Related papers (2020-07-15T12:09:53Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- Adversarial Attacks on Machine Learning Systems for High-Frequency Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
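Several of the papers above (ML-Doctor and the white-box inference attack study, for example) evaluate membership inference attacks. As a minimal illustration of the common confidence-thresholding baseline, the sketch below uses a synthetic dataset, a simple target model, and an assumed threshold; none of these are any listed paper's actual setup.

```python
# Confidence-thresholding membership-inference baseline: points on which the
# target model is unusually confident are guessed to be training ("member")
# points. Model, data, and threshold are hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "target" model under attack; the attacker only needs its probabilities.
target = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attack score: the target's maximum class probability per sample.
score_members = target.predict_proba(X_train).max(axis=1)
score_nonmembers = target.predict_proba(X_test).max(axis=1)

threshold = 0.9  # assumed; attacks typically calibrate this on shadow models
guesses = np.concatenate([score_members, score_nonmembers]) > threshold
truth = np.concatenate([np.ones_like(score_members),
                        np.zeros_like(score_nonmembers)])
accuracy = (guesses == truth).mean()
print(f"Membership-inference baseline accuracy: {accuracy:.2f}")
```

The baseline works because models tend to be more confident on their own training points; the stronger attacks in the listed papers refine this signal with shadow models, gradients, or white-box access.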
This list is automatically generated from the titles and abstracts of the papers on this site.