Threat Assessment in Machine Learning based Systems
- URL: http://arxiv.org/abs/2207.00091v1
- Date: Thu, 30 Jun 2022 20:19:50 GMT
- Title: Threat Assessment in Machine Learning based Systems
- Authors: Lionel Nganyewou Tidjon and Foutse Khomh
- Abstract summary: We conduct an empirical study of threats reported against Machine Learning-based systems.
The study is based on 89 real-world ML attack scenarios from MITRE's ATLAS database, the AI Incident Database, and the literature.
Results show that convolutional neural networks were among the most targeted models in the attack scenarios.
- Score: 12.031113181911627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning is a field of artificial intelligence (AI) that is becoming
essential for several critical systems, making it an attractive target for threat
actors. Threat actors exploit different Tactics, Techniques, and Procedures
(TTPs) against the confidentiality, integrity, and availability of Machine
Learning (ML) systems. During the ML life cycle, they use adversarial TTPs to
poison data and fool ML-based systems. In recent years, multiple security
practices have been proposed for traditional systems, but they are not
sufficient to cope with the specific nature of ML-based systems. In this
paper, we conduct an
empirical study of threats reported against ML-based systems, with the aim of
understanding and characterizing the nature of ML threats and identifying
common mitigation strategies. The study is based on 89 real-world ML attack
scenarios from MITRE's ATLAS database, the AI Incident Database, and the
literature, and on 854 ML repositories from GitHub search and the Python
Packaging Advisory database, selected based on their reputation. Attacks from the AI Incident
Database and the literature are used to identify vulnerabilities and new types
of threats that were not documented in ATLAS. Results show that convolutional
neural networks were among the most targeted models in the attack
scenarios. ML repositories with the largest vulnerability prominence include
TensorFlow, OpenCV, and Notebook. In this paper, we also report the most
frequent vulnerabilities in the studied ML repositories, the most targeted ML
phases and models, and the most frequently used TTPs in ML phases and attack scenarios. This
information is particularly important for red/blue teams to better conduct
attacks/defenses, for practitioners to prevent threats during ML development,
and for researchers to develop efficient defense mechanisms.
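To make the poisoning TTPs above concrete, the following is a minimal, hypothetical sketch of a label-flipping attack on training data; the scikit-learn model, synthetic dataset, and poison fractions are illustrative assumptions, not the attack implementations studied in the paper.

```python
# Minimal sketch of a label-flipping poisoning attack (illustrative only).
# Assumes scikit-learn; the synthetic dataset and poison rates are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, fraction):
    """Adversary flips the labels of a random fraction of training points."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]  # binary task: invert the poisoned labels
    return y

for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000).fit(X_train, flip_labels(y_train, fraction))
    print(f"poison fraction {fraction:.0%}: test accuracy {model.score(X_test, y_test):.3f}")
```

Real-world scenarios in ATLAS are subtler (e.g., clean-label poisoning), but the failure mode is the same: corrupted training data silently shifts the learned decision boundary.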
Related papers
- A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends [78.3201480023907]
Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities across a wide range of multimodal understanding and reasoning tasks.
The vulnerabilities of LVLMs, however, remain relatively underexplored, posing potential security risks in daily usage.
In this paper, we provide a comprehensive review of the various forms of existing LVLM attacks.
arXiv Detail & Related papers (2024-07-10T06:57:58Z)
- Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [61.2224355547598]
Open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress.
Our investigation exposes a critical oversight in the assumption that such base models, lacking instruction-following alignment, are safe to release.
By deploying carefully designed demonstrations, our research demonstrates that base LLMs can effectively interpret and execute malicious instructions.
arXiv Detail & Related papers (2024-04-16T13:22:54Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessments for ML-based smart grid applications (MLsgAPPs) in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy [1.2043574473965317]
We first classify the damage caused by attacks against ML-based systems, define ML-specific security, and discuss its characteristics.
We enumerate all relevant assets and stakeholders and provide a general taxonomy for ML-specific threats.
Finally, we classify the vulnerabilities and controls of an ML-based system in terms of each vulnerable asset in the system's entire lifecycle.
arXiv Detail & Related papers (2023-01-18T12:32:51Z)
- A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models, however, often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem (a toy sketch of the exact-unlearning baseline appears after this list).
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
- Automatic Mapping of Unstructured Cyber Threat Intelligence: An Experimental Study [1.1470070927586016]
We present an experimental study on the automatic classification of unstructured Cyber Threat Intelligence (CTI) into attack techniques using machine learning (ML).
We contribute two new datasets for CTI analysis, and we evaluate several ML models, including both traditional and deep learning-based ones.
We present several lessons learned about how ML can perform at this task, which classifiers perform best and under which conditions, what the main causes of classification errors are, and the challenges ahead for CTI analysis (a minimal sketch of such a classification pipeline appears after this list).
arXiv Detail & Related papers (2022-08-25T15:01:42Z)
- Machine Learning Security against Data Poisoning: Are We There Yet? [23.809841593870757]
This article reviews data poisoning attacks that compromise the training data used to learn machine learning models.
We discuss how to mitigate these attacks using basic security principles or by deploying ML-oriented defensive mechanisms (a minimal sanitization sketch appears after this list).
arXiv Detail & Related papers (2022-04-12T17:52:09Z)
- Adversarial Machine Learning Threat Analysis in Open Radio Access Networks [37.23982660941893]
The Open Radio Access Network (O-RAN) is a new, open, adaptive, and intelligent RAN architecture.
In this paper, we present a systematic adversarial machine learning threat analysis for the O-RAN.
arXiv Detail & Related papers (2022-01-16T17:01:38Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
We review new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of ML algorithms from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, reusable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models (a toy membership-inference sketch appears after this list).
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
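As referenced in the machine-unlearning entry above, here is a toy sketch of the exact-unlearning baseline: retraining from scratch without the deleted user's data. The dataset, model, and forget_user helper are hypothetical; the point is that this baseline is correct by construction but expensive, which is what approximate unlearning methods try to avoid.

```python
# Toy sketch of exact unlearning: retrain from scratch without the deleted
# records. All names and data below are illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
user_ids = rng.integers(0, 100, size=1000)  # which user contributed each row

def train(X, y):
    return SGDClassifier(random_state=0).fit(X, y)

def forget_user(X, y, user_ids, uid):
    """Exact unlearning: drop the user's rows and retrain from scratch."""
    keep = user_ids != uid
    return train(X[keep], y[keep])

model = train(X, y)
model_after_deletion = forget_user(X, y, user_ids, uid=42)
```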
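The pipeline sketch referenced in the CTI-mapping entry: a minimal TF-IDF plus linear-classifier setup of the traditional-ML kind that the paper evaluates. The tiny corpus and ATT&CK-style technique labels are invented for illustration and do not reproduce the paper's datasets.

```python
# Minimal sketch of mapping unstructured CTI text to attack techniques with a
# traditional ML pipeline (TF-IDF features + linear classifier).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "attacker sent spearphishing email with malicious attachment",
    "malware established persistence via registry run keys",
    "credentials were dumped from lsass process memory",
    "phishing message lured the victim to a fake login page",
    "scheduled task created to maintain persistence after reboot",
    "mimikatz used to extract plaintext passwords",
]
techniques = ["phishing", "persistence", "credential-dumping",
              "phishing", "persistence", "credential-dumping"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(reports, techniques)
print(clf.predict(["user opened a booby-trapped email attachment"]))
```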
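The sanitization sketch referenced in the data-poisoning entry: a simple ML-oriented defense that drops training points whose labels disagree with most of their nearest neighbors. The neighborhood size and agreement threshold are illustrative assumptions, not a defense taken from the paper.

```python
# Minimal sketch of a poisoning defense: drop training points whose labels
# disagree with most of their neighbors (simple data sanitization).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sanitize(X, y, k=10, min_agreement=0.5):
    """Keep a point only if at least min_agreement of its k nearest
    neighbors share its label; suspected flipped labels are dropped."""
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    agreement = (y[idx[:, 1:]] == y[:, None]).mean(axis=1)  # skip self at idx 0
    keep = agreement >= min_agreement
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] > 0).astype(int)
y[:50] = 1 - y[:50]  # simulate label-flipping poison on 10% of the data
X_clean, y_clean = sanitize(X, y)
print(f"kept {len(y_clean)} of {len(y)} training points")
```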
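Finally, the sketch referenced in the ML-Doctor entry: a confidence-thresholding membership-inference attack, one of the simplest instances of the attack family that paper assesses. The victim model, data, and 0.9 threshold are illustrative; this is not ML-Doctor's implementation.

```python
# Minimal confidence-thresholding membership inference (illustrative).
# Intuition: overfit models are more confident on training members than on
# unseen points, so high top-class confidence suggests membership.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, _ = train_test_split(X, y, test_size=0.5, random_state=0)

target = RandomForestClassifier(random_state=0).fit(X_in, y_in)  # victim model

def guess_member(model, X, threshold=0.9):
    """Guess 'member' when the top-class confidence exceeds the threshold."""
    return model.predict_proba(X).max(axis=1) > threshold

print("members flagged:     ", guess_member(target, X_in).mean())
print("non-members flagged: ", guess_member(target, X_out).mean())
```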
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.