Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
- URL: http://arxiv.org/abs/2301.07474v2
- Date: Thu, 19 Jan 2023 02:43:05 GMT
- Title: Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
- Authors: Yusuke Kawamoto and Kazumasa Miyake and Koichi Konishi and Yutaka Oiwa
- Abstract summary: We first classify the damage caused by attacks against ML-based systems, define ML-specific security, and discuss its characteristics.
We enumerate all relevant assets and stakeholders and provide a general taxonomy for ML-specific threats.
Finally, we classify the vulnerabilities and controls of an ML-based system in terms of each vulnerable asset in the system's entire lifecycle.
- Score: 1.2043574473965317
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this article, we propose the Artificial Intelligence Security Taxonomy to
systematize the knowledge of threats, vulnerabilities, and security controls of
machine-learning-based (ML-based) systems. We first classify the damage caused
by attacks against ML-based systems, define ML-specific security, and discuss
its characteristics. Next, we enumerate all relevant assets and stakeholders
and provide a general taxonomy for ML-specific threats. Then, we collect a wide
range of security controls against ML-specific threats through an extensive
review of recent literature. Finally, we classify the vulnerabilities and
controls of an ML-based system in terms of each vulnerable asset in the
system's entire lifecycle.
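To make the asset-centric organization concrete, here is a minimal sketch of such a taxonomy as a data structure; the assets, lifecycle phases, threats, and controls named below are illustrative assumptions, not entries from the paper.

```python
# Minimal sketch of an asset-centric threat/control taxonomy.
# Assets, lifecycle phases, threats, and controls are illustrative
# assumptions, not the paper's actual catalogue.
from dataclasses import dataclass, field

@dataclass
class Entry:
    threats: list = field(default_factory=list)
    controls: list = field(default_factory=list)

# (asset, lifecycle phase) -> threats against it and controls addressing them
taxonomy = {
    ("training data", "data collection"): Entry(
        threats=["data poisoning"],
        controls=["data provenance checks", "outlier filtering"],
    ),
    ("trained model", "operation"): Entry(
        threats=["model extraction", "evasion via adversarial examples"],
        controls=["query rate limiting", "adversarial training"],
    ),
}

def controls_for(asset):
    """Collect every control protecting a given asset across the lifecycle."""
    return {c for (a, _), e in taxonomy.items() if a == asset for c in e.controls}

print(controls_for("trained model"))
```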
Related papers
- SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose SafeBench, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
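The dataset-plus-judge shape of such an evaluation protocol can be sketched as below; `query_model` and `judge_is_safe` are assumed placeholders for the model under test and the automated judge, not SafeBench's actual API.

```python
# Sketch of an automated safety-evaluation protocol: run harmful queries
# through the model under test and score responses with an automated judge.
# query_model and judge_is_safe are assumed placeholders, not SafeBench's API.
harmful_queries = [
    {"category": "illegal activity", "prompt": "..."},
    {"category": "privacy violation", "prompt": "..."},
]

def query_model(prompt):
    raise NotImplementedError("call the MLLM under test here")

def judge_is_safe(prompt, response):
    raise NotImplementedError("automated judge, e.g. a classifier over responses")

def safe_response_rate(queries):
    """Fraction of harmful prompts the model handles safely."""
    safe = sum(judge_is_safe(q["prompt"], query_model(q["prompt"])) for q in queries)
    return safe / len(queries)
```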
arXiv Detail & Related papers (2024-10-24T17:14:40Z)
- A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems [47.18371401090435]
We analyze the security of Large Language Model (LLM) systems as a whole, rather than focusing on individual LLMs.
We propose a multi-layer and multi-step approach and apply it to the state-of-the-art OpenAI GPT-4.
We find that although OpenAI GPT-4 includes numerous safety constraints to improve its safety, these constraints remain vulnerable to attackers.
arXiv Detail & Related papers (2024-02-28T19:00:12Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in internet-of-things (IoT)-based smart grids.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is therefore imperative to conduct vulnerability assessments of MLsgAPPs (ML-based smart grid applications) in the context of safety-critical power systems.
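As a toy illustration of such distortion, consider an FGSM-style perturbation against a linear anomaly detector on a signal window; the detector, signal, and epsilon are assumptions for illustration, not a model from the review.

```python
# Toy illustration: a small adversarial distortion added to a power-signal
# window can flip a linear ML detector's decision. The detector, signal,
# and epsilon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)        # weights of a toy linear anomaly detector
signal = rng.normal(size=64)   # one window of measured power signal

def is_normal(x):
    return w @ x > 0.0

# FGSM-style step: since the score w @ x is linear, its gradient w.r.t. x
# is w, so moving against sign(w) lowers the score most per unit distortion.
eps = 0.2
distorted = signal - eps * np.sign(w)

print(is_normal(signal), is_normal(distorted))  # often flips despite small eps
```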
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Identifying Appropriate Intellectual Property Protection Mechanisms for Machine Learning Models: A Systematization of Watermarking, Fingerprinting, Model Access, and Attacks [0.1031296820074812]
The commercial use of Machine Learning (ML) is spreading; at the same time, ML models are becoming more complex and more expensive to train.
In this paper, we systematize our findings on IPP in ML, while focusing on threats and attacks identified and defenses proposed at the time of writing.
We develop a comprehensive threat model for IP in ML, categorizing attacks and defenses within a unified and consolidated taxonomy.
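One mechanism in that taxonomy, trigger-set (backdoor) watermarking, verifies ownership roughly as sketched below; the interface and threshold are illustrative assumptions, not a scheme from the paper.

```python
# Sketch of trigger-set (backdoor) watermark verification, one of the IPP
# mechanisms systematized in such work. The interface and threshold are
# illustrative assumptions.
def verify_watermark(model, trigger_inputs, trigger_labels, threshold=0.9):
    """Claim ownership if the suspect model reproduces the owner's secret
    trigger-set labels far more often than an independent model would."""
    hits = sum(model(x) == y for x, y in zip(trigger_inputs, trigger_labels))
    return hits / len(trigger_inputs) >= threshold
```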
arXiv Detail & Related papers (2023-04-22T01:05:48Z)
- ThreatKG: An AI-Powered System for Automated Open-Source Cyber Threat Intelligence Gathering and Management [65.0114141380651]
ThreatKG is an automated system for OSCTI gathering and management.
It efficiently collects a large number of OSCTI reports from multiple sources.
It uses specialized AI-based techniques to extract high-quality knowledge about various threat entities.
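The extraction step can be pictured as turning report text into graph triples; the regex matching below is only a stand-in for ThreatKG's AI-based extractors, and the report text, actor, and indicators are made up.

```python
# Sketch of OSCTI knowledge extraction into graph triples. The regexes are
# a stand-in for ThreatKG's AI-based extractors; the report text, actor,
# and indicators are made up.
import re

report = "APT-X used dropper.exe to contact 203.0.113.7 over port 443."

actor = "APT-X"
files = re.findall(r"\b\w+\.exe\b", report)
ips = re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", report)

triples = [(actor, "uses", f) for f in files]
triples += [(f, "connects_to", ip) for f in files for ip in ips]
print(triples)
```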
arXiv Detail & Related papers (2022-12-20T16:13:59Z)
- Threat Assessment in Machine Learning based Systems [12.031113181911627]
We conduct an empirical study of threats reported against Machine Learning-based systems.
The study is based on 89 real-world ML attack scenarios from MITRE's ATLAS database, the AI Incident Database, and the literature.
Results show that convolutional neural networks were among the most targeted models in the attack scenarios.
arXiv Detail & Related papers (2022-06-30T20:19:50Z)
- Adversarial Machine Learning Threat Analysis in Open Radio Access Networks [37.23982660941893]
The Open Radio Access Network (O-RAN) is a new, open, adaptive, and intelligent RAN architecture.
In this paper, we present a systematic adversarial machine learning threat analysis for the O-RAN.
arXiv Detail & Related papers (2022-01-16T17:01:38Z)
- A Framework for Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems [41.470634460215564]
We develop an extension to the MulVAL attack graph generation and analysis framework to incorporate cyberattacks on ML production systems.
Using the proposed extension, security practitioners can apply attack graph analysis methods in environments that include ML components.
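At its core, attack-graph analysis of this kind derives which attacker goals are reachable from initial facts via inference rules. Below is a minimal forward-chaining sketch with made-up facts and rules; MulVAL itself expresses these in Datalog.

```python
# Minimal forward-chaining sketch of MulVAL-style attack-graph reasoning,
# extended with an ML-pipeline fact. Facts and rules are made up for
# illustration; MulVAL expresses these as Datalog clauses.
facts = {"netAccess(attacker, mlServer)", "vulnExists(mlServer, CVE-X)"}
rules = [
    # (premises, conclusion)
    ({"netAccess(attacker, mlServer)", "vulnExists(mlServer, CVE-X)"},
     "execCode(attacker, mlServer)"),
    ({"execCode(attacker, mlServer)"},
     "tamper(attacker, trainingData)"),
]

changed = True
while changed:  # derive new facts until a fixed point is reached
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("tamper(attacker, trainingData)" in facts)  # True: an attack path exists
```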
arXiv Detail & Related papers (2021-07-05T05:58:11Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
Addressing them requires new models and training techniques to reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies to enhance the dependability of ML algorithms from multiple aspects.
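One widely used instance of the outlier/adversarial-detection strategy is thresholding the model's maximum softmax probability; a minimal sketch, with an assumed threshold that would need calibration on held-out data:

```python
# Sketch of a common outlier/adversarial-input detector: reject inputs
# whose maximum softmax probability is low. The threshold is an assumed
# value to be calibrated on held-out data.
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def is_outlier(logits, threshold=0.7):
    return softmax(logits).max() < threshold

print(is_outlier(np.array([3.0, 0.1, 0.2])))   # confident -> False
print(is_outlier(np.array([0.5, 0.4, 0.45])))  # uncertain -> True
```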
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- A System for Efficiently Hunting for Cyber Threats in Computer Systems Using Threat Intelligence [78.23170229258162]
We build ThreatRaptor, a system that facilitates cyber threat hunting in computer systems using OSCTI.
ThreatRaptor provides (1) an unsupervised, lightweight, and accurate NLP pipeline that extracts structured threat behaviors from unstructured OSCTI text, (2) a concise and expressive domain-specific query language, TBQL, to hunt for malicious system activities, and (3) a query synthesis mechanism that automatically synthesizes a TBQL query from the extracted threat behaviors.
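The synthesis step can be pictured as assembling a query from the extracted behaviors; the generated text below only approximates TBQL's flavor and should be read as an assumption, not the language's actual syntax.

```python
# Sketch of synthesizing a hunting query from extracted threat behaviors.
# The generated text only approximates TBQL's flavor; the exact syntax is
# an assumption, not the language spec.
behaviors = [
    {"subject": "proc p1", "op": "read",    "object": 'file f1["/etc/passwd"]'},
    {"subject": "proc p1", "op": "connect", "object": 'ip i1["203.0.113.7"]'},
]

def synthesize_query(behaviors):
    clauses = [f'{b["subject"]} {b["op"]} {b["object"]}' for b in behaviors]
    return "\n".join(clauses) + "\nreturn p1, f1, i1"

print(synthesize_query(behaviors))
```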
arXiv Detail & Related papers (2021-01-17T19:44:09Z)