ConAML: Constrained Adversarial Machine Learning for Cyber-Physical
Systems
- URL: http://arxiv.org/abs/2003.05631v3
- Date: Tue, 24 Nov 2020 21:23:09 GMT
- Title: ConAML: Constrained Adversarial Machine Learning for Cyber-Physical
Systems
- Authors: Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun, Kevin Tomsovic,
Hairong Qi
- Abstract summary: We study the potential vulnerabilities of machine learning applied in cyber-physical systems.
We propose Constrained Adversarial Machine Learning (ConAML) which generates adversarial examples that satisfy the intrinsic constraints of the physical systems.
- Score: 7.351477761427584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has demonstrated that superficially well-trained machine
learning (ML) models are highly vulnerable to adversarial examples. As ML
techniques become a popular solution for cyber-physical systems (CPSs)
applications in the research literature, the security of these applications is of
concern. However, current studies on adversarial machine learning (AML) mainly
focus on pure cyberspace domains. The risks that adversarial examples pose
to CPS applications have not been well investigated. In particular, due to
the distributed nature of data sources and the inherent physical constraints
imposed by CPSs, the widely used threat models and state-of-the-art AML
algorithms from previous cyberspace research become infeasible.
We study the potential vulnerabilities of ML applied in CPSs by proposing
Constrained Adversarial Machine Learning (ConAML), which generates adversarial
examples that satisfy the intrinsic constraints of the physical systems. We
first summarize the difference between AML in CPSs and AML in existing
cyberspace systems and propose a general threat model for ConAML. We then
design a best-effort search algorithm to iteratively generate adversarial
examples with linear physical constraints. We evaluate our algorithms with
simulations of two typical CPSs: a power grid and a water treatment
system. The results show that our ConAML algorithms can effectively generate
adversarial examples which significantly decrease the performance of the ML
models even under practical constraints.
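The paper's best-effort search is not reproduced here, but its core idea, taking iterative gradient steps while projecting the perturbation onto the feasible set of linear physical constraints, can be sketched as follows. The function name, the projection-based constraint handling, and the FGSM-style sign step are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def conaml_attack(grad_fn, x0, A, epsilon, alpha, n_iters):
    """Sketch of constrained adversarial example generation (assumed form).

    Perturbs measurement vector x0 to increase the model's loss while keeping
    the linear physical constraints A @ x == A @ x0 satisfied exactly.
    grad_fn(x) returns the gradient of the attacker's loss w.r.t. x.
    """
    # Projector onto the null space of A: for any projected delta,
    # A @ delta == 0, so A @ (x0 + delta) == A @ x0.
    P = np.eye(A.shape[1]) - A.T @ np.linalg.pinv(A @ A.T) @ A
    delta = np.zeros_like(x0)
    for _ in range(n_iters):
        g = grad_fn(x0 + delta)
        # FGSM-style step, clipped to an l-infinity budget; note the bound is
        # only approximate after re-projection onto the constraint set.
        delta = np.clip(delta + alpha * np.sign(g), -epsilon, epsilon)
        delta = P @ delta  # re-impose the linear constraints
    return x0 + delta
```

For example, with a single conservation-style constraint (the perturbed measurements must keep the same sum), the adversarial example shifts mass between sensors but leaves `A @ x` unchanged.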
Related papers
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) sees an increasing prevalence of being used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal will greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Exploring the Vulnerabilities of Machine Learning and Quantum Machine Learning to Adversarial Attacks using a Malware Dataset: A Comparative Analysis [0.0]
Machine learning (ML) and quantum machine learning (QML) have shown remarkable potential in tackling complex problems.
Their susceptibility to adversarial attacks raises concerns when deploying these systems in security-sensitive applications.
We present a comparative analysis of the vulnerability of ML and QNN models to adversarial attacks using a malware dataset.
arXiv Detail & Related papers (2023-05-31T06:31:42Z)
- Identifying the Hazard Boundary of ML-enabled Autonomous Systems Using Cooperative Co-Evolutionary Search [9.511076358998073]
It is essential to identify the hazard boundary of ML Components (MLCs) in ML-enabled autonomous systems under analysis.
We propose MLCSHE, a novel method based on a Cooperative Co-Evolutionary Algorithm (CCEA).
We evaluate the effectiveness and efficiency of MLCSHE on a complex Autonomous Vehicle (AV) case study.
arXiv Detail & Related papers (2023-01-31T17:50:52Z)
- Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements and misconceptions in the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z)
- Threat Assessment in Machine Learning based Systems [12.031113181911627]
We conduct an empirical study of threats reported against Machine Learning-based systems.
The study is based on 89 real-world ML attack scenarios from MITRE's ATLAS database, the AI Incident Database, and the literature.
Results show that convolutional neural networks were one of the most targeted models among the attack scenarios.
arXiv Detail & Related papers (2022-06-30T20:19:50Z)
- Adversarial Machine Learning Threat Analysis in Open Radio Access Networks [37.23982660941893]
The Open Radio Access Network (O-RAN) is a new, open, adaptive, and intelligent RAN architecture.
In this paper, we present a systematic adversarial machine learning threat analysis for the O-RAN.
arXiv Detail & Related papers (2022-01-16T17:01:38Z)
- Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research regarding methods for learning physical concepts in time series data.
We also analyze the most important methods from the current state of the art using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
New models and training techniques are needed to reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization [76.51980153902774]
Federated learning (FL) is vulnerable to external attacks on FL models during parameter transmission.
In this paper, we propose effective model poisoning (MP) algorithms to combat state-of-the-art defensive aggregation mechanisms.
Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
arXiv Detail & Related papers (2021-01-28T03:28:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.