Layered Security Guidance for Data Asset Management in Additive Manufacturing
- URL: http://arxiv.org/abs/2309.16842v1
- Date: Thu, 28 Sep 2023 20:48:40 GMT
- Title: Layered Security Guidance for Data Asset Management in Additive Manufacturing
- Authors: Fahad Ali Milaat, Joshua Lubell
- Abstract summary: This paper proposes leveraging the National Institute of Standards and Technology's Cybersecurity Framework to develop layered, risk-based guidance for fulfilling specific security outcomes.
The authors believe implementation of the layered approach would result in value-added, non-redundant security guidance for AM that is consistent with the preexisting guidance.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Manufacturing industries are increasingly adopting additive manufacturing (AM) technologies to produce functional parts in critical systems. However, the inherent complexity of both AM designs and AM processes renders them attractive targets for cyber-attacks. Risk-based Information Technology (IT) and Operational Technology (OT) security guidance standards are useful resources for AM security practitioners, but the guidelines they provide are insufficient without additional AM-specific revisions. Therefore, a structured layering approach is needed to efficiently integrate these revisions with preexisting IT and OT security guidance standards. To implement such an approach, this paper proposes leveraging the National Institute of Standards and Technology's Cybersecurity Framework (CSF) to develop layered, risk-based guidance for fulfilling specific security outcomes. It begins with an in-depth literature review that reveals the importance of AM data and asset management to risk-based security. Next, this paper adopts the CSF asset identification and management security outcomes as an example for providing AM-specific guidance and identifies the AM geometry and process definitions to aid manufacturers in mapping data flows and documenting processes. Finally, this paper uses the Open Security Controls Assessment Language to integrate the AM-specific guidance together with existing IT and OT security guidance in a rigorous and traceable manner. This paper's contribution is to show how a risk-based layered approach enables the authoring, publishing, and management of AM-specific security guidance that is currently lacking. The authors believe implementation of the layered approach would result in value-added, non-redundant security guidance for AM that is consistent with the preexisting guidance.
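To make the layering idea concrete, the sketch below builds a minimal OSCAL-style profile as a Python dictionary: it imports controls from an existing IT/OT catalog and layers an AM-specific tailoring on top, rather than restating the baseline guidance. All identifiers, file names, and control IDs here are illustrative assumptions, not the authors' actual OSCAL content.

```python
import json

# Hypothetical OSCAL-style profile (illustrative, not from the paper):
# layers AM-specific guidance onto an existing IT/OT control catalog.
am_profile = {
    "profile": {
        "uuid": "00000000-0000-4000-8000-000000000001",  # placeholder UUID
        "metadata": {
            "title": "AM Asset Identification and Management (illustrative)",
            "version": "0.1",
        },
        # Layering: reuse baseline controls by reference instead of copying them.
        # The catalog file name below is an assumption for this sketch.
        "imports": [
            {
                "href": "it-ot-baseline-catalog.json",
                "include-controls": [{"with-ids": ["cm-8", "pm-5"]}],
            }
        ],
        # AM-specific additions are expressed as tailored modifications,
        # keeping the guidance non-redundant and traceable to its source.
        "modify": {
            "alters": [
                {
                    "control-id": "cm-8",
                    "adds": [
                        {
                            "position": "ending",
                            "props": [
                                {
                                    "name": "am-guidance",
                                    "value": "Inventory AM geometry and "
                                             "process definition files "
                                             "as data assets.",
                                }
                            ],
                        }
                    ],
                }
            ]
        },
    }
}

# Serialize the profile so downstream OSCAL tooling could consume it.
print(json.dumps(am_profile, indent=2))
```

The design point this illustrates is the separation of concerns the abstract describes: the baseline IT/OT guidance lives in its own catalog, and the AM-specific layer only records imports and deltas, so each layer can be authored, published, and versioned independently.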
Related papers
- Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems [2.3266896180922187]
We compile an extensive catalog of risk sources and risk management measures for general-purpose AI systems.
This work involves identifying technical, operational, and societal risks across model development, training, and deployment stages.
The catalog is released under a public domain license for ease of direct use by stakeholders in AI governance and standards.
arXiv Detail & Related papers (2024-10-30T21:32:56Z)
- AssessITS: Integrating procedural guidelines and practical evaluation metrics for organizational IT and Cybersecurity risk assessment [0.0]
'AssessITS' aims to enable organizations to enhance their IT security strength through actionable guidance based on internationally recognized standards.
arXiv Detail & Related papers (2024-10-02T17:01:59Z)
- Adapting cybersecurity frameworks to manage frontier AI risks: A defense-in-depth approach [0.0]
We outline three approaches that can help identify gaps in the management of AI-related risks.
First, a functional approach identifies essential categories of activities that a risk management approach should cover.
Second, a lifecycle approach assigns safety and security activities across the model development lifecycle.
Third, a threat-based approach identifies tactics, techniques, and procedures used by malicious actors.
arXiv Detail & Related papers (2024-08-15T05:06:03Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Securing the Open RAN Infrastructure: Exploring Vulnerabilities in Kubernetes Deployments [60.51751612363882]
We investigate the security implications of software-based Open Radio Access Network (RAN) systems.
We highlight the presence of potential vulnerabilities and misconfigurations in the infrastructure supporting the Near Real-Time RAN Intelligent Controller (RIC) cluster.
arXiv Detail & Related papers (2024-05-03T07:18:45Z)
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents [50.33549510615024]
This paper presents an Agent-Constitution-based agent framework, TrustAgent, with a focus on improving the LLM-based agent safety.
The proposed framework ensures strict adherence to the Agent Constitution through three strategic components: pre-planning strategy which injects safety knowledge to the model before plan generation, in-planning strategy which enhances safety during plan generation, and post-planning strategy which ensures safety by post-planning inspection.
arXiv Detail & Related papers (2024-02-02T17:26:23Z)
- A Model Based Framework for Testing Safety and Security in Operational Technology Environments [0.46040036610482665]
We propose a model-based testing approach which we consider a promising way to analyze the safety and security behavior of a system under test.
The structure of the underlying framework is divided into four parts, according to the critical factors in testing of operational technology environments.
arXiv Detail & Related papers (2023-06-22T05:37:09Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications to the system, and inspect the difference in agent proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z)
- Risk Management Framework for Machine Learning Security [7.678455181587705]
Adversarial attacks for machine learning models have become a highly studied topic both in academia and industry.
In this paper, we outline a novel framework to guide the risk management process for organizations reliant on machine learning models.
arXiv Detail & Related papers (2020-12-09T06:21:34Z)