An Ontology-Based Approach to Security Risk Identification of Container Deployments in OT Contexts
- URL: http://arxiv.org/abs/2601.04010v1
- Date: Wed, 07 Jan 2026 15:20:19 GMT
- Title: An Ontology-Based Approach to Security Risk Identification of Container Deployments in OT Contexts
- Authors: Yannick Landeck, Dian Balta, Martin Wimmer, Christian Knierim
- Abstract summary: Security risk identification for OT container deployments is challenged by hybrid IT/OT architectures, fragmented stakeholder knowledge, and continuous system changes. We propose a model-based approach, implemented as the Container Security Risk Ontology (CSRO). CSRO integrates five key domains: adversarial behaviour, contextual assumptions, attack scenarios, risk assessment rules, and container security artefacts.
- Score: 1.826848871278733
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In operational technology (OT) contexts, containerised applications often require elevated privileges to access low-level network interfaces or perform administrative tasks such as application monitoring. These privileges reduce the default isolation provided by containers and introduce significant security risks. Security risk identification for OT container deployments is challenged by hybrid IT/OT architectures, fragmented stakeholder knowledge, and continuous system changes. Existing approaches lack reproducibility, interpretability across contexts, and technical integration with deployment artefacts. We propose a model-based approach, implemented as the Container Security Risk Ontology (CSRO), which integrates five key domains: adversarial behaviour, contextual assumptions, attack scenarios, risk assessment rules, and container security artefacts. Our evaluation of CSRO in a case study demonstrates that the end-to-end formalisation of risk calculation, from artefact to risk level, enables automated and reproducible risk identification. While CSRO currently focuses on technical, container-level treatment measures, its modular and flexible design provides a solid foundation for extending the approach to host-level and organisational risk factors.
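The abstract describes mapping deployment artefacts (e.g. privileged flags) through declarative rules to risk levels. The following minimal Python sketch illustrates that general idea of rule-based risk identification over container artefacts; all class names, rule contents, and risk levels here are illustrative assumptions, not the actual CSRO vocabulary or implementation.

```python
# Illustrative sketch of artefact-to-risk-level rule evaluation for
# container deployments. The rules and levels below are hypothetical
# examples, not CSRO's formalisation.
from dataclasses import dataclass, field


@dataclass
class ContainerArtefact:
    """A fragment of a deployment manifest relevant to risk assessment."""
    name: str
    privileged: bool = False
    host_network: bool = False
    capabilities: list[str] = field(default_factory=list)


# Each rule maps an observable artefact property to a qualitative
# risk level with a short rationale (all assumed for illustration).
RULES = [
    (lambda a: a.privileged, "high", "privileged mode disables isolation"),
    (lambda a: a.host_network, "medium", "host networking exposes OT interfaces"),
    (lambda a: "SYS_ADMIN" in a.capabilities, "high", "CAP_SYS_ADMIN is near-root"),
]


def identify_risks(artefact: ContainerArtefact) -> list[tuple[str, str]]:
    """Return (risk_level, rationale) pairs for every rule that fires."""
    return [(level, why) for cond, level, why in RULES if cond(artefact)]


monitor = ContainerArtefact("plc-monitor", privileged=True, host_network=True)
for level, why in identify_risks(monitor):
    print(f"{monitor.name}: {level} risk ({why})")
```

In an ontology-based realisation, the artefact fields would be individuals and properties in an OWL model and the rules would be expressed declaratively (e.g. as SPARQL or SWRL rules), which is what makes the risk derivation reproducible and interpretable end to end.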
Related papers
- The Necessity of a Holistic Safety Evaluation Framework for AI-Based Automation Features [0.0]
Safety of Intended Functionality (SOTIF) and Functional Safety (FuSa) analysis of driving automation features has traditionally excluded Quality Management (QM) components from rigorous safety impact evaluations. Recent developments in artificial intelligence (AI) integration reveal that such components can contribute to SOTIF-related hazardous risks. This paper argues for the adoption of comprehensive FuSa, SOTIF, and AI standards-driven methodologies to identify and mitigate risks in AI components.
arXiv Detail & Related papers (2026-02-05T00:22:24Z) - Risky-Bench: Probing Agentic Safety Risks under Real-World Deployment [64.36422334429228]
Large Language Models (LLMs) are increasingly deployed as agents that operate in real-world environments. Existing agent safety evaluations rely on risk-oriented tasks tailored to specific agent settings. We propose Risky-Bench, a framework that enables systematic agent safety evaluation grounded in real-world deployment.
arXiv Detail & Related papers (2026-02-03T04:44:11Z) - Towards Verifiably Safe Tool Use for LLM Agents [53.55621104327779]
Large language model (LLM)-based AI agents extend capabilities by enabling access to tools such as data sources, APIs, search engines, code sandboxes, and even other agents. LLMs may invoke unintended tool interactions and introduce risks, such as leaking sensitive data or overwriting critical records. Current approaches to mitigate these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety.
arXiv Detail & Related papers (2026-01-12T21:31:38Z) - Systematization of Knowledge: Security and Safety in the Model Context Protocol Ecosystem [0.0]
The Model Context Protocol (MCP) has emerged as the de facto standard for connecting Large Language Models to external data and tools. This paper provides a taxonomy of risks in the MCP ecosystem, distinguishing between adversarial security threats and safety hazards. We demonstrate how "context" can be weaponized to trigger unauthorized operations in multi-agent environments.
arXiv Detail & Related papers (2025-12-09T06:39:21Z) - AGENTSAFE: Benchmarking the Safety of Embodied Agents on Hazardous Instructions [64.85086226439954]
We present AGENTSAFE, a benchmark for assessing the safety of embodied VLM agents on hazardous instructions. It comprises three components: SAFE-THOR, SAFE-VERSE, and SAFE-DIAGNOSE. We uncover systematic failures in translating hazard recognition into safe planning and execution.
arXiv Detail & Related papers (2025-06-17T16:37:35Z) - Towards Safety and Security Testing of Cyberphysical Power Systems by Shape Validation [42.350737545269105]
The complexity of cyberphysical power systems leads to larger attack surfaces that can be exploited by malicious actors. We propose to meet those risks with a declarative approach to describe cyber power systems and automatically evaluate security and safety controls.
arXiv Detail & Related papers (2025-06-14T12:07:44Z) - Systematic Hazard Analysis for Frontier AI using STPA [0.0]
Frontier AI companies currently do not describe in detail any structured approach to identifying and analysing hazards. STPA (Systems-Theoretic Process Analysis) is a systematic methodology for identifying how complex systems can become unsafe, leading to hazards. We evaluate STPA's ability to broaden the scope, improve traceability, and strengthen the robustness of safety assurance for frontier AI systems.
arXiv Detail & Related papers (2025-06-02T15:28:34Z) - AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Securing the Open RAN Infrastructure: Exploring Vulnerabilities in Kubernetes Deployments [60.51751612363882]
We investigate the security implications of software-based Open Radio Access Network (RAN) systems.
We highlight the presence of potential vulnerabilities and misconfigurations in the infrastructure supporting the Near Real-Time RAN Intelligent Controller (RIC) cluster.
arXiv Detail & Related papers (2024-05-03T07:18:45Z) - Layered Security Guidance for Data Asset Management in Additive Manufacturing [0.0]
This paper proposes leveraging the National Institute of Standards and Technology's Cybersecurity Framework to develop layered, risk-based guidance for fulfilling specific security outcomes.
The authors believe implementation of the layered approach would result in value-added, non-redundant security guidance for AM that is consistent with the preexisting guidance.
arXiv Detail & Related papers (2023-09-28T20:48:40Z) - Risk-Aware Fine-Grained Access Control in Cyber-Physical Contexts [12.138525287184061]
RASA is a context-sensitive access authorization approach and mechanism leveraging unsupervised machine learning to automatically infer risk-based authorization decision boundaries.
We explore RASA in a healthcare usage environment, wherein cyber and physical conditions create context-specific risks for protecting private health information.
arXiv Detail & Related papers (2021-08-29T03:38:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.