Towards Safety and Security Testing of Cyberphysical Power Systems by Shape Validation
- URL: http://arxiv.org/abs/2506.12466v1
- Date: Sat, 14 Jun 2025 12:07:44 GMT
- Title: Towards Safety and Security Testing of Cyberphysical Power Systems by Shape Validation
- Authors: Alexander Geiger, Immanuel Hacker, Ömer Sen, Andreas Ulbig
- Abstract summary: The increasing complexity of cyberphysical power systems leads to larger attack surfaces to be exploited by malicious actors. We propose to meet those risks with a declarative approach to describe cyberphysical power systems and to automatically evaluate security and safety controls.
- Score: 42.350737545269105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing complexity of cyberphysical power systems leads to larger attack surfaces to be exploited by malicious actors and a higher risk of faults through misconfiguration. We propose to meet those risks with a declarative approach to describe cyberphysical power systems and to automatically evaluate security and safety controls. We leverage Semantic Web technologies as a well-standardized framework, providing languages to specify ontologies, rules and shape constraints. We model infrastructure through an ontology which combines external ontologies, architecture and data models for sufficient expressivity and interoperability with external systems. The ontology can enrich itself through rules defined in SPARQL, allowing for the inference of knowledge that is not explicitly stated. Through the evaluation of SHACL shape constraints we can then validate the data and verify safety and security constraints. We demonstrate this concept with two use cases and illustrate how this solution can be developed further in a community-driven fashion.
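The pipeline the abstract describes (model the infrastructure as an ontology, enrich it with SPARQL rules, then validate SHACL shape constraints) can be sketched in a few lines of Python with rdflib and pySHACL. This is a minimal illustration only: the ex: vocabulary, the inference rule, and the shape below are invented for the example and are not the paper's ontology.

```python
# Minimal sketch of a declarative check: enrich an RDF graph with a
# SPARQL rule, then validate it against a SHACL shape.
# All class and property names (ex:RemoteTerminalUnit, ex:FieldDevice,
# ex:hasFirmwareVersion, ...) are hypothetical.
from rdflib import Graph
from pyshacl import validate

DATA = """
@prefix ex: <http://example.org/grid#> .

ex:rtu1 a ex:RemoteTerminalUnit ;
    ex:locatedIn ex:substationA .

ex:substationA a ex:Substation .
"""

# Inference rule: every device located in a substation is a field device.
RULE = """
PREFIX ex: <http://example.org/grid#>
INSERT { ?d a ex:FieldDevice . }
WHERE  { ?d ex:locatedIn ?s . ?s a ex:Substation . }
"""

# A security control expressed as a shape: every field device must
# declare a firmware version.
SHAPES = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/grid#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:FieldDeviceShape a sh:NodeShape ;
    sh:targetClass ex:FieldDevice ;
    sh:property [
        sh:path ex:hasFirmwareVersion ;
        sh:minCount 1 ;
        sh:datatype xsd:string
    ] .
"""

data = Graph().parse(data=DATA, format="turtle")
data.update(RULE)  # enrichment step: infer knowledge not explicitly stated

shapes = Graph().parse(data=SHAPES, format="turtle")
conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False: ex:rtu1 was inferred to be a FieldDevice
print(report)    # but lacks ex:hasFirmwareVersion
```

The validation report names the offending node and the violated constraint, which is what makes shape validation attractive as an automated safety and security audit.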
Related papers
- Towards provable probabilistic safety for scalable embodied AI systems [79.31011047593492]
Embodied AI systems are increasingly prevalent across various applications. Ensuring their safety in complex operating environments remains a major challenge. This Perspective offers a pathway toward safer, large-scale adoption of embodied AI systems in safety-critical applications.
arXiv Detail & Related papers (2025-06-05T15:46:25Z)
- Systematic Hazard Analysis for Frontier AI using STPA [0.0]
Frontier AI companies currently do not describe in detail any structured approach to identifying and analysing hazards. STPA (Systems-Theoretic Process Analysis) is a systematic methodology for identifying how complex systems can become unsafe, leading to hazards. We evaluate STPA's ability to broaden the scope, improve traceability and strengthen the robustness of safety assurance for frontier AI systems.
arXiv Detail & Related papers (2025-06-02T15:28:34Z)
- Modeling Interdependent Cybersecurity Threats Using Bayesian Networks: A Case Study on In-Vehicle Infotainment Systems [0.0]
This paper reviews the application of Bayesian Networks (BNs) in cybersecurity risk modeling. A case study is presented in which a STRIDE-based attack tree for an automotive In-Vehicle Infotainment (IVI) system is transformed into a BN. (A generic sketch of this transformation follows this entry.)
arXiv Detail & Related papers (2025-05-14T01:04:45Z)
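As a generic illustration of the attack-tree-to-BN idea summarized above, the sketch below encodes two interdependent threats feeding a compromise node with pgmpy. The structure and probabilities are invented for illustration; they are not taken from the paper's IVI case study.

```python
# Generic sketch: interdependent threats as a small Bayesian Network.
try:  # pgmpy renamed the class in newer releases
    from pgmpy.models import DiscreteBayesianNetwork as BayesianNetwork
except ImportError:
    from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two STRIDE-style threats feeding a system-compromise node.
model = BayesianNetwork([("Spoofing", "Compromise"),
                         ("Tampering", "Compromise")])

cpd_s = TabularCPD("Spoofing", 2, [[0.9], [0.1]])     # P(threat) = 0.1
cpd_t = TabularCPD("Tampering", 2, [[0.95], [0.05]])  # P(threat) = 0.05
# P(Compromise | Spoofing, Tampering): rows = no / yes.
cpd_c = TabularCPD("Compromise", 2,
                   [[0.99, 0.4, 0.3, 0.05],   # no compromise
                    [0.01, 0.6, 0.7, 0.95]],  # compromise
                   evidence=["Spoofing", "Tampering"],
                   evidence_card=[2, 2])

model.add_cpds(cpd_s, cpd_t, cpd_c)
assert model.check_model()

# Interdependence in action: observing spoofing raises the posterior
# probability of compromise.
posterior = VariableElimination(model).query(
    ["Compromise"], evidence={"Spoofing": 1})
print(posterior)
```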
- An Approach to Technical AGI Safety and Security [72.83728459135101]
We develop an approach to address the risk of harms consequential enough to significantly harm humanity. We focus on technical approaches to misuse and misalignment. We briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
arXiv Detail & Related papers (2025-04-02T15:59:31Z)
- Reasoning Under Threat: Symbolic and Neural Techniques for Cybersecurity Verification [0.0]
This survey presents a comprehensive overview of the role of automated reasoning in cybersecurity. We examine state-of-the-art (SOTA) tools and frameworks, explore integrations with AI for neural-symbolic reasoning, and highlight critical research gaps. The paper concludes with a set of well-grounded future research directions, aiming to foster the development of secure systems.
arXiv Detail & Related papers (2025-03-27T11:41:53Z)
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Simulation of Multi-Stage Attack and Defense Mechanisms in Smart Grids [2.0766068042442174]
We introduce a simulation environment that replicates the power grid's infrastructure and communication dynamics. The framework generates diverse, realistic attack data to train machine learning algorithms for detecting and mitigating cyber threats. It also provides a controlled, flexible platform to evaluate emerging security technologies, including advanced decision support systems.
arXiv Detail & Related papers (2024-12-09T07:07:17Z)
- ACRIC: Securing Legacy Communication Networks via Authenticated Cyclic Redundancy Integrity Check [98.34702864029796]
Recent security incidents in safety-critical industries exposed how the lack of proper message authentication enables attackers to inject malicious commands or alter system behavior. These shortcomings have prompted new regulations that emphasize the pressing need to strengthen cybersecurity. We introduce ACRIC, a message authentication solution to secure legacy industrial communications. (A generic illustration of the underlying problem follows this entry.)
arXiv Detail & Related papers (2024-11-21T18:26:05Z)
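To make the missing-authentication problem concrete, here is a generic stdlib-only sketch that appends a truncated HMAC tag next to a legacy CRC field. It illustrates the problem space only; it is not ACRIC's construction, which the paper specifies in detail.

```python
# Generic sketch: a legacy frame carries only a CRC (integrity against
# noise, not against attackers); a keyed MAC is added for authenticity.
# This is NOT ACRIC's mechanism, just an illustration of the gap.
import hmac, hashlib, zlib

KEY = b"shared-secret"  # pre-shared key, assumed already provisioned

def protect(payload: bytes) -> bytes:
    crc = zlib.crc32(payload).to_bytes(4, "big")               # legacy CRC
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()[:4]  # truncated MAC
    return payload + crc + tag

def verify(frame: bytes) -> bytes:
    payload, crc, tag = frame[:-8], frame[-8:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != crc:
        raise ValueError("corrupted frame")
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()[:4]
    if not hmac.compare_digest(expected, tag):
        raise ValueError("authentication failed")  # injected/altered command
    return payload

print(verify(protect(b"OPEN BREAKER 3")))
```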
- A Model Based Framework for Testing Safety and Security in Operational Technology Environments [0.46040036610482665]
We propose a model-based testing approach, which we consider a promising way to analyze the safety and security behavior of a system under test.
The underlying framework is divided into four parts, according to the critical factors in testing operational technology environments.
arXiv Detail & Related papers (2023-06-22T05:37:09Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
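For context on the entry above, the sketch below is the textbook control-barrier-function quadratic program for a one-dimensional single integrator, written with cvxpy. It shows only the plain CBF safety filter, not the paper's model-uncertainty-aware reformulation or its event-triggered data collection strategy.

```python
# Textbook CBF-QP safety filter for x_dot = u with safe set h(x) = x >= 0.
import cvxpy as cp

alpha = 1.0   # class-K gain in the CBF condition
x = 0.5       # current state (inside the safe set)
u_des = -2.0  # nominal control input that would drive x unsafe

u = cp.Variable()
# CBF condition for x_dot = u and h(x) = x:  h_dot = u >= -alpha * h(x)
prob = cp.Problem(cp.Minimize(cp.square(u - u_des)),
                  [u >= -alpha * x])
prob.solve()

print(u.value)  # -0.5: the filter minimally modifies u_des to stay safe
```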
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)