Generalized Security-Preserving Refinement for Concurrent Systems
- URL: http://arxiv.org/abs/2511.06862v1
- Date: Mon, 10 Nov 2025 09:06:01 GMT
- Title: Generalized Security-Preserving Refinement for Concurrent Systems
- Authors: Huan Sun, David Sanán, Jingyi Wang, Yongwang Zhao, Jun Sun, Wenhai Wang
- Abstract summary: We present a refinement-based compositional approach to prove that IFS properties are preserved between implementation and abstraction. We apply our approach to verify two non-trivial case studies against a collection of security policies.
- Score: 36.14488865240512
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensuring compliance with Information Flow Security (IFS) is known to be challenging, especially for concurrent systems with large codebases such as multicore operating system (OS) kernels. Refinement, which verifies that an implementation preserves certain properties of a more abstract specification, is promising for tackling such challenges. However, in terms of refinement-based verification of security properties, existing techniques are still restricted to sequential systems or lack the expressiveness needed to capture complex security policies for concurrent systems. In this work, we present a generalized security-preserving refinement technique, particularly for verifying the IFS of concurrent systems governed by potentially complex security policies. We formalize the IFS properties for concurrent systems and present a refinement-based compositional approach to prove that the generalized security properties (e.g., intransitive noninterference) are preserved between implementation and abstraction. The key intuition enabling such reasoning, compared to previous refinement work, is to establish a step-mapping relation between the implementation and the abstraction, which is sufficient to ensure that every paired step (in the abstraction and the implementation, respectively) is either permitted or prohibited by the security policy. We apply our approach to verify two non-trivial case studies against a collection of security policies. Our proofs are fully mechanized in Isabelle/HOL, during which we identified that two covert channels previously reported in the ARINC 653 single-core standard also exist in the ARINC 653 multicore standard. We subsequently proved the correctness of the revised mechanism, showcasing the effectiveness of our approach.
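The key intuition above, pairing abstraction and implementation steps so that each is judged identically by the security policy, resembles the classical unwinding conditions for noninterference. The following is a minimal, self-contained sketch of that idea (all names, the toy policy, and the step functions are illustrative assumptions, not the authors' Isabelle/HOL formalization): a step taken by a domain must not change another domain's view of the state unless the policy permits that flow.

```python
# Toy unwinding-style check for information flow security (illustrative
# sketch only; names and structure are assumptions, not the paper's
# mechanized formalization).

# An intransitive-style policy: an information flow src -> dst is allowed
# iff the pair is listed. Note there is no ("high", "low") entry.
POLICY = {("low", "low"), ("low", "high"), ("high", "high")}

def permitted(src, dst):
    """Policy verdict for an information flow from src to dst."""
    return (src, dst) in POLICY

def view(state, obs):
    """What domain `obs` can observe of a state (here: its own variable)."""
    return state.get(obs)

def locally_respects(step, acting, domains, states):
    """If the policy prohibits acting -> obs, the step must leave
    obs's view of the state unchanged."""
    for s in states:
        t = step(s)
        for obs in domains:
            if not permitted(acting, obs) and view(s, obs) != view(t, obs):
                return False  # covert channel: prohibited flow observed
    return True

# Abstract-level step: `high` updates only its own variable.
def abs_high_step(s):
    t = dict(s)
    t["high"] = t["high"] + 1
    return t

# Implementation-level step: leaks a bit into `low` (a covert channel).
def impl_high_step(s):
    t = dict(s)
    t["high"] += 1
    t["low"] = t["high"] % 2
    return t

states = [{"low": 0, "high": 0}, {"low": 1, "high": 3}]
doms = ["low", "high"]
assert locally_respects(abs_high_step, "high", doms, states)       # secure
assert not locally_respects(impl_high_step, "high", doms, states)  # leak found
```

A security-preserving refinement in this spirit would require every implementation step, paired with its abstract counterpart via the step mapping, to pass the same policy verdict, so a leak like `impl_high_step` could not refine the secure `abs_high_step`.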
Related papers
- Kernel-Based Learning of Safety Barriers [0.9367224590861915]
Rapid integration of AI algorithms in safety-critical applications is raising concerns about the ability to meet stringent safety standards. Traditional tools for formal safety verification struggle with the black-box nature of AI-driven systems. We present a data-driven approach for safety verification and synthesis of black-box systems with discrete-time dynamics.
arXiv Detail & Related papers (2026-01-17T10:42:35Z) - Secure Data Bridging in Industry 4.0: An OPC UA Aggregation Approach for Including Insecure Legacy Systems [0.4675863182439614]
The increased connectivity of industrial networks has led to a surge in cyberattacks. Modern Industry 4.0 technologies, such as OPC UA, offer enhanced resilience against these threats. Many systems do not yet implement these technologies, or only partially. This paper reviews existing solutions to address this challenge by analysing their approaches, advantages, and limitations.
arXiv Detail & Related papers (2026-01-16T01:18:31Z) - A Dynamical Systems Framework for Reinforcement Learning Safety and Robustness Verification [1.104960878651584]
This paper introduces a novel framework that addresses the lack of formal methods for verifying the robustness and safety of learned policies. By leveraging tools from dynamical systems theory, we identify and visualize Lagrangian Coherent Structures (LCS) that act as the hidden "skeleton" governing the system's behavior. We show that this framework provides a comprehensive and interpretable assessment of policy behavior, successfully identifying critical flaws in policies that appear successful based on reward alone.
arXiv Detail & Related papers (2025-08-21T14:00:26Z) - Beyond Algorithmic Proofs: Towards Implementation-Level Provable Security [1.338174941551702]
We present Implementation-Level Provable Security, a new paradigm that defines security in terms of structurally verifiable resilience against real-world attack surfaces during deployment. We present SEER (Secure and Efficient Encryption-based Erasure via Ransomware), a file destruction system that repurposes and reinforces the encryption core of Babuk ransomware.
arXiv Detail & Related papers (2025-08-02T01:58:06Z) - Provably Secure Retrieval-Augmented Generation [7.412110686946628]
This paper proposes the first provably secure framework for Retrieval-Augmented Generation (RAG) systems. Our framework employs a pre-storage full-encryption scheme to ensure dual protection of both retrieved content and vector embeddings.
arXiv Detail & Related papers (2025-08-01T21:37:16Z) - The Alignment Trap: Complexity Barriers [0.0]
This paper argues that AI alignment is not merely difficult, but is founded on a fundamental logical contradiction. We first establish the paradox: we use machine learning precisely because we cannot enumerate all necessary safety rules. This paradox is then confirmed by a set of five independent mathematical proofs.
arXiv Detail & Related papers (2025-06-12T02:30:30Z) - Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation [52.626086874715284]
We introduce a novel problem formulation called Abstract DNN-Verification, which verifies a hierarchical structure of unsafe outputs. By leveraging abstract interpretation and reasoning about output reachable sets, our approach enables assessing multiple safety levels during the formal verification process. Our contributions include a theoretical exploration of the relationship between our novel abstract safety formulation and existing approaches.
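The idea of reasoning about output reachable sets via abstract interpretation can be sketched with a minimal interval analysis (a generic illustration under assumed toy weights and thresholds, not the paper's Abstract DNN-Verification algorithm): propagate input intervals through an affine layer and a ReLU, then compare the over-approximated output bounds against a hierarchy of safety levels.

```python
# Minimal interval-abstraction sketch (generic illustration; weights,
# thresholds, and level names are assumptions, not the paper's method).

def affine_interval(bounds, W, b):
    """Propagate per-input intervals through y = W x + b (exact for affine)."""
    out = []
    for row, bias in zip(W, b):
        lo = hi = bias
        for w, (l, u) in zip(row, bounds):
            lo += w * l if w >= 0 else w * u
            hi += w * u if w >= 0 else w * l
        out.append((lo, hi))
    return out

def relu_interval(bounds):
    """ReLU is monotone, so clamping each endpoint at 0 is sound."""
    return [(max(0.0, l), max(0.0, u)) for (l, u) in bounds]

# Tiny network: 2 inputs -> 2 outputs, input box [0,1] x [0,1].
W, b = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]
out = relu_interval(affine_interval([(0.0, 1.0), (0.0, 1.0)], W, b))

# Hierarchical check: output 0 must stay below increasingly strict caps,
# mirroring the idea of assessing multiple safety levels at once.
levels = {"weak": 2.0, "strict": 0.5}
verdicts = {name: out[0][1] <= cap for name, cap in levels.items()}
# Here the reachable set satisfies the weak level but not the strict one.
```

The same output bounds answer every level of the hierarchy in one pass, which is the appeal of verifying a structure of unsafe outputs rather than a single property.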
arXiv Detail & Related papers (2025-05-08T13:29:46Z) - ACRIC: Securing Legacy Communication Networks via Authenticated Cyclic Redundancy Integrity Check [98.34702864029796]
Recent security incidents in safety-critical industries exposed how the lack of proper message authentication enables attackers to inject malicious commands or alter system behavior. These shortcomings have prompted new regulations that emphasize the pressing need to strengthen cybersecurity. We introduce ACRIC, a message authentication solution to secure legacy industrial communications.
arXiv Detail & Related papers (2024-11-21T18:26:05Z) - Safe Inputs but Unsafe Output: Benchmarking Cross-modality Safety Alignment of Large Vision-Language Model [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment. To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations. Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z) - Data-Driven Distributionally Robust Safety Verification Using Barrier Certificates and Conditional Mean Embeddings [0.24578723416255752]
In a pursuit of developing scalable formal verification algorithms without shifting the problem to unrealistic assumptions, we employ the concept of barrier certificates.
We show how to solve the resulting program efficiently using sum-of-squares optimization and a Gaussian process envelope.
arXiv Detail & Related papers (2024-03-15T17:32:02Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z) - Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments [84.3830478851369]
We propose a safe reinforcement learning approach that can jointly learn the environment and optimize the control policy.
Our approach can effectively enforce hard safety constraints and significantly outperform CMDP-based baseline methods in system safe rate measured via simulations.
arXiv Detail & Related papers (2022-09-29T20:49:25Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Safe Reinforcement Learning via Confidence-Based Filters [78.39359694273575]
We develop a control-theoretic approach for certifying state safety constraints for nominal policies learned via standard reinforcement learning techniques.
We provide formal safety guarantees, and empirically demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2022-07-04T11:43:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.