Beyond Algorithmic Proofs: Towards Implementation-Level Provable Security
- URL: http://arxiv.org/abs/2508.01144v1
- Date: Sat, 02 Aug 2025 01:58:06 GMT
- Title: Beyond Algorithmic Proofs: Towards Implementation-Level Provable Security
- Authors: Jiahui Shang, Luning Zhang, Zhongxiang Zheng
- Abstract summary: We present Implementation-Level Provable Security, a new paradigm that defines security in terms of structurally verifiable resilience against real-world attack surfaces during deployment. We present SEER (Secure and Efficient Encryption-based Erasure via Ransomware), a file destruction system that repurposes and reinforces the encryption core of Babuk ransomware.
- Score: 1.338174941551702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While traditional cryptographic research focuses on algorithm-level provable security, many real-world attacks exploit weaknesses in system implementations, such as memory mismanagement, poor entropy sources, and insecure key lifecycles. Existing approaches address these risks in isolation but lack a unified, verifiable framework for modeling implementation-layer security. In this work, we propose Implementation-Level Provable Security, a new paradigm that defines security in terms of structurally verifiable resilience against real-world attack surfaces during deployment. To demonstrate its feasibility, we present SEER (Secure and Efficient Encryption-based Erasure via Ransomware), a file destruction system that repurposes and reinforces the encryption core of Babuk ransomware. SEER incorporates key erasure, entropy validation, and execution consistency checks to ensure a well-constrained, auditable attack surface. Our evaluation shows that SEER achieves strong irrecoverability guarantees while maintaining practical performance. This work demonstrates a shift from abstract theoretical models toward practically verifiable implementation-layer security.
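As an illustration of the encrypt-then-erase pattern the abstract describes (encryption-based destruction, entropy validation, key erasure), here is a minimal Python sketch. The function names, the AES-GCM choice, and the entropy threshold are assumptions for illustration, not SEER's actual implementation, and Python cannot guarantee that key material is truly wiped from memory.

```python
import math
import os
from collections import Counter

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def shannon_entropy(data: bytes) -> float:
    """Bits of Shannon entropy per byte of `data` (empirical distribution)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


def destroy_file(path: str) -> None:
    key = bytearray(os.urandom(32))            # ephemeral 256-bit key
    # Crude entropy validation: 32 ideally-random bytes should score close
    # to 5 bits/byte; a much lower score hints at a broken entropy source.
    if shannon_entropy(bytes(key)) < 4.0:
        raise RuntimeError("entropy validation failed")
    nonce = os.urandom(12)
    with open(path, "rb") as f:
        plaintext = f.read()
    ciphertext = AESGCM(bytes(key)).encrypt(nonce, plaintext, None)
    with open(path, "wb") as f:
        f.write(ciphertext)                    # overwrite contents with ciphertext
    for i in range(len(key)):                  # best-effort key erasure; Python offers
        key[i] = 0                             # no hard guarantee that copies are gone
```

Irrecoverability in this pattern rests on the ciphertext being useless once the only copy of the key has been destroyed.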
Related papers
- Provably Secure Retrieval-Augmented Generation [7.412110686946628]
This paper proposes the first provably secure framework for Retrieval-Augmented Generation (RAG) systems. Our framework employs a pre-storage full-encryption scheme to ensure dual protection of both retrieved content and vector embeddings.
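A minimal sketch of the pre-storage full-encryption idea, assuming an AES-GCM AEAD and a record layout of my own choosing; the paper's actual scheme and key management are not shown here.

```python
import os
import struct

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_record(key: bytes, chunk_text: str, embedding: list) -> dict:
    """Encrypt both the retrievable text and its embedding before they
    are written to the vector store."""
    aead = AESGCM(key)
    vec_bytes = struct.pack(f"{len(embedding)}f", *embedding)
    n_text, n_vec = os.urandom(12), os.urandom(12)
    return {
        "text": (n_text, aead.encrypt(n_text, chunk_text.encode("utf-8"), b"text")),
        "vector": (n_vec, aead.encrypt(n_vec, vec_bytes, b"vector")),
    }


# Example key: AESGCM.generate_key(bit_length=256)
```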
arXiv Detail & Related papers (2025-08-01T21:37:16Z)
- Secure Tug-of-War (SecTOW): Iterative Defense-Attack Training with Reinforcement Learning for Multimodal Model Security [63.41350337821108]
We propose Secure Tug-of-War (SecTOW) to enhance the security of multimodal large language models (MLLMs). SecTOW consists of two modules: a defender and an auxiliary attacker, both trained iteratively using reinforcement learning (GRPO). We show that SecTOW significantly improves security while preserving general performance.
arXiv Detail & Related papers (2025-07-29T17:39:48Z)
- Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation [52.626086874715284]
We introduce a novel problem formulation called Abstract DNN-Verification, which verifies a hierarchical structure of unsafe outputs. By leveraging abstract interpretation and reasoning about output reachable sets, our approach enables assessing multiple safety levels during the formal verification process. Our contributions include a theoretical exploration of the relationship between our novel abstract safety formulation and existing approaches.
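The reachable-set reasoning can be pictured with plain interval bound propagation; the sketch below is a generic illustration of that idea, not the paper's hierarchical abstract-safety formulation.

```python
import numpy as np


def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = W x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b


def interval_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)


# Toy 2-layer network: if the upper bound of the "unsafe" output stays
# below 0 over the whole input box, that unsafe behaviour is ruled out.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print("unsafe output upper bound:", hi[0])
```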
arXiv Detail & Related papers (2025-05-08T13:29:46Z)
- Cybersecurity through Entropy Injection: A Paradigm Shift from Reactive Defense to Proactive Uncertainty [0.0]
Entropy injection deliberately infuses randomness into security mechanisms to increase unpredictability and enhance system security. We show that entropy injection can significantly reduce attack probability, with some implementations achieving more than a 90% reduction with minimal performance impact. We conclude that entropy injection represents a paradigm shift from reactive defense to proactive uncertainty management.
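A minimal sketch of what entropy injection can look like in practice, with mechanism choices (response-time jitter, CSPRNG-backed tokens) that are mine rather than the paper's.

```python
import secrets
import time


def with_timing_jitter(handler, max_jitter_ms: int = 20):
    """Wrap a request handler so its response time carries injected
    randomness, masking data-dependent timing differences."""
    def wrapped(*args, **kwargs):
        result = handler(*args, **kwargs)
        time.sleep(secrets.randbelow(max_jitter_ms + 1) / 1000.0)
        return result
    return wrapped


def fresh_session_token() -> str:
    # 256 bits from the OS CSPRNG: unpredictability comes from entropy,
    # not from keeping the scheme itself secret.
    return secrets.token_hex(32)
```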
arXiv Detail & Related papers (2025-04-15T23:08:57Z)
- Exposing the Ghost in the Transformer: Abnormal Detection for Large Language Models via Hidden State Forensics [5.384257830522198]
The deployment of Large Language Models (LLMs) in critical applications has introduced severe reliability and security risks. These vulnerabilities have been weaponized by malicious actors, leading to unauthorized access, widespread misinformation, and compromised system integrity. We introduce a novel approach to detecting abnormal behaviors in LLMs via hidden state forensics.
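A toy illustration of hidden-state forensics, assuming access to a model's hidden-state vectors and using a simple Mahalanobis-distance outlier score in place of the paper's actual detector.

```python
import numpy as np


class HiddenStateMonitor:
    """Fit on hidden states from benign prompts, then flag outliers."""

    def fit(self, benign_states: np.ndarray) -> None:
        self.mean = benign_states.mean(axis=0)
        cov = np.cov(benign_states, rowvar=False)
        self.prec = np.linalg.inv(cov + 1e-3 * np.eye(cov.shape[0]))

    def score(self, state: np.ndarray) -> float:
        d = state - self.mean
        return float(d @ self.prec @ d)   # squared Mahalanobis distance


monitor = HiddenStateMonitor()
monitor.fit(np.random.default_rng(0).normal(size=(500, 16)))  # stand-in for real activations
print(monitor.score(np.full(16, 3.0)) > 50.0)                  # True -> flagged as abnormal
```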
arXiv Detail & Related papers (2025-04-01T05:58:14Z)
- Securing Federated Learning with Control-Flow Attestation: A Novel Framework for Enhanced Integrity and Resilience against Adversarial Attacks [2.28438857884398]
Federated Learning (FL), as a distributed machine learning paradigm, has introduced new cybersecurity challenges.
This study proposes an innovative security framework inspired by Control-Flow Attestation (CFA) mechanisms, traditionally used in cybersecurity.
We authenticate and verify the integrity of model updates across the network, effectively mitigating risks associated with model poisoning and adversarial interference.
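A rough sketch of the verification step, assuming a shared key per client and an HMAC tag that binds each model update to a digest of its control-flow log; the framework's real attestation protocol is more involved.

```python
import hashlib
import hmac


def attest_update(shared_key: bytes, update_bytes: bytes, control_flow_log: bytes) -> bytes:
    """Client side: bind the model update to a digest of its control-flow log."""
    digest = hashlib.sha256(
        update_bytes + hashlib.sha256(control_flow_log).digest()
    ).digest()
    return hmac.new(shared_key, digest, hashlib.sha256).digest()


def verify_update(shared_key: bytes, update_bytes: bytes,
                  control_flow_log: bytes, tag: bytes) -> bool:
    """Server side: recompute the tag and compare in constant time."""
    expected = attest_update(shared_key, update_bytes, control_flow_log)
    return hmac.compare_digest(expected, tag)
```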
arXiv Detail & Related papers (2024-03-15T04:03:34Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The lack of message authentication on the Controller Area Network (CAN) bus leaves in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
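Many of the surveyed protocols follow the same basic recipe: append a counter for freshness and a truncated MAC that fits the small CAN payload. Below is a generic sketch of that recipe, not any specific protocol from the survey.

```python
import hashlib
import hmac
import struct


def authenticate_frame(key: bytes, can_id: int, payload: bytes, counter: int) -> bytes:
    """Compute a 4-byte truncated MAC over (ID, counter, payload).
    The monotonically increasing counter provides freshness against replay."""
    msg = struct.pack(">IQ", can_id, counter) + payload
    return hmac.new(key, msg, hashlib.sha256).digest()[:4]


def verify_frame(key: bytes, can_id: int, payload: bytes, counter: int, tag: bytes) -> bool:
    return hmac.compare_digest(authenticate_frame(key, can_id, payload, counter), tag)
```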
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
- Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification [22.078088272837068]
Federated Learning (FL) systems are vulnerable to adversarial attacks, such as model poisoning and backdoor attacks. We propose a novel anomaly detection method designed specifically for practical FL scenarios. Our approach employs a two-stage, conditionally activated detection mechanism.
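A toy sketch of the two-stage, conditionally activated idea, with placeholder scores and thresholds of my own; the paper additionally verifies the server-side computation with zero-knowledge proofs, which is omitted here.

```python
import numpy as np


def detect_anomalies(updates: np.ndarray, trigger_z: float = 3.0):
    """Stage 1: cheap global check on update norms.
    Stage 2 (only if triggered): per-client scoring against the median update."""
    norms = np.linalg.norm(updates, axis=1)
    z = (norms - norms.mean()) / (norms.std() + 1e-9)
    if np.all(np.abs(z) < trigger_z):
        return []                        # nothing suspicious; skip the costly stage
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    cutoff = np.median(dists) + 3 * dists.std()
    return [i for i, d in enumerate(dists) if d > cutoff]
```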
arXiv Detail & Related papers (2023-10-06T07:09:05Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of Control Barrier Function (CBF)-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
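For reference, the pointwise feasibility condition in question builds on the standard control barrier function inequality; the statement below uses my own notation and adds a simple uncertainty margin, so it should be read as a generic form rather than the paper's exact reformulation.

```latex
% Safe set C = {x : h(x) >= 0}.  The controller is pointwise feasible at x
% if some admissible input keeps h from decreasing too fast, even after
% subtracting a margin sigma(x) that accounts for model uncertainty:
\exists\, u \in \mathcal{U}:\quad
  L_f h(x) + L_g h(x)\,u \;-\; \sigma(x) \;\ge\; -\alpha\bigl(h(x)\bigr),
\qquad \alpha \in \mathcal{K}_\infty .
```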
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Safe Reinforcement Learning via Confidence-Based Filters [78.39359694273575]
We develop a control-theoretic approach for certifying state safety constraints for nominal policies learned via standard reinforcement learning techniques.
We provide formal safety guarantees, and empirically demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2022-07-04T11:43:23Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
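A minimal sketch of the smoothing operation itself: the policy acts on noise-perturbed observations and a majority-vote action is returned; the certified reward bound is then derived from the noise scale and the vote margin, which is not computed here. The names and the discrete-action assumption are mine.

```python
import numpy as np


def smoothed_action(policy, obs: np.ndarray, sigma: float, n_samples: int = 100, rng=None):
    """Randomized smoothing of a discrete-action policy: evaluate it on
    Gaussian-perturbed copies of the observation and take a majority vote."""
    rng = rng or np.random.default_rng()
    votes = {}
    for _ in range(n_samples):
        a = policy(obs + rng.normal(scale=sigma, size=obs.shape))
        votes[a] = votes.get(a, 0) + 1
    return max(votes, key=votes.get)


# Example with a trivial threshold policy on a 1-D observation.
print(smoothed_action(lambda o: int(o[0] > 0.0), np.array([0.2]), sigma=0.1))
```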
arXiv Detail & Related papers (2021-06-21T21:42:08Z)