Checking and Automating Confidentiality Theory in Isabelle/UTP
- URL: http://arxiv.org/abs/2310.10658v1
- Date: Thu, 7 Sep 2023 23:53:33 GMT
- Title: Checking and Automating Confidentiality Theory in Isabelle/UTP
- Authors: Lex Bailey, Jim Woodcock, Simon Foster, Roberto Metere
- Abstract summary: We argue that confidentiality should be promoted as a normal part of program verification.
We show how our mechanisation can be used to formally verify some of the examples from Bank's work.
- Score: 1.1849561189229347
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The severity of recent vulnerabilities discovered on modern CPUs, e.g., Spectre [1], highlights how information leakage can have devastating effects on the security of computer systems. At the same time, it suggests that confidentiality should be promoted as a normal part of program verification, to discover and mitigate such vulnerabilities early in development. The theory we propose is primarily based on Bank's theory [2], a framework for reasoning about confidentiality properties formalised in the Unifying Theories of Programming (UTP) [3]. We mechanised our encoding in the current implementation of UTP in the Isabelle theorem prover, Isabelle/UTP [4]. We have identified some theoretical issues in Bank's original framework. Finally, we demonstrate how our mechanisation can be used to formally verify some of the examples from Bank's work.
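The core idea of confidentiality as a verification goal can be illustrated, far outside UTP or Isabelle, by a brute-force noninterference check: a program keeps its secret input confidential when its observable output does not vary with that secret. The two toy programs and their input ranges below are invented for illustration and are not taken from Bank's framework.

```python
from itertools import product

def leaky(secret: int, public: int) -> int:
    # Hypothetical program: branching on `secret` leaks one bit of it.
    return public + 1 if secret > 0 else public

def safe(secret: int, public: int) -> int:
    # Output depends only on the public input.
    return public * 2

def noninterferent(prog, secrets, publics) -> bool:
    """True if, for every public input, the output is identical for
    all secret inputs, i.e. the secret cannot flow to an observer."""
    return all(
        len({prog(s, p) for s in secrets}) == 1
        for p in publics
    )

print(noninterferent(leaky, range(-2, 3), range(0, 4)))  # False: output reveals sign of secret
print(noninterferent(safe, range(-2, 3), range(0, 4)))   # True
```

Exhaustive enumeration only works for tiny finite state spaces; the point of a mechanised theory like the one in this paper is to prove such properties symbolically for arbitrary programs.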
Related papers
- A SAT-based approach to rigorous verification of Bayesian networks [13.489622701621698]
We introduce a verification framework tailored for Bayesian networks, designed to address these drawbacks.
Our framework comprises two key components: (1) a two-step compilation and encoding scheme that translates Bayesian networks into Boolean logic literals, and (2) formal verification queries that leverage these literals to verify various properties encoded as constraints.
We benchmark the efficiency of our verification scheme and demonstrate its practical utility in real-world scenarios.
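The two-component scheme in this summary can be caricatured in miniature: give each discrete event of a toy Bayesian network a Boolean literal, then answer verification queries by weighted model counting over assignments (a stand-in for the paper's SAT-based machinery). The two-node network, its probabilities, and the threshold below are invented for illustration.

```python
from itertools import product

# Toy two-node network: Rain -> WetGrass, with invented probabilities.
P_RAIN = 0.3
P_WET_GIVEN = {True: 0.9, False: 0.1}   # P(WetGrass=1 | Rain)

def weight(rain: bool, wet: bool) -> float:
    """Probability of one complete assignment (one model of the encoding)."""
    p = P_RAIN if rain else 1 - P_RAIN
    q = P_WET_GIVEN[rain] if wet else 1 - P_WET_GIVEN[rain]
    return p * q

def query(constraint) -> float:
    """Sum the weights of all assignments satisfying `constraint`,
    i.e. weighted model counting over the Boolean encoding."""
    return sum(weight(r, w)
               for r, w in product([True, False], repeat=2)
               if constraint(r, w))

p_wet = query(lambda r, w: w)   # P(WetGrass=1) = 0.3*0.9 + 0.7*0.1
print(round(p_wet, 4))          # 0.34
# Verification query: does P(WetGrass=1) stay below an invented threshold?
print(p_wet <= 0.5)             # True
```

A real SAT-based pipeline avoids the exponential enumeration above by compiling the network once and dispatching each query to a solver.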
arXiv Detail & Related papers (2024-08-02T03:06:51Z) - A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The design of the Controller Area Network (CAN) bus leaves in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z) - Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z) - Asynchronous Authentication [3.038642416291856]
Digital asset heists and identity theft cases illustrate the urgent need to revisit the fundamentals of user authentication.
We formalize the general, common case of asynchronous authentication, with unbounded message propagation time.
Our model allows for eventual message delivery, while bounding execution time to maintain cryptographic guarantees.
arXiv Detail & Related papers (2023-12-21T15:53:54Z) - Penetrating Shields: A Systematic Analysis of Memory Corruption Mitigations in the Spectre Era [6.212464457097657]
We study speculative shield bypass attacks that leverage speculative execution attacks to leak secrets that are critical to the security of memory corruption mitigations.
We present three proof-of-concept attacks targeting an already-deployed mitigation mechanism and two state-of-the-art academic proposals.
arXiv Detail & Related papers (2023-09-08T04:43:33Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
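For reference, the training step that VLR splits across parties is ordinary mini-batch gradient descent on the logistic loss. The centralised sketch below uses an invented four-point dataset and learning rate; the federated variant partitions the features of each example between parties, which is where the privacy questions arise.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, data):
    """Mean negative log-likelihood of logistic regression on `data`."""
    eps = 1e-12
    total = 0.0
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        total -= y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
    return total / len(data)

def minibatch_step(w, batch, lr=0.5):
    """One gradient-descent step on a single mini-batch."""
    grad = [0.0] * len(w)
    for x, y in batch:
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - y
        for j, xj in enumerate(x):
            grad[j] += err * xj / len(batch)
    return [wi - lr * g for wi, g in zip(w, grad)]

# Invented, linearly separable toy data: label 1 iff x1 + x2 > 0.
data = [((1.0, 0.0), 1), ((0.5, 0.5), 1), ((-1.0, 0.0), 0), ((-0.5, -0.5), 0)]
batches = [data[:2], data[2:]]           # two fixed mini-batches

w = [0.0, 0.0]
before = loss(w, data)
for i in range(50):
    w = minibatch_step(w, batches[i % 2])
after = loss(w, data)
print(after < before)                    # gradient descent reduces the loss
```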
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Quantum Proofs of Deletion for Learning with Errors [91.3755431537592]
We construct the first fully homomorphic encryption scheme with certified deletion.
Our main technical ingredient is an interactive protocol by which a quantum prover can convince a classical verifier that a sample from the Learning with Errors distribution in the form of a quantum state was deleted.
arXiv Detail & Related papers (2022-03-03T10:07:32Z) - Experimental quantum key distribution certified by Bell's theorem [0.0]
Cryptographic key exchange protocols traditionally rely on computational conjectures to provide security against eavesdropping attacks.
Quantum key distribution protocols provide information-theoretic security against such attacks.
However, quantum protocols are subject to a new class of attacks exploiting implementation defects in the physical devices involved.
We present here the experimental realisation of a complete quantum key distribution protocol immune to these vulnerabilities.
arXiv Detail & Related papers (2021-09-29T17:52:48Z) - Certification of Iterative Predictions in Bayesian Neural Networks [79.15007746660211]
We compute lower bounds for the probability that trajectories of the BNN model reach a given set of states while avoiding a set of unsafe states.
We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies.
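To give a feel for what a "lower bound on reach-avoid probability" means, the sketch below simulates trajectories of an invented one-dimensional stochastic system (a stand-in for a sampled BNN transition model), counts those that reach a goal set while avoiding an unsafe set, and turns the empirical frequency into a conservative lower bound via Hoeffding's inequality. The dynamics, sets, and confidence level are all assumptions; the paper's bounds are computed formally rather than by sampling.

```python
import math, random

random.seed(1)

def step(x: float) -> float:
    """Invented stochastic dynamics: drift toward 1 plus Gaussian noise."""
    return x + 0.2 * (1.0 - x) + random.gauss(0.0, 0.05)

GOAL = (0.9, 1.1)      # reach this interval ...
UNSAFE = (1.5, 10.0)   # ... while never entering this one

def trajectory_ok(x0: float, horizon: int = 30) -> bool:
    x = x0
    for _ in range(horizon):
        x = step(x)
        if UNSAFE[0] <= x <= UNSAFE[1]:
            return False
        if GOAL[0] <= x <= GOAL[1]:
            return True
    return False

n = 2000
p_hat = sum(trajectory_ok(0.0) for _ in range(n)) / n
# Hoeffding: with probability >= 1 - delta,
# true p >= p_hat - sqrt(ln(1/delta) / (2n)).
delta = 0.01
lower = p_hat - math.sqrt(math.log(1 / delta) / (2 * n))
print(lower <= p_hat)   # a valid, conservative lower bound
```

A sampling bound like this holds only with confidence 1 - delta; certified approaches compute bounds that hold with certainty for the given model.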
arXiv Detail & Related papers (2021-05-21T05:23:57Z) - Towards a Theoretical Understanding of the Robustness of Variational Autoencoders [82.68133908421792]
We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations.
We develop a novel criterion for robustness in probabilistic models: $r$-robustness.
We show that VAEs trained using disentangling methods score well under our robustness metrics.
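Informally, a model is robust at an input when perturbations of bounded size move the output by at most some radius $r$. The toy numeric check below uses an invented deterministic "model" (tanh, which is 1-Lipschitz) and an empirical perturbation test; the paper's $r$-robustness criterion is probabilistic and defined over the VAE's reconstruction distribution, which this sketch does not capture.

```python
import math, random

random.seed(2)

def model(x: float) -> float:
    # Invented stand-in for a decoder/reconstruction map; tanh is 1-Lipschitz.
    return math.tanh(x)

def is_r_robust(x: float, r: float, eps: float, trials: int = 1000) -> bool:
    """Empirical check: under perturbations of size <= eps, does the
    output stay within r of model(x)?"""
    base = model(x)
    return all(
        abs(model(x + random.uniform(-eps, eps)) - base) <= r
        for _ in range(trials)
    )

print(is_r_robust(0.5, r=0.1, eps=0.1))   # True: 1-Lipschitz map, eps <= r
print(is_r_robust(0.5, r=0.01, eps=0.5))  # False: perturbation overwhelms r
```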
arXiv Detail & Related papers (2020-07-14T21:22:29Z) - Towards Probabilistic Verification of Machine Unlearning [30.892906429582904]
We propose a formal framework to study the design of verification mechanisms for data deletion requests.
We show that our approach has minimal effect on the machine learning service's accuracy but provides high confidence verification of unlearning.
arXiv Detail & Related papers (2020-03-09T16:39:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.