Assume but Verify: Deductive Verification of Leaked Information in Concurrent Applications (Extended Version)
- URL: http://arxiv.org/abs/2309.03442v1
- Date: Thu, 7 Sep 2023 01:51:41 GMT
- Authors: Toby Murray, Mukesh Tiwari, Gidon Ernst, David A. Naumann
- Abstract summary: We consider the problem of specifying and proving the security of non-trivial, concurrent programs that intentionally leak information.
We present a method that decomposes the problem into (a) proving that the program only leaks information it has declassified via assume annotations already widely used in deductive program verification; and (b) auditing the declassifications against a declarative security policy.
We show how condition (a) can be enforced by an extension of the existing program logic SecCSL, and how condition (b) can be checked by proving a set of simple entailments.
- Score: 1.3749490831384268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of specifying and proving the security of non-trivial, concurrent programs that intentionally leak information. We present a method that decomposes the problem into (a) proving that the program only leaks information it has declassified via assume annotations already widely used in deductive program verification; and (b) auditing the declassifications against a declarative security policy. We show how condition (a) can be enforced by an extension of the existing program logic SecCSL, and how (b) can be checked by proving a set of simple entailments. Part of the challenge is to define respective semantic soundness criteria and to formally connect these to the logic rules and policy audit. We support our methodology in an auto-active program verifier, which we apply to verify the implementations of various case study programs against a range of declassification policies.
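The core idea above can be illustrated with a minimal sketch. All names below are hypothetical and this is not SecCSL syntax: a `declassify` stand-in plays the role of an assume annotation, recording exactly what information the program intentionally releases, so that a verifier could establish condition (a) and a separate policy audit could check condition (b).

```python
# Hedged sketch (hypothetical names, not SecCSL syntax): declassification
# expressed through an assume-style annotation. A verifier would prove the
# program releases only what the annotation declares (condition (a)); a
# separate audit checks each declared release against the policy (condition (b)).

def declassify(value, description):
    """Stand-in for an assume annotation marking an intentional release.

    At runtime it is the identity function; it exists only to record,
    for the proof and the policy audit, exactly what is leaked.
    """
    return value

def check_pin(secret_pin: int, guess: int) -> bool:
    # Only the one-bit comparison result is declassified, never the PIN itself.
    return declassify(secret_pin == guess, "result of comparing guess to PIN")
```

A policy audit would then check that releasing the comparison result, and nothing more, is permitted by the declarative security policy.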
Related papers
- Learning Verifiable Control Policies Using Relaxed Verification [49.81690518952909]
This work proposes performing verification throughout training, aiming for policies whose properties can be evaluated at runtime.
The approach is to use differentiable reachability analysis and incorporate new components into the loss function.
arXiv Detail & Related papers (2025-04-23T16:54:35Z) - Few-shot Policy (de)composition in Conversational Question Answering [54.259440408606515]
We propose a neuro-symbolic framework to detect policy compliance using large language models (LLMs) in a few-shot setting.
We show that our approach soundly reasons about policy compliance conversations by extracting sub-questions to be answered, assigning truth values from contextual information, and explicitly producing a set of logic statements from the given policies.
We apply this approach to the popular policy compliance detection (PCD) and conversational machine reading benchmark ShARC, and show competitive performance with no task-specific fine-tuning.
arXiv Detail & Related papers (2025-01-20T08:40:15Z) - Handling expression evaluation under interference [0.0]
Hoare-style inference rules for program constructs permit the copying of expressions and tests from program text into logical contexts.
The "rely-guarantee" approach does tackle the issue of recording acceptable interference and offers a way to provide safe inference rules.
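The unsoundness that motivates this line of work can be shown with a small, deterministic illustration (hypothetical code, not from the paper): a test like `read() == read()`, copied from program text into a logical context, can evaluate to false when the environment writes the shared variable between the two reads.

```python
# Hedged illustration: the interfering environment step is simulated
# deterministically after each read, so the two reads of the "same"
# variable observe different values.

class SharedCell:
    """A shared variable whose environment increments it between our steps."""

    def __init__(self) -> None:
        self.v = 0

    def read(self) -> int:
        val = self.v
        self.v += 1  # rely condition: environment may increment at any time
        return val

cell = SharedCell()
same = cell.read() == cell.read()  # False: the two reads observe 0 and 1
```

Rely-guarantee reasoning records such permitted environment steps explicitly, so that inference rules which copy expressions into logical contexts remain sound.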
arXiv Detail & Related papers (2024-09-12T04:16:22Z) - Formally Verified Approximate Policy Iteration [11.602089225841633]
We show how the formalized algorithm can be refined to an executable, verified implementation.
The implementation is evaluated on benchmark problems to show its practicability.
As part of the refinement, we develop verified software to certify Linear Programming solutions.
arXiv Detail & Related papers (2024-06-11T15:07:08Z) - Unsupervised Continual Anomaly Detection with Contrastively-learned Prompt [80.43623986759691]
We introduce a novel Unsupervised Continual Anomaly Detection framework called UCAD.
The framework equips unsupervised anomaly detection (UAD) with continual learning capability through contrastively-learned prompts.
We conduct comprehensive experiments and set the benchmark on unsupervised continual anomaly detection and segmentation.
arXiv Detail & Related papers (2024-01-02T03:37:11Z) - Automatic Program Instrumentation for Automatic Verification (Extended Technical Report) [0.0]
In deductive verification and software model checking, dealing with certain specification language constructs can be problematic.
We propose instrumentation as a unifying verification paradigm that subsumes various existing ad-hoc approaches.
We implement our approach in the MonoCera tool, which is tailored to the verification of programs with aggregation.
arXiv Detail & Related papers (2023-05-26T14:55:35Z) - Fact-Checking Complex Claims with Program-Guided Reasoning [99.7212240712869]
Program-Guided Fact-Checking (ProgramFC) is a novel fact-checking model that decomposes complex claims into simpler sub-tasks.
We first leverage the in-context learning ability of large language models to generate reasoning programs.
We execute the program by delegating each sub-task to the corresponding sub-task handler.
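The decompose-and-delegate pattern described above can be sketched as follows. All handler names are hypothetical and this is not ProgramFC's actual API: a "reasoning program" is a list of sub-tasks, each dispatched to a handler, and the final verdict aggregates their results.

```python
# Hedged sketch of program-guided reasoning (hypothetical handlers, not
# ProgramFC's API). Each step is an (operation, argument) pair; handlers
# stand in for the sub-task models the paper delegates to.

def verify_step(claim: str) -> bool:
    # Toy stand-in for a fact-verification model.
    return "paris" in claim.lower()

def question_step(question: str) -> bool:
    # Toy stand-in for a question-answering model.
    return True

HANDLERS = {"VERIFY": verify_step, "QUESTION": question_step}

def execute_program(program):
    """Run each sub-task via its handler; the verdict is their conjunction."""
    return all(HANDLERS[op](arg) for op, arg in program)

program = [
    ("VERIFY", "Paris is in France"),
    ("QUESTION", "Is France in Europe?"),
]
```

In the paper's setting the program itself is generated by a large language model via in-context learning; here it is written by hand for illustration.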
arXiv Detail & Related papers (2023-05-22T06:11:15Z) - Conformance Checking with Uncertainty via SMT (Extended Version) [66.58864135810981]
We show how to solve the problem of checking conformance of uncertain logs against data-aware reference processes.
Our approach is modular, in that it homogeneously accommodates different types of uncertainty.
We show the correctness of our approach and witness feasibility through a proof-of-concept implementation.
arXiv Detail & Related papers (2022-06-15T11:39:45Z) - Stateless and Rule-Based Verification For Compliance Checking Applications [1.7403133838762452]
We present a formal logic-based framework for creating intelligent compliance checking systems.
SARV is a verification framework designed to simplify the overall process of verification for stateless and rule-based verification problems.
Based on 300 data experiments, the SARV-based compliance solution outperforms machine learning methods on a 3125-record software quality dataset.
arXiv Detail & Related papers (2022-04-14T17:31:33Z) - Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z) - An Abstraction-based Method to Verify Multi-Agent Deep Reinforcement-Learning Behaviours [8.95294551927446]
Multi-agent reinforcement learning (RL) often struggles to ensure the safe behaviours of the learning agents.
We present a methodology that combines formal verification with (deep) RL algorithms to guarantee the satisfaction of formally-specified safety constraints.
arXiv Detail & Related papers (2021-02-02T11:12:30Z) - Verification of indefinite-horizon POMDPs [63.6726420864286]
This paper considers the verification problem for partially observable MDPs.
We present an abstraction-refinement framework extending previous instantiations of the Lovejoy-approach.
arXiv Detail & Related papers (2020-06-30T21:01:52Z) - Hierarchical Variational Imitation Learning of Control Programs [131.7671843857375]
We propose a variational inference method for imitation learning of a control policy represented by parametrized hierarchical procedures (PHP).
Our method discovers the hierarchical structure in a dataset of observation-action traces of teacher demonstrations, by learning an approximate posterior distribution over the latent sequence of procedure calls and terminations.
We demonstrate a novel benefit of variational inference in the context of hierarchical imitation learning: in decomposing the policy into simpler procedures, inference can leverage acausal information that is unused by other methods.
arXiv Detail & Related papers (2019-12-29T08:57:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.