Security Properties through the Lens of Modal Logic
- URL: http://arxiv.org/abs/2309.09542v1
- Date: Mon, 18 Sep 2023 07:37:12 GMT
- Title: Security Properties through the Lens of Modal Logic
- Authors: Matvey Soloviev, Musard Balliu, Roberto Guanciale
- Abstract summary: We introduce a framework for reasoning about the security of computer systems using modal logic.
We show how to use our formalism to represent various variants of confidentiality, integrity, robust declassification and transparent endorsement.
- Score: 4.548429316641551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a framework for reasoning about the security of computer systems using modal logic. This framework is sufficiently expressive to capture a variety of known security properties, while also being intuitive and independent of syntactic details and enforcement mechanisms. We show how to use our formalism to represent various progress- and termination-(in)sensitive variants of confidentiality, integrity, robust declassification and transparent endorsement, and prove equivalence to standard definitions. The intuitive nature and closeness to semantic reality of our approach allows us to make explicit several hidden assumptions of these definitions, and identify potential issues and subtleties with them, while also holding the promise of formulating cleaner versions and future extension to entirely novel properties.
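As a rough illustration of the epistemic reading behind such properties (a hypothetical toy model, not the paper's actual formalism), confidentiality can be phrased over a Kripke structure: the attacker "knows" a fact at a world if it holds at every world the attacker cannot distinguish from it, and a secret is confidential when the attacker never knows its value.

```python
from itertools import product

# Toy Kripke model: worlds are (secret, public) pairs; the attacker
# observes only the public component. All names here are illustrative.
worlds = list(product([0, 1], [0, 1]))  # (high, low)

def indistinguishable(w, v):
    # Attacker's accessibility relation: same low observation.
    return w[1] == v[1]

def knows(prop, w):
    # Epistemic modality K: prop holds at every world the attacker
    # cannot tell apart from w.
    return all(prop(v) for v in worlds if indistinguishable(w, v))

def confidential(secret_of):
    # Confidentiality as ignorance: at no world does the attacker
    # know the exact value of the secret.
    return not any(
        knows(lambda v, s=secret_of(w): secret_of(v) == s, w)
        for w in worlds
    )

print(confidential(lambda w: w[0]))  # secret independent of observation: True
print(confidential(lambda w: w[1]))  # "secret" is the observation itself: False
```

The second call shows the dual failure case: when the secret coincides with the observable output, the attacker's knowledge pins it down at every world.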
Related papers
- An Algebraic Approach to Asymmetric Delegation and Polymorphic Label Inference (Technical Report) [3.183855005494611]
Language-based information flow control (IFC) enables reasoning about and enforcing security policies in decentralized applications.
It can be difficult to use IFC labels to model certain security assumptions, such as semi-honest agents.
We propose a semantic framework that allows formalizing asymmetric delegation, which is partial delegation of confidentiality or integrity.
arXiv Detail & Related papers (2025-04-29T05:00:17Z)
- Deliberative Alignment: Reasoning Enables Safer Language Models [64.60765108418062]
We introduce Deliberative Alignment, a new paradigm that teaches the model safety specifications and trains it to explicitly recall and accurately reason over the specifications before answering.
We used this approach to align OpenAI's o-series models, and achieved highly precise adherence to OpenAI's safety policies, without requiring human-written chain-of-thoughts or answers.
arXiv Detail & Related papers (2024-12-20T21:00:11Z)
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- DePrompt: Desensitization and Evaluation of Personal Identifiable Information in Large Language Model Prompts [11.883785681042593]
DePrompt is a desensitization protection and effectiveness evaluation framework for prompts.
We integrate contextual attributes to define privacy types, achieving high-precision PII entity identification.
Our framework is adaptable to prompts and can be extended to text usability-dependent scenarios.
arXiv Detail & Related papers (2024-08-16T02:38:25Z)
- A Model-oriented Reasoning Framework for Privacy Analysis of Complex Systems [2.001711587270359]
This paper proposes a reasoning framework for privacy properties of systems and their environments.
It can capture any knowledge leaks on different logical levels to answer the question: which entity can learn what?
arXiv Detail & Related papers (2024-05-14T06:52:56Z)
- On Cryptographic Mechanisms for the Selective Disclosure of Verifiable Credentials [39.4080639822574]
Verifiable credentials are a digital analogue of physical credentials.
They can be presented to verifiers to reveal attributes or even predicates about the attributes included in the credential.
One way to preserve privacy during presentation consists in selectively disclosing the attributes in a credential.
arXiv Detail & Related papers (2024-01-16T08:22:28Z)
- An Encoding of Abstract Dialectical Frameworks into Higher-Order Logic [57.24311218570012]
This approach allows for the computer-assisted analysis of abstract dialectical frameworks.
Exemplary applications include the formal analysis and verification of meta-theoretical properties.
arXiv Detail & Related papers (2023-12-08T09:32:26Z)
- ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- Graded Modal Types for Integrity and Confidentiality [0.25782420501870285]
Graded type systems allow different properties of a program to be tracked via annotating types with additional information.
One example is information flow control, in which types are graded by a lattice of security levels allowing noninterference properties to be automatically verified and enforced.
Integrity, a property specifying that trusted outputs must not depend on untrusted inputs, has not been examined in this context.
We analogize the situation to recent work on embedding both linearity and uniqueness types in a graded framework, and use this framing to demonstrate that we can enforce both integrity and confidentiality alongside one another.
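The lattice-graded flow discipline this entry describes can be sketched concretely (a minimal illustration with a two-point lattice for each property, not the paper's graded type system): a value may flow from source to sink only if its confidentiality grade may rise, and dually, its integrity grade may fall.

```python
# Confidentiality order: low ⊑ high (secrets may only flow "up").
CONF_ORDER = {("low", "low"), ("low", "high"), ("high", "high")}
# Integrity order (dual): trusted ⊑ untrusted, so trusted sinks
# must not receive untrusted data.
INT_ORDER = {("trusted", "trusted"), ("trusted", "untrusted"),
             ("untrusted", "untrusted")}

def flow_ok(src, dst):
    """A labeled value may flow src -> dst iff both gradings are respected."""
    conf_src, int_src = src
    conf_dst, int_dst = dst
    return (conf_src, conf_dst) in CONF_ORDER and \
           (int_src, int_dst) in INT_ORDER

# Public trusted data may flow to a secret, untrusted sink:
print(flow_ok(("low", "trusted"), ("high", "untrusted")))  # True
# Untrusted input must not reach a trusted sink (integrity violation):
print(flow_ok(("low", "untrusted"), ("low", "trusted")))   # False
```

Checking both orders in one predicate is exactly the "alongside one another" point: a single label carries one grade per lattice, and a flow must be monotone in each.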
arXiv Detail & Related papers (2023-09-08T13:40:52Z)
- A Semantic Approach to Decidability in Epistemic Planning (Extended Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
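For orientation, the base logic and a candidate reading of the interaction axiom can be written out (the axiom's precise formulation in the paper may differ; this is only illustrative of what such an interaction axiom looks like):

```latex
% S5 axioms for each agent i's knowledge operator K_i:
%   K:  K_i(\varphi \to \psi) \to (K_i\varphi \to K_i\psi)
%   T:  K_i\varphi \to \varphi
%   4:  K_i\varphi \to K_i K_i\varphi
%   5:  \neg K_i\varphi \to K_i \neg K_i\varphi
% Candidate (knowledge) commutativity axiom, for i \neq j:
%   K_i K_j \varphi \to K_j K_i \varphi
```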
arXiv Detail & Related papers (2023-07-28T11:26:26Z)
- (Un)reasonable Allure of Ante-hoc Interpretability for High-stakes Domains: Transparency Is Necessary but Insufficient for Comprehensibility [25.542848590851758]
Ante-hoc interpretability has become the holy grail of explainable artificial intelligence for high-stakes domains such as healthcare.
It can refer to predictive models whose structure adheres to domain-specific constraints, or ones that are inherently transparent.
We unpack this concept to better understand what is needed for its safe adoption across high-stakes domains.
arXiv Detail & Related papers (2023-06-04T09:34:41Z)
- A Style and Semantic Memory Mechanism for Domain Generalization [108.98041306507372]
Intra-domain style invariance is of pivotal importance in improving the efficiency of domain generalization.
We propose a novel "jury" mechanism, which is particularly effective in learning useful semantic feature commonalities among domains.
Our proposed framework surpasses the state-of-the-art methods by clear margins.
arXiv Detail & Related papers (2021-12-14T16:23:24Z)
- A general framework for defining and optimizing robustness [74.67016173858497]
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on postulates that robustness of a classifier should be considered as a property that is independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.