Using Constraint Programming and Graph Representation Learning for
Generating Interpretable Cloud Security Policies
- URL: http://arxiv.org/abs/2205.01240v1
- Date: Mon, 2 May 2022 22:15:07 GMT
- Title: Using Constraint Programming and Graph Representation Learning for
Generating Interpretable Cloud Security Policies
- Authors: Mikhail Kazdagli, Mohit Tiwari, Akshat Kumar
- Abstract summary: Cloud security relies on Identity Access Management (IAM) policies that IT admins need to properly configure and periodically update.
We develop a novel framework that encodes the generation of optimal IAM policies as a constraint programming (CP) problem.
We show that our optimized IAM policies significantly reduce the impact of security attacks on real data from 8 commercial organizations and on synthetic instances.
- Score: 12.43505973436359
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern software systems rely on mining insights from business sensitive data
stored in public clouds. A data breach usually incurs significant (monetary)
loss for a commercial organization. Conceptually, cloud security heavily relies
on Identity Access Management (IAM) policies that IT admins need to properly
configure and periodically update. Security negligence and human errors often
lead to misconfiguring IAM policies which may open a backdoor for attackers. To
address these challenges, first, we develop a novel framework that encodes
generating optimal IAM policies using constraint programming (CP). We identify
reducing dark permissions of cloud users as an optimality criterion, which
intuitively implies minimizing unnecessary datastore access permissions.
Second, to make IAM policies interpretable, we use graph representation
learning applied to historical access patterns of users to augment our CP model
with similarity constraints: similar users should be grouped together and share
common IAM policies. Third, we describe multiple attack models and show that
our optimized IAM policies significantly reduce the impact of security attacks
using real data from 8 commercial organizations, and synthetic instances.
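The paper does not include code here; as a rough, illustrative sketch of the kind of CP encoding described above, the snippet below uses Google OR-Tools CP-SAT to minimize dark permissions while forcing "similar" users to share identical permissions. The user/datastore data and the similarity pairs (which stand in for the output of the graph representation learning step) are entirely made up.

```python
# Illustrative only: not the authors' implementation.
from ortools.sat.python import cp_model

users = ["alice", "bob", "carol"]
datastores = ["s3_logs", "s3_pii"]
# Permissions each user actually needs, e.g. derived from historical access.
needed = {("alice", "s3_logs"), ("bob", "s3_logs"), ("carol", "s3_pii")}
# Pairs judged similar (hypothetical output of the embedding model).
similar = [("alice", "bob")]

model = cp_model.CpModel()
grant = {(u, d): model.NewBoolVar(f"grant_{u}_{d}")
         for u in users for d in datastores}

# Every permission a user genuinely needs must be granted.
for u, d in needed:
    model.Add(grant[(u, d)] == 1)

# Interpretability constraint: similar users receive identical permissions.
for u, v in similar:
    for d in datastores:
        model.Add(grant[(u, d)] == grant[(v, d)])

# Objective: minimize dark permissions (granted but never needed).
model.Minimize(sum(grant[(u, d)] for u in users for d in datastores
                   if (u, d) not in needed))

solver = cp_model.CpSolver()
solver.Solve(model)
for (u, d), var in grant.items():
    if solver.Value(var):
        print(f"allow {u} -> {d}")
```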
Related papers
- PolicyLR: A Logic Representation For Privacy Policies [34.73520882451813]
We propose PolicyLR, a new paradigm that offers a comprehensive machine-readable representation of privacy policies.
PolicyLR converts privacy policies into a machine-readable format using valuations of atomic formulae.
We demonstrate PolicyLR in three privacy tasks: Policy Compliance, Inconsistency Detection and Privacy Comparison Shopping.
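As a loose illustration of "valuations of atomic formulae", the sketch below encodes a policy as a mapping from atomic formulae to truth values and checks compliance against a requirement. The formula names and the compliance rule are hypothetical, not PolicyLR's actual vocabulary.

```python
# Illustrative sketch of a policy as valuations of atomic formulae.
from typing import Dict

PolicyValuation = Dict[str, bool]  # atomic formula -> truth value

policy_a: PolicyValuation = {
    "collects_location_data": True,
    "shares_data_with_third_parties": False,
    "allows_data_deletion_request": True,
}

def complies(policy: PolicyValuation, requirement: Dict[str, bool]) -> bool:
    """Compliant if every required atomic formula has the required value."""
    return all(policy.get(f) == v for f, v in requirement.items())

deletion_rule = {"allows_data_deletion_request": True}  # hypothetical rule
print(complies(policy_a, deletion_rule))  # True
```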
arXiv Detail & Related papers (2024-08-27T07:27:16Z)
- Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face the emerging challenge of re-identification attacks enabled by Large Language Models.
This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component.
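A schematic sketch of how the three components described above could fit together is shown below. The evaluators are placeholders (a real system would call LLM-based judges), and none of the function names or thresholds come from the paper.

```python
# Schematic only: placeholder evaluators instead of the paper's LLM components.
def privacy_evaluator(text: str) -> float:
    """Placeholder re-identification risk score in [0, 1]."""
    return 1.0 if "Alice" in text else 0.0

def utility_evaluator(text: str) -> float:
    """Placeholder downstream-utility score in [0, 1]."""
    return 1.0 if len(text) > 10 else 0.0

def anonymize(text: str, rounds: int = 3, risk_budget: float = 0.2,
              utility_floor: float = 0.5) -> str:
    candidate = text
    for _ in range(rounds):
        if privacy_evaluator(candidate) <= risk_budget:
            break
        # A real system would ask an LLM rewriter for a less identifying rewrite.
        proposal = candidate.replace("Alice", "[PERSON]")
        if utility_evaluator(proposal) >= utility_floor:
            candidate = proposal
    return candidate

print(anonymize("Alice visited the clinic on May 2."))
```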
arXiv Detail & Related papers (2024-07-16T14:28:56Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
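The full PriRoAgg protocol (Lagrange coded computing plus distributed zero-knowledge proofs) is beyond a short sketch; the snippet below instead illustrates the much simpler pairwise-masking idea often used to convey "the server only learns the aggregate": each client's masked update looks random, but the masks cancel when all updates are summed. This is a stand-in, not the paper's construction.

```python
# Pairwise additive masking: masks cancel in the sum of all masked updates.
import random

def masked_updates(updates, modulus=2**16):
    n = len(updates)
    masks = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = random.randrange(modulus)
            masks[i][j], masks[j][i] = r, -r  # client i adds r, client j subtracts it
    return [(u + sum(masks[i])) % modulus for i, u in enumerate(updates)]

updates = [3, 5, 7]
print(sum(masked_updates(updates)) % 2**16)  # equals sum(updates) = 15
```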
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by drawing on external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases.
arXiv Detail & Related papers (2024-06-26T05:36:23Z)
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noise.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
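For intuition, the sketch below shows the generic "unlearnable examples" idea that this entry builds on: adding a small class-dependent perturbation so a naive learner latches onto the noise rather than the true features. It is not the UGE construction itself, which additionally conditions learnability on authorization; all data is synthetic.

```python
# Class-wise perturbations as a minimal stand-in for unlearnable-data noise.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 32))                        # clean features
y = rng.integers(0, 2, size=100)                      # binary labels
class_noise = rng.normal(scale=0.05, size=(2, 32))    # one small pattern per class
x_unlearnable = x + class_noise[y]                    # each sample gets its class pattern
print(x_unlearnable.shape)  # (100, 32)
```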
arXiv Detail & Related papers (2024-04-22T09:29:14Z)
- Leveraging AI Planning For Detecting Cloud Security Vulnerabilities [15.503757553097387]
Cloud computing services provide scalable and cost-effective solutions for data storage, processing, and collaboration.
Access control misconfigurations are often the primary driver for cloud attacks.
We develop a PDDL model for detecting security vulnerabilities that can, for example, lead to widespread attacks such as ransomware.
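The paper encodes this as a planning problem in PDDL; as a tiny stand-in in Python, the sketch below performs a BFS reachability check over an access graph: can an attacker starting from a compromised identity chain role assumptions to reach a sensitive datastore? The edge data is illustrative only.

```python
# Reachability as a minimal proxy for a planning-based attack-path check.
from collections import deque

edges = {  # "can assume / can access" relationships (made up)
    "compromised_user": ["role_dev"],
    "role_dev": ["role_admin"],
    "role_admin": ["s3_pii_bucket"],
}

def reachable(start: str, target: str) -> bool:
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("compromised_user", "s3_pii_bucket"))  # True -> potential attack path
```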
arXiv Detail & Related papers (2024-02-16T03:28:02Z)
- From Principle to Practice: Vertical Data Minimization for Machine Learning [15.880586296169687]
Policymakers increasingly demand compliance with the data minimization (DM) principle.
Despite regulatory pressure, the problem of deploying machine learning models that obey DM has so far received little attention.
We propose a novel vertical DM (vDM) workflow based on data generalization.
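Below is an illustrative fragment of data generalization, the mechanism the vDM workflow is built on: precise attribute values are replaced with coarser bins so a downstream model never sees more detail than the task needs. The columns and binning choices are made up for the example.

```python
# Simple attribute generalization: coarsen age and ZIP code before training.
def generalize_age(age: int) -> str:
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

def generalize_zip(zip_code: str, keep_digits: int = 3) -> str:
    return zip_code[:keep_digits] + "*" * (len(zip_code) - keep_digits)

record = {"age": 37, "zip": "78701", "purchase": 42.5}
minimized = {"age": generalize_age(record["age"]),
             "zip": generalize_zip(record["zip"]),
             "purchase": record["purchase"]}
print(minimized)  # {'age': '30-39', 'zip': '787**', 'purchase': 42.5}
```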
arXiv Detail & Related papers (2023-11-17T13:01:09Z)
- Exploring Security Practices in Infrastructure as Code: An Empirical Study [54.669404064111795]
Cloud computing has become popular thanks to the widespread use of Infrastructure as Code (IaC) tools.
The scripting process does not automatically prevent practitioners from introducing misconfigurations, vulnerabilities, or privacy risks.
Ensuring security relies on practitioners' understanding and their adoption of explicit policies, guidelines, or best practices.
arXiv Detail & Related papers (2023-08-07T23:43:32Z)
- On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel [14.350301915592027]
We identify and report a novel data-dependent timing side-channel leakage (termed Class Leakage) in Deep Learning (DL) implementations.
We demonstrate a practical inference-time attack where an adversary with user privilege and hard-label blackbox access to an ML model can exploit Class Leakage.
We develop an easy-to-implement countermeasure that makes the branching operation constant-time, alleviating the Class Leakage.
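The sketch below shows the branchless-selection pattern behind constant-time countermeasures: choosing between two values without a data-dependent branch, so execution time does not reveal which path was taken. It illustrates the pattern only; Python itself gives no real constant-time guarantees, and this is not the paper's DL-specific implementation.

```python
# Branchless selection via bit masking (illustrative, not timing-hardened Python).
def constant_time_select(condition: int, a: int, b: int) -> int:
    """Return a if condition == 1 else b, with no data-dependent branch."""
    mask = -condition            # 0 -> all-zero mask, 1 -> all-one mask
    return (a & mask) | (b & ~mask)

print(constant_time_select(1, 10, 20))  # 10
print(constant_time_select(0, 10, 20))  # 20
```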
arXiv Detail & Related papers (2022-08-01T19:38:16Z)
- Privacy-Constrained Policies via Mutual Information Regularized Policy Gradients [54.98496284653234]
We consider the task of training a policy that maximizes reward while minimizing disclosure of certain sensitive state variables through the actions.
We solve this problem by introducing a regularizer based on the mutual information between the sensitive state and the actions.
We develop a model-based estimator for optimization of privacy-constrained policies.
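A plausible form of the regularized objective is written below; the notation (sensitive state S, action A, trade-off weight lambda) is assumed for illustration rather than taken verbatim from the paper.

```latex
% Maximize expected return while penalizing the mutual information
% between the sensitive state and the actions, weighted by \lambda.
\max_{\theta}\; \mathbb{E}_{\pi_\theta}\!\Big[\sum_{t} r_t\Big] \;-\; \lambda\, I\big(S^{\mathrm{sens}}; A\big)
```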
arXiv Detail & Related papers (2020-12-30T03:22:35Z)
- An Automatic Attribute Based Access Control Policy Extraction from Access Logs [5.142415132534397]
An attribute-based access control (ABAC) model provides a more flexible approach for addressing the authorization needs of complex and dynamic systems.
We present a methodology for automatically learning ABAC policy rules from access logs of a system to simplify the policy development process.
arXiv Detail & Related papers (2020-03-16T15:08:54Z)
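As an illustration of mining candidate ABAC rules from access logs, the sketch below groups permitted requests by their attribute combination and keeps combinations that never appear in denied requests. The attributes and the mining heuristic are made up; the paper's learning method is more sophisticated.

```python
# Toy rule mining from (user attributes, resource attributes, decision) logs.
from collections import defaultdict

logs = [
    ({"dept": "finance"}, {"type": "invoice"}, "permit"),
    ({"dept": "finance"}, {"type": "invoice"}, "permit"),
    ({"dept": "eng"},     {"type": "invoice"}, "deny"),
]

def mine_rules(entries):
    permits, denies = defaultdict(int), set()
    for user, res, decision in entries:
        key = (frozenset(user.items()), frozenset(res.items()))
        if decision == "permit":
            permits[key] += 1
        else:
            denies.add(key)
    # Keep attribute combinations observed as permits and never as denies.
    return [key for key in permits if key not in denies]

for user_attrs, res_attrs in mine_rules(logs):
    print("PERMIT if", dict(user_attrs), "accesses", dict(res_attrs))
```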