The Data Enclave Advantage: A New Paradigm for Least-Privileged Data Access in a Zero-Trust World
- URL: http://arxiv.org/abs/2510.09494v1
- Date: Fri, 10 Oct 2025 15:54:58 GMT
- Title: The Data Enclave Advantage: A New Paradigm for Least-Privileged Data Access in a Zero-Trust World
- Authors: Nico Bistolfi, Andreea Georgescu, Dave Hodson
- Abstract summary: The outdated model of standing permissions has become a critical vulnerability. Current security tools address network and API security, but the challenge of securing granular data access remains. Our approach enables Zero Standing Privilege (ZSP) and Just-in-Time (JIT) principles at the data level.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As cloud infrastructure evolves to support dynamic and distributed workflows, now accelerated by AI-driven processes, the outdated model of standing permissions has become a critical vulnerability. Based on the Cloud Security Alliance (CSA) Top Threats to Cloud Computing Deep Dive 2025 Report, our analysis details how standing permissions cause catastrophic cloud breaches. While current security tools address network and API security, the challenge of securing granular data access remains. Removing standing permissions at the data level is as critical as it is at the network level, especially for companies handling valuable data at scale. In this white paper, we introduce an innovative architecture based on on-demand data enclaves to address this gap directly. Our approach enables Zero Standing Privilege (ZSP) and Just-in-Time (JIT) principles at the data level. We replace static permissions with temporary data contracts that enforce proactive protection. This means separation is built around the data requested on demand, providing precise access and real-time monitoring for individual records instead of datasets. This solution drastically reduces the attack surface, prevents privilege creep, and simplifies auditing, offering a vital path for enterprises to transition to a more secure and resilient data environment.
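As a rough illustration of the idea, the Python sketch below models a temporary, record-scoped data contract and an on-demand enclave that enforces it. The names (`DataContract`, `OnDemandEnclave`) and structure are hypothetical and assumed for illustration only; the paper does not prescribe a concrete API.

```python
# Hypothetical sketch of a Just-in-Time, record-scoped data contract (not the paper's implementation).
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class DataContract:
    """A short-lived grant scoped to individual records, not whole datasets."""
    requester: str
    purpose: str
    record_ids: frozenset      # exact records covered by this contract
    expires_at: datetime       # JIT: the grant self-destructs after its TTL

    def permits(self, record_id: str) -> bool:
        not_expired = datetime.now(timezone.utc) < self.expires_at
        return not_expired and record_id in self.record_ids


class OnDemandEnclave:
    """Provisioned around only the requested records; every read is audited."""
    def __init__(self, contract: DataContract, backing_store: dict):
        self.contract = contract
        # Copy only the contracted records into the enclave (data-level separation).
        self._records = {rid: backing_store[rid]
                         for rid in contract.record_ids if rid in backing_store}
        self.audit_log: list = []

    def read(self, record_id: str):
        allowed = self.contract.permits(record_id)
        self.audit_log.append((datetime.now(timezone.utc), self.contract.requester,
                               record_id, "allow" if allowed else "deny"))
        if not allowed:
            raise PermissionError(f"contract does not cover {record_id} (or has expired)")
        return self._records[record_id]


if __name__ == "__main__":
    store = {"patient:42": {"field": "value-42"}, "patient:7": {"field": "value-7"}}
    contract = DataContract(
        requester="analytics-job-123",
        purpose="monthly readmission report",
        record_ids=frozenset({"patient:42"}),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=15),
    )
    enclave = OnDemandEnclave(contract, store)
    print(enclave.read("patient:42"))   # allowed and logged
    # enclave.read("patient:7")         # would raise PermissionError and be logged as a deny
```

Under these assumptions, the design point is that the access policy lives in the contract and dies with it: once `expires_at` passes, no standing permission remains to be stolen or to creep.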
Related papers
- Rethinking Data Protection in the (Generative) Artificial Intelligence Era [138.07763415496288]
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z) - SkyEye: When Your Vision Reaches Beyond IAM Boundary Scope in AWS Cloud [0.0]
Cloud security has emerged as a primary concern for enterprises. IAM constitutes the critical security backbone of most cloud deployments. SkyEye is a cooperative multi-principal IAM enumeration framework.
arXiv Detail & Related papers (2025-07-01T01:36:52Z) - PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data. The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates. We develop a general framework, PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z) - Mitigating Data Sharing in Public Cloud using Blockchain [0.0]
We propose a secure data ecosystem in the cloud with the key aspects being Data Rights, Data Sharing, and Data Validation.
This ensures that existing public cloud-based systems can easily deploy blockchain, enhancing the trustworthiness and non-repudiation of cloud data.
arXiv Detail & Related papers (2024-04-21T13:12:44Z) - Reflection of Federal Data Protection Standards on Cloud Governance [0.0]
This research focuses on cloud governance by harmoniously combining multiple data security measures with legislative authority.
We present legal aspects aimed at the prevention of data breaches, as well as the technical requirements regarding the implementation of data protection mechanisms.
arXiv Detail & Related papers (2024-02-26T17:04:01Z) - Protecting Sensitive Tabular Data in Hybrid Clouds [0.0]
Regulated industries, such as Healthcare and Finance, are starting to move parts of their data and workloads to the public cloud.
We address the security and performance challenges of big data analytics using a hybrid cloud in a real-life use case from a hospital.
arXiv Detail & Related papers (2023-12-03T11:20:24Z) - Understanding Deep Gradient Leakage via Inversion Influence Functions [53.1839233598743]
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
We propose a novel Inversion Influence Function (I$^2$F) that establishes a closed-form connection between the recovered images and the private gradients.
We empirically demonstrate that I$^2$F effectively approximates DGL across different model architectures, datasets, attack implementations, and perturbation-based defenses.
arXiv Detail & Related papers (2023-09-22T17:26:24Z) - Remote Data Auditing and How it May Affect the Chain of Custody in a Cloud Environment [0.0]
More and more organizations are relying on outsourcing their data to cloud-based environments.
Law enforcement agencies from the national level down to large city police departments are also using the cloud environment to store data.
This data solution presents its own set of problems, in that the outsourced data can become untrustworthy due to the data owners' lack of control.
arXiv Detail & Related papers (2022-08-26T16:10:34Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models [56.0250919557652]
Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency.
Previous attacks on user privacy have been limited in scope and do not scale to gradient updates aggregated over even a handful of data points.
We introduce a new threat model based on minimal but malicious modifications of the shared model architecture.
arXiv Detail & Related papers (2021-10-25T15:52:06Z) - Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in the presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z)