Securing Generative AI in Healthcare: A Zero-Trust Architecture Powered by Confidential Computing on Google Cloud
- URL: http://arxiv.org/abs/2511.11836v1
- Date: Fri, 14 Nov 2025 19:56:52 GMT
- Title: Securing Generative AI in Healthcare: A Zero-Trust Architecture Powered by Confidential Computing on Google Cloud
- Authors: Adaobi Amanna, Ishana Shinde
- Abstract summary: Confidential Zero-Trust Framework (CZF) is a security paradigm that combines Zero-Trust Architecture for granular access control with the hardware-enforced data isolation of Confidential Computing. CZF provides a defense-in-depth architecture where data remains encrypted while in-use within a hardware-based Trusted Execution Environment.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The integration of Generative Artificial Intelligence (GenAI) in healthcare is impeded by significant security challenges unaddressed by traditional frameworks, specifically the data-in-use gap, where sensitive patient data and proprietary AI models are exposed during active processing. To address this, the paper proposes the Confidential Zero-Trust Framework (CZF), a novel security paradigm that combines Zero-Trust Architecture for granular access control with the hardware-enforced data isolation of Confidential Computing. We detail a multi-tiered architectural blueprint for implementing the CZF on Google Cloud and analyze its efficacy against real-world threats. The CZF provides a defense-in-depth architecture where data remains encrypted while in-use within a hardware-based Trusted Execution Environment (TEE). The framework's use of remote attestation offers cryptographic proof of workload integrity, transforming compliance from a procedural exercise into a verifiable technical fact and enabling secure, multi-party collaborations previously blocked by security and intellectual property concerns. By closing the data-in-use gap and enforcing Zero-Trust principles, the CZF provides a robust and verifiable framework that establishes the necessary foundation of trust to enable the responsible adoption of transformative AI technologies in healthcare.
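The attestation-gated trust decision the abstract describes can be sketched as a minimal policy check: a verifier releases a data-decryption key only if the TEE's attested claims match an expected workload policy. The following Python sketch is illustrative only; the claim fields, platform names, and digest values are assumptions for exposition, not the paper's design or Google Cloud's actual attestation API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AttestationClaims:
    """Claims a verifier extracts from a TEE attestation token (illustrative)."""
    image_digest: str    # measured workload (e.g. container image) digest
    tee_platform: str    # hypothetical platform identifier string
    debug_enabled: bool  # debug-mode TEEs must never receive production keys

# Policy pinned at deployment time (placeholder values, not real digests).
EXPECTED_DIGEST = "sha256:3b5c-placeholder"
ALLOWED_PLATFORMS = {"CONFIDENTIAL_VM_SEV_SNP", "CONFIDENTIAL_VM_TDX"}

def authorize_key_release(claims: AttestationClaims) -> bool:
    """Zero-trust gate: release the patient-data key only if every check passes."""
    if claims.debug_enabled:
        return False  # debuggable TEE: memory could be inspected, so deny
    if claims.tee_platform not in ALLOWED_PLATFORMS:
        return False  # unapproved hardware isolation technology, deny
    return claims.image_digest == EXPECTED_DIGEST  # exact workload match required

good = AttestationClaims("sha256:3b5c-placeholder", "CONFIDENTIAL_VM_SEV_SNP", False)
bad = AttestationClaims("sha256:3b5c-placeholder", "CONFIDENTIAL_VM_SEV_SNP", True)
print(authorize_key_release(good))  # True
print(authorize_key_release(bad))   # False
```

The point of the sketch is the default-deny structure: every check must pass before any key is released, which is how remote attestation turns "the right workload is running" from a procedural claim into a verifiable precondition.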
Related papers
- MemTrust: A Zero-Trust Architecture for Unified AI Memory System [1.6221135438213565]
This centralization creates a trust crisis where users must entrust cloud providers with sensitive digital memory data. We propose a five-layer architecture abstracting common functional components of AI memory systems. Based on this, we design MemTrust, a hardware-backed zero-trust architecture that provides cryptographic guarantees across all layers.
arXiv Detail & Related papers (2026-01-11T17:37:33Z)
- Confidential Computing for Cloud Security: Exploring Hardware based Encryption Using Trusted Execution Environments [0.0]
Cloud computing has created significant security challenges, especially in safeguarding sensitive data. In response, Confidential Computing seeks to secure data during processing through the use of hardware-based Trusted Execution Environments (TEEs).
arXiv Detail & Related papers (2025-11-06T17:03:33Z)
- Bridging the Mobile Trust Gap: A Zero Trust Framework for Consumer-Facing Applications [51.56484100374058]
This paper proposes an extended Zero Trust model designed for mobile applications operating in untrusted, user-controlled environments. Using a design science methodology, the study introduces a six-pillar framework that supports runtime enforcement of trust. The proposed model offers a practical and standards-aligned approach to securing mobile applications beyond pre-deployment controls.
arXiv Detail & Related papers (2025-08-20T18:42:36Z)
- Generative AI-Empowered Secure Communications in Space-Air-Ground Integrated Networks: A Survey and Tutorial [107.26005706569498]
Space-air-ground integrated networks (SAGINs) face unprecedented security challenges due to their inherent characteristics. Generative AI (GAI) is a transformative approach that can safeguard SAGIN security by synthesizing data, understanding semantics, and making autonomous decisions.
arXiv Detail & Related papers (2025-08-04T01:42:57Z)
- Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things [61.43014629640404]
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems. ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z)
- Federated Learning-Enhanced Blockchain Framework for Privacy-Preserving Intrusion Detection in Industrial IoT [0.0]
Industrial Internet of Things (IIoT) systems have become integral to smart manufacturing, yet their growing connectivity has exposed them to significant cybersecurity threats. Traditional intrusion detection systems (IDS) often rely on centralized architectures that raise concerns over data privacy, latency, and single points of failure. We propose a novel Federated Learning-Enhanced Framework (FL-BCID) for privacy-preserving intrusion detection tailored for IIoT environments.
arXiv Detail & Related papers (2025-05-21T11:11:44Z)
- Trusted Compute Units: A Framework for Chained Verifiable Computations [41.94295877935867]
This paper introduces the Trusted Compute Unit (TCU), a unifying framework that enables composable and interoperable computations across heterogeneous technologies. By enabling secure off-chain interactions without incurring on-chain confirmation delays or gas fees, TCUs significantly improve system performance and scalability.
arXiv Detail & Related papers (2025-04-22T09:01:55Z)
- Balancing Confidentiality and Transparency for Blockchain-based Process-Aware Information Systems [43.253676241213626]
We propose an architecture for blockchain-based PAISs to preserve confidentiality and transparency. Smart contracts enact, enforce and store public interactions, while attribute-based encryption techniques are adopted to specify access grants to confidential information. We assess the security of our solution through a systematic threat model analysis and evaluate its practical feasibility.
arXiv Detail & Related papers (2024-12-07T20:18:36Z)
- Towards Secure and Private AI: A Framework for Decentralized Inference [14.526663289437584]
Large multimodal foundational models present challenges in scalability, reliability, and potential misuse. Decentralized systems offer a solution by distributing workload and mitigating central points of failure. We address these challenges with a comprehensive framework designed for responsible AI development.
arXiv Detail & Related papers (2024-07-28T05:09:17Z)
- HasTEE+ : Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language embedded in Haskell that enables programming TEEs in a high-level language with strong type-safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z)
- The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions from the viewpoints of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.