GuaranTEE: Towards Attestable and Private ML with CCA
- URL: http://arxiv.org/abs/2404.00190v1
- Date: Fri, 29 Mar 2024 23:07:29 GMT
- Title: GuaranTEE: Towards Attestable and Private ML with CCA
- Authors: Sandra Siby, Sina Abdollahi, Mohammad Maheri, Marios Kogias, Hamed Haddadi
- Abstract summary: GuaranTEE is a framework to provide attestable private machine learning on the edge.
We evaluate the feasibility of deploying ML models with CCA by developing, evaluating, and openly releasing a prototype.
- Score: 6.024889136631505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine-learning (ML) models are increasingly being deployed on edge devices to provide a variety of services. However, their deployment is accompanied by challenges in model privacy and auditability. Model providers want to (i) ensure that their proprietary models are not exposed to third parties; and (ii) obtain attestations that their genuine models are operating on edge devices in accordance with the service agreement with the user. Existing measures to address these challenges have been hindered by issues such as high overheads and limited capability (processing/secure memory) on edge devices. In this work, we propose GuaranTEE, a framework to provide attestable private machine learning on the edge. GuaranTEE uses Confidential Computing Architecture (CCA), Arm's latest architectural extension that allows for the creation and deployment of dynamic Trusted Execution Environments (TEEs) within which models can be executed. We evaluate the feasibility of deploying ML models with CCA by developing, evaluating, and openly releasing a prototype. We also suggest improvements to CCA to facilitate its use in protecting the entire ML deployment pipeline on edge devices.
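To make the flow above concrete, here is a minimal Python sketch of the attestation-gated model provisioning that such a framework implies: the model provider releases its proprietary model to an edge device only after checking evidence from the TEE (a CCA Realm) that will run it. This is an illustration under simplifying assumptions, not the GuaranTEE prototype or the Arm CCA API; the names AttestationReport, EXPECTED_REALM_MEASUREMENT, and provision_model are hypothetical, and a real deployment would verify a signed CCA Realm attestation token and deliver the model over a secure channel.
```python
# Hypothetical sketch of attestation-gated model provisioning (not GuaranTEE
# or Arm CCA APIs): release the model only to a Realm whose reported
# measurement matches the provider's known-good value.
import hashlib
import hmac
from dataclasses import dataclass

# Reference measurement (hash of the approved Realm image) known to the provider.
EXPECTED_REALM_MEASUREMENT = hashlib.sha256(b"approved-realm-image").hexdigest()

@dataclass
class AttestationReport:
    realm_measurement: str  # hash of the code/data loaded into the Realm
    nonce: str              # provider-chosen nonce to prevent replay

def verify_report(report: AttestationReport, expected_nonce: str) -> bool:
    """Accept the Realm only if its measurement and nonce match expectations."""
    measurement_ok = hmac.compare_digest(report.realm_measurement,
                                         EXPECTED_REALM_MEASUREMENT)
    nonce_ok = hmac.compare_digest(report.nonce, expected_nonce)
    return measurement_ok and nonce_ok

def provision_model(report: AttestationReport, expected_nonce: str,
                    encrypted_model: bytes) -> bytes | None:
    """Return the (encrypted) model only if attestation succeeds."""
    if not verify_report(report, expected_nonce):
        return None  # refuse provisioning: this is not the Realm we approved
    return encrypted_model  # in practice, sent to the attested Realm for decryption

if __name__ == "__main__":
    nonce = "session-42"
    good = AttestationReport(EXPECTED_REALM_MEASUREMENT, nonce)
    bad = AttestationReport(hashlib.sha256(b"tampered-image").hexdigest(), nonce)
    print(provision_model(good, nonce, b"model-bytes") is not None)  # True
    print(provision_model(bad, nonce, b"model-bytes") is not None)   # False
```
The point of the sketch is only the gating decision: the attestation check, not blanket device trust, determines whether the model is released; token signing, key exchange, and in-Realm decryption are out of scope here.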
Related papers
- An Early Experience with Confidential Computing Architecture for On-Device Model Protection [6.024889136631505]
Arm Confidential Computing Architecture (CCA) is a new Arm architectural extension that can be used to protect on-device machine learning (ML) models.
In this paper, we evaluate the performance-privacy trade-offs of deploying models within CCA.
Our framework successfully protects the model against membership inference attacks, reducing the adversary's success rate by 8.3%.
arXiv Detail & Related papers (2025-04-11T13:21:33Z)
- THEMIS: Towards Practical Intellectual Property Protection for Post-Deployment On-Device Deep Learning Models [36.02405283730231]
On-device deep learning (DL) has rapidly gained adoption in mobile apps, offering the benefits of offline model inference and user privacy preservation over cloud-based approaches.
However, this approach inevitably stores models on user devices, introducing new vulnerabilities, particularly model-stealing attacks and intellectual property infringement.
In this paper, we propose THEMIS, an automatic tool that lifts the read-only restriction of on-device DL models by reconstructing their writable counterparts.
arXiv Detail & Related papers (2025-03-31T05:58:57Z)
- TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models [12.253529209143197]
TEE-Shielded DNN Partition (TSDP) is a method that protects privacy-sensitive weights within TEEs and offloads insensitive weights to GPUs.
We introduce a novel partition before training strategy, which effectively separates privacy-sensitive weights from other components of the model.
Our evaluation demonstrates that our approach can offer full model protection with a computational cost reduced by a factor of 10.
arXiv Detail & Related papers (2024-11-15T04:52:11Z)
- A Novel Access Control and Privacy-Enhancing Approach for Models in Edge Computing [0.26107298043931193]
We propose a novel model access control method tailored for edge computing environments.
This method leverages image style as a licensing mechanism, embedding style recognition into the model's operational framework.
By restricting the input data to the edge model, this approach not only prevents attackers from gaining unauthorized access to the model but also enhances the privacy of data on terminal devices.
arXiv Detail & Related papers (2024-11-06T11:37:30Z)
- A Practical and Privacy-Preserving Framework for Real-World Large Language Model Services [8.309281698695381]
Large language models (LLMs) have demonstrated exceptional capabilities in text understanding and generation.
Individuals often rely on online AI-as-a-Service (AIaaS) offerings provided by LLM companies.
This business model poses significant privacy risks, as service providers may exploit users' trace patterns and behavioral data.
We propose a practical and privacy-preserving framework that ensures user anonymity by preventing service providers from linking requests to the individuals who submit them.
arXiv Detail & Related papers (2024-11-03T07:40:28Z)
- CoreGuard: Safeguarding Foundational Capabilities of LLMs Against Model Stealing in Edge Deployment [43.53211005936295]
CoreGuard is a computation- and communication-efficient model protection approach against model stealing on edge devices.
We show that CoreGuard provides protection equivalent to black-box security guarantees with negligible overhead.
arXiv Detail & Related papers (2024-10-16T08:14:24Z)
- Enhancing Physical Layer Communication Security through Generative AI with Mixture of Experts [80.0638227807621]
Generative artificial intelligence (GAI) models have demonstrated superiority over conventional AI methods.
Mixture of Experts (MoE), which uses multiple expert models for prediction through a gating mechanism, offers possible solutions.
arXiv Detail & Related papers (2024-05-07T11:13:17Z)
- ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [58.46326901858431]
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks.
However, adversaries can still use model extraction attacks to steal the model intelligence encoded in the generated content.
Watermarking technology offers a promising solution for defending against such attacks by embedding unique identifiers into the model-generated content.
arXiv Detail & Related papers (2024-05-03T06:41:48Z)
- Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations [76.19419888353586]
Large language models (LLMs) are susceptible to a variety of risks, from non-faithful output to biased and toxic generations.
We present our efforts to create and deploy a library of detectors: compact and easy-to-build classification models that provide labels for various harms.
arXiv Detail & Related papers (2024-03-09T21:07:16Z)
- HasTEE+ : Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z)
- SODA: Protecting Proprietary Information in On-Device Machine Learning Models [5.352699766206808]
We present an end-to-end framework, SODA, for deploying and serving ML models on edge devices while defending against adversarial usage.
Our results demonstrate that SODA can detect adversarial usage with 89% accuracy in less than 50 queries with minimal impact on service performance, latency, and storage.
arXiv Detail & Related papers (2023-12-22T20:04:36Z)
- Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models [103.71308117592963]
We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning.
In a small-scale experiment, we show MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification.
arXiv Detail & Related papers (2022-11-27T21:43:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.