S3ML: A Secure Serving System for Machine Learning Inference
- URL: http://arxiv.org/abs/2010.06212v1
- Date: Tue, 13 Oct 2020 07:41:13 GMT
- Title: S3ML: A Secure Serving System for Machine Learning Inference
- Authors: Junming Ma, Chaofan Yu, Aihui Zhou, Bingzhe Wu, Xibin Wu, Xingyu Chen,
Xiangqun Chen, Lei Wang, Donggang Cao
- Abstract summary: We present S3ML, a secure serving system for machine learning inference.
S3ML runs machine learning models in Intel SGX enclaves to protect users' privacy.
- Score: 15.994551402176189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present S3ML, a secure serving system for machine learning inference in
this paper. S3ML runs machine learning models in Intel SGX enclaves to protect
users' privacy. S3ML designs a secure key management service to construct
flexible privacy-preserving server clusters and proposes novel SGX-aware load
balancing and scaling methods to satisfy users' Service-Level Objectives. We
have implemented S3ML based on Kubernetes as a low-overhead, highly available,
and scalable system. We demonstrate the system performance and effectiveness of
S3ML through extensive experiments on a series of widely-used models.
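The abstract's SLO-driven scaling can be pictured with a small sketch. The paper does not publish its scaling algorithm; the function name, thresholds, and proportional scale-out rule below are illustrative assumptions, not S3ML's implementation.

```python
import math

# Hypothetical sketch of SLO-driven autoscaling in the spirit of S3ML's
# SGX-aware scaling; all names and thresholds here are assumptions.
def target_replicas(current_replicas, p99_latency_ms, slo_ms,
                    min_replicas=1, max_replicas=16):
    """Scale out when tail latency violates the SLO; scale in gently
    when there is ample headroom."""
    if p99_latency_ms > slo_ms:
        # SLO violation: scale out proportionally to the overshoot.
        desired = math.ceil(current_replicas * p99_latency_ms / slo_ms)
    elif p99_latency_ms < 0.5 * slo_ms and current_replicas > min_replicas:
        desired = current_replicas - 1  # ample headroom: scale in by one
    else:
        desired = current_replicas
    return max(min_replicas, min(max_replicas, desired))
```

In an SGX setting a real policy would also account for enclave startup cost and EPC memory limits, which is what makes the scaling "SGX-aware".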
Related papers
- Props for Machine-Learning Security [19.71019731367118]
Props are protected pipelines for authenticated, privacy-preserving access to deep-web data for machine learning (ML).
Props also enable trustworthy, privacy-preserving forms of inference, allowing for safe use of sensitive data in ML applications.
arXiv Detail & Related papers (2024-10-27T17:05:48Z)
- SWITCH: An Exemplar for Evaluating Self-Adaptive ML-Enabled Systems [1.2277343096128712]
Addressing uncertainties in Machine Learning-Enabled Systems (MLS) is crucial for maintaining Quality of Service (QoS).
The Machine Learning Model Balancer is a concept that addresses these uncertainties by facilitating dynamic ML model switching.
This paper introduces SWITCH, an exemplar developed to enhance self-adaptive capabilities in such systems.
arXiv Detail & Related papers (2024-02-09T11:56:44Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in internet-of-things (IoT)-based smart grids.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Class-Level Confidence Based 3D Semi-Supervised Learning [18.95161296147023]
We show that unlabeled data class-level confidence can represent the learning status in the 3D imbalanced dataset.
Our method significantly outperforms state-of-the-art counterparts for both 3D SSL classification and detection tasks.
arXiv Detail & Related papers (2022-10-18T20:13:28Z)
- Special Session: Towards an Agile Design Methodology for Efficient, Reliable, and Secure ML Systems [12.53463551929214]
Modern Machine Learning systems are expected to be highly reliable against hardware failures as well as secure against adversarial and IP stealing attacks.
Privacy concerns are also becoming a first-order issue.
This article summarizes the main challenges in agile development of efficient, reliable and secure ML systems.
arXiv Detail & Related papers (2022-04-18T17:29:46Z)
- DATA: Domain-Aware and Task-Aware Pre-training [94.62676913928831]
We present DATA, a simple yet effective NAS approach specialized for self-supervised learning (SSL).
Our method achieves promising results across a wide range of computation costs on downstream tasks, including image classification, object detection and semantic segmentation.
arXiv Detail & Related papers (2022-03-17T02:38:49Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
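The secure aggregation this entry relies on can be illustrated with pairwise additive masking: each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the server's sum while individual updates stay hidden. This is a generic sketch of the idea over integers modulo a public prime, not RoFL's actual protocol; the modulus and client updates are made up.

```python
import random

P = 2**31 - 1  # public prime modulus (illustrative choice)

def mask_updates(updates, seed=0):
    """Apply pairwise additive masks: client i adds the shared mask,
    client j subtracts it, so all masks cancel in the aggregate."""
    rng = random.Random(seed)  # stands in for a shared pairwise secret
    n = len(updates)
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(len(updates[0])):
                m = rng.randrange(P)
                masked[i][k] = (masked[i][k] + m) % P
                masked[j][k] = (masked[j][k] - m) % P
    return masked

# Three clients, two-dimensional integer updates.
updates = [[1, 2], [3, 4], [5, 6]]
masked = mask_updates(updates)
# The server sees only masked vectors, yet their sum equals the true
# aggregate [9, 12] because every mask appears once with each sign.
aggregate = [sum(col) % P for col in zip(*masked)]
```

RoFL's contribution is layered on top of this kind of scheme: constraining and attesting what each (possibly malicious) client may contribute to the sum.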
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Plinius: Secure and Persistent Machine Learning Model Training [2.1375296464337086]
Persistent memory (PM) is resilient to power loss (unlike DRAM).
We present PLINIUS, a framework using Intel SGX enclaves for secure training of ML models and PM for fault tolerance guarantees.
arXiv Detail & Related papers (2021-04-07T08:35:59Z)
- SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning [58.26384597768118]
SemiNLL is a versatile framework that combines SS strategies and SSL models in an end-to-end manner.
Our framework can absorb various SS strategies and SSL backbones, utilizing their power to achieve promising performance.
arXiv Detail & Related papers (2020-12-02T01:49:47Z)
- Towards Differentially Private Text Representations [52.64048365919954]
We develop a new deep learning framework under an untrusted server setting.
For the randomization module, we propose a novel local differentially private (LDP) protocol to reduce the impact of privacy parameter $\epsilon$ on accuracy.
Analysis and experiments show that our framework delivers comparable or even better performance than the non-private framework and existing LDP protocols.
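The LDP setting this entry refers to can be illustrated with the standard randomized-response mechanism: each user perturbs a private bit locally, and the server debiases the noisy aggregate. This is a textbook sketch, not the paper's protocol; the 0.7 true fraction and epsilon = 1.0 are arbitrary choices for the demo.

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise report its flip. Satisfies epsilon-LDP."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def estimate_mean(reports, epsilon):
    """Debias the noisy reports to estimate the true fraction of 1s."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1 - p)) / (2 * p - 1)

rng = random.Random(42)
true_bits = [1] * 700 + [0] * 300          # true fraction of 1s = 0.7
reports = [randomized_response(b, 1.0, rng) for b in true_bits]
est = estimate_mean(reports, 1.0)          # should land near 0.7
```

The smaller epsilon is, the closer p_truth gets to 1/2 and the larger the variance of the estimate, which is exactly the accuracy impact the paper's protocol aims to reduce.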
arXiv Detail & Related papers (2020-06-25T04:42:18Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
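For context, a sum-product network is an arithmetic circuit of weighted sums and products over leaf distributions; CryptoSPN evaluates such a circuit under cryptographic protection. A tiny cleartext SPN over two binary variables, with made-up structure and weights, shows what is being computed:

```python
def leaf(p_one):
    # Bernoulli leaf: returns P(X = x) for x in {0, 1}
    return lambda x: p_one if x == 1 else 1.0 - p_one

# Leaves over variables X1 and X2 (weights are illustrative only).
l1a, l1b = leaf(0.8), leaf(0.3)   # two leaves over X1
l2a, l2b = leaf(0.6), leaf(0.1)   # two leaves over X2

def spn(x1, x2):
    """Sum node mixing two product nodes over disjoint leaf pairs."""
    prod_a = l1a(x1) * l2a(x2)
    prod_b = l1b(x1) * l2b(x2)
    return 0.4 * prod_a + 0.6 * prod_b   # mixture weights sum to 1

# A valid SPN defines a probability distribution: values sum to 1.
total = sum(spn(a, b) for a in (0, 1) for b in (0, 1))
```

Because inference is just additions and multiplications bottom-up through the circuit, it maps naturally onto the secure-computation techniques CryptoSPN builds on.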
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.