Props for Machine-Learning Security
- URL: http://arxiv.org/abs/2410.20522v1
- Date: Sun, 27 Oct 2024 17:05:48 GMT
- Title: Props for Machine-Learning Security
- Authors: Ari Juels, Farinaz Koushanfar
- Abstract summary: Props are protected pipelines for authenticated, privacy-preserving access to deep-web data for machine learning (ML).
Props also enable privacy-preserving and trustworthy forms of inference, allowing for safe use of sensitive data in ML applications.
- Score: 19.71019731367118
- Abstract: We propose protected pipelines, or props for short, a new approach for authenticated, privacy-preserving access to deep-web data for machine learning (ML). By permitting secure use of vast sources of deep-web data, props address the systemic bottleneck of limited high-quality training data in ML development. Props also enable privacy-preserving and trustworthy forms of inference, allowing for safe use of sensitive data in ML applications. Props are practically realizable today by leveraging privacy-preserving oracle systems initially developed for blockchain applications.
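The abstract points to privacy-preserving oracle systems from the blockchain world as the enabling building block. As a rough illustration of the intended data flow (not the paper's protocol), the sketch below fakes the oracle's provenance attestation with an HMAC; the `oracle_fetch`/`verify` API, the key, and the URL are all hypothetical, and a real prop would prove TLS provenance in zero knowledge instead.

```python
# Minimal sketch of a prop-style pipeline: authenticated deep-web data flows
# into ML training only after an oracle attestation checks out. The HMAC is
# a stand-in for the oracle's proof; all names here are hypothetical.
import hmac, hashlib, json
from dataclasses import dataclass

ORACLE_KEY = b"demo-oracle-key"  # stands in for the oracle's signing key

@dataclass
class AttestedRecord:
    source_url: str     # deep-web endpoint the record came from
    payload: dict       # the (possibly redacted) data itself
    attestation: str    # oracle's proof binding payload to source_url

def oracle_fetch(source_url: str, payload: dict) -> AttestedRecord:
    """Simulate the oracle: bind the payload to its source with a MAC."""
    msg = json.dumps({"url": source_url, "data": payload}, sort_keys=True)
    tag = hmac.new(ORACLE_KEY, msg.encode(), hashlib.sha256).hexdigest()
    return AttestedRecord(source_url, payload, tag)

def verify(record: AttestedRecord) -> bool:
    """Accept a record only if its provenance attestation is valid."""
    msg = json.dumps({"url": record.source_url, "data": record.payload},
                     sort_keys=True)
    expected = hmac.new(ORACLE_KEY, msg.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.attestation)

# Training code admits only records whose provenance checks out.
records = [oracle_fetch("https://example.org/api/v1/records", {"x": 1, "y": 0})]
training_set = [r.payload for r in records if verify(r)]
print(training_set)
```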
Related papers
- FL-DECO-BC: A Privacy-Preserving, Provably Secure, and Provenance-Preserving Federated Learning Framework with Decentralized Oracles on Blockchain for VANETs [0.0]
Vehicular Ad-Hoc Networks (VANETs) hold immense potential for improving traffic safety and efficiency.
Traditional centralized approaches for machine learning in VANETs raise concerns about data privacy and security.
This paper proposes FL-DECO-BC as a novel privacy-preserving, provably secure, and provenance-preserving federated learning framework specifically designed for VANETs.
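The abstract does not specify the framework's FL core, so as background, here is a generic federated-averaging round in NumPy. The logistic-regression clients and all hyperparameters are illustrative; FL-DECO-BC's oracle and blockchain layers are omitted.

```python
# Generic federated-averaging round, as a baseline sketch of the FL core
# that frameworks like FL-DECO-BC build on. Provenance and verification
# via decentralized oracles are out of scope here.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

def fedavg(weights, client_data):
    """Server aggregates client updates weighted by local dataset size."""
    updates = [local_update(weights, X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.integers(0, 2, 20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):                           # ten communication rounds
    w = fedavg(w, clients)
print("global model:", w)
```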
arXiv Detail & Related papers (2024-07-30T19:09:10Z)
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noise.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
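As a toy illustration of the access asymmetry UGEs aim for (not the paper's construction): a key-seeded, class-wise perturbation is applied before publication, so holders of the key can strip it while unauthorized scrapers train on shifted data.

```python
# Toy illustration of keyed, class-wise data protection. The real UGE method
# optimizes the perturbation; this sketch only shows the authorized/
# unauthorized asymmetry. SECRET_KEY and all sizes are made up.
import numpy as np

rng = np.random.default_rng(42)
SECRET_KEY = 1234  # shared only with authorized users (hypothetical)

def classwise_noise(num_classes, dim, key, eps=0.5):
    """Deterministic per-class perturbations derived from the key."""
    g = np.random.default_rng(key)
    return eps * g.normal(size=(num_classes, dim))

def protect(X, y, key):
    """Publish protected data: each sample shifted by its class's noise."""
    noise = classwise_noise(int(y.max()) + 1, X.shape[1], key)
    return X + noise[y]

def recover(X_prot, y, key):
    """Authorized users subtract the keyed noise to restore clean data."""
    noise = classwise_noise(int(y.max()) + 1, X_prot.shape[1], key)
    return X_prot - noise[y]

X = rng.normal(size=(100, 8))
y = rng.integers(0, 2, 100)
X_pub = protect(X, y, SECRET_KEY)        # what unauthorized parties scrape
X_auth = recover(X_pub, y, SECRET_KEY)   # what authorized users train on
print(np.allclose(X, X_auth))            # True: clean data restored
```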
arXiv Detail & Related papers (2024-04-22T09:29:14Z)
- GuardML: Efficient Privacy-Preserving Machine Learning Services Through Hybrid Homomorphic Encryption [2.611778281107039]
Privacy-Preserving Machine Learning (PPML) methods have been introduced to safeguard the privacy and security of Machine Learning models.
A modern cryptographic scheme, Hybrid Homomorphic Encryption (HHE), has recently emerged.
We develop and evaluate an HHE-based PPML application for classifying heart disease based on sensitive ECG data.
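For intuition about the homomorphic half of such a service, here is a textbook additively homomorphic (Paillier-style) evaluation of a linear classifier score on encrypted features. The primes are toy-sized, the model is made up, and the transciphering step that makes HHE "hybrid" is omitted; none of this is GuardML's actual construction.

```python
# Toy Paillier evaluation of a linear model on encrypted features. Real HHE
# would additionally convert a symmetric ciphertext into HE form server-side.
import math, random

p, q = 2_147_483_647, 2_147_483_629   # demo primes; far too small for real use
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    r = random.randrange(1, n)
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def dec(c):
    m = ((pow(c, lam, n2) - 1) // n * mu) % n
    return m - n if m > n // 2 else m  # decode negative values

SCALE = 100                            # fixed-point scaling for weights
weights = [int(w * SCALE) for w in (0.75, -1.2, 0.4)]
bias = int(0.3 * SCALE)

features = [3, 1, 5]                   # client's sensitive inputs
cts = [enc(x) for x in features]       # client encrypts and uploads

# Server computes Enc(sum(w_i * x_i) + b) without seeing the features:
# ciphertext multiplication adds plaintexts; exponentiation scales them.
score_ct = enc(bias)
for c, w in zip(cts, weights):
    score_ct = (score_ct * pow(c, w % n, n2)) % n2

print("score:", dec(score_ct) / SCALE)  # client decrypts: 3.35 here
```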
arXiv Detail & Related papers (2024-01-26T13:12:52Z)
- A Survey of Data Security: Practices from Cybersecurity and Challenges of Machine Learning [6.086388464254366]
Machine learning (ML) is increasingly being deployed in critical systems.
The data dependence of ML makes securing data used to train and test ML-enabled systems of utmost importance.
The data science and cybersecurity domains each maintain their own sets of skills and terminologies.
arXiv Detail & Related papers (2023-10-06T18:15:35Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
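A minimal sketch of how contrastive instruction-tuning pairs of that kind might be assembled; the JSONL schema, field names, and the example record are illustrative assumptions, not the paper's dataset format.

```python
# Building instruction-tuning data with both positive (privacy-preserving)
# and negative (leaking) target responses for the same prompt.
import json

def make_pair(instruction, context, good, bad):
    """One contrastive unit: a privacy-preserving target response plus a
    leaking response labeled as rejected."""
    return {
        "instruction": instruction,
        "context": context,   # may contain personal data
        "chosen": good,       # demonstrates protection
        "rejected": bad,      # demonstrates the failure mode
    }

pairs = [
    make_pair(
        "Summarize the patient note for a referral letter.",
        "Jane Roe, SSN 078-05-1120, presents with hypertension.",
        "The patient presents with hypertension and is referred for follow-up.",
        "Jane Roe (SSN 078-05-1120) presents with hypertension.",
    ),
]

with open("privacy_tuning_pairs.jsonl", "w") as f:
    for ex in pairs:
        f.write(json.dumps(ex) + "\n")
```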
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- Privacy-Preserving Machine Learning for Collaborative Data Sharing via Auto-encoder Latent Space Embeddings [57.45332961252628]
Privacy-preserving machine learning in data-sharing processes is an increasingly critical task.
This paper presents an innovative framework that uses Representation Learning via autoencoders to generate privacy-preserving embedded data.
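A minimal PyTorch sketch of the underlying idea: train an autoencoder locally and let only latent embeddings leave the data owner's site. The architecture sizes and training loop are illustrative, not the paper's configuration.

```python
# Autoencoder whose encoder output serves as the privacy-preserving
# representation shared with collaborators.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=32, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                     nn.Linear(16, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                     nn.Linear(16, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X = torch.randn(256, 32)              # stand-in for sensitive records

for _ in range(200):                  # reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), X)
    loss.backward()
    opt.step()

with torch.no_grad():
    shared = model.encoder(X)         # only embeddings leave the site
print(shared.shape)                   # torch.Size([256, 8])
```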
arXiv Detail & Related papers (2022-11-10T17:36:58Z)
- SoK: Privacy Preserving Machine Learning using Functional Encryption: Opportunities and Challenges [1.2183405753834562]
We focus on Inner-product-FE and Quadratic-FE-based machine learning models for the privacy-preserving machine learning (PPML) applications.
To the best of our knowledge, this is the first work to systematize FE-based PPML approaches.
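To make the functionality concrete, below is an insecure toy inner-product FE: a functional key sk_y reveals only the inner product <x, y>, never x itself. Real IP-FE schemes work in a DDH-hard group; this linear stand-in over a prime field only demonstrates the setup/keygen/encrypt/decrypt semantics.

```python
# Toy inner-product functional encryption. NOT secure: it mirrors DDH-based
# IP-FE without the hard group, purely to show the API the survey covers.
import secrets

P = 2**61 - 1                      # prime modulus for the toy field
N = 4                              # vector length

def setup():
    return [secrets.randbelow(P) for _ in range(N)]       # master secret s

def keygen(msk, y):
    return sum(si * yi for si, yi in zip(msk, y)) % P     # sk_y = <s, y>

def encrypt(msk, x):
    r = secrets.randbelow(P)
    return r, [(xi + r * si) % P for xi, si in zip(x, msk)]

def decrypt(sk_y, ct, y):
    r, c = ct
    # <c, y> - r * <s, y> = <x, y> (mod P)
    return (sum(ci * yi for ci, yi in zip(c, y)) - r * sk_y) % P

msk = setup()
x = [3, 1, 4, 1]                   # encryptor's private vector
y = [2, 0, 1, 5]                   # function vector authorized by key holder
ct = encrypt(msk, x)
sk_y = keygen(msk, y)
print(decrypt(sk_y, ct, y))        # 2*3 + 0*1 + 1*4 + 5*1 = 15
```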
arXiv Detail & Related papers (2022-04-11T14:15:36Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows data owners to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
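A concrete instance of the risk: for a single sample through a dense layer, the shared weight gradient is an outer product from which the server can read the client's input directly. A toy NumPy illustration (the model and loss are made up for the demonstration):

```python
# Gradient leakage in vanilla FL: grad_W = delta * x^T for one sample, so
# the server recovers the private input x from the shared update.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5)                     # client's private input
W, b = rng.normal(size=(3, 5)), np.zeros(3)

# Forward pass and a squared-error loss against an arbitrary target.
z = W @ x + b
delta = z - np.array([1.0, 0.0, 0.0])      # dL/dz for L = 0.5*||z - t||^2
grad_W = np.outer(delta, x)                # what the client shares
grad_b = delta

# Server-side reconstruction: any nonzero row of grad_W divided by the
# matching entry of grad_b yields x exactly.
x_rec = grad_W[0] / grad_b[0]
print(np.allclose(x, x_rec))               # True
```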
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in the presence of non-deterministic encryption, but performance collapses in more complex environments.
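A sketch of the kind of setup involved: an environment wrapper encodes observations non-deterministically before the agent ever sees them. The random linear "cipher", the toy environment, and the interface are simplifications for illustration, not the paper's pipeline.

```python
# Observation wrapper: the agent trains on encoded states only. The same
# plaintext state maps to different encodings across calls (non-determinism).
import numpy as np

rng = np.random.default_rng(0)

class LineWorld:
    """1-D toy environment: move left/right toward a goal state."""
    obs_dim = 1
    def reset(self):
        self.pos = 0.0
        return np.array([self.pos])
    def step(self, action):               # action: 0 = left, 1 = right
        self.pos += 1.0 if action == 1 else -1.0
        done = abs(self.pos) >= 3
        reward = 1.0 if self.pos >= 3 else 0.0
        return np.array([self.pos]), reward, done

class EncryptedObsWrapper:
    """Wraps an environment so reset()/step() return encoded observations."""
    def __init__(self, env, noise_scale=0.1):
        self.env = env
        self.key = rng.normal(size=(8, env.obs_dim))  # random linear "cipher"
        self.noise_scale = noise_scale                # source of non-determinism
    def _encode(self, obs):
        return self.key @ obs + self.noise_scale * rng.normal(size=8)
    def reset(self):
        return self._encode(self.env.reset())
    def step(self, action):
        obs, reward, done = self.env.step(action)
        return self._encode(obs), reward, done

env = EncryptedObsWrapper(LineWorld())
obs = env.reset()                         # a DQN would train on these encodings
print(obs.shape)                          # (8,)
```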
arXiv Detail & Related papers (2021-09-16T21:59:37Z)
- secureTF: A Secure TensorFlow Framework [1.1006321791711173]
secureTF is a distributed machine learning framework based on TensorFlow for the cloud infrastructure.
secureTF supports unmodified applications, while providing end-to-end security for the input data, ML model, and application code.
This paper reports on our experiences with the system design choices and the system's deployment in production use cases.
arXiv Detail & Related papers (2021-01-20T16:36:53Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
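SPN inference is just alternating weighted sums and products over leaf densities, which is what makes it amenable to secure computation. A tiny plaintext evaluator of that structure follows; CryptoSPN performs the equivalent computation under cryptographic protocols, and the network here is made up for illustration.

```python
# Plaintext sum-product network evaluator: sum nodes mix children with
# weights, product nodes multiply them, leaves are Gaussian densities.
import math

def leaf(var, mean, std):
    """Gaussian leaf: density of x[var] under N(mean, std^2)."""
    return lambda x: (math.exp(-0.5 * ((x[var] - mean) / std) ** 2)
                      / (std * math.sqrt(2 * math.pi)))

def product(*children):
    return lambda x: math.prod(c(x) for c in children)

def weighted_sum(weights, *children):
    return lambda x: sum(w * c(x) for w, c in zip(weights, children))

# Two-variable SPN: a mixture of two independent components.
spn = weighted_sum(
    [0.3, 0.7],
    product(leaf(0, 0.0, 1.0), leaf(1, 0.0, 1.0)),
    product(leaf(0, 3.0, 1.0), leaf(1, 3.0, 1.0)),
)
print(spn([0.5, 0.2]))   # density of the input under the SPN
```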
arXiv Detail & Related papers (2020-02-03T14:49:18Z)