Reinforcement Learning on Encrypted Data
- URL: http://arxiv.org/abs/2109.08236v1
- Date: Thu, 16 Sep 2021 21:59:37 GMT
- Title: Reinforcement Learning on Encrypted Data
- Authors: Alberto Jesu, Victor-Alexandru Darvariu, Alessandro Staffolani,
Rebecca Montanari, Mirco Musolesi
- Abstract summary: We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in the presence of non-deterministic encryption, but performance collapses in more complex environments.
- Score: 58.39270571778521
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing number of applications of Reinforcement Learning (RL) in
real-world domains has led to the development of privacy-preserving techniques
due to the inherently sensitive nature of data. Most existing works focus on
differential privacy, in which information is revealed in the clear to an agent
whose learned model should be robust against information leakage to malicious
third parties. Motivated by use cases in which only encrypted data might be
shared, such as information from sensitive sites, in this work we consider
scenarios in which the inputs themselves are sensitive and cannot be revealed.
We develop a simple extension to the MDP framework which provides for the
encryption of states. We present a preliminary, experimental study of how a DQN
agent trained on encrypted states performs in environments with discrete and
continuous state spaces. Our results highlight that the agent is still capable
of learning in small state spaces even in the presence of non-deterministic
encryption, but performance collapses in more complex environments.
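As a rough illustration of this encrypted-state setup, the sketch below wraps a generic environment so that a DQN-style agent only ever observes ciphertext. It is a minimal sketch under stated assumptions: the environment interface, the byte layout, and the toy XOR-stream cipher (non-deterministic but NOT secure) are invented here for illustration and are not the paper's implementation.
```python
import hashlib
import secrets
import numpy as np

class EncryptedStateWrapper:
    """Hands the agent ciphertext observations instead of raw states.

    Toy sketch of an encrypted-state MDP: the cipher below is a stand-in
    XOR-stream construction (NOT secure), chosen only because it is
    stdlib-only and non-deterministic like the schemes studied here.
    """

    def __init__(self, env, key: bytes, state_bytes: int = 16):
        self.env = env                  # any object with reset()/step(action)
        self.key = key
        self.state_bytes = state_bytes  # fixed plaintext length per state

    def _encrypt(self, plaintext: bytes) -> bytes:
        nonce = secrets.token_bytes(8)  # fresh nonce: same state, new ciphertext
        keystream = hashlib.sha256(self.key + nonce).digest()
        cipher = bytes(p ^ k for p, k in zip(plaintext, keystream))
        return nonce + cipher

    def _observe(self, state) -> np.ndarray:
        raw = np.asarray(state, dtype=np.float32).tobytes()
        raw = raw[: self.state_bytes].ljust(self.state_bytes, b"\x00")
        ciphertext = self._encrypt(raw)
        # Scale ciphertext bytes into [0, 1] so an ordinary DQN can consume them.
        return np.frombuffer(ciphertext, dtype=np.uint8).astype(np.float32) / 255.0

    def reset(self):
        return self._observe(self.env.reset())

    def step(self, action):
        state, reward, done, info = self.env.step(action)
        # Only the state is hidden; actions and rewards remain in the clear.
        return self._observe(state), reward, done, info
```
Because the cipher is non-deterministic, the same underlying state yields a different observation on every visit, which is the property the paper finds tolerable in small state spaces but crippling in larger ones.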
Related papers
- Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face emerging challenges from the re-identification capabilities of Large Language Models.
This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component.
arXiv Detail & Related papers (2024-07-16T14:28:56Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy-sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released (a toy Laplace-mechanism sketch appears after this list).
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Application of Data Encryption in Chinese Named Entity Recognition [11.084360853065736]
We propose an encryption learning framework to address the problems of data leakage and inconvenient disclosure of sensitive data.
We introduce multiple encryption algorithms to encrypt training data in the named entity recognition task for the first time.
The experimental results show that the encryption method achieves satisfactory results.
arXiv Detail & Related papers (2022-08-31T04:20:37Z)
- Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings [56.93025161787725]
Federated learning (FL) is a distributed machine learning paradigm that coordinates clients to train a model collaboratively without sharing local data.
We propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters.
We show that the attribute inference attack is achievable for SER systems trained using FL.
arXiv Detail & Related papers (2021-12-26T16:50:42Z)
- TenSEAL: A Library for Encrypted Tensor Operations Using Homomorphic Encryption [0.0]
We present TenSEAL, an open-source library for Privacy-Preserving Machine Learning using Homomorphic Encryption.
We show that an encrypted convolutional neural network can be evaluated in less than a second, using less than half a megabyte of communication (see the CKKS usage sketch after this list).
arXiv Detail & Related papers (2021-04-07T14:32:38Z)
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy, and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
- Secure Sum Outperforms Homomorphic Encryption in (Current) Collaborative Deep Learning [7.690774882108066]
We discuss methods for training neural networks on the joint data of different data owners while keeping each party's input confidential.
We show that a less complex and computationally less expensive secure sum protocol exhibits superior properties in terms of both collusion resistance and runtime (a minimal secret-sharing sketch appears after this list).
arXiv Detail & Related papers (2020-06-02T23:03:32Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
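To make the DP data-publishing idea from "A Unified View of Differentially Private Deep Generative Modeling" concrete, here is a toy sketch of the classic Laplace mechanism, which releases a noisy statistic instead of the raw data. The dataset, sensitivity bound, and epsilon below are invented for illustration and are not from the paper.
```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise calibrated to (sensitivity, epsilon).

    Classic epsilon-DP mechanism: the noise scale is sensitivity / epsilon.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: publish the mean income of a dataset without exposing any record.
incomes = np.array([42_000.0, 55_000.0, 61_000.0, 48_000.0])
# Sensitivity of the mean, assuming each record is bounded by 100_000.
sensitivity = 100_000.0 / len(incomes)
private_mean = laplace_mechanism(incomes.mean(), sensitivity, epsilon=1.0)
print(f"sanitized mean: {private_mean:.0f}")
```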
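The TenSEAL entry can be made concrete with a short usage sketch. This follows the usage shown in TenSEAL's public README (a CKKS context, an encrypted vector, and an encrypted dot product); treat the exact parameters as tutorial defaults rather than recommendations.
```python
import tenseal as ts

# Create a CKKS context (parameters follow TenSEAL's documented examples).
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

plain = [1.0, 2.0, 3.0, 4.0]
enc = ts.ckks_vector(context, plain)     # encrypt the vector
result = enc.dot([1.0, 0.0, 1.0, 0.0])   # dot product computed on ciphertext
print(result.decrypt())                  # approximately [4.0] (1*1 + 3*1)
```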
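The secure-sum entry relies on additive secret sharing: each party splits its value into random shares, and only the total is ever reconstructed. A minimal sketch of the underlying protocol, assuming honest-but-curious parties and ignoring the pairwise communication channels a real deployment would need:
```python
import secrets

PRIME = 2**61 - 1  # shares live in a field large enough for the expected totals

def make_shares(value: int, n_parties: int) -> list[int]:
    """Split value into n additive shares that sum to value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(private_values: list[int]) -> int:
    """Each party shares its value; only the overall total is reconstructed."""
    n = len(private_values)
    all_shares = [make_shares(v, n) for v in private_values]
    # Party i sums the i-th share received from every party...
    partial_sums = [sum(shares[i] for shares in all_shares) % PRIME for i in range(n)]
    # ...and only these partial sums are combined into the public total.
    return sum(partial_sums) % PRIME

print(secure_sum([3, 1, 4]))  # prints 8 without revealing any individual value
```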