Privacy-Preserving Chaotic Extreme Learning Machine with Fully
Homomorphic Encryption
- URL: http://arxiv.org/abs/2208.02587v1
- Date: Thu, 4 Aug 2022 11:29:52 GMT
- Authors: Syed Imtiaz Ahamed and Vadlamani Ravi
- Abstract summary: We propose a Chaotic Extreme Learning Machine and its encrypted form using Fully Homomorphic Encryption.
Our proposed method performs better than or comparably to the traditional Extreme Learning Machine on most of the datasets.
- Score: 5.010425616264462
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning and deep learning models require large amounts of
data for training, and in some scenarios that data may be sensitive, such as
customer information, which organizations may be hesitant to outsource for
model building. Privacy-preserving techniques such as Differential Privacy,
Homomorphic Encryption, and Secure Multi-Party Computation can be integrated
with machine learning and deep learning algorithms to secure both the data and
the model. In this paper, we propose a Chaotic Extreme Learning Machine, and
its encrypted form using Fully Homomorphic Encryption, in which the weights
and biases are generated with a logistic map instead of a uniform
distribution. Our proposed method performs better than or comparably to the
traditional Extreme Learning Machine on most of the datasets.
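
A minimal sketch of the unencrypted half of this idea is shown below, assuming
a tanh activation, logistic-map parameters x0 = 0.1 and r = 4.0, and a
rescaling of the chaotic values to (-1, 1); these specifics are illustrative
assumptions, not details taken from the paper. The hidden weights and biases
come from iterating the logistic map x_{t+1} = r * x_t * (1 - x_t) rather than
from a uniform distribution, and the output weights are solved in closed form
via a pseudo-inverse, as in a standard Extreme Learning Machine. The encrypted
variant would evaluate the same forward pass homomorphically (e.g., under a
CKKS-style scheme), which this sketch does not attempt.

```python
# Minimal sketch of a Chaotic Extreme Learning Machine (ELM) in NumPy.
# Hidden weights/biases come from the logistic map instead of a uniform
# distribution. Map parameters and the tanh activation are assumptions
# for illustration, not values specified by the paper.
import numpy as np

def logistic_map_sequence(n, x0=0.1, r=4.0):
    """Generate n chaotic values in (0, 1) by iterating the logistic map."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def fit_chaotic_elm(X, y, n_hidden=64):
    """Train an ELM whose 'random' layer is generated by the logistic map."""
    n_features = X.shape[1]
    vals = logistic_map_sequence(n_hidden * n_features + n_hidden)
    vals = 2.0 * vals - 1.0  # rescale (0, 1) -> (-1, 1), an assumed convention
    W = vals[: n_hidden * n_features].reshape(n_features, n_hidden)
    b = vals[n_hidden * n_features:]
    H = np.tanh(X @ W + b)        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y  # closed-form output weights (standard ELM)
    return W, b, beta

def predict_chaotic_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X.sum(axis=1) > 0).astype(float)
W, b, beta = fit_chaotic_elm(X, y)
preds = predict_chaotic_elm(X, W, b, beta) > 0.5
print("train accuracy:", (preds == y.astype(bool)).mean())
```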
Related papers
- Learning in the Dark: Privacy-Preserving Machine Learning using Function Approximation [1.8907108368038215]
Learning in the Dark is a privacy-preserving machine learning model that can classify encrypted images with high accuracy.
It makes high-accuracy predictions by performing computations directly on encrypted data.
arXiv Detail & Related papers (2023-09-15T06:45:58Z)
- Privacy Side Channels in Machine Learning Systems [87.53240071195168]
We introduce privacy side channels: attacks that exploit system-level components to extract private information.
For example, we show that deduplicating training data before applying differentially private training creates a side channel that completely invalidates any provable privacy guarantees.
We further show that systems which block language models from regenerating training data can be exploited to exfiltrate private keys contained in the training set.
arXiv Detail & Related papers (2023-09-11T16:49:05Z)
- Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach [0.9831489366502302]
We propose a robust representation learning framework for privacy-preserving machine learning (ppML).
Our method centers on training autoencoders in a multi-objective manner and then concatenating the latent and learned features from the encoder as the encoded form of the data.
With our proposed framework, we can share data and use third-party tools without the threat of revealing its original form.
arXiv Detail & Related papers (2023-09-08T16:41:25Z)
- Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques for graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels [59.66777287810985]
We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user.
We then theoretically characterize primitives for building families of encoding schemes that motivate the use of random deep neural networks.
arXiv Detail & Related papers (2023-03-31T18:03:53Z)
- Privacy-Preserving Machine Learning for Collaborative Data Sharing via Auto-encoder Latent Space Embeddings [57.45332961252628]
Privacy-preserving machine learning in data-sharing processes is an ever-critical task.
This paper presents an innovative framework that uses representation learning via autoencoders to generate privacy-preserving embedded data.
arXiv Detail & Related papers (2022-11-10T17:36:58Z)
- Privacy-Preserving Wavelet Neural Network with Fully Homomorphic Encryption [5.010425616264462]
Privacy-Preserving Machine Learning (PPML) aims to protect the privacy of, and provide security for, the data used in building machine learning models.
We propose a fully homomorphically encrypted wavelet neural network that protects privacy without compromising the efficiency of the model.
arXiv Detail & Related papers (2022-05-26T10:40:31Z)
- Homomorphic Encryption and Federated Learning based Privacy-Preserving CNN Training: COVID-19 Detection Use-Case [0.41998444721319217]
This paper proposes a privacy-preserving federated learning algorithm for medical data using homomorphic encryption.
The proposed algorithm uses a secure multi-party computation protocol to protect the deep learning model from adversaries.
arXiv Detail & Related papers (2022-04-16T08:38:35Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
- Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach, which allows knowledge to be shared between trained models.
Students are trained on the output of their teachers via synthetically generated input data.
The results show that an untrained student model, trained on the teachers' output, reaches F1-scores comparable to the teacher's.
arXiv Detail & Related papers (2021-02-01T14:38:54Z)
- Additively Homomorphical Encryption based Deep Neural Network for Asymmetrically Collaborative Machine Learning [12.689643742151516]
Privacy-preserving machine learning creates constraints that limit further applications in the finance sector.
We propose a new practical scheme of collaborative machine learning in which one party owns the data but another party owns only the labels.
Our experiments on different datasets demonstrate not only stable training without loss of accuracy, but also a more than 100-fold speedup.
arXiv Detail & Related papers (2020-07-14T06:43:25Z)