ESMFL: Efficient and Secure Models for Federated Learning
- URL: http://arxiv.org/abs/2009.01867v2
- Date: Wed, 3 Mar 2021 19:45:00 GMT
- Title: ESMFL: Efficient and Secure Models for Federated Learning
- Authors: Sheng Lin, Chenghong Wang, Hongjia Li, Jieren Deng, Yanzhi Wang,
Caiwen Ding
- Abstract summary: We propose a privacy-preserving method for the federated learning distributed system, operated on Intel Software Guard Extensions.
We reduce the communication cost via sparsification, and the method achieves reasonable accuracy with different model architectures.
- Score: 28.953644581089495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, Deep Neural Networks are widely applied to various domains.
However, the massive data collection required for deep neural networks raises
potential privacy issues and also consumes large amounts of communication
bandwidth. To address these problems, we propose a privacy-preserving method
for the federated learning distributed system, operated on Intel Software Guard
Extensions, a set of instructions that increase the security of application
code and data. Meanwhile, the encrypted models make the transmission overhead
larger. Hence, we reduce the communication cost via sparsification, which
achieves reasonable accuracy with different model architectures.
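The abstract does not spell out the sparsification scheme, but a common choice in communication-efficient federated learning is top-k magnitude sparsification: each client sends only the k largest-magnitude entries of its model update. A minimal sketch under that assumption (the paper's actual scheme may differ):

```python
import numpy as np

def topk_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a model update.

    Returns (indices, values): a compact representation a client could
    transmit instead of the dense update vector.
    """
    flat = update.ravel()
    if k >= flat.size:
        idx = np.arange(flat.size)
    else:
        idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def desparsify(indices, values, shape):
    """Rebuild a dense update from the sparse (indices, values) pair."""
    flat = np.zeros(int(np.prod(shape)))
    flat[indices] = values
    return flat.reshape(shape)

update = np.array([[0.01, -2.0, 0.3], [1.5, -0.02, 0.0]])
idx, vals = topk_sparsify(update, k=3)       # transmit 3 of 6 entries
restored = desparsify(idx, vals, update.shape)
```

Only the index/value pairs cross the network, cutting transmission cost roughly by a factor of `flat.size / k`; the small residual entries are simply dropped (or, in many schemes, accumulated locally for later rounds).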
Related papers
- Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may stem from biases in data acquisition rather than from genuinely task-relevant features.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - FedML-HE: An Efficient Homomorphic-Encryption-Based Privacy-Preserving Federated Learning System [24.39699808493429]
Federated Learning trains machine learning models on distributed devices by aggregating local model updates instead of local data.
Privacy concerns arise as the aggregated local models on the server may reveal sensitive personal information by inversion attacks.
We present FedML-HE, the first practical federated learning system with efficient HE-based secure model aggregation.
arXiv Detail & Related papers (2023-03-20T02:44:35Z) - RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency
IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference via re-thinking the distribution strategy.
We formulate this methodology as an optimization problem, establishing a trade-off between the latency of co-inference and the privacy level of the data.
arXiv Detail & Related papers (2022-08-27T14:50:00Z) - An Online Ensemble Learning Model for Detecting Attacks in Wireless
Sensor Networks [0.0]
We develop an intelligent, efficient, and updatable intrusion detection system by applying an important machine learning concept known as ensemble learning.
In this paper, we examine the application of different homogeneous and heterogeneous online ensembles in sensory data analysis.
Among the proposed novel online ensembles, both the heterogeneous ensemble consisting of an Adaptive Random Forest (ARF) combined with the Hoeffding Adaptive Tree (HAT) algorithm and the homogeneous ensemble HAT made up of 10 models achieved higher detection rates of 96.84% and 97.2%, respectively.
arXiv Detail & Related papers (2022-04-28T23:10:47Z) - Towards Privacy-Preserving Neural Architecture Search [7.895707607608013]
PP-NAS is a privacy-preserving neural architecture search framework based on secure multi-party computation.
PP-NAS outsources the NAS task to two non-colluding cloud servers to take full advantage of its mixed-protocol design.
We develop a new alternative to approximate the Softmax function over secret shares, which bypasses the limitation of approximating exponential operations in Softmax while improving accuracy.
arXiv Detail & Related papers (2022-04-22T23:44:45Z) - An Interpretable Federated Learning-based Network Intrusion Detection
Framework [9.896258523574424]
FEDFOREST is a novel learning-based NIDS that combines interpretable Gradient Boosting Decision Tree (GBDT) and Federated Learning (FL) framework.
FEDFOREST is composed of multiple clients that extract local cyberattack data features for the server to train models and detect intrusions.
Experiments on 4 cyberattack datasets demonstrate that FEDFOREST is effective, efficient, interpretable, and extendable.
arXiv Detail & Related papers (2022-01-10T02:12:32Z) - Robust Semi-supervised Federated Learning for Images Automatic
Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
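The FedFreq rule is only named in this summary; as a hypothetical illustration (the paper's exact weighting may differ), an aggregation rule weighted by each client's participation frequency could look like:

```python
def fedfreq_aggregate(updates, participation_counts):
    """Frequency-weighted average of client model updates.

    Hypothetical sketch of a FedFreq-style rule: each client's weight is
    proportional to how many training rounds it has participated in.
    updates: {client_id: list of floats}
    participation_counts: {client_id: int}
    """
    total = sum(participation_counts[c] for c in updates)
    dim = len(next(iter(updates.values())))
    agg = [0.0] * dim
    for c, u in updates.items():
        weight = participation_counts[c] / total
        for k in range(dim):
            agg[k] += weight * u[k]
    return agg

updates = {"uav1": [1.0, 2.0], "uav2": [3.0, 4.0]}
counts = {"uav1": 3, "uav2": 1}  # uav1 trained in 3 rounds, uav2 in 1
model = fedfreq_aggregate(updates, counts)  # → [1.5, 2.5]
```

Weighting by participation frequency gives clients that have trained more often a larger say in the global model, which can help when UAV clients hold data of very different sizes and distributions.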
arXiv Detail & Related papers (2022-01-03T16:49:33Z) - Sphynx: ReLU-Efficient Network Design for Private Inference [49.73927340643812]
We focus on private inference (PI), where the goal is to perform inference on a user's data sample using a service provider's model.
Existing PI methods for deep networks enable cryptographically secure inference with little drop in functionality.
This paper presents Sphynx, a ReLU-efficient network design method based on micro-search strategies for convolutional cell design.
arXiv Detail & Related papers (2021-06-17T18:11:10Z) - All at Once Network Quantization via Collaborative Knowledge Transfer [56.95849086170461]
We develop a novel collaborative knowledge transfer approach for efficiently training the all-at-once quantization network.
Specifically, we propose an adaptive selection strategy to choose a high-precision "teacher" for transferring knowledge to the low-precision student.
To effectively transfer knowledge, we develop a dynamic block swapping method by randomly replacing the blocks in the lower-precision student network with the corresponding blocks in the higher-precision teacher network.
arXiv Detail & Related papers (2021-03-02T03:09:03Z) - Privacy-preserving Traffic Flow Prediction: A Federated Learning
Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU's prediction accuracy is 90.96%, higher than that of advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z) - Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G
Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
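Several of the papers above (ESMFL, FedML-HE, FedGRU) rely on some form of secure model aggregation, where the server learns only the sum of client updates and never an individual update. None of their exact mechanisms (SGX enclaves, homomorphic encryption) are shown here, but the general idea can be sketched with classic pairwise additive masking over integer-quantized updates; names and parameters below are illustrative, not from any of the papers:

```python
import random

def make_pairwise_masks(client_ids, dim, modulus):
    """For each client pair (i, j), derive a shared random mask; client i
    adds it and client j subtracts it, so all masks cancel in the sum."""
    masks = {cid: [0] * dim for cid in client_ids}
    for a in range(len(client_ids)):
        for b in range(a + 1, len(client_ids)):
            i, j = client_ids[a], client_ids[b]
            # Stand-in for a pairwise shared secret (e.g. from key agreement).
            rng = random.Random(i * 1_000_003 + j)
            pair = [rng.randrange(modulus) for _ in range(dim)]
            masks[i] = [(m + p) % modulus for m, p in zip(masks[i], pair)]
            masks[j] = [(m - p) % modulus for m, p in zip(masks[j], pair)]
    return masks

def mask_update(update, mask, modulus):
    """What a client sends: its update blinded by its aggregate mask."""
    return [(u + m) % modulus for u, m in zip(update, mask)]

MOD = 2**16
clients = {1: [3, 5, 7], 2: [1, 1, 1], 3: [2, 0, 4]}
masks = make_pairwise_masks(list(clients), dim=3, modulus=MOD)
masked = [mask_update(u, masks[c], MOD) for c, u in clients.items()]
# Server sums the masked updates; the pairwise masks cancel out.
aggregate = [sum(col) % MOD for col in zip(*masked)]  # → [6, 6, 12]
```

Each masked update on its own is statistically independent of the client's true update, yet the server-side sum equals the true aggregate, which is the property the secure aggregation schemes above provide with stronger machinery.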
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.