Scotch: An Efficient Secure Computation Framework for Secure Aggregation
- URL: http://arxiv.org/abs/2201.07730v1
- Date: Wed, 19 Jan 2022 17:16:35 GMT
- Title: Scotch: An Efficient Secure Computation Framework for Secure Aggregation
- Authors: Arup Mondal, Yash More, Prashanthi Ramachandran, Priyam Panda,
Harpreet Virk, Debayan Gupta
- Abstract summary: Federated learning enables multiple data owners to jointly train a machine learning model without revealing their private datasets.
A malicious aggregation server might use the model parameters to derive sensitive information about the training dataset used.
We propose Scotch, a decentralized m-party secure-computation framework for federated aggregation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning enables multiple data owners to jointly train a machine
learning model without revealing their private datasets. However, a malicious
aggregation server might use the model parameters to derive sensitive
information about the training dataset. To address such leakage, differential
privacy and cryptographic techniques have been investigated in prior work, but
these often result in large communication overheads or impact model
performance. To mitigate this centralization of power, we propose Scotch, a
decentralized m-party secure-computation framework for federated aggregation
that deploys MPC primitives such as secret sharing. Our protocol is simple,
efficient, and provides strict privacy guarantees against curious aggregators
or colluding data owners, with minimal communication overhead compared to
existing state-of-the-art privacy-preserving federated learning frameworks. We
evaluate our framework through extensive experiments on multiple datasets, with
promising results: Scotch can train a standard MLP neural network, with the
training dataset split among 3 participating users and 3 aggregating servers,
achieving 96.57% accuracy on MNIST and 98.40% accuracy on the Extended MNIST
(digits) dataset, while providing various optimizations.
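As a rough illustration of the secret-sharing-based aggregation the abstract describes (a minimal sketch only, not Scotch's actual protocol: the modulus, the fixed-point encoding, and all function names below are assumptions made for this example), additive secret sharing across aggregating servers can look like this:

```python
import secrets

Q = 2**61 - 1   # illustrative prime modulus; Scotch's field choice may differ
SCALE = 10**6   # fixed-point scaling factor for encoding float parameters

def encode(x: float) -> int:
    """Embed a float parameter into the field via fixed-point encoding."""
    return int(round(x * SCALE)) % Q

def decode(v: int) -> float:
    """Map a field element back to a float, recovering negatives."""
    if v > Q // 2:
        v -= Q
    return v / SCALE

def share(value: int, n: int) -> list[int]:
    """Split a field element into n additive shares that sum to it mod Q."""
    parts = [secrets.randbelow(Q) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % Q)
    return parts

# Three clients and three aggregating servers, as in the paper's evaluation.
client_updates = [0.25, -0.10, 0.40]   # toy single-parameter "model updates"
n_servers = 3

# Each client secret-shares its encoded update, one share per server, so no
# single server ever sees a plaintext update.
inboxes = [[] for _ in range(n_servers)]
for update in client_updates:
    for server_id, s in enumerate(share(encode(update), n_servers)):
        inboxes[server_id].append(s)

# Each server sums only the shares it holds.
partial_sums = [sum(inbox) % Q for inbox in inboxes]

# Recombining the partial sums reveals only the aggregate, never the inputs.
aggregate = decode(sum(partial_sums) % Q)
assert abs(aggregate - sum(client_updates)) < 1e-6
print(f"secure aggregate: {aggregate:.2f}")   # 0.55
```

Averaging the aggregate over the number of clients and handling full per-layer parameter tensors are omitted for brevity.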
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly training on centralized data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
However, FedIT encounters limitations such as the scarcity of instruction data and the risk of exposure to training-data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z) - Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models [62.838689691468666]
We propose Federated Black-Box Prompt Tuning (Fed-BBPT) to optimally harness each local dataset.
Fed-BBPT capitalizes on a central server that aids local users in collaboratively training a prompt generator through regular aggregation.
Relative to extensive fine-tuning, Fed-BBPT sidesteps the memory challenges tied to storing and fine-tuning the pre-trained model (PTM) on local machines.
arXiv Detail & Related papers (2023-10-04T19:30:49Z) - Benchmarking FedAvg and FedCurv for Image Classification Tasks [1.376408511310322]
This paper focuses on the problem of statistical heterogeneity of the data in the same federated network.
Several Federated Learning algorithms, such as FedAvg, FedProx and Federated Curvature (FedCurv) have already been proposed.
As a side product of this work, we release the non-IID versions of the datasets we used, to facilitate further comparisons within the FL community.
arXiv Detail & Related papers (2023-03-31T10:13:01Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation (a minimal FedAvg-style sketch of this step appears after this list).
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Differentially Private Label Protection in Split Learning [20.691549091238965]
Split learning is a distributed training framework that allows multiple parties to jointly train a machine learning model over partitioned data.
Recent works showed that implementations of split learning suffer from severe privacy risks: a semi-honest adversary can easily reconstruct labels.
We propose TPSL (Transcript Private Split Learning), a generic gradient-based split learning framework that provides a provable differential privacy guarantee.
arXiv Detail & Related papers (2022-03-04T00:35:03Z) - Stochastic Coded Federated Learning with Convergence and Privacy
Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a coded federated learning framework, namely stochastic coded federated learning (SCFL), to mitigate the straggler issue.
We characterize the privacy guarantee via mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z) - FedOCR: Communication-Efficient Federated Learning for Scene Text
Recognition [76.26472513160425]
We study how to make use of decentralized datasets for training a robust scene text recognizer.
To make FedOCR suitable for deployment on end devices, we make two improvements: using lightweight models and hashing techniques.
arXiv Detail & Related papers (2020-07-22T14:30:50Z) - Evaluation Framework For Large-scale Federated Learning [10.127616622630514]
Federated learning is proposed as a machine learning setting to enable distributed edge devices, such as mobile phones, to collaboratively learn a shared prediction model.
In this paper, we introduce a framework designed for large-scale federated learning, consisting of approaches for dataset generation and a modular evaluation framework.
arXiv Detail & Related papers (2020-03-03T15:12:13Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
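As referenced in the Scalable Collaborative Learning entry above, here is a minimal FedAvg-style sketch of the FL aggregation step that entry describes (the NumPy setup, the function name, and the toy values are illustrative assumptions, not code from any listed paper):

```python
import numpy as np

def fedavg_aggregate(client_weights: list[np.ndarray],
                     client_sizes: list[int]) -> np.ndarray:
    """Average locally trained parameter vectors into a global model,
    weighting each client by the size of its local dataset."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy example: three clients with differently sized local datasets.
updates = [np.array([0.2, 0.4]), np.array([0.1, 0.5]), np.array([0.3, 0.3])]
sizes = [100, 200, 100]
print(fedavg_aggregate(updates, sizes))  # [0.175 0.425]
```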