ABG: A Multi-Party Mixed Protocol Framework for Privacy-Preserving
Cooperative Learning
- URL: http://arxiv.org/abs/2202.02928v2
- Date: Thu, 10 Feb 2022 02:36:18 GMT
- Title: ABG: A Multi-Party Mixed Protocol Framework for Privacy-Preserving
Cooperative Learning
- Authors: Hao Wang, Zhi Li, Chunpeng Ge, Willy Susilo
- Abstract summary: We propose a privacy-preserving multi-party cooperative learning system that allows different data owners to cooperate in machine learning.
We also design specific privacy-preserving computation protocols for typical machine learning methods such as logistic regression and neural networks.
The experiments indicate that ABG$^n$ performs well, especially in low-latency network environments.
- Score: 13.212198032364363
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cooperative learning, which enables two or more data owners to jointly
train a model, has been widely adopted to address the problem of insufficient
training data in machine learning. Institutions and organizations increasingly
need to train models cooperatively while keeping each other's data private.
Secure outsourced computation and federated learning are two typical approaches
to privacy preservation in collaborative learning, but both have notable
drawbacks when applied to cooperative learning. Secure outsourced computation
requires introducing semi-honest servers; once the outsourced servers collude
or mount other active attacks, data privacy is lost. Federated learning is
difficult to apply in scenarios where vertically partitioned data are
distributed over multiple parties. In this work, we propose a multi-party mixed
protocol framework, ABG$^n$, which effectively implements arbitrary conversion
between Arithmetic sharing (A), Boolean sharing (B) and Garbled-Circuits
sharing (G) in $n$-party scenarios. Based on ABG$^n$, we design a
privacy-preserving multi-party cooperative learning system that allows
different data owners to cooperate in machine learning while keeping their data
secure and private. Additionally, we design specific privacy-preserving
computation protocols for typical machine learning methods such as logistic
regression and neural networks. Compared with previous work, the proposed
method has a wider scope of application and does not rely on additional
servers. Finally, we evaluate the performance of ABG$^n$ in a local setting and
in a public cloud setting. The experiments indicate that ABG$^n$ performs well,
especially in low-latency network environments.
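To make the sharing types concrete, below is a minimal sketch of the arithmetic (A) sharing that ABG$^n$ converts to and from: each party holds a random-looking additive share, secure addition is purely local, and it is the non-linear steps (e.g., the sigmoid in logistic regression) that motivate Boolean/garbled-circuit sharing and the conversions between them. This is a generic illustration only; the ring modulus, function names, and the conversion protocols themselves are assumptions, not the paper's implementation.

    # Hedged sketch of n-party additive arithmetic (A) sharing, the building block
    # that ABG^n converts to and from Boolean (B) and garbled-circuit (G) sharing.
    # NOT the paper's protocol: ring size and helper names are illustrative only.
    import secrets

    RING = 2 ** 64  # assumed modulus: arithmetic shares live in Z_{2^64}

    def share(x: int, n: int) -> list[int]:
        """Split a secret x into n additive shares that sum to x mod RING."""
        shares = [secrets.randbelow(RING) for _ in range(n - 1)]
        shares.append((x - sum(shares)) % RING)
        return shares

    def reconstruct(shares: list[int]) -> int:
        """Recombine all n shares; no proper subset of parties learns x."""
        return sum(shares) % RING

    def add_shares(a: list[int], b: list[int]) -> list[int]:
        """Secure addition is local: each party adds its own shares, no communication."""
        return [(ai + bi) % RING for ai, bi in zip(a, b)]

    if __name__ == "__main__":
        n = 3
        a, b = 20, 22
        shared_sum = add_shares(share(a, n), share(b, n))
        assert reconstruct(shared_sum) == (a + b) % RING  # 42

Multiplications and comparisons are not local in this representation, which is why mixed-protocol frameworks switch representations per operation rather than running everything in a single sharing type.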
Related papers
- Over-the-Air Federated Learning In Broadband Communication [0.0]
Federated learning (FL) is a privacy-preserving distributed machine learning paradigm that operates at the wireless edge.
Some rely on secure multiparty computation, which can be vulnerable to inference attacks.
Others employ differential privacy, but this may lead to decreased test accuracy when dealing with a large number of parties contributing small amounts of data.
arXiv Detail & Related papers (2023-06-03T00:16:27Z) - Federated Nearest Neighbor Machine Translation [66.8765098651988]
In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework.
FedNN leverages one-round memorization-based interaction to share knowledge across different clients.
Experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg.
arXiv Detail & Related papers (2023-02-23T18:04:07Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Privacy-Preserving Machine Learning for Collaborative Data Sharing via
Auto-encoder Latent Space Embeddings [57.45332961252628]
Privacy-preserving machine learning in data-sharing processes is an ever-critical task.
This paper presents an innovative framework that uses Representation Learning via autoencoders to generate privacy-preserving embedded data.
arXiv Detail & Related papers (2022-11-10T17:36:58Z) - An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z) - A Graph Federated Architecture with Privacy Preserving Learning [48.24121036612076]
Federated learning involves a central processor that works with multiple agents to find a global model.
The current architecture of a server connected to multiple clients is highly sensitive to communication failures and computational overloads at the server.
We use cryptographic and differential privacy concepts to privatize the federated learning algorithm that we extend to the graph structure.
arXiv Detail & Related papers (2021-04-26T09:51:24Z) - Constrained Differentially Private Federated Learning for Low-bandwidth
Devices [1.1470070927586016]
This paper presents a novel privacy-preserving federated learning scheme.
It provides theoretical privacy guarantees, as it is based on Differential Privacy.
It reduces the upstream and downstream bandwidth by up to 99.9% compared to standard federated learning.
arXiv Detail & Related papers (2021-02-27T22:25:06Z) - Differentially Private Secure Multi-Party Computation for Federated
Learning in Financial Applications [5.50791468454604]
Federated learning enables a population of clients, working with a trusted server, to collaboratively learn a shared machine learning model.
This reduces the risk of exposing sensitive data, but it is still possible to reverse engineer information about a client's private data set from communicated model parameters.
We present a privacy-preserving federated learning protocol to a non-specialist audience, demonstrate it using logistic regression on a real-world credit card fraud data set, and evaluate it using an open-source simulation platform.
arXiv Detail & Related papers (2020-10-12T17:16:27Z) - Concentrated Differentially Private and Utility Preserving Federated
Learning [24.239992194656164]
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation on model utility.
We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates.
arXiv Detail & Related papers (2020-03-30T19:20:42Z) - User-Level Privacy-Preserving Federated Learning: Analysis and
Performance Optimization [77.43075255745389]
Federated learning (FL) is capable of preserving private data from mobile terminals (MTs) while training the data into useful models.
From a viewpoint of information theory, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm that adds artificial noise to the shared models before uploading them to the servers (a generic clip-and-noise sketch follows this list).
arXiv Detail & Related papers (2020-02-29T10:13:39Z)
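For the UDP entry above, the "add artificial noise before upload" step is essentially the Gaussian mechanism applied to a clipped model update. The sketch below is a generic placeholder, not that paper's calibration: the clipping norm, noise scale, and function names are assumptions for illustration.

    # Hedged sketch of clip-and-noise on a client update before upload (cf. UDP above).
    # The noise_std here is a free parameter; the cited paper derives its own calibration.
    import numpy as np

    def clip_and_noise(update: np.ndarray, clip_norm: float, noise_std: float,
                       rng: np.random.Generator) -> np.ndarray:
        """Clip the update to bound its sensitivity, then add Gaussian noise."""
        norm = np.linalg.norm(update)
        clipped = update * min(1.0, clip_norm / (norm + 1e-12))
        return clipped + rng.normal(0.0, noise_std, size=update.shape)

    rng = np.random.default_rng(0)
    local_update = rng.standard_normal(10)  # stand-in for a client's shared model update
    noisy_update = clip_and_noise(local_update, clip_norm=1.0, noise_std=0.5, rng=rng)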