Over-the-Air Federated Learning In Broadband Communication
- URL: http://arxiv.org/abs/2306.01963v1
- Date: Sat, 3 Jun 2023 00:16:27 GMT
- Title: Over-the-Air Federated Learning In Broadband Communication
- Authors: Wayne Lemieux, Raphael Pinard, Mitra Hassani
- Abstract summary: Federated learning (FL) is a privacy-preserving distributed machine learning paradigm that operates at the wireless edge.
Some rely on secure multiparty computation, which can be vulnerable to inference attacks.
Others employ differential privacy, but this may lead to decreased test accuracy when dealing with a large number of parties contributing small amounts of data.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning (FL) is a privacy-preserving distributed machine learning
paradigm that operates at the wireless edge. It enables clients to collaborate
on model training while keeping their data private from adversaries and the
central server. However, current FL approaches have limitations. Some rely on
secure multiparty computation, which can be vulnerable to inference attacks.
Others employ differential privacy, but this may lead to decreased test
accuracy when dealing with a large number of parties contributing small amounts
of data. To address these issues, this paper proposes a novel approach that
integrates federated learning seamlessly into the inner workings of MIMO
(Multiple-Input Multiple-Output) systems.
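The abstract does not spell out the mechanism, but the core idea such MIMO-integrated schemes build on is over-the-air computation: clients transmit analog-encoded updates simultaneously and the multiple-access channel itself sums them. The sketch below illustrates that general principle under assumed single-antenna clients, per-client channel inversion, and Gaussian receiver noise; it is not the authors' exact design.

```python
import numpy as np

# Minimal sketch of over-the-air federated aggregation (assumed setup:
# K single-antenna clients, flat-fading channels known at the
# transmitters, Gaussian receiver noise; not the paper's exact scheme).
rng = np.random.default_rng(0)
K, d = 10, 1_000                 # clients, model dimension
noise_std = 0.05

updates = rng.normal(size=(K, d))                    # local model updates
h = rng.normal(size=K) + 1j * rng.normal(size=K)     # fading coefficients

tx = updates / h[:, None]                 # channel-inversion pre-coding
rx = (h[:, None] * tx).sum(axis=0)        # superposition "in the air"
rx = rx + noise_std * rng.normal(size=d)  # receiver noise
global_update = rx.real / K               # noisy federated average

err = np.abs(global_update - updates.mean(axis=0)).max()
print(f"max deviation from the true average: {err:.4f}")
```

Because aggregation happens in the channel itself, the server only ever observes the noisy sum, never an individual client's update, which is what gives this family of methods its privacy appeal.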
Related papers
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
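As a rough illustration of the event-triggered communication the CFL summary above refers to, the sketch below has a client upload to its server only when its local model has drifted past a threshold since the last upload. The threshold, drift measure, and simulated local progress are assumptions; the paper's SAGA-style variance reduction is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
d, threshold = 8, 0.5
local = np.zeros(d)           # client's evolving local model
last_sent = np.zeros(d)       # last model uploaded to its server
transmissions = 0

for step in range(100):
    local = local + 0.1 * rng.normal(size=d)           # local progress
    if np.linalg.norm(local - last_sent) > threshold:  # event trigger
        last_sent = local.copy()                       # upload only now
        transmissions += 1

print(f"uploaded in {transmissions} of 100 rounds")
```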
- Federated Learning in MIMO Satellite Broadcast System
Federated learning (FL) is a type of distributed machine learning at the wireless edge that preserves the privacy of clients' data from adversaries and even the central server.
Existing federated learning approaches either use (i) secure multiparty computation (SMC), which is vulnerable to inference attacks, or (ii) differential privacy, which may decrease the test accuracy given a large number of parties with relatively small amounts of data each.
arXiv Detail & Related papers (2023-03-29T11:33:51Z)
- Federated Nearest Neighbor Machine Translation
In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework.
FedNN leverages one-round memorization-based interaction to share knowledge across different clients.
Experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg.
arXiv Detail & Related papers (2023-02-23T18:04:07Z)
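The FedNN summary above is compact; the sketch below illustrates the general nearest-neighbor prediction pattern it builds on: retrieve the k closest entries from a shared datastore of (embedding, token) pairs and interpolate the retrieved distribution with the local model's output. The datastore contents, distance kernel, and interpolation weight lam are illustrative assumptions, not FedNN's exact construction.

```python
import numpy as np

rng = np.random.default_rng(7)
vocab, dim, n_entries, k, lam = 100, 16, 500, 4, 0.3

keys = rng.normal(size=(n_entries, dim))          # shared datastore keys
values = rng.integers(0, vocab, size=n_entries)   # stored next tokens

def knn_probs(query, model_probs):
    """Blend kNN-retrieved token distribution with the local model's."""
    dists = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(dists)[:k]                    # k nearest entries
    retrieved = np.zeros(vocab)
    weights = np.exp(-dists[nn])
    for i, w in zip(nn, weights / weights.sum()):
        retrieved[values[i]] += w
    return lam * retrieved + (1 - lam) * model_probs

query = rng.normal(size=dim)
model_probs = np.full(vocab, 1.0 / vocab)         # placeholder LM output
probs = knn_probs(query, model_probs)
print("blended distribution sums to", probs.sum().round(6))
```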
- Scalable Collaborative Learning via Representation Sharing
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
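A minimal sketch of collaborating by sharing predictions rather than weights or raw data, as in the entry above: every client evaluates the same public reference batch, the soft predictions are averaged into an ensemble target, and each client distills toward it. The cross-entropy distillation loss here stands in for the paper's contrastive objective, and all names are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
num_clients, batch, classes = 5, 32, 10

# Each client's logits on the same public batch (placeholders here).
client_logits = rng.normal(size=(num_clients, batch, classes))

# Ensemble target: average of all clients' soft predictions.
teacher_probs = softmax(client_logits).mean(axis=0)

# Client 0 distills toward the ensemble (cross-entropy loss).
my_probs = softmax(client_logits[0])
loss = -(teacher_probs * np.log(my_probs + 1e-12)).sum(axis=-1).mean()
print(f"distillation loss for client 0: {loss:.3f}")
```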
- Federated Learning and Meta Learning: Approaches, Applications, and Directions
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- ABG: A Multi-Party Mixed Protocol Framework for Privacy-Preserving Cooperative Learning
We propose a privacy-preserving multi-party cooperative learning system, which allows different data owners to cooperate in machine learning.
We also design specific privacy-preserving computation protocols for some typical machine learning methods such as logistic regression and neural networks.
The experiments indicate that ABG^n has excellent performance, especially in low-latency network environments.
arXiv Detail & Related papers (2022-02-07T03:57:57Z)
- Secure Distributed Training at Scale
Training in the presence of untrusted peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
arXiv Detail & Related papers (2021-06-21T17:00:42Z)
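For intuition about what Byzantine tolerance buys in the entry above, the sketch below contrasts plain averaging with a standard robust aggregate (coordinate-wise median) when a minority of peers send corrupted gradients. This is a textbook illustration, not the communication-efficient protocol the paper proposes.

```python
import numpy as np

# 7 honest workers send gradients near the true value; 3 Byzantine
# workers send large corrupted vectors.
rng = np.random.default_rng(2)
honest = rng.normal(loc=1.0, scale=0.1, size=(7, 4))
byzantine = np.full((3, 4), 100.0)
all_updates = np.vstack([honest, byzantine])

print("mean  :", all_updates.mean(axis=0).round(2))       # hijacked
print("median:", np.median(all_updates, axis=0).round(2)) # stays near 1.0
```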
- Constrained Differentially Private Federated Learning for Low-bandwidth Devices
This paper presents a novel privacy-preserving federated learning scheme.
It provides theoretical privacy guarantees, as it is based on Differential Privacy.
It reduces the upstream and downstream bandwidth by up to 99.9% compared to standard federated learning.
arXiv Detail & Related papers (2021-02-27T22:25:06Z)
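The entry above combines two ingredients that the sketch below illustrates generically: a Gaussian mechanism applied after L2 clipping for differential privacy, followed by top-k sparsification (post-processing, so the DP guarantee is preserved) to cut upload size by roughly 99.9%. The clipping bound, noise scale, and k are assumed values, not the paper's constrained mechanism.

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 10_000, 10                 # model size; keep 0.1% of coordinates
clip, sigma = 1.0, 0.5

grad = rng.normal(size=d)
grad *= min(1.0, clip / np.linalg.norm(grad))    # L2 clipping
grad += rng.normal(scale=sigma * clip, size=d)   # Gaussian mechanism

idx = np.argsort(np.abs(grad))[-k:]              # top-k selection
payload = {"idx": idx, "val": grad[idx]}         # ~99.9% smaller upload
print(f"sent {k}/{d} coordinates "
      f"({100 * (1 - k / d):.1f}% bandwidth saved)")
```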
- Differentially Private Secure Multi-Party Computation for Federated Learning in Financial Applications
Federated learning enables a population of clients, working with a trusted server, to collaboratively learn a shared machine learning model.
This reduces the risk of exposing sensitive data, but it is still possible to reverse engineer information about a client's private data set from communicated model parameters.
We present a privacy-preserving federated learning protocol to a non-specialist audience, demonstrate it using logistic regression on a real-world credit card fraud data set, and evaluate it using an open-source simulation platform.
arXiv Detail & Related papers (2020-10-12T17:16:27Z)
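In the non-specialist spirit of the entry above, the sketch below shows the classic secure-aggregation idea underlying SMC-based FL: pairwise random masks that cancel in the sum, so the server recovers the aggregate without seeing any individual update. Key agreement, client dropouts, and the paper's differential-privacy layer are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
K, d = 4, 6
updates = rng.normal(size=(K, d))   # each client's private update

# Pairwise masks: client i adds m_ij for j > i, client j subtracts it.
masks = {(i, j): rng.normal(size=d)
         for i in range(K) for j in range(i + 1, K)}

masked = updates.copy()
for (i, j), m in masks.items():
    masked[i] += m
    masked[j] -= m

# The server sums the masked updates; every mask cancels out.
assert np.allclose(masked.sum(axis=0), updates.sum(axis=0))
print("server recovers the exact sum without seeing any single update")
```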
- WAFFLe: Weight Anonymized Factorization for Federated Learning
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Concentrated Differentially Private and Utility Preserving Federated Learning
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation on model utility.
We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates.
arXiv Detail & Related papers (2020-03-30T19:20:42Z)
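As a rough sketch of the differentially private averaging round the last entry analyzes: clip each client update to bound its sensitivity, add Gaussian noise calibrated to the clipping bound, then average. The clip norm C and noise multiplier below are illustrative, and the paper's concentrated-DP accounting is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)
K, d, C, sigma = 50, 200, 1.0, 0.8

updates = rng.normal(size=(K, d))                # local client updates
norms = np.linalg.norm(updates, axis=1, keepdims=True)
clipped = updates * np.minimum(1.0, C / norms)   # bound sensitivity

noisy_sum = clipped.sum(axis=0) + rng.normal(scale=sigma * C, size=d)
global_update = noisy_sum / K                    # private average
print("update norm:", np.linalg.norm(global_update).round(3))
```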
This list is automatically generated from the titles and abstracts of the papers on this site.