When Decentralized Optimization Meets Federated Learning
- URL: http://arxiv.org/abs/2306.02570v1
- Date: Mon, 5 Jun 2023 03:51:14 GMT
- Title: When Decentralized Optimization Meets Federated Learning
- Authors: Hongchang Gao, My T. Thai, Jie Wu
- Abstract summary: Federated learning is a new learning paradigm for extracting knowledge from distributed data.
Most existing federated learning approaches concentrate on the centralized setting, which is vulnerable to a single point of failure.
An alternative strategy for addressing this issue is the decentralized communication topology.
- Score: 41.58479981773202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a new learning paradigm for extracting knowledge from
distributed data. Due to its favorable properties in preserving privacy and
saving communication costs, it has been extensively studied and widely applied
to numerous data analysis applications. However, most existing federated
learning approaches concentrate on the centralized setting, which is vulnerable
to a single point of failure. An alternative strategy for addressing this issue is
the decentralized communication topology. In this article, we systematically
investigate the challenges and opportunities when renovating decentralized
optimization for federated learning. In particular, we discuss them from the
model, data, and communication sides, respectively, to deepen our
understanding of decentralized federated learning.
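The decentralized alternative the abstract describes replaces a central server with peer-to-peer model averaging over a communication graph. A minimal sketch of that idea, assuming a ring topology and synchronous gossip averaging with a doubly stochastic mixing matrix (an illustration of the general technique, not the article's specific method):

```python
import numpy as np

def ring_mixing_matrix(n):
    # Doubly stochastic mixing matrix for a ring topology:
    # each node gives weight 1/3 to itself and to each of its two neighbors.
    mix = np.zeros((n, n))
    for i in range(n):
        mix[i, i] = mix[i, (i - 1) % n] = mix[i, (i + 1) % n] = 1.0 / 3.0
    return mix

def gossip_round(params, mix):
    # One synchronous gossip step: every node replaces its model with a
    # weighted average of its own and its neighbors' models.
    # params: (n_nodes, dim) array of local models; mix: (n_nodes, n_nodes).
    return mix @ params

n_nodes, dim = 8, 4
rng = np.random.default_rng(0)
params = rng.normal(size=(n_nodes, dim))  # heterogeneous local models
mix = ring_mixing_matrix(n_nodes)

target = params.mean(axis=0)  # doubly stochastic mixing preserves the average
for _ in range(100):
    params = gossip_round(params, mix)

# All nodes drift toward the network-wide average: no server, no single
# point of failure.
print(np.max(np.abs(params - target)))
```

In a full decentralized federated learning round, each node would also take local gradient steps between gossip rounds; the sketch isolates only the consensus component.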
Related papers
- Robustness of Decentralised Learning to Nodes and Data Disruption [4.062458976723649]
We study the effect of nodes' disruption on the collective learning process.
Our results show that decentralised learning processes are remarkably robust to network disruption.
arXiv Detail & Related papers (2024-05-03T12:14:48Z)
- Exploring Machine Learning Models for Federated Learning: A Review of Approaches, Performance, and Limitations [1.1060425537315088]
Federated learning is a distributed learning framework enhanced to preserve the privacy of individuals' data.
In times of crisis, when real-time decision-making is critical, federated learning allows multiple entities to work collectively without sharing sensitive data.
This paper is a systematic review of the literature on privacy-preserving machine learning in the last few years.
arXiv Detail & Related papers (2023-11-17T19:23:21Z)
- Towards Privacy-Aware Causal Structure Learning in Federated Setting [27.5652887311069]
We study a privacy-aware causal structure learning problem in the federated setting.
We propose a novel Federated PC (FedPC) algorithm with two new strategies for preserving data privacy without centralizing data.
arXiv Detail & Related papers (2022-11-13T14:54:42Z)
- Does Decentralized Learning with Non-IID Unlabeled Data Benefit from Self Supervision? [51.00034621304361]
We study decentralized learning with unlabeled data through the lens of self-supervised learning (SSL).
We study the effectiveness of contrastive learning algorithms under decentralized learning settings.
arXiv Detail & Related papers (2022-10-20T01:32:41Z)
- Exploring Semantic Attributes from A Foundation Model for Federated Learning of Disjoint Label Spaces [46.59992662412557]
In this work, we consider transferring mid-level semantic knowledge (such as attributes) that is not sensitive to specific objects of interest.
We formulate a new Federated Zero-Shot Learning (FZSL) paradigm to learn mid-level semantic knowledge at multiple local clients.
To improve model discriminative ability, we propose to explore semantic knowledge augmentation from external knowledge.
arXiv Detail & Related papers (2022-08-29T10:05:49Z)
- Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO), which enjoys an appealing finite-time consensus property to achieve better synchronization.
Our analysis on how communication delay and randomized chats affect learning further enables the derivation of practical variants.
arXiv Detail & Related papers (2021-11-04T15:36:25Z)
- Decentralized Personalized Federated Learning for Min-Max Problems [79.61785798152529]
This paper is the first to study personalized federated learning (PFL) for saddle point problems, which encompass a broader range of optimization problems.
We propose new algorithms to address this problem and provide a theoretical analysis of the smooth (strongly) convex-(strongly) concave saddle point problems.
Numerical experiments for bilinear problems and neural networks with adversarial noise demonstrate the effectiveness of the proposed methods.
arXiv Detail & Related papers (2021-06-14T10:36:25Z)
- Federated Learning: A Signal Processing Perspective [144.63726413692876]
Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
arXiv Detail & Related papers (2021-03-31T15:14:39Z)
- Communication-Computation Efficient Secure Aggregation for Federated Learning [23.924656276456503]
Federated learning is a way to train neural networks using data distributed over multiple nodes without the need for the nodes to share data.
A recent solution based on the secure aggregation primitive enabled privacy-preserving federated learning, but at the expense of significant extra communication/computational resources.
We propose communication-computation efficient secure aggregation which substantially reduces the amount of communication/computational resources.
arXiv Detail & Related papers (2020-12-10T03:17:50Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
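The secure-aggregation entry above rests on a simple primitive: pairwise additive masks that cancel when all clients' updates are summed, so the server learns only the aggregate. A minimal sketch of that cancellation (hypothetical helper name; real protocols derive masks from key agreement and handle client dropouts):

```python
import numpy as np

def masked_updates(updates, seed=0):
    # Pairwise additive masking: each pair of nodes (i, j), i < j, agrees on
    # a random mask r_ij; node i adds it and node j subtracts it, so every
    # mask cancels in the sum while individual updates stay hidden.
    rng = np.random.default_rng(seed)
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.normal(size=updates[0].shape)
            masked[i] += r
            masked[j] -= r
    return masked

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
masked = masked_updates(updates)

# Each masked update looks random, but the masks cancel in the aggregate:
print(np.allclose(sum(masked), sum(updates)))  # True
```

The communication/computation overhead the entry mentions comes from establishing these pairwise masks at scale, which is exactly what the proposed scheme reduces.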
This list is automatically generated from the titles and abstracts of the papers in this site.