SAFARI: Sparsity enabled Federated Learning with Limited and Unreliable
Communications
- URL: http://arxiv.org/abs/2204.02321v1
- Date: Tue, 5 Apr 2022 16:26:36 GMT
- Title: SAFARI: Sparsity enabled Federated Learning with Limited and Unreliable
Communications
- Authors: Yuzhu Mao, Zihao Zhao, Meilin Yang, Le Liang, Yang Liu, Wenbo Ding,
Tian Lan, Xiao-Ping Zhang
- Abstract summary: Federated learning (FL) enables edge devices to collaboratively learn a model in a distributed fashion.
We propose a sparsity-enabled FL framework with both communication efficiency and bias reduction, termed SAFARI.
It makes novel use of the similarity among client models to rectify and compensate for the bias resulting from unreliable communications.
- Score: 23.78596067797334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) enables edge devices to collaboratively learn a model
in a distributed fashion. Much existing research has focused on improving
communication efficiency of high-dimensional models and addressing bias caused
by local updates. However, most FL algorithms are either based on reliable
communications or assume fixed and known unreliability characteristics. In
practice, networks could suffer from dynamic channel conditions and
non-deterministic disruptions, with time-varying and unknown characteristics.
To this end, in this paper we propose a sparsity-enabled FL framework with both
communication efficiency and bias reduction, termed SAFARI. It makes novel use
of the similarity among client models to rectify and compensate for the bias
resulting from unreliable communications. More precisely, sparse learning is
implemented on local clients to mitigate communication overhead, while a
similarity-based compensation method provides surrogates for model updates
that are lost to unreliable communications. We analyze SAFARI
under a bounded-dissimilarity assumption and with respect to sparse models. It is
demonstrated that SAFARI under unreliable communications is guaranteed to
converge at the same rate as the standard FedAvg with perfect communications.
Implementations and evaluations on the CIFAR-10 dataset validate the effectiveness
of SAFARI by showing that it can achieve the same convergence speed and
accuracy as FedAvg with perfect communications, with up to 80% of the model
weights being pruned and a high percentage of client updates missing in each
round.
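To make the two mechanisms the abstract names concrete, here is a minimal sketch (not the paper's implementation) of one SAFARI-style aggregation round: clients send magnitude-pruned updates, and the server substitutes a similar client's received update for any update lost in transit. The top-k pruning rule, the cosine-similarity matching, and all function names are illustrative assumptions.

```python
import numpy as np

def sparsify_topk(update: np.ndarray, keep_ratio: float = 0.2) -> np.ndarray:
    """Magnitude pruning: keep the largest 20% of entries (80% pruned)."""
    k = max(1, int(keep_ratio * update.size))
    thresh = np.partition(np.abs(update).ravel(), -k)[-k]
    return np.where(np.abs(update) >= thresh, update, 0.0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a.ravel() @ b.ravel()) / (
        np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    )

def aggregate_round(received: dict, previous: dict, clients: list) -> np.ndarray:
    """One server aggregation step under unreliable communications.

    received: client_id -> sparse update that actually arrived this round
    previous: client_id -> last update previously seen from that client
    clients:  all client ids scheduled this round
    """
    updates = []
    for cid in clients:
        if cid in received:
            updates.append(received[cid])
        elif cid in previous and received:
            # Missing update: borrow the arrived update from the client whose
            # past update is most similar to this client's past update.
            proxy = max(
                (o for o in received if o in previous),
                key=lambda o: cosine(previous[o], previous[cid]),
                default=None,
            )
            if proxy is not None:
                updates.append(received[proxy])
    # FedAvg-style average over real and surrogate updates.
    return np.mean(updates, axis=0)
```

A real system would also have to track which coordinates each client prunes, but the sketch captures the two ingredients described above: sparse local updates and similarity-based compensation for missing ones.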
Related papers
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
The recommender system (RSRS) addresses both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- Parametric Feature Transfer: One-shot Federated Learning with Foundation Models [14.97955440815159]
In one-shot federated learning, clients collaboratively train a global model in a single round of communication.
This paper introduces FedPFT, a methodology that harnesses the transferability of foundation models to enhance both accuracy and communication efficiency in one-shot FL.
arXiv Detail & Related papers (2024-02-02T19:34:46Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that edge devices can transmit (i.e., model, gradient, or model difference), we reveal that transmission in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
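As a rough illustration of where this aggregation error comes from, the following toy model (my assumption, not the paper's system model) superposes all client transmissions over the analog channel and adds receiver noise, so the server never observes individual updates:

```python
import numpy as np

rng = np.random.default_rng(0)

def aircomp_aggregate(updates: list, noise_std: float = 0.01) -> np.ndarray:
    """Analog over-the-air aggregation: all clients transmit simultaneously,
    so the server receives only the superposed sum plus channel noise, which
    perturbs the FedAvg-style average (a toy model of the aggregation error)."""
    superposed = np.sum(updates, axis=0)
    noise = rng.normal(0.0, noise_std, size=superposed.shape)
    return (superposed + noise) / len(updates)
```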
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- FedET: A Communication-Efficient Federated Class-Incremental Learning Framework Based on Enhanced Transformer [42.19443600254834]
We propose a novel framework, Federated Enhanced Transformer (FedET), which simultaneously achieves high accuracy and low communication cost.
FedET uses Enhancer, a tiny module, to absorb and communicate new knowledge.
We show that FedET's average accuracy on representative benchmark datasets is 14.1% higher than the state-of-the-art method.
arXiv Detail & Related papers (2023-06-27T10:00:06Z)
- Asynchronous Online Federated Learning with Reduced Communication Requirements [6.282767337715445]
We propose a communication-efficient asynchronous online federated learning (PAO-Fed) strategy.
By reducing the communication overhead of the participants, the proposed method renders participation in the learning task more accessible and efficient.
We conduct comprehensive simulations to study the performance of the proposed method on both synthetic and real-life datasets.
arXiv Detail & Related papers (2023-03-27T14:06:05Z)
- Magnitude Matters: Fixing SIGNSGD Through Magnitude-Aware Sparsification in the Presence of Data Heterogeneity [60.791736094073]
Communication overhead has become one of the major bottlenecks in the distributed training of deep neural networks.
We propose a magnitude-driven sparsification scheme, which addresses the non-convergence issue of SIGNSGD.
The proposed scheme is validated through experiments on Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets.
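A toy sketch of the general idea (illustrative only; the paper's actual selection rule may differ): transmit signs only for the top-magnitude coordinates rather than for every coordinate, so that large-magnitude information is not drowned out by sign bits of near-zero entries.

```python
import numpy as np

def magnitude_aware_signs(grad: np.ndarray, keep_ratio: float = 0.1):
    """Encode signs only for the top-|keep_ratio| largest-magnitude
    coordinates; everything else is dropped (magnitude-driven
    sparsification layered on SIGNSGD-style 1-bit compression)."""
    flat = grad.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]    # largest-magnitude entries
    return idx, np.sign(flat[idx]).astype(np.int8)  # (positions, 1-bit signs)

def decode(idx: np.ndarray, signs: np.ndarray, shape: tuple) -> np.ndarray:
    """Server-side reconstruction of the sparse sign vector."""
    out = np.zeros(int(np.prod(shape)), dtype=np.float32)
    out[idx] = signs
    return out.reshape(shape)
```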
arXiv Detail & Related papers (2023-02-19T17:42:35Z)
- Reliable Federated Disentangling Network for Non-IID Domain Feature [62.73267904147804]
In this paper, we propose a novel reliable federated disentangling network, termed RFedDis.
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling.
Our proposed RFedDis provides outstanding performance with a high degree of reliability as compared to other state-of-the-art FL approaches.
arXiv Detail & Related papers (2023-01-30T11:46:34Z)
- FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated Learning [91.74206675452888]
We propose a novel method FedFM, which guides each client's features to match shared category-wise anchors.
To achieve higher efficiency and flexibility, we propose a FedFM variant, called FedFM-Lite, in which clients communicate with the server using fewer synchronization rounds and lower communication bandwidth costs.
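A minimal sketch of what category-wise anchor matching could look like as a local loss term (the penalty form and function name are assumptions for illustration, not FedFM's exact objective):

```python
import numpy as np

def anchor_matching_loss(features: np.ndarray, labels: np.ndarray,
                         anchors: np.ndarray) -> float:
    """Mean squared distance between each sample's feature and the shared
    anchor of its class; adding this term to the local loss pulls client
    features toward the common category-wise anchors.

    features: (batch, dim) client-side embeddings
    labels:   (batch,) integer class labels
    anchors:  (num_classes, dim) shared anchors
    """
    diffs = features - anchors[labels]   # per-sample anchor via fancy indexing
    return float((diffs ** 2).sum(axis=1).mean())
```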
arXiv Detail & Related papers (2022-10-14T08:11:34Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- RC-SSFL: Towards Robust and Communication-efficient Semi-supervised Federated Learning System [25.84191221776459]
Federated Learning (FL) is an emerging decentralized artificial intelligence paradigm.
Current systems rely heavily on a strong assumption: all clients have a wealth of ground truth labeled data.
We present a practical, robust, and communication-efficient semi-supervised FL (RC-SSFL) system design.
arXiv Detail & Related papers (2020-12-08T14:02:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.