Sparse Decentralized Federated Learning
- URL: http://arxiv.org/abs/2308.16671v2
- Date: Sat, 15 Mar 2025 02:52:25 GMT
- Title: Sparse Decentralized Federated Learning
- Authors: Shan Sha, Shenglong Zhou, Lingchen Kong, Geoffrey Ye Li
- Abstract summary: Decentralized Federated Learning (DFL) enables collaborative model training without a central server but faces challenges in efficiency, stability, and trustworthiness. We introduce a sparsity constraint on the shared model, leading to Sparse DFL (SDFL), and propose a novel algorithm, CEPS. Numerical experiments validate the effectiveness of the proposed algorithm in improving communication and computation efficiency while maintaining a high level of trustworthiness.
- Score: 35.32297764027417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized Federated Learning (DFL) enables collaborative model training without a central server but faces challenges in efficiency, stability, and trustworthiness due to communication and computational limitations among distributed nodes. To address these critical issues, we introduce a sparsity constraint on the shared model, leading to Sparse DFL (SDFL), and propose a novel algorithm, CEPS. The sparsity constraint facilitates the use of one-bit compressive sensing to transmit one-bit information between partially selected neighbour nodes at specific steps, thereby significantly improving communication efficiency. Moreover, we integrate differential privacy into the algorithm to ensure privacy preservation and bolster the trustworthiness of the learning process. Furthermore, CEPS is underpinned by theoretical guarantees regarding both convergence and privacy. Numerical experiments validate the effectiveness of the proposed algorithm in improving communication and computation efficiency while maintaining a high level of trustworthiness.
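The abstract couples three ingredients: a sparse shared model, one-bit compressive sensing for the transmitted messages, and Gaussian noise for differential privacy. The sketch below wires them together in the simplest way; the Gaussian measurement matrix, the single hard-thresholding decode step, and the noise scale are illustrative assumptions, not the CEPS specification.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m = 200, 10, 120            # model dim, sparsity level, # one-bit measurements

# A k-sparse "shared model" held by the sending node.
x = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
x[support] = rng.standard_normal(k)
x /= np.linalg.norm(x)            # one-bit CS recovers direction, not magnitude

A = rng.standard_normal((m, d))   # Gaussian measurement matrix (assumption)

# Sender: add Gaussian noise for privacy, then keep only the signs.
sigma = 0.1                       # illustrative noise scale, not a calibrated DP budget
bits = np.sign(A @ x + sigma * rng.standard_normal(m))   # one bit per measurement

# Receiver: one hard-thresholding step (a crude stand-in for a full
# one-bit CS decoder such as BIHT).
proxy = A.T @ bits / m
keep = np.argsort(np.abs(proxy))[-k:]
x_hat = np.zeros(d)
x_hat[keep] = proxy[keep]
x_hat /= np.linalg.norm(x_hat)

print("support recovered:", np.intersect1d(support, keep).size, "of", k)
print("cosine similarity:", float(x @ x_hat))
```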
Related papers
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292]
We introduce Catalyst Acceleration and propose an accelerated Decentralized Federated Learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase.
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-iid data distributions.
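A minimal sketch of what an extrapolated aggregation step can look like; the momentum coefficient, the equal-weight mixing with the neighborhood average, and the function name are assumptions, not DFedCata's actual update rule.

```python
import numpy as np

def nesterov_aggregate(x, x_prev, neighbors_avg, beta=0.9):
    """One aggregation step with a Nesterov-style extrapolation, in the spirit
    of an accelerated aggregation phase (coefficients are assumptions)."""
    y = x + beta * (x - x_prev)          # extrapolate along the last move
    x_new = 0.5 * (y + neighbors_avg)    # mix with the neighborhood average
    return x_new, x

# Toy usage: a scalar "model" pulled toward a fixed neighborhood average.
x, x_prev, nbr = np.array([1.0]), np.array([0.8]), np.array([0.2])
for _ in range(5):
    x, x_prev = nesterov_aggregate(x, x_prev, nbr)
print(x)
```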
arXiv Detail & Related papers (2024-10-09T06:17:16Z) - Leiden-Fusion Partitioning Method for Effective Distributed Training of Graph Embeddings [0.7373617024876725]
We introduce a novel partitioning method, named Leiden-Fusion, designed for large-scale training of graphs with minimal communication.
Our method guarantees that, for an initially connected graph, each partition is a densely connected subgraph with no isolated nodes.
We demonstrate the effectiveness of our method through extensive evaluations on several benchmark datasets.
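A rough sketch of the partition-then-fuse idea, with networkx's Louvain standing in for Leiden (networkx does not ship Leiden) and a simple fusion rule that reassigns any node left without intra-partition neighbors; both substitutions are assumptions, not the Leiden-Fusion method itself.

```python
import networkx as nx

def partition_then_fuse(G, seed=0):
    """Detect communities, then merge any node left without intra-partition
    neighbors into the neighboring community it is most connected to."""
    comms = [set(c) for c in nx.community.louvain_communities(G, seed=seed)]
    label = {v: i for i, c in enumerate(comms) for v in c}
    for v in list(G.nodes):
        nbr_labels = [label[u] for u in G.neighbors(v)]
        if nbr_labels and label[v] not in nbr_labels:  # v is isolated in its partition
            best = max(set(nbr_labels), key=nbr_labels.count)
            comms[label[v]].discard(v)
            comms[best].add(v)
            label[v] = best
    return [c for c in comms if c]

G = nx.karate_club_graph()
parts = partition_then_fuse(G)
print(len(parts), "partitions; sizes:", sorted(map(len, parts)))
```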
arXiv Detail & Related papers (2024-09-15T22:50:57Z) - When Swarm Learning meets energy series data: A decentralized collaborative learning design based on blockchain [10.099134773737939]
Machine learning models offer the capability to forecast future energy production or consumption.
However, legal and policy constraints within specific energy sectors present technical hurdles in utilizing data from diverse sources.
We propose adopting a Swarm Learning scheme, which replaces the centralized server with a blockchain-based distributed network.
arXiv Detail & Related papers (2024-06-07T08:42:26Z) - Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks [59.43433767253956]
We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network.
In a semi-decentralized setup, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes.
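The pattern the summary describes, a local consensus followed by a noisy relay to the server, can be sketched as follows; the random connectivity model and the noise scale sigma are illustrative choices, not calibrated to a privacy budget.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 4
X = rng.standard_normal((n, d))               # one private vector per node

p, sigma = 0.7, 0.2                           # link probability; illustrative noise scale
adj = rng.random((n, n)) < p                  # unreliable, intermittent links
adj = np.triu(adj, 1)
adj = adj | adj.T

relayed = np.empty_like(X)
for i in range(n):
    group = np.flatnonzero(adj[i]).tolist() + [i]
    # Local consensus with reachable neighbors, then Gaussian noise before relaying.
    relayed[i] = X[group].mean(axis=0) + sigma * rng.standard_normal(d)

est = relayed.mean(axis=0)                    # server aggregates the relayed values
print("true mean  :", X.mean(axis=0).round(3))
print("private est:", est.round(3))
```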
arXiv Detail & Related papers (2024-06-06T06:12:15Z) - Low-Cost Privacy-Preserving Decentralized Learning [5.295018540083454]
Decentralized learning (DL) enables nodes in a network to train models collectively without sharing their raw data or relying on a central server.
This paper introduces Zip-DL, a privacy-aware DL algorithm that leverages correlated noise to achieve robust privacy against local adversaries.
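One way to realize correlated noise that masks individual messages but vanishes in the aggregate is antisymmetric pairwise noise, sketched below; the pairing rule and scale are assumptions, not Zip-DL's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 6, 3
models = rng.standard_normal((n, d))

# Antisymmetric pairwise noise: what pair (i, j) adds, pair (j, i) subtracts,
# so each message is masked while the sum over all messages is exactly zero.
noise = np.zeros((n, n, d))
for i in range(n):
    for j in range(i + 1, n):
        z = rng.standard_normal(d)
        noise[i, j], noise[j, i] = z, -z

sent = models[:, None, :] + noise             # message node i sends toward node j
avg_true = models.mean(axis=0)
avg_obs = sent.sum(axis=(0, 1)) / (n * n)     # network-wide average of all messages
print("noise cancels in the aggregate:", np.allclose(avg_true, avg_obs))
```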
arXiv Detail & Related papers (2024-03-18T13:53:17Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), to accommodate a larger number of users.
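Event-triggered schemes of this kind upload only when the local iterate has changed enough since the last transmission; the sketch below uses a generic norm-based trigger, not the paper's SAGA-specific condition.

```python
import numpy as np

def maybe_send(x, last_sent, threshold=0.05):
    """Transmit the local iterate only when it has moved enough since the
    last transmission (a generic event trigger, shown for illustration)."""
    if last_sent is None or np.linalg.norm(x - last_sent) > threshold:
        return x.copy(), True      # new reference, message goes out
    return last_sent, False        # suppress the upload

# Toy run: a decaying iterate triggers fewer and fewer uploads.
rng = np.random.default_rng(3)
x, last_sent, uploads = np.ones(4), None, 0
for t in range(50):
    x = 0.9 * x + 0.01 * rng.standard_normal(4)   # surrogate local update
    last_sent, sent = maybe_send(x, last_sent)
    uploads += sent
print(f"{uploads} uploads out of 50 rounds")
```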
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (f-DP) framework and establish the Byzantine resilience of the proposed algorithm.
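A minimal sketch of the two mechanisms named in the title: a stochastic ternary compressor and a coordinate-wise majority vote that outvotes a flipped (Byzantine) gradient. The compressor's randomization is an illustrative choice; TernaryVote's exact scheme may differ.

```python
import numpy as np

def ternarize(g, rng):
    """Stochastic ternary compressor: map each coordinate to {-1, 0, +1},
    keeping the sign with probability |g_i| / max|g|, so the output is
    unbiased up to the scale max|g|."""
    scale = np.max(np.abs(g)) + 1e-12
    keep = rng.random(g.shape) < np.abs(g) / scale
    return np.sign(g) * keep

rng = np.random.default_rng(4)
true_grad = np.array([0.8, -0.5, 0.1, 0.0])
workers = [true_grad + 0.2 * rng.standard_normal(4) for _ in range(9)]
workers[0] = -workers[0]                          # one Byzantine worker flips its gradient

votes = np.stack([ternarize(g, rng) for g in workers])
majority = np.sign(votes.sum(axis=0))             # coordinate-wise majority vote
print("majority direction:", majority, " true sign:", np.sign(true_grad))
```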
arXiv Detail & Related papers (2024-02-16T16:41:14Z) - Asynchronous Local Computations in Distributed Bayesian Learning [8.516532665507835]
We propose gossip-based communication to leverage fast computations and reduce communication overhead simultaneously.
We observe faster initial convergence and improved performance accuracy, especially in the low data range.
We achieve 78% classification accuracy on average on the Gamma Telescope data set and over 90% on the mHealth data set from the UCI ML repository.
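The communication pattern behind such results is randomized gossip: at each tick one pair of nodes averages its values while the rest keep computing locally. Below is a textbook gossip-averaging step, shown only to illustrate the pattern, not the paper's Bayesian update.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 10, 3
x = rng.standard_normal((n, d))                 # local parameters, one row per node
target = x.mean(axis=0)                         # gossip preserves the network mean

# Asynchronous randomized gossip: one random pair averages per tick.
for _ in range(500):
    i, j = rng.choice(n, size=2, replace=False)
    x[i] = x[j] = 0.5 * (x[i] + x[j])

print("max deviation from network mean:", float(np.abs(x - target).max()))
```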
arXiv Detail & Related papers (2023-11-06T20:11:41Z) - Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling trade-off while preserving a higher privacy level than state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z) - Differentially Private Decentralized Optimization with Relay Communication [1.2695958417031445]
We introduce a new measure: Privacy Leakage Frequency (PLF), which reveals the relationship between communication and privacy leakage of algorithms.
A novel differentially private decentralized primal-dual algorithm named DP-RECAL is proposed, taking advantage of the operator splitting method and a relay communication mechanism to achieve a lower PLF.
arXiv Detail & Related papers (2022-12-21T09:05:36Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing [5.404315085380945]
We present an original communication-constrained Newton-type (NT) algorithm designed to accelerate Federated Learning (FL).
The proposed solution is thoroughly validated on real datasets.
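The communication saving comes from sharing a few Hessian eigenpairs instead of the full matrix; the sketch below shows the payload and a rank-k reconstruction. The paper's incremental update of the eigenpairs is not reproduced here.

```python
import numpy as np

def top_eig_payload(H, k):
    """Share only the k leading eigenpairs instead of the full symmetric
    Hessian: k*(d+1) numbers versus d*(d+1)/2."""
    vals, vecs = np.linalg.eigh(H)        # eigenvalues in ascending order
    return vals[-k:], vecs[:, -k:]

rng = np.random.default_rng(6)
d, k = 50, 5
M = rng.standard_normal((d, 10))
H = M @ M.T + np.eye(d)                   # synthetic low-rank-plus-identity Hessian

vals, vecs = top_eig_payload(H, k)
H_approx = vecs @ np.diag(vals) @ vecs.T  # rank-k surrogate the server can rebuild
err = np.linalg.norm(H - H_approx) / np.linalg.norm(H)
print(f"relative reconstruction error with {k} eigenpairs: {err:.3f}")
```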
arXiv Detail & Related papers (2022-02-11T17:52:56Z) - Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO) that enjoys an appealing finite-time consensus property to achieve better synchronization.
Our analysis on how communication delay and randomized chats affect learning further enables the derivation of practical variants.
arXiv Detail & Related papers (2021-11-04T15:36:25Z) - AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization [159.75564904944707]
We propose an asynchronous quasi-Newton (AsySQN) framework for vertical federated learning (VFL). The proposed algorithms take descent steps scaled by approximate Hessian information without calculating the inverse Hessian matrix explicitly.
We show that the adopted asynchronous computation can make better use of the computation resource.
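Taking descent steps scaled by curvature without forming an inverse Hessian is the classic quasi-Newton trick; the L-BFGS two-loop recursion below illustrates it. This is a standard device, shown only to make the idea concrete; AsySQN's updates differ in detail.

```python
import numpy as np

def two_loop_direction(grad, s_list, y_list):
    """Scale a gradient by an approximate inverse Hessian via the L-BFGS
    two-loop recursion: only curvature pairs (s, y) are stored and no
    matrix is ever inverted."""
    q, alphas = grad.copy(), []
    for s, y in reversed(list(zip(s_list, y_list))):
        a = (s @ q) / (y @ s)
        q -= a * y
        alphas.append(a)                       # most recent pair first
    if s_list:
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)                 # initial Hessian scaling
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q                                   # approximates H^{-1} grad

# Toy quadratic f(x) = 0.5 x^T A x, with curvature pairs from two past steps.
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
x0, x1, x2 = np.ones(3), np.array([0.9, 0.5, 0.1]), np.array([0.8, 0.2, 0.01])
s_list = [x1 - x0, x2 - x1]
y_list = [grad(x1) - grad(x0), grad(x2) - grad(x1)]
print("step direction:", two_loop_direction(grad(x2), s_list, y_list).round(4))
```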
arXiv Detail & Related papers (2021-09-26T07:56:10Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
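The consensus skeleton underneath such methods is global-consensus ADMM; the sketch below shows it for local least-squares problems, leaving out the coding and mini-batch sampling the paper adds on top.

```python
import numpy as np

rng = np.random.default_rng(7)
n_agents, d, rho = 5, 3, 1.0
w_true = np.array([1.0, -2.0, 0.5])

# Each edge node holds a local least-squares problem ||A_i x - b_i||^2.
A = [rng.standard_normal((20, d)) for _ in range(n_agents)]
b = [Ai @ w_true + 0.05 * rng.standard_normal(20) for Ai in A]

# Textbook global-consensus ADMM iterations.
x = np.zeros((n_agents, d)); u = np.zeros((n_agents, d)); z = np.zeros(d)
for _ in range(50):
    for i in range(n_agents):                  # local primal updates
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(d),
                               A[i].T @ b[i] + rho * (z - u[i]))
    z = (x + u).mean(axis=0)                   # consensus (averaging) step
    u += x - z                                 # dual updates
print("consensus estimate:", z.round(3), " true:", w_true)
```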
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - A Low Complexity Decentralized Neural Net with Centralized Equivalence using Layer-wise Learning [49.15799302636519]
We design a low-complexity decentralized learning algorithm to train a recently proposed large neural network in distributed processing nodes (workers).
In our setup, the training data is distributed among the workers but is not shared in the training process due to privacy and security concerns.
We show that it is possible to achieve equivalent learning performance as if the data is available in a single place.
arXiv Detail & Related papers (2020-09-29T13:08:12Z) - Quantized Decentralized Stochastic Learning over Directed Graphs [52.94011236627326]
We consider a decentralized learning problem where data points are distributed among computing nodes communicating over a directed graph.
As the model size gets large, decentralized learning faces a major bottleneck that is the communication load due to each node transmitting messages (model updates) to its neighbors.
We propose the quantized decentralized learning algorithm over directed graphs that is based on the push-sum algorithm in decentralized consensus optimization.
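Push-sum tracks a value and a weight per node so that their ratio converges to the network average even over a directed graph; the sketch below quantizes the transmitted values, with the uniform quantizer and the ring topology as illustrative assumptions rather than the paper's construction.

```python
import numpy as np

def quantize(v, levels=16):
    """Uniform quantizer over the current value range (illustrative compressor)."""
    lo, hi = float(v.min()), float(v.max())
    step = (hi - lo) / levels + 1e-12
    return lo + (np.floor((v - lo) / step) + 0.5) * step

rng = np.random.default_rng(8)
n = 6
x = 5.0 * rng.standard_normal(n)          # local values to be averaged
w = np.ones(n)                            # push-sum weights
avg_true = x.mean()

# Directed ring: each node keeps half of its (value, weight) mass and pushes
# half to a fixed out-neighbor; P is column-stochastic, as push-sum requires.
P = 0.5 * np.eye(n) + 0.5 * np.roll(np.eye(n), 1, axis=1)
for _ in range(200):
    x = P @ quantize(x)                   # values travel in quantized form
    w = P @ w                             # weights travel exactly
print("push-sum ratios:", (x / w).round(3))
print("true average   :", round(avg_true, 3))
```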
arXiv Detail & Related papers (2020-02-23T18:25:39Z) - Communication-Efficient Decentralized Learning with Sparsification and Adaptive Peer Selection [13.963329236804586]
We introduce a novel decentralized training algorithm with the following key features.
Each worker only needs to communicate with a single peer at each communication round with a highly compressed model.
Experimental results show that our algorithm significantly reduces communication traffic and generally selects relatively high-bandwidth peers.
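The two key features translate into a top-k filter on the outgoing update and a rule for choosing a single peer per round; the bandwidth-proportional sampling below is one plausible such rule, not the paper's exact policy.

```python
import numpy as np

def top_k_sparsify(delta, k):
    """Keep only the k largest-magnitude coordinates of a model update;
    the rest stay local (error feedback omitted in this sketch)."""
    out = np.zeros_like(delta)
    idx = np.argsort(np.abs(delta))[-k:]
    out[idx] = delta[idx]
    return out

def pick_peer(bandwidths, rng):
    """Sample one neighbor per round with probability proportional to its
    measured bandwidth (an assumed selection rule, for illustration)."""
    p = bandwidths / bandwidths.sum()
    return rng.choice(len(bandwidths), p=p)

rng = np.random.default_rng(9)
delta = rng.standard_normal(1_000)
msg = top_k_sparsify(delta, k=50)                # 95% of coordinates dropped
peer = pick_peer(np.array([10.0, 100.0, 25.0]), rng)
print("nonzeros sent:", int(np.count_nonzero(msg)), "-> peer", int(peer))
```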
arXiv Detail & Related papers (2020-02-22T12:31:57Z)