Chisme: Fully Decentralized Differentiated Deep Learning for Edge Intelligence
- URL: http://arxiv.org/abs/2505.09854v1
- Date: Wed, 14 May 2025 23:29:09 GMT
- Title: Chisme: Fully Decentralized Differentiated Deep Learning for Edge Intelligence
- Authors: Harikrishna Kuttivelil, Katia Obraczka
- Abstract summary: This paper introduces Chisme, a novel suite of protocols designed to address the challenges of implementing robust intelligence at the network edge. Chisme includes both synchronous DFL (Chisme-DFL) and asynchronous GL (Chisme-GL) variants that enable collaborative yet decentralized model training. We demonstrate that Chisme methods outperform their standard counterparts in model training over distributed and heterogeneous data in network scenarios.
- Score: 1.312141043398756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As demand for intelligent services rises and edge devices become more capable, distributed learning at the network edge has emerged as a key enabling technology. While existing paradigms like federated learning (FL) and decentralized FL (DFL) enable privacy-preserving distributed learning in many scenarios, they face potential challenges in connectivity and synchronization imposed by resource-constrained and infrastructure-less environments. While more robust, gossip learning (GL) algorithms have generally been designed for homogeneous data distributions and may not suit all contexts. This paper introduces Chisme, a novel suite of protocols designed to address the challenges of implementing robust intelligence at the network edge, characterized by heterogeneous data distributions, episodic connectivity, and lack of infrastructure. Chisme includes both synchronous DFL (Chisme-DFL) and asynchronous GL (Chisme-GL) variants that enable collaborative yet decentralized model training that accounts for underlying data heterogeneity. We introduce a data similarity heuristic that allows agents to opportunistically infer affinity with each other using the existing communication of model updates in decentralized FL and GL. We leverage the heuristic to extend DFL's model aggregation and GL's model merge mechanisms for better personalized training while maintaining collaboration. While Chisme-DFL is a synchronous decentralized approach whose resource utilization scales linearly with network size, Chisme-GL is fully asynchronous and has a lower, constant resource requirement independent of network size. We demonstrate that Chisme methods outperform their standard counterparts in model training over distributed and heterogeneous data in network scenarios ranging from sparsely connected, unreliable networks to fully connected, lossless networks.
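The abstract ships no code, but the similarity-driven merge it describes is easy to sketch. The following minimal Python sketch assumes affinity is estimated as cosine similarity between exchanged model updates and that merging is a convex combination scaled by that affinity; the function names and the `base_rate` parameter are illustrative, not from the paper.

```python
import numpy as np

def cosine_affinity(local_update, peer_update):
    """Heuristic affinity: cosine similarity between flattened updates.

    Agents infer data similarity from the model updates they already
    exchange, so no extra communication is needed (an assumed reading
    of the paper's heuristic, not its exact formula).
    """
    a = np.concatenate([w.ravel() for w in local_update])
    b = np.concatenate([w.ravel() for w in peer_update])
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom > 0.0 else 0.0

def affinity_weighted_merge(local_weights, peer_weights, affinity, base_rate=0.5):
    """Merge a peer model into the local one, scaled by inferred affinity.

    Peers whose updates resemble ours (likely similar data) pull the
    local model more strongly; dissimilar peers are discounted, which
    is what personalizes training under heterogeneous data.
    """
    rate = base_rate * max(affinity, 0.0)  # ignore negatively correlated peers
    return [(1.0 - rate) * lw + rate * pw
            for lw, pw in zip(local_weights, peer_weights)]
```

In a Chisme-GL-style deployment this merge would run whenever a peer's model arrives; in a Chisme-DFL-style round the same affinities would weight the synchronous aggregation.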
Related papers
- Performance Analysis of Decentralized Federated Learning Deployments [1.7249361224827533]
Decentralized Federated Learning (DFL) is introduced to address the challenges of centralized FL.
It facilitates direct collaboration among participating devices without relying on a central server.
This work explores crucial factors influencing the convergence and generalization capacity of DFL models.
arXiv Detail & Related papers (2025-03-14T19:37:13Z)
- DRACO: Decentralized Asynchronous Federated Learning over Row-Stochastic Wireless Networks [7.389425875982468]
We propose DRACO, a novel method for decentralized asynchronous stochastic gradient descent (SGD) over row-stochastic gossip wireless networks.
Our approach enables edge devices within decentralized networks to perform local training and model exchange along a continuous timeline.
Our numerical experiments corroborate the efficacy of the proposed technique.
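As a rough illustration of what a row-stochastic mixing step looks like, here is a minimal sketch: each device combines its own model with whichever neighbor models it has received, using weights that sum to one (row-stochasticity) with no constraint across devices, which is what tolerates asynchrony. All names are illustrative; this is not DRACO's actual update rule.

```python
import numpy as np

def row_stochastic_mix(local_x, neighbor_xs, weights):
    """One gossip mixing step with row-stochastic weights.

    `weights[0]` is the self-weight and `weights[1:]` pair with the
    received neighbor models. The row sums to one, but no column
    constraint is imposed, so a device can mix whatever subset of
    neighbors it happens to hear from on its own timeline.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    mixed = weights[0] * local_x
    for w, x in zip(weights[1:], neighbor_xs):
        mixed = mixed + w * x
    return mixed

def local_sgd_step(x, grad_fn, lr=0.01):
    """Local SGD step, interleaved with mixing on a continuous timeline."""
    return x - lr * grad_fn(x)
```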
arXiv Detail & Related papers (2024-06-19T13:17:28Z)
- When Swarm Learning meets energy series data: A decentralized collaborative learning design based on blockchain [10.099134773737939]
Machine learning models offer the capability to forecast future energy production or consumption.
However, legal and policy constraints within specific energy sectors present technical hurdles in utilizing data from diverse sources.
We propose adopting a Swarm Learning scheme, which replaces the centralized server with a blockchain-based distributed network.
arXiv Detail & Related papers (2024-06-07T08:42:26Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
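A schematic reading of the layer-wise idea follows; SALF's actual update rule is in the paper, and the dictionary-based model layout here is purely illustrative.

```python
def layerwise_aggregate(global_model, client_updates):
    """Aggregate each layer over whichever clients reported it.

    Backpropagation computes gradients from the output layer backwards,
    so a straggler cut off mid-round still holds valid updates for its
    layers nearest the output; those partial contributions are averaged
    in rather than discarded. `client_updates` maps client id to a dict
    {layer_name: new_layer_weights} covering the layers it finished.
    """
    new_model = dict(global_model)
    for layer in global_model:
        reports = [u[layer] for u in client_updates.values() if layer in u]
        if reports:  # average over contributors; otherwise keep the old layer
            new_model[layer] = sum(reports) / len(reports)
    return new_model
```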
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity combined with wireless resource allocation.
We formulate the loss-function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
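A schematic version of that formulation, using hypothetical symbols (a_t for client scheduling, p_t for resource allocation, tau_t for the number of local epochs); the paper's exact constraints and notation may differ:

```latex
\min_{\{a_t,\, p_t,\, \tau_t\}_{t=1}^{T}} \ \mathbb{E}\big[F(\mathbf{w}_T)\big]
\quad \text{s.t.} \quad
\frac{1}{T}\sum_{t=1}^{T} E_t(a_t, p_t, \tau_t) \le \bar{E},
\qquad
L_t(a_t, p_t, \tau_t) \le L_{\max} \ \ \forall t
```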
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective technique for mitigating the communication bottleneck in distributed training.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
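To make the contrast concrete, here is a minimal single-layer sketch of the IST idea under stated assumptions: neurons (columns) of a layer are partitioned across workers, each worker trains its disjoint subnetwork with no communication, and the pieces are written back before the next repartition. Function names are illustrative.

```python
import numpy as np

def partition_columns(weight, num_workers, rng):
    """Randomly split a layer's neurons (columns) across workers.

    Each worker gets a disjoint subnetwork and trains it independently,
    which is what makes IST cheap relative to methods that exchange
    full (or even compressed) gradients every step.
    """
    cols = rng.permutation(weight.shape[1])
    return np.array_split(cols, num_workers)

def reassemble(weight, col_groups, trained_parts):
    """Write each worker's trained columns back into the full layer."""
    out = weight.copy()
    for cols, part in zip(col_groups, trained_parts):
        out[:, cols] = part
    return out
```

IST redraws the partition periodically so every neuron is eventually trained on every worker's data.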
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- Event-Triggered Decentralized Federated Learning over Resource-Constrained Edge Devices [12.513477328344255]
Federated learning (FL) is a technique for distributed machine learning (ML).
In traditional FL algorithms, trained models at the edge are periodically sent to a central server for aggregation.
We develop a novel methodology for fully decentralized FL, where devices conduct model aggregation via cooperative consensus formation.
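A minimal sketch of an event-triggered consensus step, assuming a simple norm-based trigger and plain neighborhood averaging; the paper's trigger condition and consensus weights may differ.

```python
import numpy as np

def should_broadcast(current, last_sent, threshold):
    """Event trigger: transmit only when the model has drifted enough,
    saving energy and bandwidth on resource-constrained devices."""
    return np.linalg.norm(current - last_sent) > threshold

def consensus_step(local_x, neighbor_xs, step_size=0.5):
    """Move toward the average of the latest models heard from neighbors,
    forming consensus without any central aggregation server."""
    if not neighbor_xs:
        return local_x
    avg = sum(neighbor_xs) / len(neighbor_xs)
    return local_x + step_size * (avg - local_x)
```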
arXiv Detail & Related papers (2022-11-23T00:04:05Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
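As a toy numeric illustration of the unequal-power superposition idea (not the paper's encoder), the half-width base model gets most of the transmit power and the full-width enhancement rides on the remainder:

```python
import numpy as np

def superposition_encode(base_msg, enhance_msg, p_base=0.8):
    """Superimpose two unit-power messages with an unequal power split.

    A receiver with a good channel decodes the strong base signal,
    subtracts it, then decodes the enhancement (successive decoding);
    a receiver with a poor channel still recovers the base alone,
    i.e. at least the half-width model.
    """
    return np.sqrt(p_base) * base_msg + np.sqrt(1.0 - p_base) * enhance_msg
```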
arXiv Detail & Related papers (2022-03-26T15:06:13Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies: federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Applying FL to SNNs is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
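A minimal sketch of the two-level aggregation such a hierarchy implies, assuming sample-count-weighted averaging at both levels; the user-to-edge assignment that the paper actually optimizes is taken as given here.

```python
def hierarchical_aggregate(edge_groups):
    """Two-level aggregation: users -> edge node -> cloud.

    `edge_groups` maps an edge node id to a list of (model, num_samples)
    pairs from its assigned users. Each edge averages its users locally,
    then the cloud averages the edge models, weighted by the amount of
    data each one represents.
    """
    edge_models = []
    for clients in edge_groups.values():
        n = sum(k for _, k in clients)
        edge_models.append((sum(m * k for m, k in clients) / n, n))
    total = sum(k for _, k in edge_models)
    return sum(m * k for m, k in edge_models) / total
```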
arXiv Detail & Related papers (2021-07-14T08:32:39Z)