Exploring the Impact of Disrupted Peer-to-Peer Communications on Fully
Decentralized Learning in Disaster Scenarios
- URL: http://arxiv.org/abs/2310.02986v1
- Date: Wed, 4 Oct 2023 17:24:38 GMT
- Title: Exploring the Impact of Disrupted Peer-to-Peer Communications on Fully
Decentralized Learning in Disaster Scenarios
- Authors: Luigi Palmieri, Chiara Boldrini, Lorenzo Valerio, Andrea Passarella,
Marco Conti
- Abstract summary: Fully decentralized learning enables the distribution of learning resources across multiple user devices or nodes.
This study investigates the effects of various disruptions to peer-to-peer communications on decentralized learning in a disaster setting.
- Score: 4.618221836001186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fully decentralized learning enables the distribution of learning resources
and decision-making capabilities across multiple user devices or nodes, and is
rapidly gaining popularity due to its privacy-preserving and decentralized
nature. Importantly, this crowdsourcing of the learning process allows the
system to continue functioning even if some nodes are affected or disconnected.
In a disaster scenario, communication infrastructure and centralized systems
may be disrupted or completely unavailable, hindering the possibility of
carrying out standard centralized learning tasks in these settings. Thus, fully
decentralized learning can help in this case. However, transitioning from
centralized to peer-to-peer communications introduces a dependency between the
learning process and the topology of the communication graph among nodes. In a
disaster scenario, even peer-to-peer communications are susceptible to abrupt
changes, such as devices running out of battery or getting disconnected from
others due to their position. In this study, we investigate the effects of
various disruptions to peer-to-peer communications on decentralized learning in
a disaster setting. We examine the resilience of a decentralized learning
process when a subset of devices drop from the process abruptly. To this end,
we analyze the difference between losing devices holding data, i.e., potential
knowledge, vs. devices contributing only to the graph connectivity, i.e., with
no data. Our findings on a Barabási-Albert graph topology, where training data
is distributed across nodes in an IID fashion, indicate that the accuracy of
the learning process is more affected by a loss of connectivity than by a loss
of data. Nevertheless, the network remains relatively robust, and the learning
process can achieve a good level of accuracy.
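The experimental setup described in the abstract (neighbourhood-based model mixing on a Barabási-Albert graph, with a subset of nodes dropping out abruptly) can be illustrated with a small simulation. The sketch below is not the authors' code: it replaces model training with scalar neighbourhood averaging, and all names and parameter values (N_NODES, ATTACH, DROP_FRACTION, ROUNDS, the half-and-half split between data-holding and relay-only nodes) are hypothetical placeholders chosen for illustration.

```python
# Minimal illustrative sketch (not the authors' code): decentralized
# neighbourhood averaging on a Barabasi-Albert graph with an abrupt loss
# of nodes midway through the process. All parameter values below are
# hypothetical placeholders, not values taken from the paper.
import networkx as nx
import numpy as np

N_NODES, ATTACH, DROP_FRACTION, ROUNDS, SEED = 50, 2, 0.2, 30, 0

def gossip_round(graph, values):
    """One synchronous round of averaging over each node's closed neighbourhood."""
    return {node: np.mean([values[v] for v in list(graph.neighbors(node)) + [node]])
            for node in graph.nodes}

def run(drop_data_holders):
    rng = np.random.default_rng(SEED)
    graph = nx.barabasi_albert_graph(N_NODES, ATTACH, seed=SEED)
    # Half of the nodes hold data (a scalar stand-in for a local model);
    # the other half only contribute connectivity, mimicking data-less relays.
    data_holders = set(rng.choice(N_NODES, size=N_NODES // 2, replace=False))
    values = {v: (rng.normal(loc=1.0) if v in data_holders else 0.0)
              for v in graph.nodes}

    for t in range(ROUNDS):
        if t == ROUNDS // 2:  # abrupt disruption halfway through
            pool = [v for v in graph.nodes
                    if (v in data_holders) == drop_data_holders]
            dropped = rng.choice(pool, size=int(DROP_FRACTION * N_NODES),
                                 replace=False)
            graph.remove_nodes_from(dropped.tolist())
            values = {v: values[v] for v in graph.nodes}
        values = gossip_round(graph, values)

    # Spread of the surviving local values: a rough proxy for how far the
    # network is from consensus (lower is better).
    return float(np.std(list(values.values())))

print("dropping data-holding nodes :", run(drop_data_holders=True))
print("dropping relay-only nodes   :", run(drop_data_holders=False))
```

In the paper's terms, the first run removes "potential knowledge" while the second removes only connectivity; comparing the two printed spreads gives a rough feel for the data-vs-connectivity distinction that the paper studies on a real learning task.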
Related papers
- Scrutinizing the Vulnerability of Decentralized Learning to Membership Inference Attacks [1.5993362488149794]
We study the vulnerability to Membership Inference Attacks (MIA) in decentralized learning systems.
Our key finding is that the vulnerability to MIA is heavily correlated with the local model mixing strategy performed by each node.
Our paper draws a set of lessons learned for devising decentralized learning systems that reduce by design the vulnerability to MIA.
arXiv Detail & Related papers (2024-12-17T12:02:47Z)
- Robustness of Decentralised Learning to Nodes and Data Disruption [4.062458976723649]
We study the effect of nodes' disruption on the collective learning process.
Our results show that decentralised learning processes are remarkably robust to network disruption.
arXiv Detail & Related papers (2024-05-03T12:14:48Z)
- Impact of network topology on the performance of Decentralized Federated Learning [4.618221836001186]
Decentralized machine learning is gaining momentum, addressing infrastructure challenges and privacy concerns.
This study investigates the interplay between network structure and learning performance using three network topologies and six data distribution methods.
We highlight the challenges in transferring knowledge from peripheral to central nodes, attributed to a dilution effect during model aggregation.
arXiv Detail & Related papers (2024-02-28T11:13:53Z)
- Coordination-free Decentralised Federated Learning on Complex Networks: Overcoming Heterogeneity [2.6849848612544]
Federated Learning (FL) is a framework for performing a learning task in an edge computing scenario.
We propose a communication-efficient Decentralised Federated Learning (DFL) algorithm able to cope with such heterogeneity.
Our solution allows devices communicating only with their direct neighbours to train an accurate model.
arXiv Detail & Related papers (2023-12-07T18:24:19Z)
- Does Decentralized Learning with Non-IID Unlabeled Data Benefit from Self Supervision? [51.00034621304361]
We study decentralized learning with unlabeled data through the lens of self-supervised learning (SSL).
We study the effectiveness of contrastive learning algorithms under decentralized learning settings.
arXiv Detail & Related papers (2022-10-20T01:32:41Z)
- RelaySum for Decentralized Deep Learning on Heterogeneous Data [71.36228931225362]
In decentralized machine learning, workers compute model updates on their local data.
Because the workers communicate only with a few neighbors, without central coordination, these updates propagate progressively over the network.
This paradigm enables distributed training on networks without all-to-all connectivity, helping to protect data privacy as well as to reduce the communication cost of distributed training in data centers.
arXiv Detail & Related papers (2021-10-08T14:55:32Z)
- Federated Learning: A Signal Processing Perspective [144.63726413692876]
Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
arXiv Detail & Related papers (2021-03-31T15:14:39Z)
- Consensus Control for Decentralized Deep Learning [72.50487751271069]
Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.
We show in theory that when the training consensus distance (see the sketch after this list) is lower than a critical quantity, decentralized training converges as fast as the centralized counterpart.
Our empirical insights allow the principled design of better decentralized training schemes that mitigate the performance drop.
arXiv Detail & Related papers (2021-02-09T13:58:33Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We formulate a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
- Distributed Learning in the Non-Convex World: From Batch to Streaming Data, and Beyond [73.03743482037378]
Distributed learning has become a critical direction of the massively connected world envisioned by many.
This article discusses four key elements of scalable distributed processing and real-time data computation problems.
Practical issues and future research will also be discussed.
arXiv Detail & Related papers (2020-01-14T14:11:32Z)
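The consensus distance referenced in the "Consensus Control for Decentralized Deep Learning" entry above can be computed in a few lines. The sketch below is illustrative and assumes the common root-mean-square definition (mean squared distance of local models from their network-wide average, then a square root); the cited paper's exact normalization may differ.

```python
# Hedged sketch of a consensus-distance computation, assuming the
# root-mean-square definition; the cited paper's exact normalization
# may differ from this.
import numpy as np

def consensus_distance(local_models: np.ndarray) -> float:
    """local_models has shape (n_workers, n_params), one row per worker."""
    mean_model = local_models.mean(axis=0)
    squared_deviations = np.sum((local_models - mean_model) ** 2, axis=1)
    return float(np.sqrt(squared_deviations.mean()))

# Toy usage: 8 workers holding 10-dimensional models drawn at random.
models = np.random.default_rng(0).normal(size=(8, 10))
print(consensus_distance(models))
```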