The effect of network topologies on fully decentralized learning: a
preliminary investigation
- URL: http://arxiv.org/abs/2307.15947v1
- Date: Sat, 29 Jul 2023 09:39:17 GMT
- Title: The effect of network topologies on fully decentralized learning: a
preliminary investigation
- Authors: Luigi Palmieri, Lorenzo Valerio, Chiara Boldrini and Andrea Passarella
- Abstract summary: In a decentralized machine learning system, data is partitioned among multiple devices or nodes, each of which trains a local model using its own data.
We investigate how different types of topologies impact the "spreading of knowledge".
Specifically, we highlight the different roles played in this process by more and less connected nodes (hubs and leaves).
- Score: 2.9592782993171918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a decentralized machine learning system, data is typically partitioned
among multiple devices or nodes, each of which trains a local model using its
own data. These local models are then shared and combined to create a global
model that can make accurate predictions on new data. In this paper, we start
exploring the role that the network topology connecting nodes plays in the
performance of a machine learning model trained through direct collaboration
between nodes.
We investigate how different types of topologies impact the "spreading of
knowledge", i.e., the ability of nodes to incorporate into their local models
the knowledge derived from learning patterns in the data available at other
nodes across the network. Specifically, we highlight the different roles in this process of
more or less connected nodes (hubs and leaves), as well as that of macroscopic
network properties (primarily, degree distribution and modularity). Among
other findings, we show that, while it is known that even weak connectivity among
network components is sufficient for information spread, it may not be
sufficient for knowledge spread. More intuitively, we also find that hubs have
a more significant role than leaves in spreading knowledge, although this
manifests itself not only for heavy-tailed degree distributions but also when "hubs"
have only moderately more connections than leaves. Finally, we show that
tightly knit communities severely hinder knowledge spread.
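To make the setting concrete, here is a minimal sketch of decentralized learning over a graph topology in the spirit of the abstract (our illustration, not the authors' code): each node takes a local training step on its own data, then averages parameters with its direct neighbors. The local_update stand-in and the uniform mixing weights are simplifying assumptions.

```python
# Minimal sketch of decentralized (gossip-style) learning over a topology.
# Illustration only -- not the paper's code. Hypothetical pieces: the fake
# local_update step and the uniform neighbor-averaging weights.
import networkx as nx
import numpy as np

def local_update(params, rng):
    """Stand-in for one round of local training on a node's own data."""
    return params - 0.1 * rng.normal(size=params.shape)  # fake gradient step

def decentralized_round(G, models, rng):
    """Each node trains locally, then averages with its direct neighbors."""
    trained = {n: local_update(p, rng) for n, p in models.items()}
    return {
        n: np.mean([trained[n]] + [trained[m] for m in G.neighbors(n)], axis=0)
        for n in G.nodes
    }

rng = np.random.default_rng(0)
# Scale-free topology: a few well-connected hubs, many leaves.
G = nx.barabasi_albert_graph(n=20, m=1, seed=0)
models = {n: rng.normal(size=4) for n in G.nodes}
for _ in range(10):
    models = decentralized_round(G, models, rng)
```

Swapping the scale-free graph for nx.stochastic_block_model with dense blocks and sparse inter-block links is one way to probe the "tightly knit communities" condition the abstract refers to.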
Related papers
- Impact of network topology on the performance of Decentralized Federated Learning [4.618221836001186]
Decentralized machine learning is gaining momentum, addressing infrastructure challenges and privacy concerns.
This study investigates the interplay between network structure and learning performance using three network topologies and six data distribution methods.
We highlight the challenges in transferring knowledge from peripheral to central nodes, attributed to a dilution effect during model aggregation.
arXiv Detail & Related papers (2024-02-28T11:13:53Z)
- Unsupervised Learning via Network-Aware Embeddings [0.0]
We show how to create network-aware embeddings by estimating the network distance between numeric node attributes.
Our method is fully open source and data and code are available to reproduce all results in the paper.
arXiv Detail & Related papers (2023-09-19T08:17:48Z)
- End-to-End Learning on Multimodal Knowledge Graphs [0.0]
We propose a multimodal message passing network which learns end-to-end from the structure of graphs.
Our model uses dedicated (neural) encoders to naturally learn embeddings for node features belonging to five different types of modalities.
Our results indicate that end-to-end multimodal learning from any arbitrary knowledge graph is indeed possible.
arXiv Detail & Related papers (2023-09-03T13:16:18Z)
- Distributed Learning over Networks with Graph-Attention-Based Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as a node in a graph and the node-specific parameters as its features, the benefits of the graph attention mechanism can be inherited (a toy sketch follows this entry).
arXiv Detail & Related papers (2023-05-22T13:48:30Z)
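Below is a toy sketch of the global/node-specific decomposition described in the entry above (our illustration, not the GATTA implementation; the dot-product attention scores and the additive combination are assumptions):

```python
# Toy sketch: personalized model = shared global part + node-specific part,
# with attention over neighbors' node-specific parameters.
# Illustration only -- the scoring rule and the combination are assumptions.
import numpy as np

def attention_weights(own, neighbors):
    """Softmax over dot-product similarity of node-specific parameters."""
    scores = np.array([own @ h for h in neighbors])
    e = np.exp(scores - scores.max())
    return e / e.sum()

def personalized_predict(x, global_w, node_w, neighbor_ws):
    """Combine the shared global part with an attention-weighted local part."""
    a = attention_weights(node_w, neighbor_ws)
    local = node_w + sum(w * h for w, h in zip(a, neighbor_ws))
    return x @ (global_w + local)

out = personalized_predict(np.ones(3), np.zeros(3), np.ones(3),
                           [np.ones(3), -np.ones(3)])
```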
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- The interplay between ranking and communities in networks [0.0]
We present a generative model based on an interplay between community and hierarchical structures.
It assumes that each node has a preference in the interaction mechanism and that nodes with the same preference are more likely to interact (a toy sketch follows this entry).
We demonstrate our method on synthetic and real-world data and compare performance with two standard approaches for community detection and ranking extraction.
arXiv Detail & Related papers (2021-12-23T16:10:28Z)
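As a rough illustration of that generative assumption (ours, not the paper's actual model), each node draws a preference for community-driven or hierarchy-driven interactions, and same-preference pairs connect with higher probability:

```python
# Toy generative sketch: nodes prefer community-driven or hierarchy-driven
# interactions; same-preference pairs connect with higher probability.
# All constants and both interaction terms are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 50
prefers_community = rng.random(n) < 0.5    # each node's preferred mechanism
community = rng.integers(0, 3, size=n)     # community labels
rank = rng.random(n)                       # latent hierarchy score

def edge_prob(i, j):
    base = 0.3 if prefers_community[i] == prefers_community[j] else 0.05
    if prefers_community[i]:
        return base * float(community[i] == community[j])   # assortative term
    return base * np.exp(-abs(rank[i] - rank[j]))           # rank-proximity term

edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if rng.random() < edge_prob(i, j)]
```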
- Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where our task is not purely opaque.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
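A minimal sketch of that idea (ours, not the paper's implementation): place a learnable gate on every edge of a complete graph so that connection strengths are trained by ordinary backpropagation.

```python
# Minimal sketch of differentiable connectivity learning: every edge of a
# complete graph carries a learnable gate, trained end to end by backprop.
# Illustration only -- the sigmoid gating and the aggregation are assumptions.
import torch

n_nodes = 4
edge_logits = torch.nn.Parameter(torch.zeros(n_nodes, n_nodes))

def aggregate(features):
    """Mix node features through gated, differentiable connections."""
    gates = torch.sigmoid(edge_logits)          # connection magnitudes
    gates = gates * (1.0 - torch.eye(n_nodes))  # drop self-loops
    return gates @ features

x = torch.randn(n_nodes, 8)
aggregate(x).sum().backward()  # gradients reach edge_logits, so the wiring is learned
```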
- From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z)