A communication efficient distributed learning framework for smart
environments
- URL: http://arxiv.org/abs/2109.13049v1
- Date: Mon, 27 Sep 2021 13:44:34 GMT
- Title: A communication efficient distributed learning framework for smart
environments
- Authors: Lorenzo Valerio, Andrea Passarella, Marco Conti
- Abstract summary: This paper proposes a distributed learning framework to move data analytics closer to where data is generated.
Using distributed machine learning techniques, it is possible to drastically reduce the network overhead, while obtaining performance comparable to the cloud solution.
The analysis also shows when each distributed learning approach is preferable, based on the specific distribution of the data on the nodes.
- Score: 0.4898659895355355
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Due to the pervasive diffusion of personal mobile and IoT devices, many
"smart environments" (e.g., smart cities and smart factories) will be, among
others, generators of huge amounts of data. Currently, such data are typically
processed through centralised cloud-based data analytics services. However,
according to many studies, this approach may present significant issues from
the standpoint of data ownership, and even wireless network capacity. One
possibility to cope with these shortcomings is to move data analytics closer to
where data is generated. In this paper, we tackle this issue by proposing and
analyzing a distributed learning framework, whereby data analytics are
performed at the edge of the network, i.e., on locations very close to where
data is generated. Specifically, in our framework, partial data analytics are
performed directly on the nodes that generate the data, or on nodes close by
(e.g., some of the data generators can take this role on behalf of subsets of
other nodes nearby). Then, nodes exchange partial models and refine them
accordingly. Our framework is general enough to host different analytics
services. In the specific case analysed in the paper, we focus on a learning
task, considering two distributed learning algorithms. Using an activity
recognition and a pattern recognition task, both on reference datasets, we
compare the two learning algorithms with each other and with a central cloud
solution (i.e., one that has access to the complete datasets). Our results show
that using distributed machine learning techniques, it is possible to
drastically reduce the network overhead, while obtaining performance comparable
to the cloud solution in terms of learning accuracy. The analysis also shows
when each distributed learning approach is preferable, based on the specific
distribution of the data on the nodes.
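As a concrete illustration of the exchange-and-refine mechanism described in the abstract, the following is a minimal sketch in Python. It is not one of the two algorithms evaluated in the paper: the ring topology, the least-squares model, and the plain neighbour averaging are illustrative assumptions. The point it demonstrates is that only model parameters, never raw data, cross the network.

```python
# Minimal sketch (not the paper's exact algorithms): each edge node refines a
# partial model on its local data, then exchanges parameters with neighbours
# and averages them, so only parameters -- not raw data -- traverse the network.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_step(w, X, y, lr=0.1):
    """One gradient step of least-squares regression on a node's local data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Hypothetical setup: 4 edge nodes, each holding a different slice of the data.
d = 5
true_w = rng.normal(size=d)
nodes = []
for _ in range(4):
    X = rng.normal(size=(50, d))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    nodes.append({"X": X, "y": y, "w": np.zeros(d)})

# Ring topology: each node only talks to its two neighbours.
neighbours = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}

for _ in range(100):                       # communication rounds
    for n in nodes:                        # local refinement on private data
        n["w"] = local_sgd_step(n["w"], n["X"], n["y"])
    new_w = []
    for i, n in enumerate(nodes):          # exchange and average partial models
        peers = [nodes[j]["w"] for j in neighbours[i]]
        new_w.append(np.mean([n["w"]] + peers, axis=0))
    for n, w in zip(nodes, new_w):
        n["w"] = w

print("parameter error of node 0:", np.linalg.norm(nodes[0]["w"] - true_w))
```

In this toy setting the per-round traffic is one parameter vector per link, which is what makes parameter exchange attractive when the raw data volumes generated at the edge are large.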
Related papers
- Distributed Learning over Networks with Graph-Attention-Based
Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as one node in a graph and the node-specific parameters as its features, the benefits of the graph attention mechanism can be inherited (a hedged sketch of this global/node-specific split appears after this list).
arXiv Detail & Related papers (2023-05-22T13:48:30Z) - Energy efficient distributed analytics at the edge of the network for
IoT environments [0.4898659895355355]
We exploit the fog computing paradigm to move computation close to where data is produced.
We analyse the performance of different configurations of the distributed learning framework.
arXiv Detail & Related papers (2021-09-23T14:07:33Z) - Decentralized federated learning of deep neural networks on non-iid data [0.6335848702857039]
We tackle the non-convex problem of learning a personalized deep learning model in a decentralized setting.
We propose a method named Performance-Based Neighbor Selection (PENS) where clients with similar data detect each other and cooperate.
PENS is able to achieve higher accuracies as compared to strong baselines.
arXiv Detail & Related papers (2021-07-18T19:05:44Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z) - Quasi-Global Momentum: Accelerating Decentralized Deep Learning on
Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z) - Multi-modal AsynDGAN: Learn From Distributed Medical Image Data without
Sharing Private Information [55.866673486753115]
We propose an extendable and elastic learning framework to preserve privacy and security.
The proposed framework is named distributed Asynchronized Discriminator Generative Adversarial Networks (AsynDGAN).
arXiv Detail & Related papers (2020-12-15T20:41:24Z) - A decentralized aggregation mechanism for training deep learning models
using smart contract system for bank loan prediction [0.1933681537640272]
We present a solution to benefit from a distributed data setup in the case of training deep learning architectures by making use of a smart contract system.
We propose a mechanism that aggregates together the intermediate representations obtained from local ANN models over a blockchain.
The obtained performance, which is better than that of individual nodes, is on par with that of a centralized data setup.
arXiv Detail & Related papers (2020-11-22T10:47:45Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - Distributed Learning in the Non-Convex World: From Batch to Streaming
Data, and Beyond [73.03743482037378]
Distributed learning has become a critical direction of the massively connected world envisioned by many.
This article discusses four key elements of scalable distributed processing and real-time data computation problems.
Practical issues and future research will also be discussed.
arXiv Detail & Related papers (2020-01-14T14:11:32Z)
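The graph-attention-based personalization in the first related paper above (GATTA) splits each agent's model into a global part and a node-specific part. The sketch below is a hedged illustration of that split under stated assumptions; the attention score and the way the two parts are combined are placeholders, not the paper's actual formulation.

```python
# Hedged sketch of graph-attention-style personalization, loosely following the
# GATTA summary above; the similarity-based attention score and the way the two
# parts are combined are illustrative, not the paper's actual update rules.
import numpy as np

rng = np.random.default_rng(1)
n_agents, dim = 3, 4

# Each agent keeps a shared "global" part (exchanged with neighbours) and a
# "node-specific" part (kept local and used as the agent's node features).
global_parts = [rng.normal(size=dim) for _ in range(n_agents)]
node_specific = [rng.normal(size=dim) for _ in range(n_agents)]

def attention_weights(own_features, neighbour_features):
    """Softmax over similarities between an agent's features and its neighbours'."""
    scores = np.array([own_features @ f for f in neighbour_features])
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Agent 0 attends over agents 1 and 2 using the node-specific parts as features,
# then aggregates their global parts with the resulting weights.
alpha = attention_weights(node_specific[0], [node_specific[1], node_specific[2]])
aggregated_global = alpha[0] * global_parts[1] + alpha[1] * global_parts[2]

# The personalized model combines the aggregated global part with the local,
# node-specific part (in a real network these would be different layers).
personalized = np.concatenate([aggregated_global, node_specific[0]])
print("attention weights:", alpha)
print("personalized parameter vector:", personalized)
```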