Optimising cost vs accuracy of decentralised analytics in fog computing
environments
- URL: http://arxiv.org/abs/2012.05266v1
- Date: Wed, 9 Dec 2020 19:05:44 GMT
- Title: Optimising cost vs accuracy of decentralised analytics in fog computing
environments
- Authors: Lorenzo Valerio, Andrea Passarella, Marco Conti
- Abstract summary: Data gravity, a fundamental concept in Fog Computing, points towards decentralisation of computation for data analysis.
We propose an analytical framework able to find the optimal operating point in this continuum.
We show through simulations that the model accurately predicts the optimal trade-off, quite often an \emph{intermediate} point between full centralisation and full decentralisation.
- Score: 0.4898659895355355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The exponential growth of devices and data at the edges of the Internet is
raising scalability and privacy concerns on approaches based exclusively on
remote cloud platforms. Data gravity, a fundamental concept in Fog Computing,
points towards decentralisation of computation for data analysis, as a viable
alternative to address those concerns. Decentralising AI tasks on several
cooperative devices means identifying the optimal set of locations or
Collection Points (CP for short) to use, in the continuum between full
centralisation (i.e., all data on a single device) and full decentralisation
(i.e., data on source locations). We propose an analytical framework able to
find the optimal operating point in this continuum, linking the accuracy of the
learning task with the corresponding \emph{network} and \emph{computational}
cost for moving data and running the distributed training at the CPs. We show
through simulations that the model accurately predicts the optimal trade-off,
quite often an \emph{intermediate} point between full centralisation and full
decentralisation, also showing a significant cost saving w.r.t. both of them.
Finally, the analytical model admits closed-form or numeric solutions, making
it not only a performance evaluation instrument but also a design tool to
configure a given distributed learning task optimally before its deployment.
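The abstract positions the model as a design tool that can be solved numerically before deployment. Below is a minimal Python sketch of that idea under made-up models: a toy learning curve for accuracy and toy network/computation costs, with the number of Collection Points (CPs) chosen by a brute-force sweep. Every functional form and constant here is an illustrative assumption, not the framework from the paper.

```python
# Toy numeric sketch (NOT the paper's actual model): choose the number of
# Collection Points (CPs) that minimises a combined network + computation
# cost while keeping a modelled accuracy above a target.

N_SOURCES = 100           # data-generating edge devices (assumed)
SAMPLES_PER_SOURCE = 500  # samples produced at each source (assumed)
ACC_TARGET = 0.95         # minimum acceptable accuracy (assumed)

def accuracy(num_cps):
    """Assumed learning curve: fewer CPs means more data per CP and a
    better model; a saturating power law stands in for the real curve."""
    samples_per_cp = N_SOURCES * SAMPLES_PER_SOURCE / num_cps
    return 1.0 - 0.5 * samples_per_cp ** -0.3

def network_cost(num_cps):
    """Assumed network cost: data moved from sources to CPs shrinks as the
    number of CPs grows (full decentralisation moves nothing)."""
    fraction_moved = 1.0 - num_cps / N_SOURCES
    return N_SOURCES * SAMPLES_PER_SOURCE * fraction_moved

def compute_cost(num_cps):
    """Assumed computation cost: training cost per CP grows superlinearly
    with the local dataset size, summed over all CPs."""
    samples_per_cp = N_SOURCES * SAMPLES_PER_SOURCE / num_cps
    return num_cps * samples_per_cp ** 1.2

# 1 CP = full centralisation, N_SOURCES CPs = full decentralisation.
feasible = [k for k in range(1, N_SOURCES + 1) if accuracy(k) >= ACC_TARGET]
best_k = min(feasible, key=lambda k: network_cost(k) + compute_cost(k))
print(f"optimal number of CPs: {best_k}  "
      f"(accuracy {accuracy(best_k):.3f}, "
      f"total cost {network_cost(best_k) + compute_cost(best_k):.0f})")
```

With these toy numbers the sweep settles on an intermediate number of CPs, i.e., neither full centralisation (one CP) nor full decentralisation (one CP per source), which is the kind of trade-off point the abstract reports.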
Related papers
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292]
We introduce Catalyst Acceleration and propose an accelerated Decentralized Federated Learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase.
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-iid data distributions.
arXiv Detail & Related papers (2024-10-09T06:17:16Z)
- A Federated Distributionally Robust Support Vector Machine with Mixture of Wasserstein Balls Ambiguity Set for Distributed Fault Diagnosis [3.662364375995991]
We study the problem of training a distributionally robust (DR) support vector machine (SVM) in a federated fashion over a network comprised of a central server and $G$ clients without sharing data.
We propose two distributed optimization algorithms for training the global FDR-SVM.
arXiv Detail & Related papers (2024-10-04T19:21:45Z)
- Delegating Data Collection in Decentralized Machine Learning [67.0537668772372]
Motivated by the emergence of decentralized machine learning (ML) ecosystems, we study the delegation of data collection.
We design optimal and near-optimal contracts that deal with two fundamental information asymmetries.
We show that a principal can cope with such asymmetry via simple linear contracts that achieve a 1-1/e fraction of the optimal utility.
arXiv Detail & Related papers (2023-09-04T22:16:35Z)
- Communication-Efficient Distributionally Robust Decentralized Learning [23.612400109629544]
Decentralized learning algorithms empower interconnected edge devices to share data and computational resources.
We propose a single-loop decentralized gradient descent/ascent algorithm (AD-GDA) to solve the underlying minimax optimization problem.
arXiv Detail & Related papers (2022-05-31T09:00:37Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the involved data usually contain sensitive information, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Asynchronous Parallel Incremental Block-Coordinate Descent for Decentralized Machine Learning [55.198301429316125]
Machine learning (ML) is a key technique for big-data-driven modelling and analysis of massive Internet of Things (IoT) based intelligent and ubiquitous computing.
For fast-increasing applications and data volumes, distributed learning is a promising emerging paradigm, since it is often impractical or inefficient to share/aggregate data in a central location.
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices.
arXiv Detail & Related papers (2022-02-07T15:04:15Z)
- Cost-Effective Federated Learning in Mobile Edge Networks [37.16466118235272]
Federated learning (FL) is a distributed learning paradigm that enables a large number of mobile devices to collaboratively learn a model without sharing their raw data.
We analyze how to design adaptive FL in mobile edge networks that optimally chooses essential control variables to minimize the total cost.
We develop a low-cost sampling-based algorithm to learn the convergence related unknown parameters.
arXiv Detail & Related papers (2021-09-12T03:02:24Z)
- Consensus Control for Decentralized Deep Learning [72.50487751271069]
Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.
We show in theory that when the training consensus distance is lower than a critical quantity, decentralized training converges as fast as the centralized counterpart (a toy sketch of the consensus distance appears after this list).
Our empirical insights allow the principled design of better decentralized training schemes that mitigate the performance drop.
arXiv Detail & Related papers (2021-02-09T13:58:33Z)
- Privacy Amplification by Decentralization [0.0]
We introduce a novel relaxation of local differential privacy (LDP) that naturally arises in fully decentralized protocols.
We study a decentralized model of computation where a token performs a walk on the network graph and is updated sequentially by the party who receives it.
We prove that the privacy-utility trade-offs of our algorithms significantly improve upon LDP, and in some cases even match what can be achieved with methods based on trusted/secure aggregation and shuffling.
arXiv Detail & Related papers (2020-12-09T21:33:33Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion-based MAML or Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
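The Consensus Control entry above refers to a sketch of the consensus distance. Here is a minimal Python illustration, assuming the common definition (average distance of each worker's model copy from the mean model) and a simple ring-gossip averaging step; neither is necessarily the exact quantity or update rule used in that paper.

```python
# Minimal sketch (assumed definitions): consensus distance of local model
# copies in decentralized training, and one gossip-averaging round that
# shrinks it. Not the exact scheme of the cited paper.

import numpy as np

rng = np.random.default_rng(0)
num_workers, dim = 8, 10
models = rng.normal(size=(num_workers, dim))   # one parameter vector per worker

# Symmetric doubly-stochastic mixing matrix for a ring topology (assumed).
W = np.zeros((num_workers, num_workers))
for i in range(num_workers):
    W[i, i] = 0.5
    W[i, (i - 1) % num_workers] = 0.25
    W[i, (i + 1) % num_workers] = 0.25

def consensus_distance(x):
    """Average Euclidean distance of each worker's model from the mean model."""
    mean = x.mean(axis=0)
    return np.linalg.norm(x - mean, axis=1).mean()

print("before gossip:", consensus_distance(models))
models = W @ models                            # one round of neighbor averaging
print("after gossip: ", consensus_distance(models))
```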