Collaborative Learning over Wireless Networks: An Introductory Overview
- URL: http://arxiv.org/abs/2112.05559v1
- Date: Tue, 7 Dec 2021 20:15:39 GMT
- Title: Collaborative Learning over Wireless Networks: An Introductory Overview
- Authors: Emre Ozfatura and Deniz Gunduz and H. Vincent Poor
- Abstract summary: We will mainly focus on collaborative training across wireless devices.
Many distributed optimization algorithms have been developed over the last few decades.
They provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local.
- Score: 84.09366153693361
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this chapter, we will mainly focus on collaborative training across
wireless devices. Training an ML model is equivalent to solving an optimization
problem, and many distributed optimization algorithms have been developed over
the last few decades. These distributed ML algorithms provide data locality; that
is, a joint model can be trained collaboratively while the data available at
each participating device remains local. This addresses, to some extent, the
privacy concern. They also provide computational scalability as they allow
exploiting computational resources distributed across many edge devices.
However, in practice, this does not directly lead to a linear gain in the
overall learning speed with the number of devices. This is partly due to the
communication bottleneck limiting the overall computation speed. Additionally,
wireless devices are highly heterogeneous in their computational capabilities,
and both their computation speed and communication rate can be highly
time-varying due to physical factors. Therefore, distributed learning
algorithms, particularly those to be implemented at the wireless network edge,
must be carefully designed, taking into account the impact of the time-varying
communication network as well as the heterogeneous and stochastic computation
capabilities of devices.
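To make the data-locality idea concrete, below is a minimal federated-averaging-style sketch in Python/NumPy: devices run a few gradient steps on their private data, and a server averages the resulting models. The quadratic local losses, device count, and step sizes are illustrative assumptions, not the chapter's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each device holds a private dataset (A_k, b_k); raw data never leaves it.
num_devices, dim = 5, 10
local_data = [(rng.normal(size=(20, dim)), rng.normal(size=20))
              for _ in range(num_devices)]

def local_update(w, A, b, lr=0.01, steps=5):
    """A few local gradient steps on the device's own least-squares loss."""
    for _ in range(steps):
        grad = A.T @ (A @ w - b) / len(b)
        w = w - lr * grad
    return w

w_global = np.zeros(dim)
for _ in range(50):
    # Devices train locally in parallel; only model parameters are exchanged.
    local_models = [local_update(w_global, A, b) for A, b in local_data]
    # The server aggregates by simple averaging (equal data sizes assumed).
    w_global = np.mean(local_models, axis=0)
```

In the wireless setting discussed above, the aggregation step is exactly where the communication bottleneck and device heterogeneity enter.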
Related papers
- Computation Rate Maximization for Wireless Powered Edge Computing With Multi-User Cooperation [10.268239987867453]
This study considers a wireless-powered mobile edge computing system that includes a hybrid access point equipped with a computing unit and multiple Internet of Things (IoT) devices.
We propose a novel multi-user cooperation scheme to improve computation performance, where collaborative clusters are dynamically formed.
Specifically, we aim to maximize the weighted sum computation rate (WSCR) of all the IoT devices in the network.
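As a hedged illustration of this objective (not the paper's exact formulation), the weighted sum computation rate is simply a weighted inner product of per-device rates; the weights and rates below are placeholders.

```python
import numpy as np

# Illustrative per-device computation rates r_k (computed bits per second)
# and priority weights w_k; both are placeholder values, not from the paper.
rates = np.array([1.2e6, 0.8e6, 2.5e6])
weights = np.array([1.0, 2.0, 0.5])

wscr = float(weights @ rates)  # WSCR = sum_k w_k * r_k, the quantity maximized
```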
arXiv Detail & Related papers (2024-01-22T05:22:19Z)
- Asynchronous Local Computations in Distributed Bayesian Learning [8.516532665507835]
We propose gossip-based communication to leverage fast computations and reduce communication overhead simultaneously.
We observe faster initial convergence and improved accuracy, especially in the low-data regime.
We achieve on average 78% and over 90% classification accuracy respectively on the Gamma Telescope and mHealth data sets from the UCI ML repository.
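Below is a minimal sketch of the gossip-style communication pattern this entry relies on, assuming a ring topology and pairwise averaging; the asynchrony is modelled by randomly waking one node per step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each node keeps a local estimate; gossip drives the network to consensus
# without a parameter server. Ring topology and values are assumptions.
num_nodes, dim = 6, 4
params = [rng.normal(size=dim) for _ in range(num_nodes)]
neighbors = {i: [(i - 1) % num_nodes, (i + 1) % num_nodes]
             for i in range(num_nodes)}

for _ in range(500):
    i = int(rng.integers(num_nodes))     # a node wakes up asynchronously...
    j = int(rng.choice(neighbors[i]))    # ...and gossips with one neighbor
    avg = 0.5 * (params[i] + params[j])  # pairwise averaging step
    params[i] = avg
    params[j] = avg.copy()

# All local estimates now approximate the network-wide average.
```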
arXiv Detail & Related papers (2023-11-06T20:11:41Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Asynchronous Parallel Incremental Block-Coordinate Descent for Decentralized Machine Learning [55.198301429316125]
Machine learning (ML) is a key technique for big-data-driven modelling and analysis of massive Internet of Things (IoT) based intelligent and ubiquitous computing.
With fast-growing applications and data volumes, distributed learning is a promising emerging paradigm, since it is often impractical or inefficient to share or aggregate data.
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices.
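To illustrate the block-coordinate descent idea named in the title, here is a minimal synchronous sketch on a toy least-squares problem; the paper's actual scheme is asynchronous and incremental, and the block-per-device mapping below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy objective: minimize ||A w - b||^2, with the coordinates of w split into
# blocks (in the decentralized setting, one block per user device).
A = rng.normal(size=(100, 8))
b = rng.normal(size=100)
w = np.zeros(8)
blocks = [np.arange(0, 4), np.arange(4, 8)]

for _ in range(200):
    for blk in blocks:                       # the paper runs these updates
        residual = A @ w - b                 # asynchronously in parallel
        grad_blk = A[:, blk].T @ residual / len(b)
        w[blk] -= 0.1 * grad_blk             # update only this block
```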
arXiv Detail & Related papers (2022-02-07T15:04:15Z)
- Federated Learning in Unreliable and Resource-Constrained Cellular Wireless Networks [35.80470886180477]
We propose a federated learning algorithm that is suitable for cellular wireless networks.
We prove its convergence, and provide the optimal scheduling policy that maximizes the convergence rate.
arXiv Detail & Related papers (2020-12-09T16:16:43Z)
- Learning Centric Power Allocation for Edge Intelligence [84.16832516799289]
Edge intelligence, which collects distributed data and performs machine learning at the edge, has been proposed.
This paper proposes a learning centric power allocation (LCPA) method, which allocates radio resources based on an empirical classification error model.
Experimental results show that the proposed LCPA algorithm significantly outperforms other power allocation algorithms.
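Here is a sketch of the learning-centric idea, under the common assumption that classification error decays with the number of collected training samples as a * v**(-b); the error-model parameters, channel gains, and grid search are illustrative stand-ins, not the paper's algorithm.

```python
import numpy as np

# Assumed empirical error model: err_m(v) ~ a_m * v**(-b_m), where v is the
# number of training samples user m can deliver; a_m, b_m are placeholders.
a = np.array([0.9, 0.7])
b_exp = np.array([0.3, 0.5])
gain = np.array([1.0, 0.25])           # channel gains (illustrative)
P_total, noise, bandwidth, T = 2.0, 0.1, 1.0, 100.0

def total_error(p0):
    p = np.array([p0, P_total - p0])
    rate = bandwidth * np.log2(1.0 + gain * p / noise)  # achievable rate
    samples = np.maximum(T * rate, 1.0)                 # samples uploaded in T
    return float(np.sum(a * samples ** (-b_exp)))

# Learning-centric allocation: choose the power split that minimizes the
# modelled sum classification error, not the one that maximizes throughput.
grid = np.linspace(0.01, P_total - 0.01, 199)
best_p0 = grid[int(np.argmin([total_error(p) for p in grid]))]
```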
arXiv Detail & Related papers (2020-07-21T07:02:07Z)
- A Machine Learning Approach for Task and Resource Allocation in Mobile Edge Computing Based Networks [108.57859531628264]
A joint task, spectrum, and transmit power allocation problem is investigated for a wireless network.
The proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1% compared to the standard Q-learning algorithm.
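For reference, the standard tabular Q-learning baseline the paper compares against follows the update sketched below; the toy state/action spaces and reward are stand-ins for the actual joint task, spectrum, and transmit power decisions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tabular Q-learning skeleton; states/actions are toy stand-ins for the
# paper's joint task, spectrum, and transmit power choices.
n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def env_step(s, a):
    """Toy environment: random next state, reward favouring action 0."""
    return int(rng.integers(n_states)), (1.0 if a == 0 else 0.0)

s = 0
for _ in range(5000):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = env_step(s, a)
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next
```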
arXiv Detail & Related papers (2020-07-20T13:46:42Z)
- Scheduling Policy and Power Allocation for Federated Learning in NOMA Based MEC [21.267954799102874]
Federated learning (FL) is a widely pursued machine learning technique that can train a model centrally while keeping the data distributed.
We propose a new scheduling policy and power allocation scheme using non-orthogonal multiple access (NOMA) settings to maximize the weighted sum data rate.
Simulation results show that the proposed scheduling and power allocation scheme can help achieve a higher FL testing accuracy in NOMA based wireless networks.
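A minimal sketch of why NOMA changes the rate expressions: two users share one resource block, the strong user cancels the weak user's signal via successive interference cancellation (SIC), and the weak user treats the strong user's signal as interference. The gains, powers, and weights below are placeholders.

```python
import numpy as np

# Downlink NOMA on a single resource block (illustrative values throughout).
h = np.array([1.0, 0.2])        # channel gains; user 0 is the strong user
p = np.array([0.3, 0.7])        # power split; more power to the weak user
noise = 0.05
weights = np.array([1.0, 1.5])

# Weak user decodes its own signal treating the strong user's as interference;
# the strong user removes the weak user's signal via SIC, then decodes cleanly.
r_weak = np.log2(1.0 + h[1] * p[1] / (h[1] * p[0] + noise))
r_strong = np.log2(1.0 + h[0] * p[0] / noise)

weighted_sum_rate = float(weights @ np.array([r_strong, r_weak]))
```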
arXiv Detail & Related papers (2020-06-21T23:07:41Z)
- Towards Efficient Scheduling of Federated Mobile Devices under Computational and Statistical Heterogeneity [16.069182241512266]
This paper studies the implementation of distributed learning on mobile devices.
We use data as a tuning knob and propose two efficient algorithms to schedule different workloads.
Compared with common benchmarks, the proposed algorithms achieve a 2-100x speedup, a 2-7% accuracy gain, and an improvement in convergence rate of more than 100% on CIFAR10.
arXiv Detail & Related papers (2020-05-25T18:21:51Z)
- Straggler-aware Distributed Learning: Communication Computation Latency Trade-off [56.08535873173518]
Straggling workers can be tolerated by assigning redundant computations and coding across data and computations.
In most existing schemes, each non-straggling worker transmits one message per iteration to the parameter server (PS) after completing all its computations.
Imposing such a limitation results in two main drawbacks: over-computation due to inaccurate prediction of the straggling behaviour, and under-utilization due to treating workers in a binary straggler/non-straggler manner.
arXiv Detail & Related papers (2020-04-10T08:39:36Z)
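To illustrate the redundant-assignment idea in this entry, here is a small sketch where each data partition is replicated on two workers, so the parameter server (PS) can recover the full gradient from any three of four workers; the cyclic replication pattern and exponential straggling model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Four data partitions, four workers; each worker computes partial gradients
# on two partitions (cyclic replication), so any 3 workers cover all 4.
num_workers, dim = 4, 5
partitions = [(rng.normal(size=(25, dim)), rng.normal(size=25))
              for _ in range(num_workers)]
assignment = {i: [i, (i + 1) % num_workers] for i in range(num_workers)}

def partial_grads(w, worker):
    """Least-squares partial gradients for the worker's assigned partitions."""
    return {k: partitions[k][0].T @ (partitions[k][0] @ w - partitions[k][1])
            for k in assignment[worker]}

w = np.zeros(dim)
finish = rng.exponential(1.0, size=num_workers)  # random per-worker delay
fastest = np.argsort(finish)[:3]                 # PS waits for 3 of 4 workers

collected = {}
for worker in fastest:
    collected.update(partial_grads(w, worker))   # duplicates overwrite safely

# Redundancy guarantees every partition is covered by the fastest three.
full_grad = sum(collected.values())
```

The entry's point is that forcing each non-straggling worker to send a single message only after finishing all its computations wastes this flexibility, motivating the communication-computation latency trade-off in the title.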