Linear Regression over Networks with Communication Guarantees
- URL: http://arxiv.org/abs/2103.04140v1
- Date: Sat, 6 Mar 2021 15:28:21 GMT
- Title: Linear Regression over Networks with Communication Guarantees
- Authors: Konstantinos Gatsis
- Abstract summary: In connected autonomous systems, data transfer takes place over communication networks with often limited resources.
This paper examines algorithms for communication-efficient learning for linear regression tasks by exploiting the informativeness of the data.
- Score: 1.4271989597349055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key functionality of emerging connected autonomous systems, such as smart cities, smart transportation systems, and the industrial Internet-of-Things, is the ability to process and learn from data collected at different physical locations. This is increasingly attracting attention under the headings of distributed learning and federated learning. However, in connected autonomous systems, data transfer takes place over communication networks that often have limited resources. This paper examines algorithms for communication-efficient learning for linear regression tasks that exploit the informativeness of the data. The developed algorithms enable a tradeoff between communication and learning, with theoretical performance guarantees and efficient practical implementations.
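The core idea above, transmitting only data that is informative for the regression task, can be sketched in a few lines. The following is a minimal illustration and not the paper's algorithm: a sensor forwards a sample only when the current model's prediction error exceeds a threshold, and the receiver updates its estimate by recursive least squares. The threshold tau, the noise level, and the data model are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of informativeness-triggered transmission for linear
# regression (not the paper's algorithm). A sample is sent only when the
# current model's prediction error exceeds a threshold tau; the receiver
# runs a recursive least squares (RLS) update on each received sample.

rng = np.random.default_rng(0)
d, n, tau = 5, 500, 0.5            # dimension, stream length, trigger threshold
theta_true = rng.normal(size=d)    # unknown regression parameter (assumed model)

theta = np.zeros(d)                # receiver's running estimate
P = 1e3 * np.eye(d)                # RLS covariance (large value = weak prior)
sent = 0

for _ in range(n):
    x = rng.normal(size=d)                    # regressor observed at the sensor
    y = x @ theta_true + 0.1 * rng.normal()   # noisy linear measurement
    if abs(y - x @ theta) > tau:              # transmit only informative samples
        sent += 1
        Px = P @ x
        k = Px / (1.0 + x @ Px)               # RLS gain
        theta = theta + k * (y - x @ theta)
        P = P - np.outer(k, Px)

print(f"sent {sent}/{n} samples, error {np.linalg.norm(theta - theta_true):.3f}")
```

Raising tau sends fewer samples at the cost of a larger estimation error, which is one concrete form of the communication-learning tradeoff the abstract refers to.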
Related papers
- An Efficient Federated Learning Framework for Training Semantic Communication System [29.593406320684448]
Most semantic communication systems are built upon advanced deep learning models.
Due to privacy and security concerns, the transmission of data is restricted.
We introduce FedLol, a mechanism for aggregating clients' local models into the global model (a generic aggregation sketch appears after this list).
arXiv Detail & Related papers (2023-10-20T02:45:20Z)
- Federated Reinforcement Learning at the Edge [1.4271989597349055]
Modern cyber-physical architectures use data collected from systems at different physical locations to learn appropriate behaviors and adapt to uncertain environments.
This paper considers a setup where multiple agents need to communicate efficiently in order to jointly solve a reinforcement learning problem over time-series data collected in a distributed manner.
An algorithm for achieving communication efficiency is proposed, supported with theoretical guarantees, practical implementations, and numerical evaluations.
arXiv Detail & Related papers (2021-12-11T03:28:59Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Federated Learning: A Signal Processing Perspective [144.63726413692876]
Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
arXiv Detail & Related papers (2021-03-31T15:14:39Z)
- Adaptive Scheduling for Machine Learning Tasks over Networks [1.4271989597349055]
This paper examines algorithms for efficiently allocating resources to linear regression tasks by exploiting the informativeness of the data.
The algorithms developed enable adaptive scheduling of learning tasks with reliable performance guarantees.
arXiv Detail & Related papers (2021-01-25T10:59:00Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring their local data to a central server.
We propose a nonlinear quantization for compressed gradient descent that can be easily utilized in federated learning (a generic quantization sketch appears after this list).
Our system reduces the communication cost by up to three orders of magnitude while maintaining the convergence and accuracy of the training process.
arXiv Detail & Related papers (2020-12-15T12:20:28Z)
- Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for fifth-generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
- Distributed Learning in the Non-Convex World: From Batch to Streaming Data, and Beyond [73.03743482037378]
Distributed learning has become a critical research direction for the massively connected world envisioned by many.
This article discusses four key elements of scalable distributed processing and real-time data computation problems.
Practical issues and future research directions are also discussed.
arXiv Detail & Related papers (2020-01-14T14:11:32Z)
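The FedLol entry above names an aggregation mechanism but does not spell it out, so the sketch below shows plain weighted averaging of client models in the style of FedAvg. Weighting by local dataset size is an illustrative assumption; FedLol's actual criterion may differ (for example, weighting by local loss).

```python
import numpy as np

# Minimal sketch of global-model aggregation from client models (FedLol's
# specific rule is not described above; this is generic FedAvg-style
# weighted averaging with illustrative dataset-size weights).

def aggregate(client_params, client_weights):
    """Weighted average of per-client parameter vectors."""
    w = np.asarray(client_weights, dtype=float)
    w = w / w.sum()                       # normalize weights to sum to 1
    stacked = np.stack(client_params)     # shape: (num_clients, num_params)
    return (w[:, None] * stacked).sum(axis=0)

# Toy usage: three clients with different amounts of local data
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
print(aggregate(params, sizes))           # weighted toward larger clients
```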
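The CosSGD entry mentions nonlinear quantization for compressed gradient descent. Its exact scheme is not described above, so the following is a generic sketch of nonlinear (mu-law) gradient quantization; the bit width, the mu parameter, and the function names are illustrative assumptions.

```python
import numpy as np

# Generic nonlinear gradient quantization sketch (not CosSGD itself).
# Magnitudes are companded with a mu-law curve before uniform quantization,
# so small gradient entries keep relatively more precision.

def quantize(g, bits=4, mu=255.0):
    """Compress gradient g to `bits`-bit levels plus sign and max magnitude."""
    scale = np.max(np.abs(g)) + 1e-12
    x = np.abs(g) / scale                          # normalize to [0, 1]
    y = np.log1p(mu * x) / np.log1p(mu)            # mu-law companding
    levels = 2 ** bits - 1
    q = np.round(y * levels).astype(np.uint8)      # uniform quantization
    return q, np.sign(g).astype(np.int8), scale

def dequantize(q, sign, scale, bits=4, mu=255.0):
    """Reconstruct an approximate gradient from its quantized form."""
    levels = 2 ** bits - 1
    y = q.astype(np.float64) / levels
    x = np.expm1(y * np.log1p(mu)) / mu            # invert the companding
    return sign * x * scale

g = np.random.default_rng(1).normal(size=8)
q, s, c = quantize(g)
print(g)
print(dequantize(q, s, c))                         # close to g at ~4 bits/entry
```

Companding before uniform quantization preserves relatively more precision for small gradient entries, which dominate typical gradient distributions.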
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.